Virtual reality users must learn to use what they see
- Date: December 4, 2017
- Source: University of Wisconsin-Madison
- Summary: When most people put on a virtual reality headset, they still treat what they see like it's happening on any run-of-the-mill TV screen, research finds.
Anyone with normal vision knows that a ball that seems to be rapidly growing larger is probably going to hit them on the nose.
But strap that person into a virtual reality headset, and they may still need to take a few lumps before they pay attention to the visual cues that work so well in the real world, according to a new study from University of Wisconsin-Madison psychologists.
"The companies leading the virtual reality revolution have solved major engineering challenges -- how do you build a small headset that does a good job presenting images of a virtual world," says Bas Rokers, UW-Madison psychology professor. "But they have not thought as much about how the brain processes these images. How do people perceive a virtual world?"
Turns out, they don't perceive it like the real world -- at least not without training, according to a study Rokers and postdoctoral psychology researcher Jacqueline Fulvio published recently in the journal Scientific Reports.
In 2015, Fulvio found that people were flunking her simple test of three-dimensional perception using a flat screen and standard 3D movie glasses. They were not good at discerning which direction a target was moving.
"Most importantly, they confused whether the object was coming toward them or going away from them," she says. "It was a surprising finding. Nobody believed it, because it's not something that happens often in the real world. You'd get hurt."
The researchers decided to move the test to virtual reality to provide more realistic indications of motion in three dimensions -- such as binocular cues, in which slightly different views from the left and right eyes reveal depth, and motion parallax, in which closer objects appear to move faster than those farther away.
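To see why those cues carry depth information, note that both scale with the inverse of distance: the two eyes' views of a point differ by an angle of roughly the interocular separation divided by the distance, and a sideways head movement at speed v makes a point at distance d sweep across the view at roughly v/d. A minimal sketch in Python (all numbers are hypothetical, not taken from the study):

```python
import math

IPD_M = 0.063  # assumed interpupillary distance (~6.3 cm)

def binocular_disparity(distance_m, ipd_m=IPD_M):
    """Angular disparity (rad) between the two eyes' views of a point
    straight ahead: roughly ipd / distance for small angles."""
    return ipd_m / distance_m

def parallax_speed(observer_speed_mps, distance_m):
    """Angular speed (rad/s) of a stationary point as the head
    translates sideways: roughly v / d, so nearer points move faster."""
    return observer_speed_mps / distance_m

for d in (0.5, 1.0, 2.0, 4.0):
    disp = math.degrees(binocular_disparity(d))
    speed = math.degrees(parallax_speed(0.05, d))  # head drifting at 5 cm/s
    print(f"{d:>4} m: disparity {disp:5.2f} deg, parallax {speed:4.2f} deg/s")
```

Both cues fall off with distance, which is why they are most useful for nearby, fast-approaching objects -- like a ball headed for your nose.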
"We thought it was as easy as taking the same object-tracking task, putting it in the virtual environment, and having people do it the same way," Fulvio says. "And they did do it the same way. They made the same mistakes."
Given a one-second snippet of the movement of a small, round target across a plane that stretched away from the viewer at roughly eye level, study participants correctly moved a virtual paddle to intercept the target's course less than a quarter of the time.
What Fulvio and Rokers found was that when most people put on a virtual reality headset, they still treat what they see like it's happening on any run-of-the-mill TV screen.
"There's no depth to a computer screen. There are no binocular cues. Close one eye, close the other eye, nothing changes," says Rokers, whose work was funded by Google. "If you take that expectation into a VR headset, where you do have binocular cues, you somehow just don't use them."
Unless you're trained to use those cues.
Fulvio began giving study subjects visual and audible feedback. Once they'd watched the one-second flight and set their virtual paddle to catch the target, the game would reveal the target's full path and play a cowbell for a catch or a swish for a miss.
The visual feedback nearly doubled success rates. (The cowbell improved scores, too, but less so.)
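One way to make the effect of feedback concrete is a toy model of the trial loop, in which a simulated observer sometimes misreads the toward/away component of the motion, and feedback training lowers that confusion rate. Everything here -- the paddle width, the confusion rates -- is our own assumption; the article does not publish the experiment's code:

```python
import math, random

def run_trial(rng, p_depth_flip):
    """One toy trial: a target leaves the origin in direction theta
    (x = lateral, z = toward/away) and must be caught by a paddle on a
    surrounding circle. With probability p_depth_flip the observer
    misreads the sign of the depth component -- the toward/away
    confusion the study describes."""
    theta = rng.uniform(0.0, 2.0 * math.pi)  # true motion direction
    guess = -theta if rng.random() < p_depth_flip else theta  # z -> -z flip
    err = abs((guess - theta + math.pi) % (2.0 * math.pi) - math.pi)
    return err < math.radians(15)  # hypothetical paddle half-width

rng = random.Random(0)
for p in (0.5, 0.1):  # made-up confusion rates: before vs. after feedback
    hits = sum(run_trial(rng, p) for _ in range(10_000))
    print(f"depth-flip rate {p}: hit rate {hits / 10_000:.1%}")
```

In this toy version, cutting the depth-sign confusion is what lifts the hit rate -- a crude stand-in for what the feedback training appears to do for real observers.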
"They were getting better, but how were they getting better? What were they doing differently?" Fulvio asked.
When she turned off the VR system's head tracking, taking away the effects of players' head movements and making them passive viewers, performance dropped back to its old levels. When she gave a little of that freedom back -- restoring the system's response to head movements, but making the virtual world's shifts lag behind players' movements by as much as half a second -- they were still bad at the task.
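That lag manipulation can be pictured as serving the renderer a stale head pose each frame instead of the newest one. A sketch under our own assumptions (a fixed-length buffer standing in for the VR runtime's tracking pipeline; no real SDK is used):

```python
from collections import deque

class LaggedHeadTracker:
    """Serve head poses delayed by a fixed number of frames,
    approximating the study's up-to-half-second lag condition.
    (Illustrative only; real poses come from the VR runtime.)"""
    def __init__(self, lag_frames):
        self.buffer = deque(maxlen=lag_frames + 1)

    def update(self, pose):
        self.buffer.append(pose)
        return self.buffer[0]  # oldest pose still in the window

# At 90 frames per second, a 0.5 s lag is 45 frames.
tracker = LaggedHeadTracker(lag_frames=45)
for frame in range(50):
    current_pose = {"yaw_deg": frame * 0.1}  # fake slow head drift
    rendered_pose = tracker.update(current_pose)
print(current_pose, rendered_pose)  # the rendered view trails the head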
Interestingly, even players who reported keeping their heads stock still showed improvements when the virtual reality system was incorporating the smallest wobbles of their heads into the scene they were seeing.
"These are head motions people make, tiny jitters, that are not planned movements," Rokers says. "When you think you're sitting still, your head is moving a little bit. And, it turns out, people actually use that information to improve depth perception. It's tiny. It's almost involuntary. But the visual system actually exploits that."
The results -- that tiny head movements and typical binocular cues of motion are there for the taking in virtual reality, but that most people only use them if they are actively shown how VR differs from a flat computer screen -- should help virtual reality creators improve uptake of their products.
"Google packages a virtual reality YouTube viewer with their headset. That's a passive experience, and not the best thing to do," Rokers says. "What they should be doing is packaging action games with their headset, something that forces users to interact with the environment. That teaches them to use the information available in virtual reality, and treat it more like the real world and less like a computer screen."
"Otherwise you just have a really fancy TV, really close to your face," says Fulvio, who has moved on to testing the extent to which people's expectations influence their perception of flat versus virtual depth by having her study subjects watch TV inside virtual reality.
Rokers says showing that people can be taught to use cues to three-dimensional motion they would otherwise ignore may ultimately help refine treatments for vision disorders such as blind spots or amblyopia ("lazy eye"), in which the brain can be trained to compensate for perceptual limitations.
Story Source:
Materials provided by University of Wisconsin-Madison. Original written by Chris Barncard. Note: Content may be edited for style and length.
Journal Reference:
- Jacqueline M. Fulvio, Bas Rokers. Use of cues in virtual reality depends on visual feedback. Scientific Reports, 2017; 7 (1) DOI: 10.1038/s41598-017-16161-3