Introduction
Virtual reality (VR) has been technologically possible for decades, but it has only recently achieved wide-scale commercial viability. The tech industry is now in a period of rapid innovation and growth in VR technology and its applications. Because of its hands-on, easily understood design, VR has made its way into video games, classrooms, and the workplace.1 VR uses the brain’s natural perception of depth to transport users to another world: they look at and interact with virtual environments using the same motions and visual cues they have relied on since birth. Even people unfamiliar with the technology can interact intuitively in virtual environments.
Despite VR being relatively easy to operate, there is a great disconnect between our physical reality and a simulated environment experienced through VR. When a user puts on a VR headset, the display envelops their vision. This is a singular experience, because only one person can wear the headset at a time. When developers want to improve accessibility and convey the experience of VR to an audience, they must use a screen. Everyone has access to a screen, but VR is still in an early-adopter phase, and this presents a problem because VR content does not naturally fit a flat plane.2 If VR is to enter and improve fields as diverse as entertainment, education, and professional training, producers of VR programs need to showcase their content in the most visually appealing, easily accessible, and understandable way. Ideally, this is the method that most closely replicates the unique and immersive experience of a VR headset on a 2D screen.
Our research team at Soft Interaction, under the Department of Visualization at Texas A&M University, is exploring the best methods to display VR to a larger audience. Our goal is to make VR content accessible to multiple people in a live environment without compromising the experience of entering a virtual world. This way, people can have the experience of VR without needing the hardware. Through mixed reality, we can achieve all of this and more.
Methods
Our research team set out to address the inaccessibility of VR applications by adapting their content for screens. The challenge was to display how a person acts and interacts in the virtual world in a naturalistic, aesthetic, and practical way. To do this, we built a “mixed reality” system that composites a live actor into a virtual environment. The result is a final image that resembles a live-action camera shot of an actor, yet the environment is completely virtual (Figure 1).
Our team filmed a live actor in front of a green screen. We attached a positional tracker to the physical camera and created a virtual camera that mirrors its position, rotation, and field of view. The actor is then composited into the shot in the position they would occupy in the virtual scene. Because the virtual camera tracks exactly like the real one, the virtual environment behaves on screen like a real location, yielding a final shot that seamlessly places the actor in the virtual world.
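To make the idea concrete, the per-frame logic can be sketched in a few lines of Python. This is a simplified model rather than our production setup: get_tracker_pose() is a hypothetical stand-in for whatever the tracker SDK provides, and in practice the mirroring happens inside the game engine.

```python
# Sketch of the camera-mirroring step: each frame, copy the tracked
# physical camera's pose onto the virtual camera so the rendered scene
# lines up with the green-screen footage.

from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple  # (x, y, z) in meters, tracking-space coordinates
    rotation: tuple  # (pitch, yaw, roll) in degrees

@dataclass
class VirtualCamera:
    pose: Pose
    fov_degrees: float  # must match the physical lens for layers to line up

def get_tracker_pose() -> Pose:
    """Hypothetical stand-in: read the tracker mounted on the real camera."""
    return Pose(position=(0.0, 1.6, 2.0), rotation=(0.0, 180.0, 0.0))

def sync_virtual_camera(camera: VirtualCamera) -> None:
    # Mirroring position and rotation makes the virtual scene move exactly
    # as a real set would from the camera operator's point of view.
    camera.pose = get_tracker_pose()

camera = VirtualCamera(pose=Pose((0, 0, 0), (0, 0, 0)), fov_degrees=60.0)
sync_virtual_camera(camera)  # called once per frame in a real pipeline
```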
Results
We defined three guiding criteria for the success of our research:
Naturalistic display of the original content onto a screen.
Visually aesthetic presentation of VR content.
Interactive, multi-person usability and demonstration.
To make the experience seem naturalistic, objects had to interact with the actor as they would in the real world, which meant including variable opacity in our composite. If an object appears between the camera and the actor, it must block the subject from view just as it would naturally; if the object is translucent, it should only partially cover the subject. We achieved this through the four-panel compositing method developed by Kert Gartner and the Owlchemy Labs team for the production of VR game trailers.3
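In compositing terms, this variable opacity is the standard “over” operation: for a foreground pixel F with opacity α layered over a background pixel B, the result is C = αF + (1 − α)B. A fully opaque object (α = 1) hides the actor entirely, while a translucent one (0 < α < 1) blends with them.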
In order to differentiate the background from the foreground, we implemented LIV compositing software4 to split the virtual camera into four distinct outputs (Figure 2). Each output was layered together with the green-screen footage of the actor to create the final composite. The background layer displays exactly what the virtual camera sees, capturing all objects behind the actor. We then keyed the actor out of the green screen so that only their body is visible. Next, we used an alpha matte (a black-and-white template) to extract only the objects between the camera and the subject: where the matte is white, something is obstructing the camera’s view, and in that position on screen we render the foreground layer in front of the actor. Where the matte is a gradient between white and black, the pixels belong either to translucent objects or to the objective markers the program uses for navigation and direction. All of this combines into a final image that places a person seamlessly in a digital space (Figure 3).
Once we placed the actor in the scene, we had to consider the aesthetics of the shot to ensure the greatest production value and transmission of information. By the nature of mixed reality, much of the cinematography was similar to live-action production. In producing our video on Texas A&M University’s “Cyber Security VR” application, developed by Dr. Hwaryoung Seo’s research team,5 we approached filming as if we were on location. Though this program is used as technical training for server farm technicians, the method of filming any VR program in mixed reality is similar. In “Cyber Security VR,” the user, who is training to be a technician, tours a high-security server farm, a facility housing the computers, or servers, that process incoming and outgoing data online. The user is instructed on how to perform maintenance on a faulty server rack. It is important that this training can be done in VR, because actual server farms are highly secure and losses can be costly.
As in live filmmaking, we filmed the scenes from multiple angles, a practice called coverage. With ample coverage, we could ensure the best camera angle for any given action. However, we experienced a few challenges: in many instances, our virtual camera would start behind a wall or prop, and the action would be blocked. In preproduction for our next demonstration, “Muscle Action VR,” educational software that simulates how your muscles move in real time, we decided to mount the physical camera on a Steadicam and film the demonstration from whatever angle the camera operator moves to. This adds dynamic movement to the shot while letting us adjust angles to obtain more presentable footage.
Lastly, it was important that our footage be practical and applicable. A large portion of VR demonstration occurs at live events, so we wanted the ability to display actors in virtual environments in real time. This would make our VR demos more communal, allowing passersby to immediately understand an application before trying it themselves. To do this, we used LIV VR compositing software7 to composite each video layer in real time and output the result to a display.
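As a rough sketch of the per-pixel math behind this layering (a simplified model, not LIV’s actual implementation; the array names and shapes are our assumptions), the composite can be written with numpy:

```python
# Sketch of the four-layer composite, assuming float arrays in [0, 1]:
# background and foreground are RGB renders from the virtual camera,
# actor_rgb/actor_alpha come from keying the green screen, and matte is
# the black-and-white alpha matte for objects in front of the actor.

import numpy as np

def composite(background, actor_rgb, actor_alpha, foreground, matte):
    # Layer 1: the virtual background everywhere.
    out = background.copy()
    # Layer 2: the keyed actor over the background.
    out = actor_alpha * actor_rgb + (1.0 - actor_alpha) * out
    # Layers 3-4: foreground objects over the actor, weighted by the matte.
    # White matte pixels fully occlude the actor; gray pixels blend, which
    # is what makes translucent objects and floating markers read correctly.
    out = matte * foreground + (1.0 - matte) * out
    return out

h, w = 720, 1280
frame = composite(
    background=np.zeros((h, w, 3)),       # placeholder virtual render
    actor_rgb=np.ones((h, w, 3)) * 0.8,   # placeholder keyed actor
    actor_alpha=np.ones((h, w, 1)),       # 1 where the actor is visible
    foreground=np.ones((h, w, 3)) * 0.2,  # placeholder foreground render
    matte=np.zeros((h, w, 1)),            # 0 = nothing in front of the actor
)
```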
While this has proven to work in the lab, we will showcase both the live broadcast method and the final version of “Muscle Action VR” at the VIZA GOGO student showcase in May of 2020.
Conclusion
It is not likely that society will completely forgo two-dimensional displays as our main form of information transfer any time soon. For VR to thrive, it must be available to audiences whose primary interface is the screen. By developing methods that can artistically and practically capture VR on a screen, we can make it more accessible to the general public. In entertainment, we can advertise products and games by showing real people using the software and having fun. In education, we can step into virtually any environment and experience learning topics firsthand. Schoolchildren could explore life science in a unique way by visiting jungles, oceans, and highlands while staying in the safety of their classrooms. Furthermore, doctors and nurses in training could experience surgery without the threat of human mortality. In standard VR, these would all be singular experiences, but with mixed reality, entire groups can interface with the programs. This is just the beginning of how VR can expand into more practical roles. With the advent of mixed reality, we can view VR content in a more human way.
Acknowledgments
I would like to thank Texas A&M University and the Department of Visualization for sponsoring our research. Assistant Department Head and Laboratory Director Bill Jenks, as well as Soft Interaction Research Director Dr. Hwaryoung Seo, have helped us immeasurably. I would also like to thank my graduate student collaborators, Eunsun “Sunny” Chu, Anantha Natarajan, and Austin Payne, as well as the other branches of Soft Interaction responsible for Cyber Security VR and Creative Anatomy VR. Methodologically, I would like to thank Kert Gartner, the LIV development team, and Owlchemy Labs for their extensive documentation of mixed reality content creation and development. Lastly, I want to thank the amazing teams behind the programs we used: LIV Compositor, OBS Virtual Camera, Blackmagic Media Express, Adobe After Effects, and Owlchemy Labs’ “Job Simulator.”
References
John Donaldson ‘22 is a Visualization major from Rowlett, Texas. John works with the Soft Interaction Lab’s research team to further VR educational content. Currently, he is the student class representative for his year, the Media Chair for the TAMU Chillennium Game Jam, and a student employee for the Department of Visualization. John plans to continue research and class-related activities until he graduates.