TRAINING AND SIMULATION
I/ITSEC NEWS: Video Technology Brings 3D Characters to Simulators
Laura Heckmann

ORLANDO — Arizona-based simulation company VirTra introduced an extended reality training headset that uses volumetric video technology to bring realistic 3D characters into training simulations.
The extended reality headset, called the V-XR solution, made its debut on the exhibit floor of the National Training and Simulation Association's Interservice/Industry Training, Simulation and Education Conference, held Nov. 27 to Dec. 1 in Orlando.
A blend of the physical and digital worlds, extended reality is an umbrella term encompassing virtual, augmented and mixed reality.
The V-XR headset draws on a limitless library of constructed training scenarios, using technology that can replicate facial expressions and the folds of clothing and generate responsive, artificial intelligence-powered actions from its virtual characters.
Typically, virtual reality goggles rely on computer-generated imagery to construct characters, which cannot reproduce the movements and facial expressions of real people. The V-XR system replicates textures, motion and human behavior beyond the capabilities of traditional motion capture and CGI, letting trainees read nuances in facial expression and demeanor to pick up on agitation, deception or more pronounced body language.
Inside the simulation, one scenario depicted an interaction with a translator, designed to give the trainee facial cues for determining whether they were being deceived, while another produced a 3D version of the company's chief executive officer, John Givens, inside a suicide de-escalation simulation. The 3D character responded to the user's actions, turning its head and following with its eyes as the user circled it on the exhibit floor.
Another scenario ended with a 3D character aggressively approaching the trainee and reacting to an arm extended into its virtual chest. Currently, the technology offers a series of built-in paths the character can follow based on the user's actions. The company hopes to eventually use voice recognition and artificial intelligence to further refine how the characters' reactions branch.
The technology used to create the lifelike virtual characters comes from VirTra's volumetric video capture studio in Phoenix, Arizona. The studio's interior, a large green sphere, uses 58 specialized, AI-assisted cameras in a 360-degree arrangement to capture 3D details of human subjects, which can then be accessed from the V-XR headset.
The studio uses specialized soundproofing to create clean audio for dialogue and advanced video processing to handle the simultaneous video feeds. "The software that we have now, and AI tech — it cuts out everything around the character perfectly," Givens said in an interview at the show.
Within the studio, any character — from law enforcement to military — can be filmed with human actors and dropped into a customized scenario, Miranda Fuller, vice president of marketing at VirTra, added.
The headset's law enforcement version is set to launch April 1, but the military side remains intentionally incomplete, Givens said. The customizable nature of the training scenarios means that if users can imagine it, VirTra can film it, but the company needs that imagination to decide what direction to take its military version, he said. I/ITSEC was an opportunity to glean scenario ideas from military personnel.
"We have a few ideas on how we're gonna apply it, but that's why we brought it to the show," Givens said. Scenarios already presented included training a sailor to operate a console and lower a skiff into the water, watching how close it came to the edge, as well as maintenance scenarios on an aircraft.
Typically, the military gets stuck in top-down design, trying to build programs with requirements already fixed and no room for flexibility, he said. VirTra's baseline virtual environment can be used "for all kinds of training," he said. "Take any checklist on an aircraft, take satellite repositioning, it doesn't matter. All of that can be done in that virtual environment."
People tend to latch on to one thing, he said. VirTra’s volumetric technology can be applied to a “multitude” of training applications.
With the rapid pace of changing technology, “the biggest thing in any of these simulators is ‘content is king,’” Givens said. “It's only as good as your content. And so the reason why it's worked so well is because of the content.”
The content created within the VirTra studio can then be used on headsets, dropped into simulators and accessed on a phone, he said, setting it apart from typical gaming engines.
“So the reality is, you really do need it where the soldier is at the moment, and the training application that they need, you want to get it to them where they are. And you don't have to travel to a simulation center or travel to the headquarters.”
All users need is a headset and a vision for their training scenario, he added.
Topics: Training and Simulation