CHERRY HILL, N.J. — Training troops in simulations that project virtual scenarios onto surrounding screens has become the norm in recent years. In the future, it could be possible for soldiers to carry their video games in their helmets.
The Army wants to shift to head-worn systems that will immerse soldiers in real-world environments wherever they may need to hone their skills, whether in their home bases or deployed overseas.
Recent advances in cameras, computer processing power and display technologies will enable the development of wearable augmented reality systems, technologists say. The devices may even enhance battlefield operations by providing troops with virtual information superimposed over the live view of their surroundings, experts predict.
Researchers at Lockheed Martin Corp.’s Advanced Technology Laboratories are exploring how people might best utilize such systems.
“We’re really focused on the interaction side of it,” said John Sausman, principal investigator for the project at ATL’s Informatics Lab.
The team is researching how much information can be displayed, how to present the data on the display and how users will interface with the wearable technologies.
As part of that effort, engineers have completed a prototype system that could be applied to the Army’s human terrain teams. These small groups, composed of military and civilian social scientists, are embedded with combat units. They venture out into the neighboring villages and towns to meet with the local population. They collect information that could help commanders improve security and address grievances in the area.
“We thought it would be an interesting application for augmented reality because you’re trying to interact with people and you don’t want to be looking at your handheld,” said Sausman.
Much of the information collected by human terrain teams is jotted down in notebooks or captured in audio recordings during meetings. But the information is not readily accessible to others. If the data — names, aliases, biometrics, social networks, etc. — were digitized and stored in databases accessible to facial recognition software, then it could be presented to team members unobtrusively during meetings.
For example, if a person wearing the system encountered two people talking on a street corner, facial recognition software might be able to identify the first person, but it might not recognize the second person. The system would alert the user, who could pull up the first person’s social network and skim through his five closest associates to identify the second person. Or the user could collect information from the unknown person to beef up the system’s database.
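The lookup-and-skim workflow described above can be sketched in a few lines of Python. Everything here is a hypothetical stand-in: the names, the confidence threshold, and the in-memory dictionary substituting for the real database are illustrative assumptions, not details of Lockheed Martin's system.

```python
from dataclasses import dataclass, field

@dataclass
class Person:
    """One record in the hypothetical identity database."""
    name: str
    associates: list = field(default_factory=list)  # closest contacts first

# Illustrative stand-in for the digitized human-terrain database.
DATABASE = {
    "person_a": Person("person_a",
                       associates=["assoc_1", "assoc_2", "assoc_3",
                                   "assoc_4", "assoc_5", "assoc_6"]),
}

def identify(face_id, confidence, threshold=0.8):
    """Return a record only when recognition confidence clears a threshold."""
    if confidence >= threshold and face_id in DATABASE:
        return DATABASE[face_id]
    return None  # unknown: the user would collect new information instead

def closest_associates(person, n=5):
    """The 'skim his five closest associates' step from the scenario."""
    return person.associates[:n]

known = identify("person_a", 0.92)      # recognized
unknown = identify("person_b", 0.35)    # below threshold: not recognized
if unknown is None and known is not None:
    # Candidates for identifying the unknown second person
    print(closest_associates(known))
```

Running the sketch prints the first person's five closest associates, the list a team member would skim to put a name to the unrecognized second face.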
In another scenario, the user might encounter a suspicious vehicle parked on the side of the road. The head-worn system might be able to identify the person sitting inside and also identify the vehicle based upon prior reports of roadside bomb events. It could tie the person behind the wheel to those reports and lead officials to make an arrest.
To demonstrate the concept, the company built a head-worn system from commercially available technologies. On the exterior of the unit, small cameras are situated where a person’s eyes would be. On the interior, small screens display the view from the cameras. The system is bulky to wear and could not be deployed in its current configuration, but the team is confident that the hardware will continue to mature and miniaturize in the marketplace. In the meantime, user interaction techniques need to be developed so that they keep pace with the hardware.
“This is a good opportunity where the hardware is lagging behind, because we don’t often get called in early enough,” said Polly Tremoulet, program manager for the user-centered interfaces group at the lab. “We’re trying to stay ahead of the curve in this particular instance, knowing the technology is coming. We can anticipate better form factors — lighter equipment, more ruggedized equipment — for the military user.”
Optical display hardware, which is similar to eyeglasses, and see-through augmented reality video hardware are rapidly progressing, Sausman said. The technology is becoming less obtrusive for everyday use. But determining exactly what can be displayed on the hardware, and where it can be read without impinging upon a person’s field of view, remains the challenge.
Early on, the team thought that displaying information about a person in an identification card-like layout in the upper left-hand corner of the field of view would be best. The display contained a mug shot of the person, his name, date of birth and a “web” illustration of his network of closest friends and associates. But researchers learned that the information was best delivered along the bottom of the display, similar to the way stock exchange and news tickers scroll along the bottom of TV news channels.
In a demonstration of the prototype, Sausman peered at two of his colleagues who held up paper signs so that the face detection software, enhanced by a simulated face recognition program, would identify them. Detecting a person’s face and correlating it to a particular identity are two different computer processes. Rather than invest in recognition software and incorporate it into the system, the team chose to simulate the process.
“We’re comfortable with simulating that. We know the technology is going to catch up,” said Tremoulet.
As Sausman looked at colleague Adam Gifford, the software detected Gifford’s “face” and displayed three photos of possible identities in order of confidence level. “I’m able to carry on a conversation with Adam and get this information,” said Sausman. A standard military icon blinked in the bottom right-hand corner to catch his attention.
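The "three photos in order of confidence" display amounts to a top-k ranking of candidate matches. A minimal sketch of that step, assuming the recognizer (real or simulated) yields a confidence score per candidate identity — the IDs and scores here are invented for illustration:

```python
def top_candidates(scores, k=3):
    """Rank candidate identities by recognition confidence, highest first.

    scores: dict mapping candidate ID -> confidence in [0, 1].
    Returns the k best (id, confidence) pairs, like the prototype's
    three-photo display.
    """
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

# Simulated recognizer output for one detected face.
scores = {"id_042": 0.91, "id_007": 0.55, "id_118": 0.73, "id_230": 0.12}
print(top_candidates(scores))
# Highest-confidence match listed first; the lowest-scoring
# candidate drops off, mirroring the three-photo display.
```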
The team is researching how the wearer will interact with the system to either input additional information or clear out the screen for a new scenario. Without a keyboard and mouse or a touch screen display, the user will have to interface with the system using different mechanisms. One option is gestures. The user could place his hands in view of the camera and signal the commands using different motions. He could also wear a glove that would pick up hand movements. Vocal commands are another option. A user could simply talk to the system.
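Whatever the input channel, both modalities ultimately map onto the same small command vocabulary. One way to sketch that idea — with an entirely hypothetical set of gestures, phrases, and command names, since the article does not specify any — is a simple dispatch table:

```python
# Hypothetical dispatch table: a recognized gesture and a spoken phrase
# can both resolve to the same system command.
COMMANDS = {
    ("gesture", "swipe_left"):  "clear_display",
    ("gesture", "thumbs_up"):   "confirm_identity",
    ("voice",   "clear"):       "clear_display",
    ("voice",   "save contact"): "store_record",
}

def dispatch(modality, token):
    """Translate a raw input event into a system command, or ignore it."""
    return COMMANDS.get((modality, token), "no_op")

# Saying "clear" and swiping left trigger the same command.
assert dispatch("voice", "clear") == dispatch("gesture", "swipe_left")
print(dispatch("gesture", "thumbs_up"))
```

Keeping the recognizers (camera, glove, microphone) separate from the command table means a new input device only needs to emit the same tokens, not reimplement the commands.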
“To use the tools available today, although they’re not perfect and not the exact hardware wanted in the end, we can create those information displays and interaction paradigms on what we have now so that we’re ahead of the hardware curve,” said Sausman. “Once applicable hardware is released, then we’re ready for that sort of situation.”