‘Evil Digital Twins’ Could Sow Chaos in Metaverse

By Mikayla Easley


The Pentagon and industry have turned to the modeling-and-simulation technique known as “digital twins” to support development of some of their most critical programs. But experts warn that the technology could also be used by adversaries for ransomware, phishing and even cyber warfare.

Digital twins use real-world data — including both physical and behavioral characteristics — to create a true-to-reality, simulated model of an environment, object or person in order to run scenarios and study the model. When paired with artificial intelligence, digital twins can run “what-if” scenarios and make predictions based on the available data.
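The mechanism is simple to sketch. The toy Python below (hypothetical names and numbers, not any real defense system) mirrors sensor readings into a twin's state and then runs a crude "what-if" projection from the observed trend:

```python
from dataclasses import dataclass, field

@dataclass
class EngineTwin:
    """Toy digital twin that mirrors an engine's temperature readings."""
    readings: list = field(default_factory=list)

    def ingest(self, temp_c: float) -> None:
        # Mirror a real-world sensor reading into the twin's state.
        self.readings.append(temp_c)

    def what_if(self, extra_load_c: float, steps: int) -> float:
        # Project forward using the average observed drift per reading,
        # plus a hypothetical added load -- a minimal "what-if" scenario.
        if len(self.readings) < 2:
            return self.readings[-1] if self.readings else 0.0
        drift = (self.readings[-1] - self.readings[0]) / (len(self.readings) - 1)
        return self.readings[-1] + steps * (drift + extra_load_c)

twin = EngineTwin()
for t in [70.0, 71.0, 72.0, 73.0]:
    twin.ingest(t)
print(twin.what_if(extra_load_c=0.5, steps=10))  # 73 + 10 * (1.0 + 0.5) = 88.0
```

Real programs replace the one-line drift estimate with physics models or machine learning, but the loop is the same: ingest real data, then ask the model hypothetical questions.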

The technology has been critical for a number of Defense Department programs — including the development of the B-21 Raider stealth bomber, upgrades to the Navy’s aging shipyards and the Pentagon-wide joint all-domain command and control initiative.

However, digital twins have sinister applications for adversaries looking to conduct attacks in cyberspace, said Jason Pittman, collegiate faculty member at the University of Maryland Global Campus in the School of Cybersecurity and Technology. The advent of “evil digital twins” used to conduct criminal activities in cyberspace will come in 2023, he predicted.

If harnessed by the Pentagon’s adversaries, the technology could be utilized to spread disinformation on the internet or create digital copies of people that can operate on behalf of the enemy, he explained.

“There’s no reason to suspect that you couldn’t have an evil version of me — the classic doppelganger in some sense — that then can go act as if it’s me in the virtual world that we live in,” Pittman said.

For example, an adversary could create an evil digital twin to inject disinformation into a user’s social media habits in order to shift their perceptions on a range of topics — from contentious political debates to news about conflicts around the world, he said.

Social media websites often use data collected from a user’s account to create a normal digital twin, then run algorithms against it to predict what content that user is most likely to click on in real life, Pittman explained.

“The idea of the evil digital twin then is to take that and turn it into something that, instead of giving me content … based on my last three clicks or my viewing habits over the last six months, you inject disinformation, propaganda or whatever you want to call it,” he said.
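The twist Pittman describes can be shown in a few lines. In this illustrative sketch (the function names, topics and catalog are invented, and real recommenders are far more sophisticated), the "evil" variant reuses the same preference model but tags injected items with the user's top predicted topic so they rank first:

```python
from collections import Counter

def build_feed(click_history, catalog, k=3):
    """Benign twin: rank catalog items by overlap with the user's click topics."""
    prefs = Counter(topic for _, topic in click_history)
    ranked = sorted(catalog, key=lambda item: prefs.get(item[1], 0), reverse=True)
    return [title for title, _ in ranked[:k]]

def evil_feed(click_history, catalog, payload, k=3):
    """Evil twin: same model, but planted items are labeled with the user's
    top predicted topic so the ranking pushes them to the front."""
    top_topic = Counter(t for _, t in click_history).most_common(1)[0][0]
    seeded = [(title, top_topic) for title in payload] + catalog
    return build_feed(click_history, seeded, k)

clicks = [("a", "sports"), ("b", "sports"), ("c", "politics")]
catalog = [("match recap", "sports"), ("recipe", "food"), ("debate", "politics")]
print(evil_feed(clicks, catalog, ["fabricated story"]))
# ['fabricated story', 'match recap', 'debate']
```

The point of the sketch is that the attacker changes nothing about the prediction logic itself; the personalization model the platform already built does the targeting for free.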

Organized crime groups and terrorist cells could harness evil digital twins for these tactics, Pittman said. The technology could be used by the groups to gather information and carry out their objectives — whether it’s to deploy ransomware attacks or distribute propaganda and misinformation, he said.

But in more extreme cases, adversaries could exploit more advanced iterations of evil digital twins. One example would be creating malicious avatars of someone that look and act exactly like them in virtual environments like the metaverse, Pittman said.

While the copy would look and act just like the original, an algorithm fueled by data is actually what is operating that evil digital twin, he added.

The Defense Department has a number of programs similar to the concept of the metaverse that are designed to develop hybrid-virtual environments for training scenarios. In these digital environments, the Pentagon could theoretically mock up its own evil digital twins modeled after an adversary in order to better train warfighters.

On the flip side, this means an adversary could do the exact same thing, Pittman said.

He pointed to GPS data from a soldier’s wearable sensor that is synced to the cloud during a training exercise in the physical world as an example of how an adversary could begin to create an evil digital twin.

“You can actually look at that on a map, and now you have the whole outline of the perimeter of the base,” he said. “Now I can create those twins on the base, moving around as they move. And then I can rig up evil digital twins within that to interact with that.”
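The outline Pittman describes falls out of very little data. As a minimal sketch (the coordinates are invented, and a real attacker would fit a hull rather than a box), an adversary with leaked wearable fixes could recover a base's footprint like this:

```python
def perimeter_from_tracks(points):
    """Derive a bounding box from leaked wearable GPS fixes, given as
    (latitude, longitude) pairs -- the perimeter outline described above."""
    lats = [lat for lat, _ in points]
    lons = [lon for _, lon in points]
    return (min(lats), min(lons)), (max(lats), max(lons))

# Hypothetical fixes synced to the cloud during a training run.
tracks = [(38.95, -77.45), (38.96, -77.44), (38.94, -77.46), (38.97, -77.43)]
print(perimeter_from_tracks(tracks))  # ((38.94, -77.46), (38.97, -77.43))
```

This is the same class of leak behind the 2018 reporting on fitness-tracker heat maps exposing military sites: positions that are individually harmless aggregate into a map.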

This would be a problem for the Defense Department. Because the evil digital twin could autonomously operate exactly like the person it is masquerading as, albeit in a malign way and with such acute specificity, it becomes difficult to differentiate who is real and who is not, Pittman explained.

“How will you ever know? How can you come to administer some type of test or instrument to understand that what is being presented is somehow verifiable as real,” he said. “But what the hell is real in that environment, anyway?”

One solution that could help the Defense Department protect against evil digital twins is a concept known as zero trust architecture, Pittman said. The framework requires all users to be authenticated and authorized for every digital interaction within a network, rather than being trusted once and thereafter.

“You can’t do anything about the twin — whether it’s good or bad,” Pittman said. “But if you just don’t extend trust to any twins, then at least maybe you can contain it.”
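Pittman's containment idea maps onto a simple pattern. In this minimal sketch (the key, users and actions are hypothetical; real zero trust stacks use identity providers, device posture checks and short-lived tokens), every single request re-proves identity, so a twin that merely looks and acts like the user is denied:

```python
import hashlib
import hmac

SECRET = b"per-session-key"  # hypothetical key issued by an identity provider

def sign(user: str, action: str) -> str:
    """Produce the proof-of-identity tag a legitimate client attaches."""
    return hmac.new(SECRET, f"{user}:{action}".encode(), hashlib.sha256).hexdigest()

def handle(user: str, action: str, tag: str) -> str:
    # Zero trust: each request is verified on its own; nothing is trusted
    # just because an earlier request from "the same" user succeeded.
    if not hmac.compare_digest(tag, sign(user, action)):
        return "denied"   # an evil twin without the key fails here
    return "allowed"

print(handle("alice", "read", sign("alice", "read")))  # allowed
print(handle("alice", "read", "forged-tag"))           # denied
```

The twin is never distinguished from the real user by appearance or behavior; it is contained because trust is simply never extended without fresh proof.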

The Pentagon is laying out a strategy to implement a zero trust framework by 2027. In November, the Office of the Chief Information Officer released “The DoD Zero Trust Strategy” outlining the technologies and culture changes needed to protect its cyber networks.
