The explosive growth of computer-based simulation and modeling
in civilian and military science has fueled the market for high-performance
supercomputers, which now stands at about $5 billion worldwide.
But the advanced capabilities achieved in personal computers in
recent years often have blurred the lines between what researchers
and technology developers considered to be “supercomputers”
and mainstream PCs.
In other words, yesterday’s supercomputers are today’s
desktop PCs. “We do not have a strict definition of an HPC
[high-performance computer] since it changes as technology evolves,”
according to Bill Gabor, who works at the Defense Department’s
High Performance Computing Modernization Office (HPCMO), in Arlington,
Va. Cray Henry, also from HPCMO, said any computer that costs more
than a million dollars is considered a supercomputer.
Silicon Graphics Inc. (SGI) described supercomputers as “a
class of computers that are recognized as delivering industry-leading
performance, in terms of computational abilities (both number of
processors and performance), bandwidth, memory capacity, storage
capacity and visualization.”
Supercomputers are about speed. Steve Conway, of Cray Inc., headquartered
in Seattle, explained the difference in terms of time: a supercomputer
can complete in eight hours calculations that would take a PC two to
five years to run. The first supercomputers, in the 1970s, performed
roughly 133 million calculations per second; today’s world record,
Conway said, is about one trillion per second. Whether a supercomputer
is needed depends on how much data must be processed and how quickly.
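For a rough sense of what those figures imply, the arithmetic can be sketched in a few lines; the numbers below are simply the ones quoted above, not measurements:

```python
# Back-of-the-envelope check of the figures cited above: an eight-hour
# supercomputer run versus a two-to-five-year PC run, and the jump from
# roughly 133 million to one trillion calculations per second.
HOURS_PER_YEAR = 365 * 24

for years in (2, 5):
    speedup = years * HOURS_PER_YEAR / 8
    print(f"{years} years vs. 8 hours -> roughly {speedup:,.0f}x faster")

cray_era_rate = 133e6   # calculations per second, 1970s-era machine
record_rate = 1e12      # the one-trillion-per-second figure cited above
print(f"133 million/s to 1 trillion/s is about {record_rate / cray_era_rate:,.0f}x")
```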
The field of modeling and simulation, meanwhile, has benefited
enormously from the wider availability of supercomputers.
The U.S. Army Tank-Automotive Command-Tank-Automotive Research,
Development and Engineering Center (TACOM-TARDEC) uses supercomputers
to test new ground vehicle models. John Schmuhl, of TARDEC, said
simulations are used to analyze designs, to study the interaction and
integration of crews with vehicle displays and controls, and to conduct
accelerated component durability testing.
In general, Schmuhl said, “the purpose is to support technology
upgrades for aging ground vehicle systems and, in particular, the
high-priority Army Interim Brigade Combat Team and Future Combat
Systems programs.” A recent example he cited was the joint
Army/Marine Corps Medium Tactical Truck Remanufacture Program and
its follow-on contract activity, which resulted in contracts exceeding
one billion dollars.
Other vehicle programs that have benefited from simulation and
modeling include the Abrams main battle tank’s M1A2 SEP commander’s
station, the M2 driver’s station, and the Humvee truck driver’s
station on TACOM-TARDEC’s ride motion and crew station/turret simulators.
The vehicles and the drivers are subjected to various on-road and
off-road tests. Scenarios, with complex terrain features, are run
while mobility and dynamics performance are observed and analyzed.
Schmuhl explained that in real-time simulations, the crew of the
vehicle can be given control and influence over the simulation,
so they are not “merely going along for the ride.” Computer-generated
imagery, realistic vehicle sounds and communications are added to
the simulation to give soldiers more control. Virtual battles can even
be created between manned simulators and computer-generated forces.
The realistic nature of the simulations is made possible by the
new computers and the amount of data they can handle, experts said.
Things such as vehicle weight, center of gravity, moment of inertia
effects, propulsion systems performance, weapon stabilization and
control and weapon firing characteristics can be replicated accurately
when simulations are run for tracked or wheeled vehicles. Even variables
such as mobility, ride quality and maneuverability—which are
different between tracked and wheeled vehicles—can be modeled.
“In the end, the interaction of all of these variables is
observed, varied and controlled in the simulations,” said Schmuhl.
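As a purely illustrative sketch of the kind of model being described, the snippet below time-steps a one-degree-of-freedom ride-quality simulation over a bumpy terrain profile. The parameters and the model itself are notional, not TARDEC’s actual software:

```python
# Minimal sketch (not TARDEC's actual models): a one-degree-of-freedom
# ride model showing how parameters the article mentions (mass,
# suspension properties, terrain, speed) feed a time-stepped simulation.
import math

def simulate_ride(mass_kg=9_000.0,      # sprung mass of a notional vehicle
                  k_susp=200_000.0,     # suspension stiffness, N/m
                  c_susp=20_000.0,      # suspension damping, N*s/m
                  speed_mps=10.0,       # forward speed over the terrain
                  bump_amp_m=0.05,      # sinusoidal terrain amplitude
                  bump_wavelength_m=5.0,
                  dt=0.001, t_end=5.0):
    """Integrate heave (vertical) motion with explicit Euler and return
    the peak body acceleration, a crude stand-in for 'ride quality'."""
    z, zdot = 0.0, 0.0                  # body displacement and velocity
    peak_accel = 0.0
    for i in range(int(t_end / dt)):
        t = i * dt
        # Terrain height and its rate of change seen by the wheel.
        phase = 2.0 * math.pi * speed_mps * t / bump_wavelength_m
        road = bump_amp_m * math.sin(phase)
        road_dot = bump_amp_m * math.cos(phase) * 2.0 * math.pi * speed_mps / bump_wavelength_m
        # Spring and damper force between the body and the road input.
        force = -k_susp * (z - road) - c_susp * (zdot - road_dot)
        zddot = force / mass_kg
        z, zdot = z + zdot * dt, zdot + zddot * dt
        peak_accel = max(peak_accel, abs(zddot))
    return peak_accel

if __name__ == "__main__":
    # Sweep speed to see how ride harshness changes, the kind of variable
    # the article says engineers observe, vary and control.
    for v in (5.0, 10.0, 20.0):
        print(f"{v:4.1f} m/s -> peak body accel {simulate_ride(speed_mps=v):6.2f} m/s^2")
```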
With new supercomputers, the most notable enhancement is the ability
to perform more complex, higher-fidelity engineering simulations
over longer time periods in real or near-real time.
Thousands of simulations, examining countless combinations of variables,
can be run. The architecture of the computers also is evolving to
support more open software methodologies, which makes it easier
to program with complex models. High speed and high bandwidth are
critical for integrated analytical and physical modeling and simulation,
especially in the field of interactive, immersive graphics.
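How thousands of such runs might be spread across the processors of a large machine can be pictured with a sketch like the following; the evaluate() function, its variables and its scoring are placeholders invented for illustration:

```python
# Illustrative only: farming "thousands of simulations, examining
# countless combinations of variables" out across available processors.
import itertools
from concurrent.futures import ProcessPoolExecutor

def evaluate(params):
    speed, payload, tire_pressure = params
    # Stand-in for an expensive simulation run; returns a fake "score".
    return params, speed * 0.1 + payload * 0.001 - tire_pressure * 0.05

def sweep():
    speeds = range(5, 31, 5)            # m/s
    payloads = range(0, 5001, 1000)     # kg
    pressures = range(20, 41, 5)        # psi
    combos = list(itertools.product(speeds, payloads, pressures))
    with ProcessPoolExecutor() as pool:          # one worker per CPU by default
        results = list(pool.map(evaluate, combos, chunksize=16))
    best = max(results, key=lambda r: r[1])      # best-scoring combination
    return len(combos), best

if __name__ == "__main__":
    n, (params, score) = sweep()
    print(f"{n} runs; best combination {params} -> score {score:.2f}")
```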
Distributed Mission Training
Another Defense Department project that requires a great deal of
computer power is the so-called Distributed Mission Training, or
DMT. DMT links pilots in flight simulators with virtual forces,
using imagery from actual tanks or planes in the field. Trainees
in the simulator can interact with the crew of the live vehicle.
John Burwell, director of marketing for SGI Federal, in Silver
Spring, Md., explained that simulators, located anywhere, can be
networked so that pilots and drivers, as well as command and control
crews, can benefit from the training. Burwell described a recent
DMT demonstration by the U.S. Air Force. An F-16 fighter aircraft
simulator was networked with a command post, which was receiving
satellite information about the surrounding area, enabling command
decisions to be made. At the same time, an A-10 attack plane simulator
was linked to a U.K. Tornado fighter, so two scenarios were running
simultaneously.
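The networking behind such an exercise can be imagined with a toy example like the one below, which broadcasts and reads back a made-up entity-state packet over UDP; the packet format, address and port are invented, and this is not the actual protocol stack used in DMT:

```python
# Toy illustration (not the real DMT protocols): one simulator broadcasts
# an entity-state update so other networked simulators can render the
# same aircraft or vehicle.
import socket
import struct
import time

STATE_FMT = "!Idddfff"   # entity id, x/y/z position (m), roll/pitch/yaw (rad)

def send_state(sock, addr, entity_id, pos, attitude):
    sock.sendto(struct.pack(STATE_FMT, entity_id, *pos, *attitude), addr)

def receive_state(sock):
    data, _ = sock.recvfrom(1024)
    entity_id, x, y, z, roll, pitch, yaw = struct.unpack(STATE_FMT, data)
    return entity_id, (x, y, z), (roll, pitch, yaw)

if __name__ == "__main__":
    addr = ("127.0.0.1", 30001)          # placeholder address and port
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(addr)
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for step in range(3):                # three updates from a pretend fighter
        send_state(tx, addr, entity_id=16,
                   pos=(1000.0 + 50 * step, 2000.0, 3000.0),
                   attitude=(0.0, 0.05, 1.57))
        print(receive_state(rx))
        time.sleep(0.1)
```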
Outside the military, the civilian space program is one of the
most prolific users of modeling and simulation technology.
Bill Feiereisen, chief of the Numerical Aerospace Simulation Systems
Division of NASA Ames Research Center, at Moffett Field, Calif., related some
of the most recent work by the agency. One of the main projects
is the Reusable Launch Vehicle Program. The current space shuttle
first flew in 1981, and had been in the design stage for a decade.
NASA is looking for a replacement.
“In that period of time, the design processes that were used
really did not involve large simulations, because we really didn’t
have that capability,” Feiereisen explained in an interview.
“We have this design tool now that allows us to simulate
not only the flow about a shuttle of the future, but it also allows
us to simulate the chemical processes that happen during re-entry,”
said Feiereisen. The friction of the atmosphere during re-entry
generates enough heat to rip atmospheric molecules apart, which
in turn causes a chemical reaction, he said. “If we can understand
all those kinds of things, then it gives us the ability to design
materials to be able to withstand the heat. It allows us to be able
to plan trajectories for re-entry that are more efficient.”
The goal is to understand the stresses placed on a vehicle and
design a lighter, cheaper structure that can take more pounds into
orbit while using less fuel. The newer vehicles will be similar
in look to the current shuttle, but with more modern structures
and booster rockets and a more efficient thermal protection system.
“It [the computer] allows us to do all kinds of things that
we really had to do with ad hoc engineering methods before,” he said.
Various design problems also can be caught more quickly. Feiereisen
mentioned one instance where during a series of studies on one of
the next-generation shuttle vehicles, the X-37, the designers were
able to run a series of calculations on the supercomputers before
a design review. They discovered that the wing tips were too small.
“If they had actually flown that test vehicle in that configuration,
the wing tips would have burned off.
“The ability to simulate saved them a possible mistake,”
said Feiereisen. “Now that’s not to say it wouldn’t
have been caught some other way, but in this particular case, just
because we had the power of that computer available, it was caught.”
At the Ames Research Center today, a 1,024-processor SGI Origin 3000
is used to do all the numerical calculations for
NASA, said Bob Pencek, director of systems engineering for SGI.
Improvements in weather simulations are important in the space program
because changes in weather conditions affect launch schedules. Computer
models are used to anticipate hurricanes around Florida and whether
it would be safe to have the shuttle exposed, said Feiereisen. The
same techniques can be used to study the Earth’s climate over
the long term. NASA explores such questions as: Is it getting hotter?
Are we really losing all the forests? Are the oceans warming up?
Are we losing polar ice caps?
Advanced computers also have important biological uses, Feiereisen
said in a briefing to reporters. Doctors and researchers are developing
a so-called ventricular assist device, a pump worn externally
that assists the heart muscle. The device was found to destroy red
blood cells faster than the body can reproduce them. Simulations
were run to test the pump and its effect on blood cell counts. Simulations
also are being used to improve doctors’ ability to perform
hypothetical surgeries and plan for the best method of operation
before the patient is involved.
Feiereisen also explained that the more powerful computers available
today can run simulations on a molecular level. For instance, at
NASA Ames, research is being done in astrobiology, the study of
the origins of life. Scientists can watch simulations of the interaction
of molecules and study the possible causes of life.
However, even with the speed and memory available, the computers
are still orders of magnitude away from what is needed. Being able
to study reactions and activities on a molecular level could lead
to future advances in the emerging field of nanotechnology. Imagine
tiny machines with tiny computers that are capable of doing things
such as going through the circulatory system and sucking out cholesterol.
Another Ames project is Future Flight Central (FFC), located
at Moffett Field, Calif. It is a mock-up of an airport control tower.
According to Cedric Walker, simulation software manager for FFC,
“the main objective is to look at ways to improve the efficiency
and productivity of surface operations at any airport and also improve
the safety.” For a fee ranging from $50,000 to $400,000, airports
can use the facility to research concepts in a safe environment
without disrupting actual airport operations or endangering lives,
explained Walker. For instance, the San Francisco Airport just finished
using FFC to research the best placement for a new control tower
to get the “optimum visibility requirements.”
The first project run in the facility, Walker said, was a Defense
Department and Boeing study of the UCAV, or unmanned combat air
vehicle. The UCAV is being developed as a potential substitute for
conventional human-operated fighters and bombers.
Los Angeles International Airport will be using the facility to
evaluate where a new tower could best be positioned. With the FFC,
Pencek said, the simulation can be changed to provide varying views
from different positions. Variables such as weather and night or
day visibility conditions can be adjusted.
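A tower-siting study of this kind ultimately comes down to line-of-sight questions, which a small sketch can illustrate. The geometry, obstacle dimensions and scoring below are invented for illustration and are not FFC’s software:

```python
# Hypothetical sketch, not NASA's FFC software: score candidate control-tower
# sites by how many airport-surface points they can see over obstructions.
from math import hypot

def visible(tower, tower_height, target, obstacles):
    """tower/target are (x, y) ground points; obstacles are (x, y, height).
    An obstacle blocks the view if it lies near the sight line and rises
    above the line drawn from the tower cab down to the target point."""
    tx, ty = tower
    gx, gy = target
    run = hypot(gx - tx, gy - ty)
    for ox, oy, oh in obstacles:
        # Project the obstacle onto the tower->target line.
        t = ((ox - tx) * (gx - tx) + (oy - ty) * (gy - ty)) / (run * run)
        if not 0.0 < t < 1.0:
            continue                      # obstacle is not between them
        off_line = hypot(ox - (tx + t * (gx - tx)), oy - (ty + t * (gy - ty)))
        if off_line > 10.0:               # assume ~10 m effective obstacle width
            continue
        line_height = tower_height * (1.0 - t)   # sight line drops to 0 at target
        if oh >= line_height:
            return False
    return True

def score_site(tower, tower_height, targets, obstacles):
    return sum(visible(tower, tower_height, p, obstacles) for p in targets)

if __name__ == "__main__":
    runway_points = [(x, 0.0) for x in range(0, 3001, 250)]    # a 3 km runway
    hangars = [(800.0, 40.0, 25.0), (1500.0, -60.0, 30.0)]     # x, y, height (m)
    for site in [(-200.0, 300.0), (1500.0, 400.0), (3200.0, 300.0)]:
        print(site, score_site(site, 60.0, runway_points, hangars))
```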