At the Defense Department, every three or four years, we seem to
have this compelling need to reform the acquisition process and
to seek innovative approaches for reducing the time and effort required
to field new military systems and capabilities.
The reality today is that, after decades of presidential commissions,
Defense Science Board task forces, etc., we still have major acquisition
programs taking 15 to 20 years—and costing an arm and a leg—to
get through the process.
Just look at the track record of some of our current, high-visibility programs.
The Comanche program was born in 1981, when the Army’s budget
first included the new LHX program, planned to replace the UH-1 utility
and AH-1 Cobra attack helicopters. Twenty-one years and over five
billion dollars later, what do we have to show for it? We’re
still seven or more years away from fielding the Comanche.
The Army/Marine Corps JVX program also started in 1981. Again,
more than 20 years and at least 11 billion dollars later, we have
a V-22 Osprey program that is still several years from deployment.
And our forces are still without a replacement for the Marine Corps
fleet of CH-46s.
In a similar vein, I recall the debate in the early- to mid-1980s
about the scope and requirements for the Air Force’s Advanced
Tactical Fighter program. More than 15 years and 27 billion dollars
later, we’re still several years away from full operational
capability for the F-22.
While I’ve obviously singled out three high-visibility programs—all
aircraft developments undertaken in the 1980s—I dare say that
you will find similar histories for a whole host of our cutting-edge
development programs over the past 20 or so years.
So what has gone wrong? A Defense Science Board panel in 1990 found
that the root cause of the program stretch-outs and cost increases
was the lack of appreciation of the technical challenges faced by
the programs at their outsets. Program after program had entered
full-scale development before it was ready.
When these technical problems surfaced, primarily in development
testing, the stretch-out and cost growth cycle began. We found that
the messenger of bad tidings, most often the test and evaluation
(T&E) community, was then tarred with responsibility for the
resulting delays and cost growth.
Clearly, we have failed, time and again, to do our homework early
on, or to make the up-front investments required for an informed
understanding of the technical and cost risks inherent in a program
before we launched off into full-scale development and procurement.
In essence, we have been “rushing to failure.”
However, I take exception to the views of those who continue to
depict the T&E community as a major roadblock to
acquisition reform. At the same time, the T&E community must remain
flexible and ready to adjust rapidly to continuing efforts
to streamline the acquisition process.
We, the testing community, must seriously consider restructuring
ourselves, if necessary, as well as our thinking, to better meet
the new challenges.
Among the major new initiatives is capabilities-based acquisition.
The idea here is a continuous process of design, development and
testing of a new concept or system until we demonstrate and validate
a level of capability deemed worth considering for procurement and
deployment.
One of the features of this approach is that, up to this point,
there are no hard and fast requirements, threat-based or otherwise,
against which to measure the operational effectiveness or suitability
of the system. This approach has been established as the acquisition
strategy for the programs that fall under the newly established
Missile Defense Agency; other programs are considering it as well.
How all this will work in detail is still a little murky. For example,
I can imagine the difficulty for industry, working on big projects
without a clear specification of what might be produced in the end.
Manufacturing planning may become a big problem, requiring innovative
solutions.
For us in the testing community, one of the more obvious approaches
to capabilities-based acquisition is to move further away from the
so-called pass/fail mentality to one of providing independent assessments
of the capabilities (and limitations) of the system as tested to date.
We won’t be making judgments as to effectiveness or suitability
against requirements, but rather presenting our best judgment as
to the capability demonstrated to date in whatever environments—open-air
testing, hardware-in-the-loop, or human-in-the-loop—to which the
system has been subjected.
Other initiatives being used today include spiral development and
block upgrades. We have quite a bit of experience with such approaches,
particularly in testing software-intensive systems. Here, we will
plan our T&E strategies to assess incremental improvements in
capabilities as opposed to using the full-up, or ultimate, system
requirements spelled out in an operational requirements document
(ORD) as a benchmark. At the least, our assessments will consider
whether each spiral or block provides a measurable improvement in
military capability over its predecessor.
Undoubtedly, the biggest financial commitment by a program in this
context will be to field the first spiral or block I. At a minimum,
block I should demonstrate that it does not represent a decrease
in military capability over legacy systems. If new functionality
is added in a spiral or block, we will need to carry out some level
of regression testing. The new functionality—if it is to be
worth the disruption to the force by requiring retraining, additional
training or new operational concepts—ought to represent a
significant improvement in military capability.
In spiral developments, we may need a formal feedback mechanism—spiral
reporting, so to speak—to ensure that problems or deficiencies
identified in T&E for each spiral are addressed and corrected
by the program office.
Unfortunately, I am concerned that our T&E infrastructure is
not in the shape needed to meet the challenges of the future.
Program delays have tended to ease the burden faced by the test
ranges. Who knows what would happen if all the programs that claimed
to be ready for testing in 2002 actually showed up for testing?
If the latest acquisition initiatives deliver what their proponents
hope for, then a greater fraction of programs will actually show up
ready for testing.
In this respect, I fear the T&E community might not be prepared
for success in acquisition reform.
Thomas P. Christie is the director of operational test and evaluation
at the Office of the Secretary of Defense. This article was adapted
from a February 26 keynote speech to the NDIA Test and Evaluation
National Conference, in Savannah, Georgia. (The complete, unedited
speech can be found online at www.ndia.org.)