Vast columns of Soviet tanks, troops, jets, and ships haunted American defense leaders in the 1970s and 1980s. They couldn't hope to match Moscow's growing Cold War numerical advantages, so they had to find another way to deter the Soviet Union.
In the 1950s, the US had resorted to a lopsidedly large nuclear force to offset the USSR's overwhelming conventional capability. As the Soviets achieved rough nuclear parity in the early 1970s, however, the utility of the nuclear option diminished. The conventional mismatch again raised the specter that the West wouldn't be able to hold Soviet forces at bay in Europe or elsewhere.
Thus arose what became known as the "Second Offset." Though the US couldn't afford to match the Soviets tank for tank, it could field smaller numbers of extremely capable, high-quality systems, backed by leap-ahead technologies and associated operational concepts. It was quality vs. quantity.
This approach didn’t headline speeches. Epic debates on US and Soviet nuclear strategy usually overshadowed it, and harvesting the gains of the Second Offset took the better part of 20 years.
Yet this quiet approach was a tour de force that bridged the Nixon, Ford, Carter, and Reagan presidencies and ultimately fueled the precision targeting revolution of the 1990s. According to Jimmy Carter's Secretary of Defense, Harold Brown, some of the Second Offset's deepest roots lay with the Air Force.
The need for the Second Offset began to sharpen with the Soviet Union's deployment of the fearsome new SS-19 nuclear missile, carrying multiple independently targetable re-entry vehicles, or MIRVs. American leaders realized that Soviet strategic nuclear parity, or even potential superiority, might create a window of vulnerability, giving Moscow free rein in international politics at US expense.
The top spokesmen for this theory were Eugene V. Rostow and Paul H. Nitze. They formed the Committee on the Present Danger in 1976. “If we continue to drift, we shall become second best to the Soviet Union in overall military strength,” they warned. “Then we could find ourselves isolated in a hostile world, facing the unremitting pressures of Soviet policy backed by an overwhelming preponderance of power.”
In 1976 the Soviets deployed their first mobile theater nuclear missile, the SS-20. The tipping point came in 1978, when the USSR's inventory of 25,393 warheads for the first time topped the US inventory of 24,243. The Soviets had added more than 8,000 warheads since 1974. The fear was that if the Soviets had nuclear supremacy, they might just be willing to risk a conventional push into NATO.
“Soviet military leaders in their doctrinal writings expressed the belief that they could win a blitzkrieg victory in Europe,” recalled Brown in his book Star Spangled Security. Brown served as Secretary of the Air Force from 1965 to 1969 and Secretary of Defense from 1977 to 1981.
Improving NATO’s conventional forces with superior firepower to disrupt a ground attack became a top priority. Specifically, that meant developing forces able to find, fix, and destroy the forward line of Soviet troops while striking follow-on echelons as they attempted a thrust into West Germany.
The offset strategy sought advanced technologies for precision attack in order for NATO to whittle down superior numbers of Soviet tanks and other conventional forces to battle-manageable levels.
“We do not plan our theater nuclear forces to defeat, by themselves, a determined Soviet attack in Europe, and we rely mainly on conventional forces to deter conventional attack,” Brown told Congress in 1980. “As one example, we cannot permit a situation in which the SS-20 and Backfire [bomber] have the ability to disrupt and destroy the formation and movement of our operational reserves, while we cannot threaten comparable Soviet forces.”
To threaten those Soviet forces, the US needed the ability to rapidly and precisely attack Soviet counterair and ground force targets.
Airmen had been on this quest for over a decade. Brown credited USAF's 1963 Project Forecast, directed by Gen. Bernard A. Schriever, as the genesis of precision strike. One of Schriever's top recommendations was to concentrate on achieving zero circular error probable, or CEP, the radius around an aimpoint within which half of a weapon's shots are expected to fall.
Early ICBMs like Minuteman I had a CEP of 1.3 to 1.7 miles, as cited by Donald A. MacKenzie in his book Inventing Accuracy: A Historical Sociology of Nuclear Missile Guidance. ICBMs could therefore strike enemy cities under a strategy of mutually assured destruction, but a valid counterforce strategy depended on accuracies good enough to hit Soviet military targets directly.
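The arithmetic behind the push toward zero CEP can be sketched with the standard single-shot damage approximation; the relationships are textbook targeting math, and the framing here is illustrative rather than drawn from Project Forecast documents:

P_k = 1 - 0.5^{(R/CEP)^2}, with lethal radius R ∝ Y^{1/3}

Here P_k is the chance that one warhead of yield Y destroys a point target, and R is its lethal radius. Because R grows only with the cube root of yield, holding P_k constant means required yield scales with the cube of CEP: halving the miss distance buys the same effect as an eightfold increase in yield. Against discrete military targets, accuracy, not megatonnage, was the affordable path.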
“From that idea flowed generations of increasingly accurate weapons called precision guided munitions,” Brown wrote in his 2012 memoir.
It wasn’t quite that simple, of course. Research had to switch from improving floating gyros and other elements of inertial navigation to harnessing the power of electro-optics, lasers, and ultimately, global positioning.
By the early 1970s, the airmen's quest had laid a strong foundation for precision attack. One milestone was the 1972 destruction of the Thanh Hoa Bridge in North Vietnam using laser guided bombs. The success of that strike, which followed 871 unsuccessful attacks, proved the value of laser targeting.
Seeking the Holy Grail
In those early days of aided precision, F-4 Phantoms used electro-optical guided bombs, with a TV camera on the bomb transmitting a picture to the weapon systems officer in the aircraft. The WSO adjusted contrast to pick out the target, then transmitted the selection to the bomb, which flew itself to impact. Laser guided bombs went one better: The bomb homed on low-power laser energy reflected from the target, which was illuminated by a designator pod carried under a fighter and operated by the crew. Both systems worked well, provided visibility was good.
Offsetting the Soviet conventional advantage would require much more, though. The "Holy Grail" was a way to hit Soviet tanks on the move, especially in rear echelon areas. Ideally, it would all work at night and in bad weather, too.
In 1973, the Advanced Research Projects Agency, ARPA, launched the Long-Range Research and Development Planning Program “to provide the President and the joint force with better tools to respond to a Warsaw Pact attack,” recounted Deputy Secretary of Defense Robert O. Work in a January 2015 speech.
The offset coalesced around an operational concept.
Step One was accurately tracking moving tanks and other mechanized vehicles.
Step Two was developing munitions to hit the small targets precisely.
Step Three concentrated on ways to deliver munitions: either via ground launch or from aircraft. Accordingly, those aircraft needed standoff missiles or a way to penetrate close to the target—especially important against moving armor.
“The objective of our precision guided weapon systems is to give us the following capabilities: to be able to see all high value targets on the battlefield at any time; to be able to make a direct hit on any target we can see, and to be able to destroy any target we can hit,” testified William J. Perry, undersecretary of defense for research and engineering (also called DDR&E), in 1978.
This capability would emerge only after careful work on fusing systems, a core attribute of the offset. The Pave Tack pod, built by Ford Aerospace, illustrated the maturation of precision. It fused several technologies: a forward-looking infrared sensor, a laser rangefinder, and a laser designator.
How would the Pentagon focus its research efforts? Perry’s role as DDR&E was crucial. The offset emerged at a time when direction, management, and funding were highly concentrated in that post, created by President Dwight D. Eisenhower. In his memoir Waging Peace, Eisenhower said legislation passed in 1958 set up the job for a “nationally recognized leader in science and technology” who would advise the Secretary of Defense and “supervise all research and engineering activities in the department.”
Focus on the Battlefield
DDR&E was at its peak power by the early 1970s. For example, ARPA reported directly to DDR&E. Brown, John S. Foster Jr., Malcolm R. Currie, and Perry held the post in succession from 1961 to 1981. Consistent leadership of research and development efforts by astute scientists and engineers kept work on track even as Administrations changed.
Another ingredient for success may have been the comparatively low-key approach. The original offset strategy was by no means a dominant part of the strategic dialogue of the mid-1970s and 1980s, as academics and agitators alike spent far more energy on détente, arms control, and the perils of nuclear parity. Nuclear weapons strategy overwhelmed all else and typically relegated debates on the offset strategy’s conventional force improvements to the realm of congressional testimony. In fact, the offset strategy proceeded without much countervailing debate—at least until some of the programs fed by it moved into the procurement phase.
The most lasting cohesion came from focusing on the battlefield. The centerpiece of the offset was not any one technology in particular. It was an operational concept for precision: how to see and target Soviet ground forces and debilitate them quickly enough to prevent them from overrunning Europe. That operational imperative for precision drove forward through the ups and downs of research and development. Programs might start with one intent, then go on to deliver real capability in another, next-generation application.
A case in point was Assault Breaker. This concept posited standoff weapons attacking moving, rear echelon armor massed deep behind enemy lines. According to a 1981 General Accounting Office report, components included: airborne ground moving target indicator radar; missiles with submunitions for airborne or surface launch; and anti-armor self-guided munitions. Topping it all off was a comprehensive communications, command, and control network. The program sought a "uniquely high rate of kill at a much smaller risk and cost than present tactics permit," summarized GAO.
The offset strategy also required aircraft to deliver weapons both in direct attack and at standoff range.
Medium- to high-technology aircraft were among the biggest programs. One was known by the code name Tacit Blue (and by testers as "The Whale"). Highly classified at the time, this rounded aircraft was designed to loiter over the battle area, detecting moving targets with radar while protected by its stealthy shape. Tacit Blue was no mere model: The craft weighed in at 30,000 pounds and completed 135 test flights before the program ended in 1985.
Though no operational version of Tacit Blue resulted from the prototyping effort, it spun off stealth technology that found its way into the B-2 bomber, while the radar became the centerpiece of the E-8 JSTARS ground surveillance aircraft.
Assault Breaker was a canonical offset program in that it spawned much interesting research and experimentation. The Army's Corps Support Weapon System was another spinoff. In CSWS, USAF's Pave Mover targeting radar on an F-111 aircraft would spot a cluster of Soviet tanks and provide downlink guidance to a ground station, which would then launch missiles as the Pave Mover kept track. The missiles would dispense wide-area anti-armor submunitions.
Ultimately the offset strategy depended on investment in major programs to deliver capability to the combatant. One of Brown's favorites was the Airborne Warning and Control System, or AWACS. Brown accelerated the program as the Carter Administration began, and the purchase of E-3 AWACS aircraft by NATO "sent a signal to the Soviet Union," he observed. AWACS made NATO "more useful not only militarily but also politically, because the planes showed the Soviet Union that the United States and NATO had become more integrated," added Brown.
The thinking behind the offset strategy was of course a spur to stealth programs such as the F-117 and the B-2. The Soviets’ vast investment in air defense radars could be rendered obsolete by aircraft whose radar signature was so sharply attenuated that they could fly undetected between the radars.
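The reason a smaller signature translates into gaps in coverage follows from the standard radar range equation; the numbers below are illustrative assumptions, not the actual signatures of the F-117 or B-2:

R_detect ∝ σ^{1/4}

Detection range grows only with the fourth root of radar cross section σ. Cut an aircraft's cross section by a factor of 10,000, say from ten square meters to a thousandth of one, and each radar's detection range against it shrinks tenfold, turning an overlapping air defense belt into a field of isolated circles a mission planner can route between.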
Offset strategy programs kicked into high gear under President Ronald Reagan, who took office in January 1981 primed to rebuild US military power.
The situation was worse than the new Administration had thought.
According to the book Reagan's Ruling Class, Defense Secretary Caspar W. Weinberger told reporters after a short time in office that "the principal shock was to find out, through daily briefings, the extent and the size of the Soviet buildup and the rapidity with which it had taken place—in all areas, land, sea, and air."
“There was the window of vulnerability, which the Administration at that time felt very strongly about being able to close,” said retired Air Force Lt. Gen. Richard M. Scofield, who spent much of the Reagan years leading the F-117 and then the B-2 program.
The Reagan Administration would also move offset technologies from Pentagon research portfolios to major service programs. The Administration provided funding and continued focus, and through the 1980s, a new wave of capabilities specific to the tactical air forces came into being.
The change was remarkable. As late as 1978, Gen. David C. Jones despaired of USAF capabilities to hit moving targets—or any targets—at night and in Europe’s poor weather.
“It would be prohibitively expensive for us to build all, or even most, of our aircraft to operate all night or in bad weather,” he said that year.
By 1983, however, USAF had several programs under development that would yield just that capability. An infrared seeker for the Maverick anti-tank missile was one. The LANTIRN pod system, combining navigation and targeting in low-light conditions, was another. The air-to-air Sparrow missile follow-on begun in 1977 was now gelling under the name AMRAAM.
Lt. Gen. Kelly H. Burke, a senior acquisition leader, explained in a hearing on DOD’s 1981 appropriations that the “confluence of technology” propelling LANTIRN and other programs would soon give USAF’s single-seat fighters “a very good night/under the weather capability at low altitude with multiple kills per pass.” This was just the force needed to parry Soviet conventional power and keep the enhanced communist nuclear forces at bay.
The true maturation of the offset depended on the US armed services funding major programs, whether on their own or in collaboration with one another.
The 31 Initiatives
One early 1980s collaboration between the Army and the Air Force, led by their respective Chiefs of Staff, was called the 31 Initiatives. These were framed in tactical doctrine spanning concepts for air defense, suppression of enemy air defenses, rear area operations, joint munitions development, special operations, and fusion of combat information.
Many of the Assault Breaker concepts reappeared in the 31 Initiatives. The joint munitions and combat information initiatives carried offset technologies forward. For example, Initiative 20 designated a single Air Staff manager for improving night attack capabilities. The operational concept was to shore up close air support and precision attack at night, but the means to do so drew on technologies funded under the offset strategy.
Two of the 31 were clear descendants of the offset strategy. Initiative 18 set in motion the Joint Tactical Missile System, first dubbed JTACMS but later known simply as ATACMS. It embodied the core offset idea: using precise, standoff weapons to counter Warsaw Pact mass. The Army would adapt JTACMS into a ground-launched system with greater range than its artillery. The Air Force sought an air-launched weapon for rapid strikes on air defenses and other offensive counterair targets.
Initiative 27 pledged the Army and Air Force to fund JSTARS. This was a direct result of the offset's funding of Tacit Blue and airborne battlefield radar. Though JSTARS was not the program first envisaged in the heyday of the offset in the 1970s, it became, over time, a way to reveal enemy movement on the battlefield. JSTARS' operational payoff began in the 1990s over Iraq and Serbia and continued over Afghanistan.
Deputy Secretary of Defense Work reckons the Second Offset took the better part of two decades to bear fruit; by his account, the ARPA program of 1973 marked the true beginning. Fortunately, the offset’s research and development efforts carried real weight in international diplomacy long before battlefield forces were fully equipped.
The first big success registered in 1984. As Work told it, the Soviet General Staff looked at intelligence on the developing “reconnaissance strike complexes”—their term for what in the West was becoming known as the Revolution in Military Affairs—and concluded that Western militaries employing these “very accurate terminally guided conventional munitions would achieve the same destructive effects as tactical nuclear weapons.”
Work said the Soviets were "very model-driven at that time," and once they ran the models, "they said, 'Game over.'"
Airmen took the lead in demonstrating the early results of the offset strategy. In 1986, Operation El Dorado Canyon—the retaliatory raid on Libya for its role in bombing US servicemen at a West Berlin nightclub—gave the world a taste of these technologies. Air Force F-111s in the raid employed the Pave Tack infrared acquisition pod to deliver 500-pound bombs precisely. At least one scored a direct hit on a Libyan Il-76 transport airplane parked at Tripoli’s airport. Navy A-6s also conducted precision attacks.
Five years later, in Operation Desert Storm, precision attacks grabbed world headlines. The US had developed technology it knew the USSR “couldn’t copy,” said Work. “And we demonstrated [it] in 1991 to the rest of the world, and it really had a giant impact.”
Desert Storm saw the use of AWACS, a pair of experimental JSTARS, new radar missiles, anti-radar missiles, laser guided bombs, satellite guided missiles, satellite-aided navigation and timing, and stealth. Altogether, the Second Offset technology thoroughly overwhelmed the Soviet-built Iraqi air and ground forces. The Soviet Union realized its military technology had been rendered obsolete, and this massive vulnerability played no small role in its final dissolution that same year.
The Second Offset didn’t stop there. Laser guided bombs worked well, but not in bad weather. After an aggressive development program, every bomb-dropping aircraft in the US combat fleet became a precision-attack platform with the widespread deployment of the JDAM bomb, guided by Global Positioning System satellites. Innovative design and large production made extreme precision not only widespread, but relatively cheap. The calculus of air warfare had been turned on its head: No longer did airmen have to plan for how many aircraft were needed to destroy each target; now it was about how many targets could be destroyed by a single aircraft.
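A back-of-envelope comparison, using the same single-shot approximation sketched earlier and purely illustrative numbers (a 10-meter lethal radius, a 100-meter CEP for an unguided bomb, a 10-meter CEP for a GPS-aided one, none of them official figures), shows the scale of that inversion:

Unguided: P_k = 1 - 0.5^{(10/100)^2} ≈ 0.007, so more than 300 bombs, spread across dozens of sorties, for 90 percent confidence of a kill.
GPS-aided: P_k = 1 - 0.5^{(10/10)^2} = 0.5, so about four weapons, all of which one aircraft can carry.

On those assumptions, the arithmetic flips from hundreds of weapons per aimpoint to several aimpoints per sortie.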
The Second Offset played a big role in the air campaign against Serbia in 1999. For the first time, enemy real estate was given up solely because of American attack from the air.
In order to succeed, the Second Offset demanded an initial vision, time, investment over many years, and the willingness of both political parties to keep it going as power changed hands.
Brown summed it up best: “The Carter Administration initiated and developed these programs, the Reagan Administration paid for their acquisition in many cases, and the … Bush Administration employed them.”