Some say the world will end in fire,
Some say in ice.
From what I’ve tasted of desire
I hold with those who favor fire.
The risk of accidental nuclear war
William Perry, who served as Jimmy Carter’s undersecretary of defense, recalls that one night in 1979 he was sleeping soundly when he got a phone call. “When I picked up the phone, the voice on the other end identified himself as the watch officer for the North American Air Defense Command. The first thing he said to me was that his computers were showing two hundred nuclear missiles on their way from the Soviet Union to the United States. And for one horrifying moment, I believed we were about to witness the end of civilization.
“I can’t really put words to it. I was—I was stunned. I was just completely stunned.”
This wasn’t the first call the watch officer had made. When the alert first came in, he had contacted the White House. Because the call came in the middle of the night, it went to Carter’s national security advisor, Zbigniew Brzezinski.
It was the height of the Cold War. When Brzezinski got the call, he assumed it was a real attack. Even so, he ordered confirmation of the Soviet launch before he woke the president. Sitting alone, in the middle of the night, Brzezinski decided not to wake his wife. He wanted to spare her the terror of what he thought would be their last minutes on earth.
Just as Brzezinski was going to wake the president, he received a third call, this time saying the other reporting systems were not reporting incoming missiles. It was a false alarm.
“Had he woken the president,” Perry says, “in the middle of the night, no background information or anything, the president would have had to decide, in less than five minutes, whether to launch the missiles before they were struck in their silos. That’s the kind of horrible decision a president would have to make in that case. And since the information he got—that there were two hundred missiles coming to the United States, a major attack, he would undoubtedly have launched a major attack in return.”
It took three days to figure out what had gone wrong. It turned out that when they had changed watch that night, the operator had put in a training tape by mistake. “The computer was showing a perfectly realistic simulation,” says Perry. “It was designed to be realistic.” It was so realistic that launch control centers for Minuteman missiles were put on alert and Strategic Air Command launched fighter jets to prepare to retaliate.
“It changed forever my way of thinking about nuclear weapons,” says Perry. “Up until then, a false alarm—an attack by mistake, starting a nuclear war by mistake—was a theoretical issue. But from that point on, it was never theoretical to me. It was always very, very real, because it got me right in my guts. It’s affected my thinking and my action to this day.”
On August 31, 1983, the pilot of KAL 007 made a navigational error and strayed into Soviet airspace. Soviet jets shot it down, killing all 269 people aboard, among them 23 children and 63 American citizens, including an American congressman. Six days later, Ronald Reagan delivered one of the angriest speeches of the Cold War.
I remember these events well. I was a teenager. Students in my class worried there would be a war—a nuclear war. We did not know if we would survive. We had no idea how right we were to worry and how close we really came.
Not long afterward, NATO conducted an exercise, Able Archer, that simulated a nuclear launch. Reagan’s speech had so spooked the Kremlin that Yuri Andropov and his top aides believed this was the preliminary to a real first strike. They sent out a molniya, a flash message, to their operatives in the West, warning them to prepare for nuclear war, and readied their nuclear forces and air units in Eastern Europe. Soviet bombers laden with nuclear weapons sat on their runways with their engines roaring, on red alert.
On September 26, Lieutenant Colonel Stanislav Petrov sat watch in the Serpukhov-15 bunker. Shortly after midnight, red lights lit up the bunker. The word LAUNCH, in Russian, flashed up on a gigantic screen. According to satellite data, a nuclear missile had been launched from the United States.
Petrov stared in incredulity at his computer screen. Why? Why just one missile? It made no sense. Against his standing orders, he decided not to press the button that would send this information up the chain of command and precipitate the launching of a massive counterattack.
Then the satellite spotted a second missile.
Then a third.
Then a fourth.
Then a fifth.
Everyone in the bunker began screaming. Sweat poured off Petrov’s face. According to the computer, they would be vaporized within minutes.
By the grace of God, Petrov decided this couldn’t be happening. He didn’t know what was going on, but it just couldn’t be what it seemed to be. It just could not be. He broke his orders outright and reported it as a false alarm. The sirens wailed as the minutes ticked past. The bombs didn’t fall.
Petrov was right, of course: It wasn’t happening. The signals had been caused by a freak alignment of sunlight on high-altitude clouds above North Dakota and the Molniya orbits of the satellites. A lone Soviet lieutenant colonel prevented the Apocalypse. The Kremlin rewarded Petrov for breaking his orders by reprimanding him and assigning him to a less sensitive post. He took early retirement and suffered a nervous breakdown.
The United States still has an official policy of launch on warning—a hair-trigger alert. Launch on warning has long been viewed as key to nuclear deterrence: Only if adversaries know they have no hope of destroying our missiles in their silos may a balance of terror be maintained. But launch on warning also raises the risk of starting a nuclear war by mistake.
For example, the system rests upon a network of sensors, satellites, and computers that detect incoming missiles. If one of these warning systems indicates an attack, US nuclear forces would move to an increased state of readiness while the information was double-checked. This in itself could trigger war. It would be detected by the adversary, who might respond by raising their own readiness. The detection of this response would confirm the original—and erroneous—information. Likewise, an accidental nuclear explosion, anywhere, and especially during a moment of heightened international tension, could lead swiftly to disaster.
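The escalatory feedback just described can be sketched as a toy model. Everything in it, the alert scale, the thresholds, and the update rule, is invented for illustration; it shows only how mutual detection can ratchet a single spurious signal upward:

```python
# Toy model of the alert feedback loop described above (illustrative only).
# Two sides each hold an alert level from 0 to 5. A false warning raises one
# side's level; each side then raises its own level when it observes the
# other's increase, reading the response as confirmation of the original
# (erroneous) warning.

def escalate(initial_warning_side_a: int, rounds: int = 5) -> tuple[int, int]:
    a, b = initial_warning_side_a, 0
    for _ in range(rounds):
        # Side B detects A's heightened readiness and matches it.
        if a > b:
            b = min(5, a)
        # Side A reads B's response as confirming the original warning.
        if b > 0 and b >= a and a < 5:
            a = a + 1
    return a, b

# A single level-1 false alarm drives both sides to maximum alert.
print(escalate(1))  # → (5, 5)
```

The point of the sketch is that no step requires malice or error beyond the first false signal; each side's response is locally reasonable, and the loop closes anyway.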
“One of the firm beliefs in the United States, and the Soviet Union as well, was that the other side had a plan to attack us without warning in a disarming surprise attack,” says Perry. “We were so focused on a surprise attack that we set up a system that was very sensitive, that would detect that attack early enough that we could actually launch our missiles before the attack hit the US soil.”1
The evening he describes was far from the only false alarm in our history. It wasn’t even the first time the command and control system was triggered by a training tape. The list of false alarms is long. Once, a bear that climbed a fence at an air force base was mistaken for Soviet special forces. A 46-cent computer chip has mistakenly reported two thousand missiles en route from the Soviet Union. A command center has mistaken a series of power outages for a coordinated attack. A command center has confused a rising moon with a missile attack. A command center has confused a fire at a broken gas pipeline with the enemy jamming a satellite by laser.
Once, an unstable pilot deliberately turned on the two arming switches on his plane’s nuclear bombs.
Lost nuclear-armed bombers have flown into the Russian warning net. Air Force officers have tampered with missiles, the better to launch them without orders. B-52 bombers have crashed with nuclear weapons aboard, then vanished from the official histories.
During the Suez Crisis, NORAD received a host of simultaneous reports—from aircraft over Turkey, Soviet MiGs over Syria, and the Soviet Black Sea fleet in the Dardanelles—that signified a Soviet offensive. All of these reports turned out to be misinterpreted or entirely in error: It was a wedge of swans over Turkey, a fighter escort for the Syrian president, a scheduled exercise of the Soviet fleet.
On the night of November 24, 1961, communication went dead between the headquarters of the Strategic Air Command and NORAD. Headquarters found themselves cut off from ballistic missile early warning sites in Greenland, Alaska, and England. There were two possible explanations: the coincidental failure of all the communication systems—or enemy action. But the communication systems had redundant, independent routes. Every Strategic Air Command base was put on alert. The B-52 pilots started their engines. At the last minute, headquarters made contact, by radio, with an orbiting B-52 near Greenland, which reported there was no attack.
The explanation for the failure of all of these supposedly independent lines of communication? Upon investigation, it was discovered that all the telephone and telegraph routes ran through a single relay station in Colorado—which had been shut down by an overheated motor.
False warnings during the Cuban missile crisis repeatedly led pilots and radar operators to believe the US was under nuclear attack. On October 24, a Soviet satellite exploded, leading the US to believe that the USSR was launching a massive ICBM attack. The NORAD Command Post logs remain classified.
One day later, a guard at the Duluth Sector Direction Center saw someone climbing the security fence. He activated the sabotage alarm, which set off sabotage alarms at all bases in the area. At Volk Field, Wisconsin, the alarm was miswired: the alarm that sounded was the one ordering nuclear-armed F-106A interceptors into the air. The pilots believed World War III had begun.
On the next day, a test launch of a Titan-II ICBM confused observers at the Moorestown Radar site. They couldn’t figure out who had launched it. Only after this did the Air Force put in place a protocol for notifying radar warning sites in advance of test launches.
The list continues. And these are just the stories we know about. There are surely more we don’t know about, probably many more. Such incidents are highly likely to be concealed, because they reflect poorly on the units and commanders concerned.2
Normal accident theory
On considering this list of near-misses, some have concluded that since none of these incidents led to disaster, the risk must be minimal. This is not a rational conclusion. In other arenas of life, the rate of near catastrophes is closely correlated to the rate of actual catastrophes. We understand this instinctively: The driver who gets into a fender-bender every time he takes to the road is obviously in greater danger of a fatal accident than the driver with a spotless record. Nuclear weapons have existed for less than a century. Only a small number of nations have possessed them. Drawing firm conclusions about the risk of an accident from such a limited set of data is impossible, all the more so when we consider how different the coming century’s geopolitics will be from the past century’s.
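A back-of-envelope Bayesian calculation makes the same point numerically. The sketch below uses the rule of succession with a uniform prior on the unknown per-year probability of catastrophe; the ~75 catastrophe-free years is my round number for the nuclear era, not a figure from the text:

```python
# How much do roughly 75 catastrophe-free years tell us about annual risk?
# Rule of succession (uniform prior on the per-year probability p): after
# n years with zero catastrophes, the posterior mean of p is 1 / (n + 2).

def posterior_mean_annual_risk(catastrophe_free_years: int) -> float:
    return 1.0 / (catastrophe_free_years + 2)

def chance_within(years_ahead: int, annual_risk: float) -> float:
    # Probability of at least one catastrophe in the next `years_ahead` years,
    # assuming independent years at a constant rate.
    return 1.0 - (1.0 - annual_risk) ** years_ahead

p = posterior_mean_annual_risk(75)
print(round(p, 3))                     # → 0.013 per year
print(round(chance_within(50, p), 2))  # → 0.48 over the next 50 years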
In 1984, Charles Perrow published Normal Accidents: Living with High-Risk Technologies. It has become a classic in organizational sociology. He advances what is now known as normal accident theory. Systems, he argues, vary in two important ways. First, they may be linear or complex.
Linear interactions are those in expected and familiar production or maintenance sequence, and those that are quite visible even if unplanned. Complex interactions are those of unfamiliar sequences, or unplanned and unexpected sequences, and either not visible or not immediately comprehensible.
Second, they may be tightly or loosely coupled. In a tightly coupled system, one event follows rapidly and invariably from another without human intervention. Usually, such systems are automated. A system is loosely coupled when events unfold slowly, many outcomes are possible, and there is ample time for intervention to fix a problem before it becomes serious. Perrow argues that when systems are both complex and tightly coupled, accidents are not merely possible, but inevitable.
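Perrow's two axes form a simple two-by-two grid, which can be written down directly. The quadrant labels below paraphrase the argument above; they are this sketch's shorthand, not Perrow's own wording:

```python
# Perrow's two axes (interactions × coupling) as a minimal lookup table.
# The descriptions paraphrase normal accident theory as summarized in the
# text; the quadrant wording is illustrative shorthand.
RISK = {
    ("linear",  "loose"): "routine failures, easily contained",
    ("linear",  "tight"): "failures propagate fast but predictably",
    ("complex", "loose"): "surprising failures, but time to intervene",
    ("complex", "tight"): "normal accidents: surprising AND fast",
}

def classify(interactions: str, coupling: str) -> str:
    return RISK[(interactions, coupling)]

# Nuclear early-warning networks sit in the worst quadrant:
print(classify("complex", "tight"))  # → normal accidents: surprising AND fast
```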
Systems with many complex interactions, he argues, share certain characteristics. They are highly vulnerable to common-mode failures, in which critical components break down together because they share a common vulnerability:
The argument is basically very simple. We start with a plant, airplane, ship, biology laboratory, or other setting with a lot of components (parts, procedures, operators). Then we need two or more failures among components that interact in some unexpected way. No one dreamed that when X failed, Y would be out of order and the two failures would interact so as to both start a fire and silence the fire alarm. Furthermore, no one can figure out the interaction at the time and thus know what to do. The problem is something that just never occurred to the designers.
Perrow argued that as our technologies become more complex, the odds of catastrophe increase; efforts to improve safety by means of more complex technology will only beget more accidents. We tend to respond to accidents by adding new safety features. These, he argues, can reduce safety by adding complexity. Boeing’s 737 Max is a classic example.
The most striking aspect of Perrow’s thesis is his claim that this risk cannot be mitigated by improved design, culture, management, or human agency. “No matter how hard we try,” he writes,
no matter how much training, how many safety devices, planning, redundancies, buffers, alarms, bells and whistles we build into our systems, those that are complexly interactive will find an occasion where the unexpected interaction of two or more failures defeats the training, the planning, and the design of safety devices.
Perrow’s thesis is open to debate, of course, and many debate it. But the early warning systems that nuclear deterrence demands are classic complex, tightly coupled systems.
Predictions of nuclear winter in the early 1980s were based on flawed studies; these claims were highly contested, and rightly so. But recent scholarship suggests that even a small-scale regional nuclear war would indeed have a serious effect on the climate and thus on global food production. No one can be sure. But the discovery in 2006 of forest fire smoke in the stratosphere, linked to extreme pyrocumulonimbus storms, lends support to the theory.
In 2007, A. Robock et al. published Climatic consequences of regional nuclear conflicts. Using modern climate models and new estimates of smoke generated by fires in modern cities, they calculated the effects of a regional nuclear war on the climate, modeling the effect of exchanging 100 Hiroshima-size bombs—less than 0.03 percent of the explosive yield of the world’s collective nuclear arsenal:
We find significant cooling and reductions of precipitation lasting years, which would impact the global food supply. The climate changes are large and long-lasting because the fuel loadings in modern cities are quite high and the subtropical solar insolation heats the resulting smoke cloud and lofts it into the high stratosphere, where removal mechanisms are slow. While the climate changes are less dramatic than found in previous “nuclear winter” simulations of a massive nuclear exchange between the superpowers, because less smoke is emitted, the changes are more long-lasting because the older models did not adequately represent the stratospheric plume rise.
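The "less than 0.03 percent" figure checks out by simple arithmetic. The total-arsenal yield below is my assumed round number for illustration, not a figure taken from Robock et al.:

```python
# Back-of-envelope check of the "less than 0.03 percent" figure.
# Assumptions (mine, not the paper's): a Hiroshima-size bomb yields ~15 kt,
# and the world's collective arsenal holds very roughly 5,000 Mt of yield.
HIROSHIMA_KT = 15
BOMBS = 100
WORLD_ARSENAL_MT = 5_000              # assumed round number

exchange_mt = BOMBS * HIROSHIMA_KT / 1_000   # 1.5 Mt total for the exchange
share = exchange_mt / WORLD_ARSENAL_MT
print(f"{share:.2%}")  # → 0.03%
```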
In 2012, Özdoğan et al. published Impacts of a nuclear war in South Asia on soybean and maize production in the Midwest United States. Their model similarly found that a nuclear exchange between India and Pakistan would lead to a significant drop in corn and soybean yields in the American Midwest. The nuclear winter hypothesis remains debatable, and no one is sure how significant the effect would be. But there is not much debate about this: a limited nuclear exchange would severely disrupt global food supplies.
It is unlikely that a limited exchange of nuclear weapons would end the human race. But were 100 weapons used, less than 0.03 percent of the explosive yield of the world’s current arsenal, it is also unlikely the impact would be confined to the states that exchanged the weapons. Recent studies suggest two billion people would starve; after all, nearly a billion are already chronically malnourished, so a decline of ten percent in global crop yields would tip the balance.
The most likely candidates for this kind of exchange—but not the only ones—are India and Pakistan. Toon et al. modeled the effects of a limited exchange between India and Pakistan in Rapidly expanding nuclear arsenals in Pakistan and India portend regional and global catastrophe:
Pakistan and India may have 400 to 500 nuclear weapons by 2025 with yields from tested 12- to 45-kt values to a few hundred kilotons. If India uses 100 strategic weapons to attack urban centers and Pakistan uses 150, fatalities could reach 50 to 125 million people, and nuclear-ignited fires could release 16 to 36 Tg of black carbon in smoke, depending on yield. The smoke will rise into the upper troposphere, be self-lofted into the stratosphere, and spread globally within weeks. Surface sunlight will decline by 20 to 35%, cooling the global surface by 2° to 5°C and reducing precipitation by 15 to 30%, with larger regional impacts. Recovery takes more than 10 years. Net primary productivity declines 15 to 30% on land and 5 to 15% in oceans, threatening mass starvation and additional worldwide collateral fatalities.
An undimmed risk
One of the strangest aspects of our culture’s general climate of hysteria is that we no longer seem to worry about nuclear war. We’re citizens of one of the most anxious cultures in recorded history, but we don’t seem to fear the most obvious risk.
The prospect of a nuclear Apocalypse dominated our consciousness during the Cold War. Our popular culture was saturated with references to it. But this seems to us now as archaic and antique as the steam engine, at least to judge from the history books our children read:
Fear of total human annihilation is a tough feeling to live with every day. For children growing up in the Cold War, mutually assured nuclear destruction literally haunted their dreams. Many of them wrote letters to the president, begging Eisenhower, Kennedy, Johnson, and their successors not to push the button. Others just prayed the bomb would kill them instantly, preferring swift death to years of sickness and grief.
I’ve repeatedly asked myself how we collectively decided this risk just magically went away—allowing us to speak of it in the past tense—but I’ve never been able to answer the question to my own satisfaction.
There is no reason whatever to think the risk eradicated, nor even diminished. It is true that the United States and Russia now have significantly fewer nuclear weapons. But this misses the point. We had enough to destroy ourselves many times over. We still have more than enough. There are an estimated 13,900 nuclear weapons in the world. Russia and the United States possess 93 percent of them. This is enough—more than enough.
Some believe the risk of deliberate nuclear war has been reduced with the end of the Cold War. They have no good reason to believe this. Putin’s regime is as hostile to the United States as the Soviet Union was. So, certainly, is North Korea, which has tested ICBMs designed to strike anywhere in the continental United States and maintains a large inventory of theater ballistic missiles.
Only recently, the head of US Strategic Command testified before Congress that China is putting its nuclear forces on higher alert, and neither the United States nor its allies understands quite why:
While China keeps the majority of its forces in a peacetime status, increasing evidence suggests China has moved a portion of its nuclear force to a Launch on Warning (LOW) posture and are adopting a limited “high alert duty” strategy. To support this, China continues to prioritize improved space-based strategic early warning, and command and control as specific nuclear force modernization goals. Their networked and integrated platform advancements will enable skip-echelon decision-making processes and greater rapid reaction. This shifting posture is particularly unsettling, considering the immature nature of Chinese strategic forces and compressed timelines needed to assess and frame a response, increasing the potential for error and miscalculation. Collectively, China’s strategic nuclear modernization expansion raises troubling concerns and complements the conventional capability growth reported by INDOPACOM and other Combatant Commands.
In the same presentation, he noted:
Over the last decade, Russia has recapitalized roughly 80 percent of its strategic nuclear forces, strengthening its overall combat potential with an imposing array of modernization efforts and novel weapons programs designed to ensure a retaliatory strike capability by all three triad legs. Upgrades incorporate new technologies into weapons systems, such as the nuclear-armed ICBM launched Avangard hypersonic glide vehicle. Other weapons programs, such as the Poseidon nuclear-powered and nuclear-armed underwater vehicle, and the Skyfall nuclear-powered and nuclear-armed cruise missile, threaten to redefine Russia’s nuclear force with asymmetric strategic weapons capabilities never before fielded. In October 2020, Russia successfully tested its multi-role Tsirkon hypersonic anti-ship missile with land attack capability. These new capabilities are specifically designed to thwart ballistic missile defenses, challenge deterrence, and target our capabilities, increasing risk to allies, partners, and the US homeland.
What’s more, we know that terrorists have attempted to procure nuclear weapons. This is a staggering constellation of nuclear risk.
Whether or not you appraise the risk of a deliberate nuclear exchange as high, there is no doubt about this proposition: The risk of serious accidents, including accidental war, remains as high now as it was during the Cold War. It is probably higher, because new nuclear nations lack the West’s and Russia’s experience and technical infrastructure. The number of nuclear powers continues to increase. There was, at least, a hotline between the United States and the Soviet Union. There is no hotline to North Korea.
Ask yourself how well the global systems that were designed to minimize the risk of a pandemic have served us. Consider the competence and care of the people involved in supervising these systems.
Do you think these same people are capable of managing nuclear risk?
For discussion of these and other incidents, see Scott Sagan, The Limits of Safety: Organizations, Accidents, and Nuclear Weapons.