Some say the world will end in fire,
Some say in ice.
From what I’ve tasted of desire—
William Perry, who served as Jimmy Carter’s undersecretary of defense, recalls that he was sleeping soundly one night in 1979 when he got the phone call. “When I picked up the phone, the voice on the other end identified himself as the watch officer for the North American Air Defense Command. The first thing he said to me was that his computers were showing two hundred nuclear missiles on their way from the Soviet Union to the United States. And for one horrifying moment, I believed we were about to witness the end of civilization.
“I can’t really put words to it. I was—I was stunned. I was just completely stunned.”
This wasn’t the first call the watch officer had made. When the alert first came in, he had contacted the White House. Because the call came in the middle of the night, it went to Carter’s national security advisor, Zbigniew Brzezinski.
It was the height of the Cold War. When Brzezinski got the call, he assumed it was a real attack. Even so, he ordered confirmation of the Soviet launch before waking the president. A second call came: the attack was still showing. Sitting alone, in the middle of the night, Brzezinski decided not to wake his wife. He wanted to spare her the terror of what he thought would be their last minutes on earth.
Just as Brzezinski was about to wake the president, he received a third call, this time saying the other warning systems were not reporting incoming missiles. It was a false alarm.
“Had he woken the president,” Perry says, “in the middle of the night, no background information or anything, the president would have had to decide, in less than five minutes, whether to launch the missiles before they were struck in their silos. That’s the kind of horrible decision a president would have to make in that case. And since the information he got—that there were two hundred missiles coming to the United States, a major attack, he would undoubtedly have launched a major attack in return.”
It took three days to figure out what had gone wrong. It turned out that during the change of watch that night, an operator had put in a training tape by mistake. “The computer was showing a perfectly realistic simulation,” says Perry. “It was designed to be realistic.” It was so realistic that launch control centers for Minuteman missiles were put on alert and Strategic Air Command launched fighter jets to prepare to retaliate.
“It changed forever my way of thinking about nuclear weapons,” says Perry. “Up until then, a false alarm—an attack by mistake, starting a nuclear war by mistake—was a theoretical issue. But from that point on, it was never theoretical to me. It was always very, very real, because it got me right in my guts. It’s affected my thinking and my action to this day.”
US State Department adviser Marshall Shulman subsequently wrote, in a Top Secret memo that is now declassified: “False alerts of this kind are not a rare occurrence.”
On August 31, 1983, the pilot of KAL 007 made a navigational error and strayed into Soviet airspace. Soviet jets shot it down, killing all 269 people aboard, among them 23 children and 63 American citizens, including an American congressman. Six days later, Ronald Reagan delivered one of the angriest speeches of the Cold War.
I remember these events well. I was a teenager. Students in my class worried there would be a war—a nuclear war. We had no idea how right we were to worry and how close we really came.
Not long after the downing of KAL 007, NATO conducted an exercise, Able Archer, that simulated a nuclear launch. Reagan’s speech had so spooked the Kremlin that Yuri Andropov and his top aides believed this was the preliminary to a real first strike. They sent out a molniya, a flash message, to their operatives in the West, warning them to prepare for nuclear war, and readied their nuclear forces and air units in Eastern Europe. Soviet bombers laden with nuclear weapons sat on their runways with their engines roaring, on red alert.
On September 26, 1983, Lieutenant Colonel Stanislav Petrov sat watch in the Serpukhov-15 bunker. Shortly after midnight, red lights lit up the bunker. The word LAUNCH, in Russian, flashed up on a gigantic screen. According to satellite data, a nuclear missile had been launched from the United States.
Petrov stared in incredulity at his computer screen. Why? Why just one missile? It made no sense. Against his standing orders, he decided not to press the button that would send this information up the chain of command and precipitate the launching of a massive counterattack.
Then the satellite spotted a second missile.
Then a third.
Then a fourth.
Then a fifth.
Everyone in the bunker began screaming. Sweat poured off Petrov’s face. According to the computer, they would be vaporized within minutes.
By the grace of God, Petrov decided this couldn’t be happening. He didn’t know what was going on, but it just couldn’t be what it seemed to be. It just could not be. He broke his orders outright and reported it as a false alarm. The sirens wailed as the minutes ticked past. The bombs didn’t fall.
Petrov was right, of course: It wasn’t happening. The signals had been caused by a freak alignment of sunlight on high-altitude clouds above North Dakota and the Molniya orbits of the satellites. A lone Soviet lieutenant colonel prevented the Apocalypse. The Kremlin rewarded Petrov for breaking his orders by reprimanding him and assigning him to a less sensitive post. He took early retirement and suffered a nervous breakdown.
Near misses
The United States still has an official policy of launch on warning—a hair-trigger alert. Launch on warning has long been viewed as key to nuclear deterrence: Only if adversaries know they have no hope of destroying our missiles in their silos may a balance of terror be maintained. But launch on warning also raises the risk of starting a nuclear war by mistake.
For example, the system rests upon a network of sensors, satellites, and computers that detect incoming missiles. If one of these warning systems indicates an attack, US nuclear forces would move to an increased state of readiness while the information was double-checked. This in itself could trigger war. It would be detected by the adversary, who might respond by raising their own readiness. The detection of this response would confirm the original—and erroneous—information. Likewise, an accidental nuclear explosion, anywhere, and especially during a moment of heightened international tension, could lead swiftly to disaster.
“One of the firm beliefs in the United States, and the Soviet Union as well, was that the other side had a plan to attack us without warning in a disarming surprise attack,” says Perry. “We were so focused on a surprise attack that we set up a system that was very sensitive, that would detect that attack early enough that we could actually launch our missiles before the attack hit the US soil.”1
The night he describes was far from the only false alarm in our history. It wasn’t even the first time the command and control system had been triggered by a training tape. The list of false alarms is long. A bear climbing a fence at an air force base was mistaken for Soviet special forces. A malfunctioning 46-cent computer chip resulted in a report of two thousand missiles en route from the Soviet Union. A command center mistook a series of power outages for a coordinated attack. A command center confused a rising moon with a missile attack. A command center mistook a fire from a broken gas pipeline for the enemy’s jamming of a satellite by laser.
Once, an unstable pilot deliberately turned on the two arming switches on his plane’s nuclear bombs.
Lost nuclear-armed bombers have flown into the Russian warning net. Air Force officers have tampered with missiles, the better to launch them without orders. B-52 bombers have crashed with nuclear weapons aboard, then vanished from the official histories. A US bomber carrying four nuclear bombs crashed near Thule, Greenland, in 1968, contaminating the area with plutonium. Once, two bombs fell out of a bomber that lost a wing over North Carolina. One of them was fine; the other wasn’t: the crash destroyed five of its six safety devices. “By the slightest margin of chance,” recalled Robert McNamara, “literally the failure of two wires to cross, a nuclear explosion was averted.”
In 2010, F. E. Warren Air Force Base in Wyoming lost contact with the 50 armed Minuteman III ICBMs under its command, all of which had been on high alert. For the better part of an hour, the base had no ability to detect an unauthorized launch. It was ultimately discovered that during routine maintenance, someone had installed a circuit card incorrectly.
During the Suez Crisis, NORAD received a host of simultaneous reports that signified a Soviet offensive from aircraft over Turkey, Soviet MiGs over Syria, and the Soviet Black Sea fleet in the Dardanelles. All of these reports turned out to be entirely in error: It was a wedge of swans over Turkey, a fighter escort for the Syrian president, a scheduled exercise of the Soviet fleet.
On the night of November 24, 1961, communication went dead between the headquarters of the Strategic Air Command and NORAD. Headquarters found itself cut off from the ballistic missile early warning sites in Greenland, Alaska, and England. There were two possible explanations: the simultaneous and coincidental failure of all the communication systems, or enemy action. But the communication systems had redundant, independent routes, or so it was thought. Every Strategic Air Command base was put on alert. The B-52 pilots started their engines. When SAC tried to call NORAD’s headquarters, the line was dead—a very ominous sign. At the last minute, headquarters made radio contact with a B-52 flying over Greenland, which reported there was no attack.
The explanation for the failure of all of these supposedly independent lines of communication? Upon investigation, it was discovered that all the telephone and telegraph routes ran through a single relay station in Colorado, which had been shut down by an overheated motor.
False warnings during the Cuban missile crisis repeatedly led pilots and radar operators to believe the US was under nuclear attack. On October 24, a Soviet satellite exploded, leading the US to believe that the USSR was launching a massive ICBM attack. The NORAD Command Post logs remain classified.
One day later, a guard at the Duluth Sector Direction Center saw someone climbing the security fence. He activated the sabotage alarm, which set off sabotage alarms at all bases in the area. At Volk Field, Wisconsin, the alarm was miswired: the alarm that went off was the one that ordered nuclear-armed F-106A interceptors to take to the air. The pilots believed World War III had begun.
The next day, a test launch of a Titan II ICBM confused observers at the Moorestown radar site, who couldn’t figure out who had launched it. Only after this did the Air Force put in place a protocol for notifying radar warning sites in advance of test launches.
The list of close calls owed to early warning sensors that provided ambiguous data is particularly worrying. On January 25, 1995, Norway launched a rocket on a mission to study the aurora borealis. It had notified Russia in advance of the launch, but the notice wasn’t distributed to the right personnel. Unfortunately, the flight characteristics of the rocket resembled those of an American submarine-launched ballistic missile, and when Russia’s early warning radar detected the launch, Russian nuclear forces went on full alert. Yeltsin retrieved the launch codes, preparing to retaliate.
Fortunately, it was 1995. Tensions between Russia and the US were low. When the Russian satellites that monitor the US picked up no signs of a larger attack, Russian leaders concluded, correctly, that it must be a false alarm. Imagine this happening at a moment of much greater tension—now, for example—and imagine a disorganized US administration that fails to send out customary warnings before training exercises, or sends them to the wrong people.
This is not by any means a comprehensive list of the near misses we know about. In some of these incidents, we’ve been so close that the firing system—the explosives around the nuclear warhead—detonated, but failed to trigger a nuclear chain reaction. (At least one such accident left the entire area radioactive.)
The list of stories we don’t know about is probably much longer. Such incidents are highly likely to be concealed, because they reflect poorly on the units and commanders concerned, or to be classified.2
It’s safe to assume, too, that every other nuclear power has an equally long list, if not a longer one. If you’ve ever driven through India or Pakistan, you know that the phrase “scrupulous culture of safety” does not come to mind. Nor is the phrase evoked by China’s long history of deadly industrial accidents. I would be surprised to learn that these countries’ protocols for preventing an accidental launch are significantly more robust than ours. And the performance of the Russian military in Ukraine speaks for itself.
Normal accident theory
Considering this list of near misses, some have concluded that since none of these incidents led to disaster, the risk must be minimal. This is not a rational conclusion. In other arenas of life, the rate of near-catastrophes is closely correlated with the rate of actual catastrophes. We understand this instinctively: It’s why getting into a fender-bender makes your insurance go up, and it’s why we don’t want to be driven by someone who gets into a little accident every time he takes to the road.
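The intuition can be made concrete with a back-of-the-envelope sketch. The figures below are invented purely for illustration: assume some steady rate of serious near misses and a small chance that any single one escalates, then ask how the cumulative odds grow over decades.

```python
# Toy model: cumulative risk implied by a steady rate of near misses.
# All numbers below are assumptions chosen for illustration, not estimates.

near_misses_per_year = 1.0   # serious incidents per year (assumed)
p_escalation = 0.01          # chance any single incident escalates (assumed)

def cumulative_risk(years: int) -> float:
    """Probability of at least one escalation over the horizon,
    treating incidents as independent."""
    incidents = near_misses_per_year * years
    return 1.0 - (1.0 - p_escalation) ** incidents

for horizon in (10, 50, 100):
    print(f"{horizon:>3} years: {cumulative_risk(horizon):.0%} "
          "chance of at least one escalation")
```

With these made-up numbers, a one-in-a-hundred chance per incident compounds to roughly a two-in-three chance of disaster over a century. The point is not the particular figures, which nobody knows, but that small per-incident risks do not stay small when the incidents keep coming.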
Nuclear weapons have existed for less than a century. Only a small number of nations have possessed them. Drawing firm conclusions about the risk of an accident from such a limited set of data is impossible, all the more so when we consider how different from the past century, geopolitically, this one is apt to be. For the better part of the second half of the twentieth century, we lived in a bipolar world. Both superpowers were aware that their adversary possessed a secure second-strike capability. This was nothing like the world we can now expect to come into being.
A world from which the United States has withdrawn by means of formally or informally abrogating its role as a security guarantor of last resort will be a world with many more nuclear powers, all of them inexperienced in managing a nuclear arsenal, many of them politically unstable. It will also be a world of uncontrollable regional rivalries, nuclear blackmail, and imbalanced balances of terror. The period during which an adversary is suspected or known to be developing nuclear weapons, or increasing its capacity to deliver them, is particularly dangerous. During this time, there is an especially strong incentive to risk a first strike. For example, if an adversary has developed land-based forces without the other legs of the triad, it’s rational to attempt to take out the adversary’s missiles before he develops a secure second-strike capability. This means an adversary’s ambiguous signals, including those caused by faulty chips or scientific research missions, are more likely to be interpreted as a first strike. Many of the countries that are apt to develop nuclear capabilities if they lose faith in the US’s protection will not be able to develop even the inadequate safeguards we deploy to prevent accidents. The methods they use to detect an enemy launch, for example, may not be as reliable as ours. And as unreliable as ours may be, we’ve at least acquired experience thanks to all these near misses: We’ve become better at distinguishing enemy bombers from wedges of swans. New nuclear powers won’t have that experience.
In 1984, Charles Perrow published Normal Accidents: Living with High-Risk Technologies, reissued in 1999 with a new afterword. It has become a classic of organizational sociology. He advances what is now known as normal accident theory. Systems, he argues, vary in two important ways. First, they may be linear or complex:
Linear interactions are those in expected and familiar production or maintenance sequence, and those that are quite visible even if unplanned. Complex interactions are those of unfamiliar sequences, or unplanned and unexpected sequences, and either not visible or not immediately comprehensible.
Second, they may be tightly or loosely coupled. In a tightly coupled system, one event follows rapidly and invariably from another, without human intervention; usually, such systems are automated. A system is loosely coupled when events unfold slowly, many outcomes are possible, and there is ample time to fix a problem before it becomes serious. Perrow argues that when systems are both complex and tightly coupled, accidents are not merely possible but inevitable.
Systems with many complex interactions, he argues, share certain characteristics. They are highly vulnerable to common-mode failures: failures caused when critical components share a common feature that causes them all to break down:
The argument is basically very simple. We start with a plant, airplane, ship, biology laboratory, or other setting with a lot of components (parts, procedures, operators). Then we need two or more failures among components that interact in some unexpected way. No one dreamed that when X failed, Y would be out of order and the two failures would interact so as to both start a fire and silence the fire alarm. Furthermore, no one can figure out the interaction at the time and thus know what to do. The problem is something that just never occurred to the designers.
Perrow argued that as our technologies become more complex, the odds of catastrophe increase, and that efforts to improve safety by means of more complex technology only beget more accidents. We tend to respond to accidents by adding new safety features. These, he argues, can reduce safety by adding complexity. Boeing’s 737 MAX is a classic example: MCAS, software added to make the new plane handle like its predecessor, relied on a single sensor and ended up causing two crashes.
The most striking aspect of Perrow’s thesis is his claim that this risk cannot be mitigated by improved design, culture, management, or human agency. “No matter how hard we try,” he writes,
no matter how much training, how many safety devices, planning, redundancies, buffers, alarms, bells and whistles we build into our systems, those that are complexly interactive will find an occasion where the unexpected interaction of two or more failures defeats the training, the planning, and the design of safety devices.
Perrow’s thesis is open to debate, of course, and many have debated it. But the early warning systems that nuclear deterrence demands are classic complex, tightly coupled systems.
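The common-mode failure point can be made concrete with a toy calculation, in the spirit of the single Colorado relay station above: backups that look independent on paper buy far less safety if they share a hidden single point of failure. The failure probabilities below are invented for illustration only.

```python
# Toy comparison: redundancy with truly independent channels vs. channels
# that secretly share a common dependency (e.g., one relay station).
# All probabilities are assumptions chosen for illustration.

p_channel = 0.01   # chance any one communication channel fails on its own
p_common = 0.001   # chance the shared dependency fails, taking out all channels
n_channels = 3     # number of "redundant" channels

# If the channels really were independent, all three must fail at once:
p_fail_independent = p_channel ** n_channels

# With a shared dependency, the system also fails whenever that single
# component does, no matter how many channels sit on top of it:
p_fail_common_mode = 1 - (1 - p_common) * (1 - p_channel ** n_channels)

print(f"assumed independent: {p_fail_independent:.2e}")   # about 1e-06
print(f"with common mode:    {p_fail_common_mode:.2e}")   # about 1e-03
```

On paper, the system fails once in a million trials; with the hidden relay station, once in a thousand. This is Perrow’s argument in miniature: what fails is not any single component but the designers’ assumption of independence.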
Nuclear famine
Predictions of a nuclear winter in the early 1980s were based on flawed studies; those claims were highly contested, and rightly so. But recent scholarship suggests that even a small-scale regional nuclear war would have a serious effect on the climate, and thus on global food production. No one can be sure. But the discovery in 2006 of forest fire smoke in the stratosphere, linked to extreme pyrocumulonimbus storms, lends support to the key mechanism: intense fires really can loft smoke that high.
In 2007, A. Robock et al. published Climatic consequences of regional nuclear conflicts. Using modern climate models and new estimates of smoke generated by fires in modern cities, they calculated the effects of a regional nuclear war on the climate, modeling the effect of exchanging 100 Hiroshima-size bombs—less than 0.03 percent of the explosive yield of the world’s collective nuclear arsenal:
We find significant cooling and reductions of precipitation lasting years, which would impact the global food supply. The climate changes are large and long-lasting because the fuel loadings in modern cities are quite high and the subtropical solar insolation heats the resulting smoke cloud and lofts it into the high stratosphere, where removal mechanisms are slow. While the climate changes are less dramatic than found in previous “nuclear winter” simulations of a massive nuclear exchange between the superpowers, because less smoke is emitted, the changes are more long-lasting because the older models did not adequately represent the stratospheric plume rise.
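As a rough sanity check on the scale of the scenario, the arithmetic behind “less than 0.03 percent” runs as follows. The global-arsenal yield used here is an assumed round figure for illustration, not a number taken from the paper.

```python
# Back-of-the-envelope check of the yield fraction in the regional-war scenario.
# The global-arsenal yield below is an assumed round number for illustration.

hiroshima_yield_kt = 15            # approximate yield of the Hiroshima bomb, kilotons
n_weapons = 100                    # weapons exchanged in the modeled scenario
global_arsenal_yield_mt = 5_000    # assumed total yield of all arsenals, megatons

scenario_yield_mt = n_weapons * hiroshima_yield_kt / 1_000   # 1.5 Mt
fraction = scenario_yield_mt / global_arsenal_yield_mt

print(f"scenario yield: {scenario_yield_mt} Mt")
print(f"fraction of assumed global yield: {fraction:.2%}")   # 0.03%
```

A war consuming a vanishing fraction of the world’s destructive capacity is, on these models, still enough to chill the planet for years.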
In 2012, Özdoğan et al. published Impacts of a nuclear war in South Asia on soybean and maize production in the Midwest United States. Their model similarly found that a nuclear exchange between India and Pakistan would lead to a significant drop in corn and soybean yields in the American Midwest.
The nuclear winter hypothesis remains debatable, and no one is sure how significant the effect would be. But there is not much debate about this: a limited nuclear exchange would severely disrupt global food supplies.
It is unlikely that a limited exchange of nuclear weapons would end the human race. But were 100 weapons used—less than one percent of the world’s current arsenal—it is also highly unlikely that the impact would be confined to the states that exchanged them. Recent studies suggest two billion people would starve; after all, nearly a billion are already chronically malnourished, so a decline of ten percent in global crop yields would tip the balance.
Two of the more likely candidates for this kind of exchange are India and Pakistan. Toon et al. modeled the effects of a limited exchange between the two in Rapidly expanding nuclear arsenals in Pakistan and India portend regional and global catastrophe:
Pakistan and India may have 400 to 500 nuclear weapons by 2025 with yields from tested 12- to 45-kiloton values to a few hundred kilotons. If India uses 100 strategic weapons to attack urban centers and Pakistan uses 150, fatalities could reach 50 to 125 million people, and nuclear-ignited fires could release 16 to 36 Tg of black carbon in smoke, depending on yield. The smoke will rise into the upper troposphere, be self-lofted into the stratosphere, and spread globally within weeks. Surface sunlight will decline by 20 to 35 percent, cooling the global surface by 2° to 5°C and reducing precipitation by 15 to 30 percent, with larger regional impacts. Recovery takes more than 10 years. Net primary productivity declines 15 to 30 percent on land and 5 to 15 percent in oceans, threatening mass starvation and additional worldwide collateral fatalities.
An undimmed risk
One of the strangest aspects of our culture’s general climate of hysteria is that we no longer seem to worry about nuclear war. We’re citizens of one of the most anxious cultures in recorded history, but we don’t seem to fear the most obvious risk.
The prospect of a nuclear Apocalypse dominated our consciousness during the Cold War. Our popular culture was saturated with references to it. But this seems to us now as archaic and antique as the steam engine, at least to judge from the history books our children read:
Fear of total human annihilation is a tough feeling to live with every day. For children growing up in the Cold War, mutually assured nuclear destruction literally haunted their dreams. Many of them wrote letters to the president, begging Eisenhower, Kennedy, Johnson, and their successors not to push the button. Others just prayed the Bomb would kill them instantly, preferring swift death to years of sickness and grief.
I’ve repeatedly asked myself how we collectively decided this risk went away—allowing us to speak of it in the past tense—but I’ve never been able to answer the question to my own satisfaction. There is no reason whatever to think the risk eradicated, nor even diminished. It is true that the United States and Russia now have significantly fewer nuclear weapons than they did at the height of the Cold War. But this misses the point. We had enough to destroy ourselves many times over then, and we still have more than enough.
If you calculate the destruction wreaked by every bomb dropped in the Second World War, including the two atomic bombs, the total pales in comparison to the destructive potential carried by just one of our nuclear-armed submarines: a single submarine can do seven times more damage. The United States has some 5,400 nuclear weapons, 1,744 of which are deployed, with roughly half on hair-trigger alert. The Russian arsenal contains slightly more weapons, about 6,000, of which slightly fewer, 1,584, are deployed. China’s arsenal was estimated in 2023 at 500 warheads, a number expected to double by the end of the decade. The UK announced in 2021 that it would raise the ceiling on its nuclear warhead stockpile by more than 40 percent and would no longer publish information about the number of warheads it deploys. France deploys about 290 nuclear weapons. North Korea probably has about 30. India has about 160 and is scrambling to make more; Pakistan, likewise. Israel probably has about a hundred. This is enough—more than enough—to destroy the world.
Recently, we have heard this phrase quite a bit: “The risk of nuclear war hasn’t been this high since the Cold War.” It’s a dangerously misleading phrase, dangerous because it encourages listeners to think we’ve seen worse and survived. The risk is as high as it has ever been. It’s impossible to quantify this risk precisely, but Putin’s regime is as hostile to the United States as the Soviet Union was, and even more belligerent. North Korea is certainly as hostile to the United States as the Soviet Union was. It has tested ICBMs designed to reach the entire continental United States. It has a large inventory of theater ballistic missiles. The risk of an accident is certainly higher than it ever was: we have made these systems more complex, there are more nuclear powers, and the American political system is far less stable. For all their flaws, our military leaders have at least not yet bungled us into an accidental nuclear war. If Trump replaces them with unqualified loyalists, another safeguard will be eroded.
In 2021, the head of US Strategic Command testified before Congress that China is putting its nuclear forces on higher alert, and neither the United States nor its allies understand quite why:
While China keeps the majority of its forces in a peacetime status, increasing evidence suggests China has moved a portion of its nuclear force to a Launch on Warning posture and are adopting a limited “high alert duty” strategy. To support this, China continues to prioritize improved space-based strategic early warning, and command and control as specific nuclear force modernization goals. Their networked and integrated platform advancements will enable skip-echelon decision-making processes and greater rapid reaction. This shifting posture is particularly unsettling, considering the immature nature of Chinese strategic forces and compressed timelines needed to assess and frame a response, increasing the potential for error and miscalculation. Collectively, China’s strategic nuclear modernization expansion raises troubling concerns and complements the conventional capability growth reported by INDOPACOM and other Combatant Commands.
In the same presentation, he noted:
Over the last decade, Russia has recapitalized roughly 80 percent of its strategic nuclear forces, strengthening its overall combat potential with an imposing array of modernization efforts and novel weapons programs designed to ensure a retaliatory strike capability by all three triad legs. Upgrades incorporate new technologies into weapons systems, such as the nuclear-armed ICBM launched Avangard hypersonic glide vehicle. Other weapons programs, such as the Poseidon nuclear-powered and nuclear-armed underwater vehicle, and the Skyfall nuclear-powered and nuclear-armed cruise missile, threaten to redefine Russia’s nuclear force with asymmetric strategic weapons capabilities never before fielded. In October 2020, Russia successfully tested its multi-role Tsirkon hypersonic anti-ship missile with land attack capability. These new capabilities are specifically designed to thwart ballistic missile defenses, challenge deterrence, and target our capabilities, increasing risk to allies, partners, and the US homeland.
What’s more, we know that terrorists have attempted to procure nuclear weapons.
This is a staggering constellation of nuclear risk.
Whether or not you appraise the risk of a deliberate nuclear exchange as high, there is no doubt about this proposition: The risk of serious accidents, including accidental war, is at least as high now as it was during the Cold War. It is probably higher, because new nuclear nations lack the West’s and Russia’s experience and technical infrastructure. If the NPT collapses—as it is very likely to do under the pressure of Trump’s presidency, American isolation, and the betrayal of Ukraine—this risk will be even greater. There was, at least, a hotline between the United States and the Soviet Union. There is no hotline to North Korea.
For reasons detailed very well by Annie Jacobsen in Nuclear War: A Scenario, a single ICBM launch could very plausibly lead to a war that brings 400,000 years of human evolution to an end in 90 minutes. The danger lies in the speed with which decisions must be made: vulnerable land-based missile systems mean use it or lose it. Each step in the cascade of mishaps she envisions is not only plausible but precedented. We don’t know the odds that this will happen. But the argument that it won’t happen because it hasn’t happened so far is a cognitive error—a whole set of them, in fact.
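The error can be put in numbers. Roughly eighty years without an accidental nuclear war tells us surprisingly little about the annual probability of one. The sketch below uses assumed candidate probabilities, chosen only to span a plausible range, to show how weak the evidence of survival really is.

```python
# How much does ~80 years without catastrophe constrain the annual risk?
# The candidate probabilities are assumptions chosen to span a plausible range.

years_survived = 80

for p_annual in (0.001, 0.005, 0.01, 0.02):
    p_survival = (1 - p_annual) ** years_survived
    print(f"annual risk {p_annual:.1%}: chance of surviving "
          f"{years_survived} years anyway = {p_survival:.0%}")
```

Even a two percent annual risk, which would make disaster nearly certain within a couple of lifetimes, is compatible with the record so far about one time in five. Survival to date is weak evidence of safety.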
While the consequences of an accidental nuclear war are unthinkably grave, this does not entail that nuclear weapons are managed with commensurate care. The systems we use to prevent an accidental nuclear war are not good—they are flawed in such obvious ways that everyone, when they learn of them, is astonished and disbelieving. The President would have seven minutes to decide whether to end life on earth? It couldn’t be. But it is. It’s possible he’d be asked to make this decision because someone installed the wrong computer chip? That couldn’t be. But it is.
There’s a limit to human competence. We cannot make people as competent as nuclear weapons are deadly. Ask yourself how well the global systems that were designed to detect and minimize the risk of a deadly pandemic served us. Consider the competence of the people involved in supervising these systems. Detecting and managing a pandemic should, in principle, be a less difficult task: There is more time to get it right. But we got so much wrong all the same.
So why hasn’t an accidental nuclear war happened yet? The simple, terrible answer is this: Because we’ve been lucky. Luck, by its nature, runs out.
What’s the alternative hypothesis? Do you think the reason this hasn’t happened yet is because our species is just so very competent? So good at assessing and managing risk? So skilled at building and operating complex systems to mitigate those risks? So likely to be led by men of infinite wisdom and emotional control? So likely to exhibit the judgement and temperament you would hope they would if called upon to decide your fate in less than the time it took you to read this newsletter?
Really?
For discussion of these and other incidents, see Scott Sagan, The Limits of Safety: Organizations, Accidents, and Nuclear Weapons.
Great essay and comments. Here are my two cents.
When I was a very little boy, one dinner time my father started talking about nuclear war, nuclear weapons, and the radiation that would come right through the walls and kill you. I remember how terrified I was, and I remember looking out the window and saying, Right through the glass? Yes, right through the glass.
I was too young to know about the Cuban missile crisis, but years later my father told me how frightened he had been. The Korean War had ended only a few years before, and my father's six years in the artillery in WWII was not a distant memory.
Funny how a scare turns to fascination. I read everything with the word nuclear in it, and 60 years later I build equipment for the nuclear industry. I am sure it started with that conversation.
I have been trained in something called Root Cause Analysis, common in the nuclear industry. When there is a catastrophe or a near miss (funny term that!) you analyse the snot out of it until you know what has to be done to prevent recurrence.
Complex systems like nuclear power plants, nuclear weapons systems, economies and countries have many checks, approvals, barriers to failure, redundancies and automated detection systems, so a catastrophe almost always has multiple causes.
Complex systems are never static. They either adapt and renew, or they stagnate and deteriorate. People leave, experience is lost, procedures are revised, equipment is replaced, minor problems get temporary fixes that become permanent, all with unforeseen consequences. Niall Ferguson said in one of his books that a complex system can go on for years, seemingly healthy, with the residents blissfully unaware of the rot within. Then one day, a small operator error, a minor equipment failure, a valve that doesn't close at TMI, a foreclosure in 2008, or a gunshot from an obscure Serbian nationalist triggers a cascade of events that crashes through all the inadequate measures and best intentions in place. Black Swan! Unforeseen but predictable.
It is interesting that when you stand amidst the wreckage and the bodies, looking back in time, with a little investigation the path to failure is relatively easy to see, right back to the initiating event. But when you go back in time and look forward, all you see is a mass of potential problems. Which one is key? Resources are finite and you can rarely deal with them all. I like Kissinger's observation that choices are never between a good option and a bad option. They are about finding the least worst option.
One cause that comes up frequently is overconfidence, i.e., hubris. With respect, I hear that in some of the comments.
I think America and its allies are going to be tested soon.
I like WigWag's comment about the disconnect between the college-educated elite and the working-class joes. No wonder labor is switching to the Republicans in the US and the Tories in the UK. Charles Murray observed that today there is a cohort of wealthy young Americans who go from high school to college to business, academia, or politics, who have never worked with their hands, never mowed lawns, never waited on tables or cleaned toilets or worked in a factory.
Chilling and important, Claire.
It also led me to a disturbing insight about stubbornness in refusing to abandon long-held views in the face of evidence. You mention KAL 007. I am objectively certain later revelations support the “tragic accident” view you cite.
However, I was the leader and minder of 30 British university students studying Russian in Soviet Ukraine that fatal night, and not only recall every tense hour, and the fear we shared with our hosts, but came to support the cogent arguments of R.W. Johnson that it was not an accident. Many did: https://www.lrb.co.uk/the-paper/v08/n13/paul-foot/the-scandal-that-never-was
Here I am, in 2021, knowing the later revelations and still unwilling truly to accept them. Still thinking more about hysterical Americans pouring Stolichnaya into the gutters, while our Soviet hosts hugged us. About the air boycott, so we barely got to make it home via Air India.
So I tell the story your way, but a part of me is waiting for new evidence....
I relate this to shed light on the stubborn cognitive dissonance of others... and my own.
Expertise on how to break through this?