That was an edifying spectacle in Iowa, wasn’t it. David French wrote something I’ve been meaning to write for quite some time. There has been a broad breakdown in competence in the United States. No one quite understands why. But as he points out, American history, roughly since the turn of the century, has been a history of staggering incompetence, as an exercise in counterfactual imagination suggests:
What are the ripple effects if Palm Beach County election officials designed a less-confusing ballot for the 2000 election? How does America change if our intelligence agencies were more accurate in their assessment of Saddam Hussein’s chemical and nuclear weapons programs? Or, if we still failed on that front, how is our nation different if military and civilian leaders had not made profound mistakes at the start of the Iraq occupation?
We can do this all day. Let’s suppose for a moment that industry experts were better able to gauge the risks of an expanding number of subprime mortgage loans. Would we be more trusting of government if it could properly launch a health care website, the most public-facing aspect of the most significant social reform in a generation? How can we accurately judge foreign threats if ISIS is dubbed a “jayvee team” the very year that it explodes upon the world stage and creates the largest jihadist state in modern history?
The United States was once known for extraordinary competence. Consider the D-Day invasion, the Manhattan Project, the Berlin Airlift, the moon landing: In example after example, the United States government—not the private sector, note—mobilized vast talent to overcome historically unprecedented military, economic, technological, and governance challenges. So widely known was our government for competence that to this day, we’re the object of conspiracy theories worldwide. Whatever we do, however dumb and cack-handed, is presumed to be deliberate, because so mighty a superpower as the United States could not possibly be capable of screwing up in such stupid ways. Just yesterday I was assured that the CIA had unleashed the Wuhan coronavirus—cui bono, after all? How could I be so naive as to think it a mere coincidence that the virus just spontaneously emerged near a virus research facility?
This kind of thinking owes much to the belief that the United States’ government is greatly more competent than it is. That belief, in turn, is a function of our competence of yore. Nothing we’ve done in this century would warrant it.
The loss of competence is bipartisan. The GOP is gloating over the Iowa meltdown. They would, but they shouldn’t. The worst American mistakes of this century were made on the GOP’s watch. I don’t think this is significant, though. They could just as easily have been made with Democrats in power. As usual, partisanship is preventing us from thinking about problems that are bipartisan, national, and systemic.
What exactly has gone wrong?
The Software of American Public Problem Solving
The historian Philip Zelikow wrote one of the best analyses of this problem I’ve read—the best, in fact—in a little-remarked essay for the Texas National Security Review. The “hardware” of policymaking, he writes—“the tools and structures of government that frame the possibilities for useful work”—is obviously important:
Less obvious is that policy performance in practice often rests more on the “software” of public problem-solving: the way people size up problems, design actions, and implement policy. In other words, the quality of the policymaking.
“Software,” he argues, includes organizational cultures for obtaining and evaluating information, doing analysis, and recording what has been done. It includes commonly understood habits that routinely highlight gaps in information or analysis.
These are the qualities, he argues, that made for competent policy in the mid-twentieth century—and they neither came out of the academy nor returned to it. Rather, they came from the strong, decentralized problem-solving culture of American business, and from the military—in turn influenced by British staffing systems, which Americans envied and imitated.
The wartime and immediate postwar experience profoundly influenced organizational culture for another generation or so. A great many Americans had been drawn into the work of higher-level policy design on numerous topics. “One analyst referred to [the war] as the largest program in postdoctoral education for faculty in the nation’s history.”
The military and business cultures of the United States in this period, he notes, “were intensely oriented toward practical problem-solving.”
They emphasized meticulous written staff work: unending flows of information and estimates, habitual preparation of meeting records or minutes, constant and focused debates about priorities and tradeoffs, and guidance directives drafted with concise precision that a lawyer would envy.
The result, especially by 1943 and afterward, was marked in dozens of projects from the atom bomb to the Marshall Plan to the Berlin Airlift. Any close study of such efforts reveals superior construction of large-scale, complex multi-instrument policy packages, including frequent adjustments.
The point about constant adjustment and iteration is notable. Even in military technology, most of the key Allied innovations turned out to be second-generation innovations. In other words, they were not the airplanes or ships that were available or in production at the start of the war. Instead, they were new or improved models of every kind, several of which had not even been imagined before the war. They were developed with agility and on a massive scale by a number of agencies and scores of companies in response to ongoing lessons learned, lessons that were constantly, consciously being extracted and studied.
It is difficult for those who have not pored through the archives to appreciate the scale and scope of this work, ranging from economic statecraft to amphibious operations to science policy. The extraordinary sets of official history volumes from World War II, familiar to historians of the period, give a sense for the work. They are also a striking illustration of the organizational culture that would produce such meticulous and admirable historical analyses.
The organizational culture that accomplished so much during the war was passed along mainly through imitation and apprenticeship. But the best practices did not migrate into standardized training or academic degree programs. [my emphasis]
Naturally, as that generation aged and died, these skills atrophied. That generation knew a great deal about making effective policy. They could not figure out how to teach it to the next generation. They failed to put into place an appropriate educational system for training an equally competent policy-making class.
This is a powerful explanation. It fits the facts. It makes intuitive sense.
It explains, too, something else that has always puzzled me. Whenever Americans point to European healthcare systems as something to emulate, I hesitate—not because they are wrong to say that health care is provided more rationally and less expensively in every other developed country. This is true. The French healthcare system is a marvel. The French bureaucracy, in general, works exceedingly well.
But as half of America will quickly point out—and they, too, are right to point it out—when our government gets involved in these things, we get the VA. The American right concludes from this that government itself is the problem: Only the private sector has the appropriate motivation to be competent. The American left points to Europe, or what they understand of it, and concludes this isn’t so: Obviously, in countries where the government takes a larger role in providing for health care, better outcomes result.
Both are missing a crucial point. It’s not government, per se, but our government that screws everything up. Zelikow’s hypothesis—that there’s something wrong with the way we educate our public servants—is an important idea. It may well be that we need to fix this before we can hope to fix anything else.
If it can be fixed.
American education for public service, he notes, has always been radically different from the rest of the world’s. The notion of a professional career in public service didn’t even emerge until the late 19th century.
In the postwar United States, Zelikow notes, the study of public administration lost prestige against the rising idea of the social sciences.
Partly in response, a new trend in public policy education took shape. The social sciences were developing new techniques for the systematic analysis of public policies using analytic models, many derived from economic theory, along with quantitative methods. …
All this momentum produced one of the most significant changes in professional higher education of that generation. “At the heart of this shift [during the 1960s and early 1970s] was a growing faith in the power and prestige of economics as a field, a method, and even a science.”
In the new curricula, the definition of “policy analysis” was narrowed to economics, statistics, and quantitative analysis; students focused on cost-benefit analysis, behavioral economics, game theory, and operations research. “But most policy making challenges,” he notes, “and the related staff work, call upon different sets of skills.”
Traditional graduate studies related to policy work tend to bifurcate into two very distinct tracks—a professional master’s degree program and an academic PhD program. Both of these programs serve important purposes, but they leave a major gap. The PhD students develop rigorous research skills to investigate theories in their fields, but are largely insulated from consideration of real-world policymaking. Professional master’s students are exposed to some complexities and challenges of practice. The strength of these programs can be training in quantitative analytic methods, public administration, and advocacy. For various reasons, they do not provide rigorous training in the kind of strategic and design thinking needed for problem-solving, nor do they impart enough relevant substantive knowledge.
Meanwhile, law schools began to provide larger and larger numbers of public employees.
Lawyer-officials have ready gifts. They know how to make an argument. They are usually experienced writers. On a good day, they are relatively rigorous in attending to factual and legal detail. The tradeoff for these “generalist” skills, however, is a lack of much subject-matter or foreign expertise. Experienced as advocates who can pick evidence to defend a position, lawyers are not necessarily trained to weigh and sift positions on both sides. Experienced in being asked to decide what “can” be done, lawyers are not trained to analyze what “should” be done, even in policies having to do with policing or the administration of justice. They have no necessary experience in policy design, analysis, or implementation.
Read his whole article for more on the ways our education system fails to create competent public administrators. It is very insightful. It explains a great deal.
He points as well to significant transformations in the organizational culture of our government—particularly, to the decline of careful record-keeping:
Through the war and postwar years, careful records were usually kept for all high-level meetings among American officials. This is hard to do. It was not done mainly out of regard for historians, although that was a factor. It was done because such records were considered essential for good government. It forced reflection on what had been said or not said. It helped others stay current if they had a need to know what was going on. …
It is now rare to find any good records kept of what is said at meetings among American officials. The quality of the records of meetings with foreigners has also deteriorated. The usual excuse given is the horror of leaks. But that horror was perfectly familiar to officials of the wartime and postwar generation as well. Though constantly irritated by leaks, those past officials thought that the net value of routines of good governance took precedence. The real reasons for the change are likely more banal. There was no conscious policy choice across the administrations to quit preparing good records. It is just hard to do it. Without a routinized discipline, it vanishes from the day-to-day culture. …
Developing these habits, the Americans during the 1940s were strongly influenced, through common work in various Allied organizations, by long-established and relatively high-quality British processes for collective policy analysis and staff work. Eisenhower was both a product and exemplar of such Allied experience. …
Although the origins are almost forgotten, the 1947 creation of America’s National Security Council system was greatly influenced by the model of British systems, including the British War Cabinet system. Many of the Americans had come to know, imitate, and grudgingly admire those staffing methods. They consciously adapted analogous habits of systematic paper preparation, record-keeping, historical evaluation, peer commentary, lucid guidance, and collective decision-making. Eisenhower well understood this background about why the National Security Council was created and how it was originally expected to function. He was the last American president who did.
As the quality of written staff work declines, he notes, fewer decisions may be made from written records. Instead, “high-level meetings proliferate. They become a surrogate for good written analysis and advocacy.”
This makes the delegation of analysis and action more difficult. Overworked principals make policy based on poorly documented meetings. Their subordinates, in turn, become less responsible, and because they know their work is less meaningful, they reinforce the degenerative cycle.
He concludes with an interesting observation.
As the immensely powerful Qing empire in China began to decay in the early 1800s, a leading scholar began calling for reform of the Confucian system that selected and trained the country’s administrative elite. He looked around and saw “everything was falling apart … the administration was contaminated and vile.” The scholar, Bao Shichen, “found himself drawn toward more practical kinds of scholarship that were not tested on the civil service exams.”
Bao “would in time become one of the leading figures in a field known broadly as ‘statecraft’ scholarship, an informal movement of Confucians who were deeply concerned with real-world issues of administration and policy.” Tragically, for Bao and many of his reformist allies, though their efforts made some headway, it was not enough. They could not reverse the decline of their empire.
But he believes we might have better luck:
The United States government has plenty of problems too. Fortunately, it is not yet at the point the Qing dynasty reached. Americans can reflect on a proud heritage, not far in the past, when Americans were renowned across the world for their practical, can-do skills in everything from fixing cars to tackling apparently insurmountable problems, public as well as private. These seemingly bygone skills were not in their genes or in the air. They need not be consigned to wistful nostalgia. The skills were specific. They were cultural. And they are teachable.
This may be so. But there are very few examples of civilizations reversing, or even slowing, the process of decline. None, in fact, that I can think of.
That said, despair is a crime and we have to try.
A final point: Everything that went wrong in Iowa is exactly what I’m worried will go wrong with the world’s nuclear weapons. The system was excessively complex, untested, and managed by people who clearly lacked the imagination to envision what would happen when the system was tested under stress and by real people.
It’s an astonishing account of incompetence, particularly in the failure of the designers to appreciate that the great majority of the intended users would be elderly, and thus baffled by procedures for bypassing a phone’s security settings, two-factor ID, and PIN codes. (Don’t these people have parents?)
… the app had to be downloaded by bypassing a phone’s security settings, a complicated process for anyone unfamiliar with the intricacies of mobile operating systems, and especially hard for many of the older, less tech-savvy caucus chairs in Iowa.
The app also had to be installed using two-factor authentication and PIN passcodes. The information was included on worksheets given to volunteers at the Iowa precincts tallying the votes, but it added another layer of complication that appeared to hinder people.
In the end, only one-quarter of the 1,700 precinct chairs successfully downloaded and installed the app, according to a Democratic consultant who spoke on the condition of anonymity to avoid losing work. Many who resorted to calling in the results found that there were too few operators to handle the calls.
Some also took pictures of the worksheets they had been given — the PINs fully visible — and tweeted them out in frustration. Had the app worked, the information might have given trolls or hackers a chance to download the program and tamper with it.
We are no longer competent enough to manage an election. This is extremely painful to acknowledge, but we must, because it is so dangerous. The deeper reasons for this go far beyond partisanship.
As a matter of priority, our first task should be ensuring that the worst scenarios that could ensue from our decline in competence don’t come to pass.
We need to be sure those nukes don’t go off by accident.