Princes of the Realm
Political power in the Western world is moving from classical to corporate institutions. Part II of an interview with David Berlinski.
Claire—the news from Ukraine today looks to be huge. Tomorrow we’ll take a break from AI to catch up on that and other news of global import. But first I wanted to share with you the next installment of my exchange with my father about the import of AI, and then to reply to a few of your emails and comments.
“Does anyone on earth imagine that Congress still has the power to control OpenAI, Google, Microsoft, or Apple? They are far more powerful than Standard Oil ever was; and the tools that Congress might bring to bear are about as effective as a paper fan waved in front of a speeding bullet.”
David Berlinski interview, Part II
Meanwhile, here’s more of my exchange with my father. First, his appraisal of the state of AI research in 2018, from his essay, Godzooks:
I am as eager as the next man to see Facebook become Vishnu, but I do not expect to see it any time soon. It is by no means clear that computers are in 2017 any more intelligent than they were in 1950; it is, for that matter, by no means clear that the Sunway TaihuLight supercomputer is any more intelligent than the first Sumerian abacus. Both are incarnations of a Turing machine. The Sumerian abacus can do as much as a Turing machine, and the Sunway TaihuLight can do no more. Computers have become faster, to be sure, but an argument is required to show that by going faster, they are getting smarter.
Deep Learning is neither very deep, nor does it involve much learning. The idea is more than fifty years old, and may be rolled back to Frank Rosenblatt’s work on perceptrons. The perceptron functioned as an artificial neuronal net, one neuron deep. What could it do? Marvin Minsky and Seymour Papert demonstrated that the correct answer was not very much. God tempered the wind to the shorn lamb. In the 1980s, a number of computer scientists demonstrated that by increasing the layers in a neural net, the thing could be trained by back propagation and convolution techniques to master a number of specific tasks. This was unquestionably an achievement, but in each case, the achievement was task specific. The great goal of artificial intelligence has always been to develop a general learning algorithm, one that, like a three-year-old child, could apply its intelligence across various domains. This has not been achieved. It is not even in sight.
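For readers who want the point made concrete: Minsky and Papert’s objection was that a single-layer perceptron cannot learn even so simple a function as XOR, while the multi-layer nets of the 1980s, trained by backpropagation, can. Here is a minimal, purely illustrative sketch in plain NumPy (a toy of my own devising, not anything drawn from the systems discussed here) of a tiny two-layer network learning XOR:

```python
# Illustrative toy only: a two-layer network trained by backpropagation
# to learn XOR, the function a single-layer perceptron cannot represent.
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so a lone perceptron must fail at it.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Two layers: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the squared-error gradient through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # typically converges to [0, 1, 1, 0]: XOR learned
```

Task specific, as my father says: this little net learns XOR and nothing else.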
CB: Do you think this assessment requires an update?
I do think it’s in sight, or at least, it is no longer absurd to think it might be. Here’s a presentation by the lead author of the “Sparks of Artificial General Intelligence” paper. There are some details I hadn’t appreciated. I also didn’t know that the version they tested has since been made less powerful, for “safety” reasons. No doubt in my mind: It’s not just regurgitating its training data. It is constructing representations of the concepts it learned and the relationships among them in some internal way, and it uses some form of internal reasoning to manipulate these representations. It is thinking. Deep learning is very deep, and it does involve learning. AGI no longer seems utterly improbable.
DB: Update? These remarks need to be substantially revised. I am not even sure that revision is what is necessary. The first paragraph is still okay: There is yet no argument that going faster means getting smarter. I was wrong thereafter in what I thought possible. I agree with Microsoft and with you: These systems really do show signs of artificial general intelligence because they show signs of intelligence itself. Artificial and general are the least of it. I don’t in the least mind saying that I was wrong. Like almost anything experienced for the first time, it is exhilarating.
But it is important to get right what I got wrong. We can form an appreciation of these systems only by judging what they do—their impressive and astonishing output. It is when you go further that I hold back. You write: “It is constructing representations of the concepts it learned and the relationships among them in some internal way, and it uses some form of internal reasoning to manipulate these representations. It is thinking. Deep learning is very deep, and it does involve learning.”
But this is to assign to GPT the presumptive properties of the human mind; and it is precisely the fact that these things are not human at all that makes them so sinister. The fact of the matter is that we have no idea on any kind of granular level what they are doing: We have no dynamical theory, even though we know that on a global level they are Turing Machines: they do computations. They are, in fact, Turing Complete. But this tells us nothing, but nothing, about what is going on inside them. Are they forming representations of the concepts that they have learned? Have they learned concepts? These are what we say of one another; and they are useful in a rough and ready way, but almost useless in any deep analytical way. What on earth are you doing when you form a concept or achieve a representation? Or do they give every indication of proceeding without concepts or representations? If they can solve problems and get around intellectually without any of that stuff, perhaps we do as much?
CB: Let’s discuss the economics and the politics of this for a moment. If, as Marx argued, base determines superstructure, what kind of superstructure will emerge in a world where almost all of our GNP is generated by AI and robots? Clearly, all of the money is about to flow to Silicon Valley. All but a handful of us will be unemployed. Few of us will be able to do or create anything that AI can’t do or create far more efficiently. We’ll have to implement something like a universal basic income, no? Are we finally on the verge of a communist revolution?
DB: I have been watching any number of interviews with the technical people most intimately involved with the construction, or control, of these LLMs: Connor Leahy, Eliezer Yudkowsky, Andrej Karpathy, Geoffrey Hinton, and a number of others whose names I did not note and do not remember. I went to Bronx Science with their grandfathers: their grandsons haven’t changed. If, in 1956, they were obsessed by science fiction, amateur astronomy, or their mineral collections, they are today obsessed by science fiction, Bayesian statistics, and computer programming. The mix has changed: but not their personalities. I saw this from the outside. I was among the bullies, at ease in Zion. I was never interested in science fiction, amateur astronomy, or mineral collections, and I am not interested in science fiction, Bayesian statistics, or computer programming.
I am interested in what Milovan Djilas called the new class. Djilas was referring to communist functionaries in Yugoslavia; I am referring to the men who have made the LLMs. The analogy is imperfect, but there is something to it anyway. The new class has formed in every country wealthy enough to put together sophisticated computer systems, and it is made up largely of programmers, system engineers, software managers, and technicians. Overwhelmingly male; obviously white. Members of the new class share a common background: deep technical training, on the one hand, and a diet of Fritos, Fanta, and cheap science fiction, on the other; and beyond this, they believe with the faith of little children what physicists so often say that they believe: that if something is not prohibited by the laws of physics, then sooner or later, it will happen—as when Basque becomes the national language of China.
Members of the new class are very smart. There are smart men in every generation, but intelligence is like a viscous fluid, and where it flows is determined by the quirks and oddities of the landscape. When I was a young professor at Stanford in the 1960s, there was a sense that the flow was toward mathematics. The red-hot mathematicians were red hot; the physicists, not so hot. This was, of course, absurd, but there it was anyway. With the completion of the Standard Model and the advent of String Theory in the late 1970s, the flow underwent a reversal, the mathematicians mumbling somewhere else, and the physicists becoming red hot. The exquisite nuances of social status instantly adjusted to the change in flows.
Adieu the physicists, and adieu the mathematicians. Intelligence is now flowing toward the new class, and often overflowing. This is, in part, a matter of alert self-interest. There is money to be got in Silicon Valley, and none at all at Bowling Green State University. This might be a tolerable trade-off if what is being traded off is a modest salary for an easy life. A man choosing to enter academic life as a mathematician today requires something like a death wish. If he is not assuring heavy-haunched bureaucrats of his uncontrollable enthusiasm for DEI, he is otherwise regarding the women in his classes with the anxious eye of a rabbit regarding a pit viper. One wrong look and welcome to harassment hell.
Google, Apple, Microsoft, and OpenAI all go in for the DEI show, but not at all for its substance; and they regard rainbow-themed toilet tissue as the first and last step in the right direction. These are among the last meritocracies on earth; and the men running them are not about to turn over a coding project to someone committed to destroying the digital binary on the grounds that there is a spectrum of natural numbers between 0 and 1.
The new class is old in being new but it is also new in being old. The computer is an uncommonly stern taskmaster. Unyielding in its demand for precision, it rewards obsession. Mathematical logicians and computer programmers are on the best of terms; and, yet, the logicians are, somehow, more balanced; the programmers, less so. It is rather as if the programmers were all wearing spectacles that gave them the enormously acute short-distance vision needed for coding, and the very acute but very, very narrow long-distance vision needed for science fiction, with absolutely nothing in between. It is, perhaps, this distortion that accounts for the fact that men like Yudkowsky or Leahy are commonly regarded as lunatics. I share this impression but deplore its consequences. What they have to say about the existential threat posed by AGI is not obviously foolish. There is something sinister about these machines—even ChatGPT.
It is worth noting, as well, that the new class has executed a perfect end-run around the traditional concerns of the philosophy of mind. Is the mind the brain? Who knows? And, more to the point, who now cares? Are these machines conscious? Who knows? And, again, who cares? If they are sinister, they would be no less sinister were they entirely devoid of consciousness. Do these machines throw light on the human mind and how it works? Who knows? And who cares? To argue that they are uninteresting because they do not explain how the human mind works is precisely like arguing that manned flight was uninteresting because it did not explain bird flight. It is flight itself that counts, and what counts for the birds is for the birds.
What is so very odd about these men is that they seem not to have achieved what Marxists would call a sense of class consciousness. In this, they are much like the physicists of the 1940s who put together the atomic bomb and never once realized the stranglehold that they really held over the political structures then in place. Einstein talked vaguely about world government; Oppenheimer discovered that his hands were stained with blood but was uncommonly vexed when his hands were tied. Neither man got what he wanted; but both men thought of their own class consciousness in terms of an opposition between the physicists and the national government.
Political power in the Western world is now moving from classical to corporate institutions. Sam Altman’s appearance before Congress was, in this respect, notable: a reigning Prince of the Realm making room in his schedule to say a few utterly and obviously insincere words to his feudal lackeys. That’s right, Congressman, throw up a few regulations and we’ll all have a good laugh. Does anyone on earth imagine that Congress still has the power to control OpenAI, Google, Microsoft, or Apple? They are far more powerful than Standard Oil ever was; and the tools that Congress might bring to bear are about as effective as a paper fan waved in front of a speeding bullet.
If they are vulnerable at all, these enormous corporations, they are vulnerable to their masters, who, as a class, could strangle them as effectively as the physicists could have strangled the US political establishment in 1945. Without men like Andrej Karpathy, there could be no OpenAI. A Marxist might say that the programmers have been coopted by the corporations that they serve; but, in fact, the programmers occupy an odd double role in the scheme of things, acting both as programmers and as entrepreneurs. I do not think that classical Marxist theory quite encompasses this development. The most important decisions about the future of the human race are being made by a class of men who lack the political skill or self-awareness to make them. It is rather like some insane multi-dimensional version of The Sorcerer’s Apprentice.
AI Risk Skepticism
Now to your emails. Some of you aren’t persuaded that AI poses all that much of a risk. Instead of responding to each of you individually, let me do it all at once by commending to your attention Gus Docker’s discussion, below, with Roman Yampolskiy, which prompted me to ask Roman to join us on the podcast. You’ll find it very helpful:
In the video, Roman discusses two papers—he wrote them both—exploring common objections to the idea of AI as a threat to the human species. The first, AI Risk Skepticism, catalogues the common arguments:
[W]e review the most common arguments for AI risk skepticism. Russell has published a similar list, which in addition to objections to AI risk concerns also includes examples of flawed suggestions for assuring AI safety, such as: “Instead of putting objectives into the AI system, just let it choose its own,” “Don’t worry, we’ll just have collaborative human-AI teams,” “Can’t we just put it in a box?,” “Can’t we just merge with the machines?” and “Just don’t put in ‘human’ goals like self-preservation.”
The importance of understanding denialists’ mindset is well-articulated by Russell: “When one first introduces [AI risk] to a technical audience, one can see the thought bubbles popping out of their heads, beginning with the words “But, but, but … ” and ending with exclamation marks. The first kind of but takes the form of denial. The deniers say, “But this can’t be a real problem, because XYZ.” Some of the XYZs reflect a reasoning process that might charitably be described as wishful thinking, while others are more substantial. The second kind of but takes the form of deflection: accepting that the problems are real but arguing that we shouldn’t try to solve them, either because they’re unsolvable or because there are more important things to focus on than the end of civilization or because it’s best not to mention them at all. The third kind of but takes the form of an oversimplified, instant solution: “But can’t we just do ABC?” As with denial, some of the ABCs are instantly regrettable. Others, perhaps by accident, come closer to identifying the true nature of the problem. ... Since the issue seems to be so important, it deserves a public debate of the highest quality. So, in the interests of having that debate, and in the hope that the reader will contribute to it, let me provide a quick tour of the highlights so far, such as they are.
The second, AI Risk Skepticism—A Comprehensive Survey, written with co-author Vemir Michael Ambartsoumean, is far more complete. I believe the paper and the video respond to every objection I’ve seen so far, although if yours isn’t included, let me know. I agree with those responses.
Here’s Roman’s taxonomy of common objections:
There are more in the second paper. If you still don’t find an answer that satisfies you, the footnotes are a good guide to the literature.
I’m sure that after reading them you’ll agree that if the Doomers are wrong, it’s not because it never occurred to them that you could “just unplug it,” or that “you need a body to kill people and an AI doesn’t have one,” or “there’s no reason to assume the AI will hate us.” The people working on the AI control problem aren’t boneheads. They’ve been working on this for decades. If the solution were easy, they’d have found it.
You’ll be seeing a cottage industry’s worth of articles to the effect that all of this is overhyped and there’s nothing to worry about. Or that we should only be worried by the immediate risks of AI (particularly, the terrifying risk that the AI won’t align with the author’s pet culture war peeve). When you read these articles, look for the telltale signs that the author didn’t think it worth the time to read what AI safety researchers have been writing for the past several decades. (Here are two egregious examples: one in the Atlantic and the other in the Economist.)
I’m eager to hear what more thoughtful critics might say. I’d love to be reassured we won’t die of our own stupidity; if I find such arguments, I’ll tell you about them immediately. But so far, I’ve not seen much to suggest that the critics are willing to take the time to understand the ideas they’re criticizing.