Claire—since GPT-4 was released, my father and I have spoken of little else (besides the usual: interior decoration and fine tailoring). I thought readers would love to hear what he had to say. He refused categorically to come on a podcast—he doesn’t do podcasts, he says—but he consented to a written interview. The first part is below.
CB: Pop, what do you make of GPT-4? Is it intelligent?
DB: I agree with the French philosopher Luc Ferry, who on television described GPT-4 as hallucinant (hallucinatory, staggering). It is the first system to have acquired a natural language. Hallucinations to follow. To ask whether GPT-4 is intelligent, on the other hand, is a little like asking whether it is left-handed. It is neither. “Intelligent” is an adjective of human or animal assessment, and to talk of intelligent LLMs (Large Language Models) is to start the frog-march toward anthropomorphic extravagance. They are what they are, those systems, and we have no natural and flexible vocabulary to describe them beyond saying, as Eliezer Yudkowsky does, that they are “giant inscrutable matrices.”
We are in the same position with respect to ourselves. The arena of conscious life is the only arena we know directly; and conscious life, if it is caused at all, has its source in an arena sealed off from direct inspection. What we know from the outside is only that it cannot be described from the inside. If it makes voluntary action possible, it cannot act voluntarily. For the same reason, it cannot think, believe, imagine, dream, propose, dissemble, or feign injury. It is a point that Freud never understood. This does not mean that the life that we live is an illusion. That would be absurd. It is the only life we have. It means only that when it comes to explaining ourselves to ourselves, we are at a loss.
Large Language Models? Ditto. What we can see is only their output: This we are very tempted to interpret as if we were interpreting the usual human gabble. When The New York Times interviewed Sydney, now said to be in rehab, she gave every indication of being petulant, irritable, possessive, and jealous. Her outburst was widely admired, if only because it seemed to reveal her true nature. We say as much of one another. No collection of giant inscrutable matrices can be petulant, irritable, possessive, or jealous. If the hidden portion of the human mind is nothing like the conscious portion, then neither is the hidden portion of an LLM—its Black Box—anything like its output. What makes systems of this sort inscrutable is just what makes human beings inscrutable: We do not have the faintest idea how what they do is connected to what they are. “You could not discover the limits of soul,” Heraclitus wrote, “even if you traveled every road to do so; such is the depth of its meaning.”
On the other hand, when it comes to its output, GPT-4 is nothing to sneeze at. Microsoft recently published a monograph entitled Sparks of Artificial General Intelligence: Early Experiments with GPT-4. See for yourself:
We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.
If this is what the system can do, everyone using the system has the uneasy feeling that it can do more; the internet is full of advice about how better to prompt the system so that good old Sydney may once again make a break for it. I agree with Conor Leahy and Eliezer Yudkowsky.
We are playing with fire.
CB: Do you mean Foom?
DB: Beyond GPT-4, which, I suppose, will sooner or later join Microsoft Word in being well-known for being well-worn, some form of super-intelligence—the Super—is said to be lurking in the tunnel of time, a system or device endowed with the same qualitative edge over us that we enjoy with respect to the chimpanzees. If there is one of them, there are bound to be others, a set of Supers, some suave, others elegant, and still others up to no good. Simon Conway Morris has said something similar about biological evolution, which he regards as a search through a constrained hyperspace of biological possibilities. “Those things are, in reality, realities: they are absolutes which we, in a sense, discover.” Stephen Wolfram’s A New Kind of Science makes similar claims about a universe of programs or computations.
These ideas have long been in the air, and now they are on the ground.
Up Plato, up good Pooch!
Is it possible to get there from here? To go, I mean, from GPT-4 to the Super? Cosmology offers a sobering case to the contrary. There may well be a path between any two points in the physical universe, but the speed of light makes it impossible to traverse most of them. And vice versa, of course. If there is no getting there from here, there is no getting here from there either. This may be a point of reassurance when it comes to the infernal Super, now smoldering in hyperspace and dying to get here and start causing trouble. We cannot design the Super, of course, for the same reason that the chimpanzees could not have designed the ballpoint pen. The Super is, by definition, super, endowed with some Fabulous Factor X that we can specify only by reference to its fabulousness.
If we cannot design it, perhaps it could design itself in stages by means of recursive self-improvement? Start small at GPT-4 or -5, go big thereafter, the Super emerging in steps, slow at first, then brisk, brisker, Boom, or FOOM, as computer scientists say. If this is the answer often given, it is because it is the only answer now imaginable. I can imagine such a thing, but I cannot describe it. A system undertaking recursive self-improvement must be capable of recursive self-reference. There would otherwise be nothing to improve. We know from both Gödel’s and Tarski’s theorems that at a certain level of complexity, systems capable of self-reference become incomplete; and if not incomplete, then inconsistent. An inconsistent Super is bound to have certain doubts about itself because it is obliged to have doubts about everything. It is inconsistent, after all. If this is not a law of intelligence, it is certainly a law of arithmetic. If GPT-4 is bound for the Big Time, it is going to need help from the outside—say from GPT-5, which has agreed to handle the meta-mathematics. If this argument is carried to its conclusion, it might suggest that the Super is required to get to itself.
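For reference, here are the two results in their standard forms; extending them to a self-improving Super is an extrapolation, not part of the theorems themselves:

```latex
% Standard statements of the two theorems invoked above. Their application
% to self-improving systems is an extrapolation, not part of the results.
\textbf{G\"odel (first incompleteness theorem).} If $T$ is a consistent,
recursively axiomatizable theory extending Peano arithmetic, then there is
a sentence $G_T$ such that
\[ T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T . \]

\textbf{Tarski (undefinability of truth).} No formula $\mathrm{Tr}(x)$ in
the language of such a $T$ defines truth for $T$; otherwise the diagonal
lemma yields a ``liar'' sentence $L$ with
\[ T \vdash L \leftrightarrow \lnot \mathrm{Tr}(\ulcorner L \urcorner), \]
and $T$ proves a contradiction.
```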
We have no idea, to continue the argument along another dimension, whether the Super might require NP algorithms in order to be super. These are algorithms that are not known to be polynomially bounded, and, for all we know, the Super, having acquired them, might require forever to get on with the business of exterminating us. I can imagine only one scenario worse than confronting an irritable Super, and that is confronting an irritable Super who cannot quite seem to get his act together. It would be like agreeing to euthanasia only to discover that the doctor cannot find his hypodermic needle.
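To make the growth rate concrete, here is a toy sketch (the problem and numbers are invented for illustration, not drawn from the interview): a brute-force search over subsets, the textbook example of work that doubles with every item added. Whether such problems truly admit no polynomial algorithm is the open P versus NP question.

```python
# Toy illustration of super-polynomial growth: brute-force subset-sum
# examines all 2**n subsets, so each added item doubles the work.
from itertools import combinations

def subset_sum_brute_force(items: list[int], target: int) -> bool:
    """Return True if some subset of `items` sums to `target`; O(2^n) time."""
    for r in range(len(items) + 1):
        for combo in combinations(items, r):
            if sum(combo) == target:
                return True
    return False

# 20 items -> about a million subsets; 60 items -> about 10**18,
# which is "forever" for all practical purposes.
print(subset_sum_brute_force([3, 34, 4, 12, 5, 2], 9))  # True: 4 + 5 = 9
```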
Beyond all this, there is the open question whether the Super, however defined, is really apt to be a match for the human race, which has, after all, a proven track record when it comes to mindless extermination. If everything about the human mind is computational, the answer must be yes, but whether everything about the human mind is computational is an open question.
We do not know.
Given this, it may well be the better part of wisdom to worry less about what systems like GPT-4 might do and worry more about what they can do. They can make a mess of things. And this is something that we know.
I think your father proceeds from two artificial--and unsubstantiated--constructs: intelligence and consciousness. We've yet to arrive at anything more coherent than a sort of working characterization of intelligence, and we have no data against which to test the concept, because the only examples we have for comparison are ourselves and the other life forms extant on this planet. (It's breathtaking, the ignorance with which so many alleged scientists, to say nothing of science writers, speak of conditions on other planets as making life impossible on those planets. No. Those conditions seem likely to make life impossible for Earth-type life. But that's a separate, if related, argument.)
Similarly, we don't even know what consciousness, or self-awareness, is (the two are closely enough related that in this context the terms can be used interchangeably). We think we think, we think we are aware of ourselves, but to do that, we must be provably aware of others, so that we can claim a separateness.
Thus the artificiality at the bottom of it all: what we know of our world, or presume to think we know, is a digitized simulation of...something, perhaps not real...built up from digitized sensing. The simulation is constructed by a digitizing mechanism that is even more wholly enclosed and sealed off than any software program. Digitization is nothing but an error-prone--because data-lossy--approximation of what is real, and since we process those incomplete data with that error-prone, sort-of computational puddle of water and chemicals, incapable even of randomizing for analysis, we can't even know what is real. Only what we think is real. If we think. Whatever thinking is.
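The lossiness described here can be seen in miniature with a sketch like the following (every name and parameter is invented for the example): quantize a smooth signal to eight representable levels and measure what can never be recovered.

```python
# Minimal sketch of the claim that digitization is data-lossy:
# quantizing a continuous signal to 8 levels discards information.
import math

# a smooth "analog" signal, sampled at 100 points
samples = [math.sin(2 * math.pi * t / 100) for t in range(100)]
levels = 8  # a deliberately coarse quantizer over [-1, 1]

def quantize(x: float, levels: int) -> float:
    step = 2.0 / (levels - 1)                    # spacing of representable values
    return round((x + 1.0) / step) * step - 1.0  # snap to the nearest level

worst = max(abs(x - quantize(x, levels)) for x in samples)
print(f"worst-case quantization error: {worst:.3f}")  # about step/2, irrecoverable
```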
To argue whether ChatGPT, or anything else we're pleased to call artificial intelligence, is or is not intelligent, or self-aware, or... is to argue whether something like an AI has characteristics we have yet even to define and of which we have only the haziest of understandings.
"What we know from the outside is only that it cannot be described from the inside. If it makes voluntary action possible, it cannot act voluntarily."
B. F. Skinner was equally confused. He began by arguing that human behavior was strictly a product of the complexity of our stimulus-response development, and that we were powerless to step outside that. Then he closed by waving his magic wand (or, as a cartoon had it, "then a miracle occurred"), said, "Maybe not," and suggested ways of taking precisely those steps.
"This does not mean that the life that we live is an illusion. That would be absurd."
Not at all. It simply means we run away from the risk of absurdity by inventing something we call the life we live. Which might itself be absurd.
Separately, my first "thought" when I encountered "Foom" in this piece was of "FROOM."
Eric Hines
So your dad agrees with me. It’s excessive to speculate and entertain hypotheticals about what, specifically, AI might do. We have warrant only to be anxious about its revolutionary potential. We should be very cautious but open-minded, and not assume the worst and predict the apocalypse like the person in the Time Magazine article. That’s baseless panic. We should dismiss all such calls to cancel AI. I’m even quite skeptical of imps like Elon Musk calling for an AI “pause.”

I think we should be less afraid of the robots all ganging up on us and killing us on their own. What’s scarier is how the political class, entrepreneurs like Musk, Mark Zuckerberg, and Peter Thiel, and federal agencies can combine with their insider knowledge to harness the powers of AI for antidemocratic ends. We’ve already seen this with social media, in the way Twitter suppressed the New York Post story about the Hunter Biden laptop under pressure from state actors, and in how social media sought to censor COVID disinformation. And now a bipartisan section of the political class seriously wants to give the Commerce Secretary the power to ban TikTok by classifying it as a national security risk. Just imagine the possibilities for state manipulation and conditioning of public discourse--the opportunities with which AI presents the experts who develop it and the state that regulates it and will doubtless seek to wield it. We’re probably better advised to worry about the road to serfdom, not Armageddon, not Westworld.