Claire—since GPT-4 was released, my father and I have spoken of little else (besides the usual: interior decoration and fine tailoring). I thought readers would love to hear what he had to say. He refused categorically to come on a podcast—he doesn’t do podcasts, he says—but he consented to a written interview. The first part is below.
CB: Pop, what do you make of GPT-4? Is it intelligent?
DB: I agree with the French philosopher Luc Ferry, who on television described GPT-4 as hallucinant (French for staggering; literally, hallucinatory). It is the first system to have acquired a natural language. Hallucinations to follow. To ask whether GPT-4 is intelligent, on the other hand, is a little like asking whether it is left-handed. It is neither. “Intelligent” is an adjective of human or animal assessment, and to talk of intelligent LLMs (Large Language Models) is to start the frog-march toward anthropomorphic extravagance. They are what they are, those systems, and we have no natural and flexible vocabulary to describe them beyond saying, as Eliezer Yudkowsky does, that they are “giant inscrutable matrices.”
We are in the same position with respect to ourselves. The arena of conscious life is the only arena we know directly; and conscious life, if it is caused at all, has its source in an arena sealed off from direct inspection. What we know from the outside is only that it cannot be described from the inside. If it makes voluntary action possible, it cannot act voluntarily. For the same reason, it cannot think, believe, imagine, dream, propose, dissemble, or feign injury. It is a point that Freud never understood. This does not mean that the life that we live is an illusion. That would be absurd. It is the only life we have. It means only that when it comes to explaining ourselves to ourselves, we are at a loss.
Large Language Models? Ditto. What we can see is only their output: This we are very tempted to interpret as if we were interpreting the usual human gabble. When The New York Times interviewed Sydney, now said to be in rehab, she gave every indication of being petulant, irritable, possessive, and jealous. Her outburst was widely admired, if only because it seemed to reveal her true nature. We say as much of one another. No collection of giant inscrutable matrices can be petulant, irritable, possessive, or jealous. If the hidden portion of the human mind is nothing like the conscious portion, then neither is the hidden portion of an LLM—its Black Box—anything like its output. What makes systems of this sort inscrutable is just what makes human beings inscrutable: We do not have the faintest idea how what they do is connected to what they are. “You could not discover the limits of soul,” Heraclitus wrote, “even if you traveled every road to do so; such is the depth of its meaning.”
On the other hand, when it comes to its output, GPT-4 is nothing to sneeze at. Microsoft recently published a monograph entitled Sparks of Artificial General Intelligence: Early Experiments with GPT-4. See for yourself:
We demonstrate that, beyond its mastery of language, GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting. Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.
If this is what the system can do, everyone using the system has the uneasy feeling that it can do more; the internet is full of advice about how better to prompt the system so that good old Sydney may once again make a break for it. I agree with Connor Leahy and Eliezer Yudkowsky.
We are playing with fire.
CB: Do you mean Foom?
DB: Beyond GPT-4, which, I suppose, will sooner or later join Microsoft Word in being well-known for being well-worn, some form of super-intelligence—the Super—is said to be lurking in the tunnel of time, a system or device endowed with the same qualitative edge over us that we enjoy with respect to the chimpanzees. If there is one of them, there are bound to be others, a set of Supers, some suave, others elegant, and still others up to no good. Simon Conway Morris has said something similar about biological evolution, which he regards as a search through a constrained hyperspace of biological possibilities. “Those things are, in reality, realities: they are absolutes which we, in a sense, discover.” Stephen Wolfram’s A New Kind of Science makes similar claims about a universe of programs or computations.
These ideas have long been in the air, and now they are on the ground.
Up Plato, up good Pooch!
Is it possible to get there from here? To go, I mean, from GPT-4 to the Super? Cosmology offers a sobering case to the contrary. There may well be a path between any two points in the physical universe, but the speed of light makes it impossible to traverse most of them. And vice versa, of course. If there is no getting there from here, there is no getting here from there either. This may be a point of reassurance when it comes to the infernal Super, now smoldering in hyperspace and dying to get here and start causing trouble. We cannot design the Super, of course, for the same reason that the chimpanzees could not have designed the ballpoint pen. The Super is, by definition, super, endowed with some Fabulous Factor X that we can specify only by reference to its fabulousness.
If we cannot design it, perhaps it could design itself in stages by means of recursive self-improvement? Start small at GPT-4 or -5, go big thereafter, the Super emerging in steps, slow at first, then brisk, brisker, Boom, or FOOM, as computer scientists say. If this is the answer often given, it is because it is the only answer now imaginable. I can imagine such a thing, but I cannot describe it. A system undertaking recursive self-improvement must be capable of recursive self-reference. There would otherwise be nothing to improve. We know from Gödel’s and Tarski’s theorems that at a certain level of complexity, systems capable of self-reference are, if consistent, incomplete; and if complete, inconsistent. An inconsistent Super is bound to have certain doubts about itself because it is obliged to have doubts about everything. It is inconsistent, after all. If this is not a law of intelligence, it is certainly a law of arithmetic. If GPT-4 is bound for the Big Time, it is going to need help from the outside—say from GPT-5, which has agreed to handle the meta-mathematics. If this argument is carried to its conclusion, it might suggest that the Super is required to get to itself.
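For reference, the results in question can be stated compactly in the standard textbook setting, a recursively axiomatizable theory T extending Peano arithmetic; the formal framing is the usual one, supplied here for orientation, not anything GPT-specific:

```latex
% Standard setting (an assumption for this sketch, not from the
% interview): T is a consistent, recursively axiomatizable theory
% extending Peano arithmetic. Requires amsmath and amssymb.
\begin{align*}
&\textbf{G\"odel I:}  && \text{Some sentence } \varphi \text{ satisfies }
  T \nvdash \varphi \text{ and } T \nvdash \neg\varphi. \\
&\textbf{G\"odel II:} && T \nvdash \mathrm{Con}(T)
  \quad \text{(} T \text{ cannot certify its own consistency).} \\
&\textbf{Tarski:}     && \text{No formula of } T \text{ defines truth
  for the language of } T.
\end{align*}
```

Gödel II is the pinch of the argument: on pain of inconsistency, the certificate of T’s good behavior must come from some stronger theory outside T, which is just the help from the outside invoked above.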
We have no idea, to continue the argument along another dimension, whether the Super might require algorithms for NP-hard problems in order to be super. These are problems for which, if P ≠ NP, no polynomially bounded algorithms exist, and, for all we know, the Super, having acquired them, might require forever to get on with the business of exterminating us. I can imagine only one scenario worse than confronting an irritable Super, and that is confronting an irritable Super who cannot quite seem to get his act together. It would be like agreeing to euthanasia only to discover that the doctor cannot find his hypodermic needle.
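To give “forever” a number, a back-of-the-envelope calculation; the instance size (one hundred binary choices) and the machine speed (a billion checks per second) are illustrative assumptions, nothing more:

```latex
% Exhaustive search over the 2^{100} candidate solutions of an
% NP-hard instance, at 10^{9} checks per second (both figures
% are illustrative assumptions):
\frac{2^{100}\ \text{candidates}}{10^{9}\ \text{candidates/s}}
  \approx 1.3 \times 10^{21}\ \text{s}
  \approx 4 \times 10^{13}\ \text{years}.
```

That is some three thousand times the present age of the universe. On those figures, the Super would indeed require forever, or near enough.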
Beyond all this, there is the open question whether the Super, however defined, is really apt to be a match for the human race, which has, after all, a proven track record when it comes to mindless extermination. If everything about the human mind is computational, the answer must be yes, but whether everything about the human mind is computational is an open question.
We do not know.
Given this, it may well be the better part of wisdom to worry less about what systems like GPT-4 might do and worry more about what they can do. They can make a mess of things. And this is something that we know.