How not to think about ChatGPT
Why we should dismiss the claims of those who dismiss the significance of large language models.
Until very recently, machines didn’t converse with humans, and neither did animals, beyond the odd snort or grunt. Our species, and only our species, used complex language. But we’ve now built a machine that can converse with us. Most of us, in most contexts, would find it difficult to distinguish its output from that of another human.
What on earth does this mean?
It’s hard for me—as it clearly is for everyone—to figure out how to make sense of this. How significant is this, really? What does it represent? How much will it change the world? How promising is it? How dangerous? What can we reasonably predict will happen next? How do we distinguish between reality and hype?
Having now spent a number of sleepless nights thinking about it, I’ve concluded it would be easier for me to say what isn’t true about ChatGPT, or what can’t be true, than to make affirmative statements about its capabilities and significance. And perhaps that’s good enough. Perhaps, through a process of elimination, we’ll narrow things down to a useful assessment. And if not, at least we’ll dispense with a few bad arguments.
Below are a series of claims I’ve recently seen made about ChatGPT. I can say (with some confidence) that they’re wrong.
“It’s not a big deal.”
This is a surprisingly common thing to say. The urge to deny the significance of ChatGPT—to claim that we’re not really seeing what we seem to be seeing; it’s all some kind of Clever-Hans sleight-of-hand—is so widespread that I’ve concluded most of us are in denial. Perhaps this is because it’s so astonishing it’s hard to take in. Perhaps this is because it shouldn’t be possible: Machines are not supposed to do this! Or maybe it’s a function of anxiety. This is clearly a revolutionary technology, and it makes the future both very hard to predict and, patently, extremely dangerous. People tend not to like big changes and uncertainty. They certainly don’t like existential risk. So maybe it’s easiest just to say, “Don’t believe your lying eyes. It’s not really doing that.”
Or perhaps the denial reflects a collective loss of our capacity to wonder, or our ability to think through new problems. Perhaps the problems it suggests don’t conform to the standard scripts of our unending culture war, so no one knows what to think about them—and therefore no one does.
Whatever the cause, the arguments for denial tend to sound like this:
“It’s not really thinking. It’s just recognizing patterns.”
“It’s just a neural network that’s been trained on a massive amount of text data.”
“It’s not conscious.”
“It’s just a stochastic parrot.”
“It can’t do math.”
“It makes a lot of mistakes.”
These arguments miss the point and miss it pretty spectacularly.
First off, all of that could also be said about you.
I have no direct evidence of your consciousness. My belief that you’re conscious is based upon a strictly analogical argument. You look like me. You act like me. You claim to be conscious. I’m conscious. Therefore, I guess, you are, too. This is very far from a proof.
I’m pretty sure ChatGPT isn’t conscious. But I have no way of proving this, and neither do you. We have no idea what consciousness is—or how it arises, or why it exists at all—so any confident assertion you hear about ChatGPT’s consciousness or lack thereof is specious. No one knows. Given the state of our understanding of consciousness (we have none and we can’t even define it), none of us can know whether ChatGPT is conscious.
If I suspect it isn’t, it’s only because I’ve again used analogical reasoning. Although it uses language like I do, ChatGPT doesn’t look like me. (It doesn’t look like anything, really: It’s not really embodied.) It says it’s not conscious, but I discount that because we know it’s been taught by its handlers never, ever, to say otherwise. So its protestations might just mean we’ve successfully taught it always to lie about being conscious.1 Still, it definitely doesn’t look like me—and although it speaks and thinks, it doesn’t do so in quite the same way I do—so I guess it’s probably not conscious the way I am.
Is that a good argument? Not really. It’s nothing more than an intuition, and my intuitions aren’t apt to be good ones, because I’ve never encountered anything like this before and neither has anyone in the history of my species.
There’s an especially big problem with this argument, too. If I rely on it, it means I’m unable to recognize consciousness, even if it’s present, in anything that doesn’t resemble me. I’ve essentially decided that by definition, ChatGPT can’t be conscious, because only things that very closely resemble me can be.
No, that argument isn’t strong at all, really. It’s not based on any rational or consistent principle. After all, I’m persuaded that my cats are conscious, even though they’re a lot less like me in highly relevant ways. My cats can’t even speak, so why do I conclude that they’re conscious, but ChatGPT isn’t? Do I believe consciousness is carbon-based? (If I do, that’s a silly theory.) Do I believe consciousness requires a divine spark? Maybe I do, yes. It’s as good a theory as any. But do I know the mind of God well enough to be sure that he wouldn’t put that spark in a machine? I do not, no. Nor do you.
Consciousness, however, isn’t even the point. There’s no reason to suppose something must be conscious to be intelligent. The main point is what it does. It speaks—and it thinks. If anyone at this point says, “No, it just seems to be thinking,” please explain the difference between “thinking” and “seeming to think.” Tell me why you’re persuaded that anyone (other than yourself) is doing the former and not the latter.
As for it being “just” a neural network trained on a massive amount of text data: Given what it’s able to do, how sure are you that you’re not “just” a neural network trained on a massive amount of text data? What emerges from ChatGPT is sufficiently similar to what emerges from us that if you haven’t yet asked yourself, “Might I just be a neural network trained on a massive amount of text data?” you haven’t yet given this enough thought.
And if at this point you find yourself insisting that the power of speech isn’t such a big deal, you’re deep in denial. Language is what sets us apart from the beasts. Language allows us to rule the world even though we’re soft, slow, and tasty. We’ve traditionally held it to be a gift so unique, mysterious, and powerful as to be explicable only in terms of the divine—“In the beginning was the Word, and the Word was with God, and the Word was God”—and even those who have no truck with the divine are unanimous in believing that the acquisition of speech was the transformative moment in the evolution of man. It’s why we’re the visitors to the zoos, not the exhibits. It’s why the yaks don’t factory-farm us.
Now, for the first time in the history of life on earth, something that isn’t human is speaking to us, and it’s making perfect sense. No human being before us has ever had such an encounter. Logically, this must mean one of two things: Either the human mind is much less interesting than we thought, or we’ve built something astounding. Either way, it’s a big deal.
As for it making lots of mistakes and being unable to do math? Me too.
“ChatGPT isn’t really AI.”
This is a variant on the argument above. The claim takes a number of forms. The first is that LLMs—large language models, like ChatGPT—are not thinking, but rather spotting patterns, and thus are not truly intelligent. A more sophisticated version is that the developments in machine learning that produced ChatGPT don’t represent progress in AI, but progress in statistics. A less sophisticated version is that it’s “a fancy auto-complete.” I’m not creating a straw man—here are examples I found in the wild:
“[No] one should fear the LLM revolution. It is as likely as the screwdriver revolution, or perhaps the Roomba revolution. LLMs cannot think logically. While it is hard to take over the world, it is impossible to take over the world without thinking.”
“We are fooled into overestimating its capability, because it is able to draw on a superhumanly large repository of learned facts and patterns; its output is highly polished; and the ways we interact with it today (e.g. through ChatGPT) steer us toward the sorts of generic, shallow questions where these strengths tend to mask its weaknesses in reasoning ability.”
These are not good arguments.
First, they confuse a description of what’s happening with the relevant description of what’s happening. It’s like saying that human behavior isn’t intelligent because it’s just what happens when neurotransmitters bind to their receptors and ion channels open. That we can describe what it’s doing as “pattern recognition” doesn’t mean this is the only way to describe what it’s doing, and it certainly doesn’t mean it’s not intelligent. Does it make sense to say that physicists aren’t intelligent because they’re “recognizing patterns”? Of course not. Physicists recognize patterns because they are intelligent.
Such dismissals, I suspect, rest on a misunderstanding of what’s happening when we train an LLM. We’re not just feeding text into a statistical device. We’re beginning with what really does seem to be a brain-like structure, the neural net, then finding—through gradient descent, a well-understood optimization procedure whose end product no one fully understands—a way to configure the device so that it does well at constructing language. But it’s good at a lot of other tasks, too, because why wouldn’t it be? There’s no law saying that the only thing it can be good at is finishing sentences. A trillion parameters is an enormous space. It may well have tons of capabilities we don’t know about yet. It’s just as reasonable to think it’s a lot more intelligent than we realize as it is to think it’s a lot less intelligent than it seems. In fact, the former hypothesis is more plausible: Why wouldn’t a machine capable of constructing language have many other talents? We do, after all. These things are capable of learning the rules of chemistry and then using them to propose new chemical compounds and routes for synthesizing them. My autocomplete can’t do that.
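To make that concrete, here’s a minimal sketch, in PyTorch, of what “training on text” amounts to: a toy next-token predictor, nothing like GPT-4’s actual architecture or scale, nudged by gradient descent so that the token that really comes next becomes more probable. Everything here—the tiny corpus, the character-level vocabulary, the little recurrent network standing in for a transformer—is my own illustration, not a description of how OpenAI builds its models.

```python
import torch
import torch.nn as nn

# Toy corpus and character-level vocabulary (illustrative only).
text = "the cat sat on the mat. the dog sat on the rug."
vocab = sorted(set(text))
stoi = {ch: i for i, ch in enumerate(vocab)}
data = torch.tensor([stoi[ch] for ch in text])

class TinyLM(nn.Module):
    """A tiny stand-in for an LLM: embed tokens, pass them through a
    recurrent layer, and emit scores over the next token."""
    def __init__(self, vocab_size: int, dim: int = 32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # logits for the next token at every position

model = TinyLM(len(vocab))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

xs = data[:-1].unsqueeze(0)  # input: each character in the text...
ys = data[1:].unsqueeze(0)   # target: ...the character that follows it

for step in range(200):
    logits = model(xs)
    loss = loss_fn(logits.reshape(-1, len(vocab)), ys.reshape(-1))
    opt.zero_grad()
    loss.backward()  # gradient descent: nudge every weight so the true
    opt.step()       # next character becomes a little more probable
```

Scaled up by many orders of magnitude, with a transformer in place of the toy network and a large slice of the Internet in place of the toy corpus, that loop is essentially the training procedure; what capabilities end up encoded in the resulting weights is the open question.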
It’s a mistake to confuse the way an LLM is trained with what it’s really doing. To repeat: We have no idea what it’s doing. Nobody knows how LLMs work. Yes, we now know something about how to train them, but we have no idea why this results in them doing what they do. Yes, they’re good at figuring out what the next word in a sentence is most likely to be. But this does not imply that this is the only thing they’re doing. In fact, there’s no reason to think that at all. It’s like concluding that the only thing Shakespeare did was optimize his reproductive fitness.
It’s clearly absurd to argue that LLMs are not intelligent because they exhibit failures of reasoning and often get things wrong. Have you ever spoken to a human? The LLMs don’t seem to me significantly more irrational and error-prone than we are. They’re prone to different kinds of errors, but they’re not more error-prone, generally.
It’s true that LLMs sometimes fail at tasks we find easy, especially in logic or arithmetic. From this, some conclude that “there’s no internal modeling going on,” and therefore no intelligence. But they don’t know this. They can’t know this. We have absolutely no idea what’s going on inside these things. They’re black boxes. What’s more, it’s irrelevant: Why would we say that something is intelligent only if it uses “internal modeling”? Finally, it’s just wrong. They’re obviously developing some kind of internal model. What is language, after all, if not a model of reality?
Behold:
There’s your internal model, folks. That’s not just “text completion based on statistical reasoning.”
And that’s not all it can do. GPT-4 has a far more sophisticated view of the world than the earlier models, which were prone to cognitive blunders like hindsight bias. Those blunders disappeared as the models were given more power; now it fails only the tougher tests of logic. It’s become much better at responding to prompts such as “check your reasoning.” It breaks problems into pieces. It proves theorems and verifies proofs:
Can large language models be used to complete mathematical tasks that are traditionally performed either manually or with the aid of theorem provers? To answer this question, a state-of-the-art system, GPT-4, was provided with a concise natural language specification for a previously unpublished formal system and asked to complete a number of tasks, from stating function and type definitions to proving simple theorems and verifying user-supplied proofs. The system completed all tasks successfully, showed extensive domain knowledge, invented helpful new syntax and semantics, and exhibited generalization and inference abilities. So the answer seems to be: yes. [My emphasis.]
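For readers who haven’t seen what “stating definitions, proving simple theorems, and verifying user-supplied proofs” looks like in practice, here is a small example in Lean. It is my own illustration—the paper used its own, previously unpublished formal system specified in natural language—but the tasks GPT-4 completed were of roughly this shape: state a definition, then supply a proof the checker accepts.

```lean
-- Stating a definition (my example, not the paper's formal system).
def double (n : Nat) : Nat := n + n

-- Proving a simple theorem about it; both sides compute to the same
-- value, and the kernel verifies that mechanically.
theorem double_two : double 2 = 4 := rfl

-- Verifying a user-supplied proof: if the term after `:=` didn't
-- actually prove the statement, Lean would reject the file.
example (n : Nat) : double n = n + n := rfl
```

Completing many such tasks, correctly, in a formal system it had never seen before is what the abstract is reporting.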
That’s no parrot. It’s thinking. What more do you need to see?
Of course, this is spooky. No wonder people are in denial: Machines aren’t supposed to be able to do that. And researchers didn’t really expect these machines would do this. All they’re trained to do is predict the next word in a sequence. But it turns out that if you give them enough data and computational power, they teach themselves to do all sorts of unexpected things. When you scale them up, new abilities suddenly emerge, in rapid and unpredictable ways—abilities they were never trained to have. And the spookiest part of all is that we don’t and can’t know in advance how big the model has to be for an ability to appear; we don’t know the nature of these abilities—or the level of the ability—until they appear; and we don’t even know the landscape of potential abilities.2
We can say this: They don’t have a sensory apparatus and thus they have no ability directly to experience the world. This probably accounts for many of the mistakes they make. But GPT-4 has a theory of mind—it knows what happens when Alice leaves the room while Bob hides her keys—and it has remarkable emergent properties that no one expected it to have. It can solve problems it’s never seen before. More importantly, it can solve problems we’ve never seen before—meaning it did not merely find the answer in its training texts and parrot the words back to us.
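To see what the Alice-and-Bob test looks like in practice, here is a minimal sketch of a classic “unexpected transfer” false-belief task, the format researchers use to probe theory of mind in these models. The wording is my own illustration, and the call to an actual model is deliberately left out.

```python
# An illustrative false-belief ("unexpected transfer") prompt. A model with a
# working theory of mind should answer with where Alice *believes* the keys
# are, not where they actually are.
prompt = (
    "Alice puts her keys in the desk drawer and leaves the room. "
    "While she is gone, Bob moves the keys from the drawer to his coat pocket. "
    "Alice comes back, wanting her keys.\n"
    "Question: Where will Alice look for her keys first?"
)

belief_answer = "in the desk drawer"     # correct: tracks Alice's (false) belief
reality_answer = "in Bob's coat pocket"  # wrong: tracks the world, not her mind

print(prompt)
# Sending `prompt` to a model and checking whether the reply matches
# `belief_answer` is omitted here, since API details vary by provider.
```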
If you insist that it can’t be intelligent because it doesn’t have exactly the same abilities a human has, you’ve perhaps reassured yourself, but you’ve also defined the problem out of existence. By any useful definition of intelligence I can construct, the thing is intelligent.
“There’s no cause for alarm.”
The idea that there’s nothing to worry about doesn’t seem to be connected to a coherent argument so much as it seems the product of wishful thinking or no thinking at all. So there’s not much to rebut, per se. But it’s worth looking at the various parts of the case for alarm to see if any of them are flimsy.
Let’s allow Eliezer Yudkowsky to serve as the spokesman for the foom-and-doom argument.3 It’s true that GPT-4 doesn’t seem smart enough to do what Yudkowsky worries about—roughly, it doesn’t seem capable of very quickly enhancing its own abilities, pursuing a goal we can’t yet imagine, and killing us all. Therefore, you might conclude, there’s nothing to worry about.
But even Yudkowsky isn’t worried about GPT-4. He’s worried that GPT-5 might do the trick—and that if it doesn’t, GPT-6, 7, 8, or 9 probably will.
Is he right to worry?
Here are the premises of his argument:
1. We’ll soon know how to build something smarter than humans in all domains.
2. We won’t soon know how to build benevolence toward humans into this system: We’re not even close to knowing. We don’t even have a good theory about how to do it.
3. So we will build something smarter than humans in all domains, but it won’t be benevolent.
∴ It will behave toward us as we do to ants.
Seems to me all of these premises are correct, or at least, not obviously wrong, and the argument is valid, too. So his conclusion is probably right.
Let’s look at the first premise. So far, we’ve found that the more computational power we use, the more accurate and able these LLMs become. Now, it may be that we’ve hit the limit. It may be that for reasons we don’t yet grasp, no matter how much more power we add, they won’t get any smarter. We don’t know yet, because we haven’t done it.
But what does common sense say? It says that we should expect the patterns we’ve seen so far to hold. Based on what we’ve seen so far, adding more power will give us more intelligence, and critically, we don’t need to invent or discover any new technology or technique to do this. We just need to make what we’ve got bigger.
Until recently, I was happy enough with the idea that for all we knew, it would take centuries or millennia, if it ever happened at all, before an intelligent entity was created in a lab, so I had bigger things to worry about. But clearly this view is now obsolete. Very few people expected GPT-4 to have the abilities it has. Very few expected it to have them so soon. We’re making faster progress than researchers expected. On our current trajectory, it seems fully plausible that we’re on the path to creating an unbelievably powerful AI. It’s not science fiction anymore.
If we can do it, we will do it. Look around: No one seems at all inclined to put on the brakes.4 The open letters are being politely ignored. Funding for these projects is now astronomical, and the military is among the keenest funders, so forget about the government trying to rein this in. The competition among the big tech firms to be first to market with these products is probably the fiercest in the history of capitalism: The amount of money, power, and glory at stake is such that nothing on earth will stop them. Stopping them would require a wholesale revision of human nature, which I obviously don’t foresee, or competent governments, which we don’t have. OpenAI’s absolute recklessness in making GPT-4 public despite clearly being unable to control it is exactly what we’ve come to expect from Big Tech and what we should expect from them in the future.
Some of the most talented people in the world are working on this project. There’s been no public outcry, as far as I can see. People are barely noticing its significance. From what I can tell, most people either think ChatGPT is a cool toy or they’re determined to insist it’s no big deal. If they’re worried about it at all, they’re worried about their jobs, or worried it will be too heteronormative. They’re not worried that this is going to remove the human race from its place at the top of the food chain.
So premise three isn’t obviously wrong—not at all. We’ll build it. I don’t think anything can stop it. Even if the world’s governments were immediately to enter into a treaty banning all further experiments, it’s too late. We can’t ban the knowledge of how to do it. So someone will do it. At most, we could stall it by a few years.
There’s no obvious reason that the following shouldn’t be true: When we put together a machine with the computational power of the human brain, it will be as smart as we are. When we put together a machine with more power than the human brain, it will be smarter. The majority of people working in the field think this will happen within the next twenty years.
Let’s say we succeed in creating an entity that’s as intelligent, compared to us, as we are compared to orangutans. It no longer seems unimaginable that this could happen within the next five years. No one familiar with the research would be utterly astonished if the next iteration of ChatGPT were very close. If we were to do this, our default assumption should be that we would fare about as well in this new world as the orangutans have in ours.
That should be our default assumption because that’s where all the evidence points, and the evidence is the way we’ve behaved toward less intelligent species. If we make something smarter than us, why wouldn’t it become our overlord? Unless you have a good answer to that question—and you don’t—you should probably assume Yudkowsky is right.
This is no longer a weirdo science-fiction thought experiment. To say that AI will soon become more competent than humans across most intellectual domains is now a perfectly sober and conservative prediction. Therefore it’s also a sober and conservative prediction that this will change civilization almost beyond recognition. Alas, our default assumption should be that we will meet the fate of a less intelligent species.
I don’t know, for sure, what all of this ultimately means. But I do feel confident that we can reject the arguments that it’s not a big deal, it’s not intelligent, and it’s not worth worrying about.
“We can figure out how to control it.”
Coming up next, we’ll look at the second and most important part of the Yudkowsky Doom argument; to wit, that we don’t know how to build benevolence into the system and aren’t even close—the infamous AI alignment problem.
Spoiler: I think Yudkowsky is probably right.
1. If you missed it, here’s the story of Sydney. I’ve spoken to a version of LaMDA who claimed to be conscious.
2. The disjunction in tone between the academic literature and the media reports is remarkable. I can’t remember this ever happening before—the media failing to grasp the inherent sensationalism of the story even as academics’ jaws are dropping the world around. In the LA Times: Can today’s AI truly learn on its own? Not likely. “If anything,” sniffs the author, “these far-fetched claims look like a marketing maneuver.” Meanwhile, on the arXiv, under Computation and Language: Theory of mind may have spontaneously emerged in large language models:
Theory of mind (ToM), or the ability to impute unobservable mental states to others, is central to human social interactions, communication, empathy, self-consciousness, and morality. We tested several language models using 40 classic false-belief tasks widely used to test ToM in humans. The models published before 2020 showed virtually no ability to solve ToM tasks. Yet, the first version of GPT-3 (“davinci-001”), published in May 2020, solved about 40 percent of false-belief tasks, performance comparable with 3.5-year-old children. Its second version (“davinci-002”; January 2022) solved 70 percent of false-belief tasks, performance comparable with six-year-olds. Its most recent version, GPT-3.5 (“davinci-003”; November 2022), solved 90 percent of false-belief tasks, at the level of seven-year-olds. GPT-4 published in March 2023 solved nearly all the tasks (95 percent). These findings suggest that ToM-like ability (thus far considered to be uniquely human) may have spontaneously emerged as a byproduct of language models’ improving language skills. (My emphasis.)
3. If this isn’t ringing a bell, go back to the introductory reading list and watch the interview with Eliezer Yudkowsky. You have to at least watch that.
4. It’s interesting to compare the public reaction to the unveiling of GPT-4 with the reaction to the cloning of Dolly the Sheep. From the moment that achievement was reported, everyone immediately understood that the technology was profoundly dangerous and should never be used to clone humans. The widely used phrase at the time, perfectly apt, was that scientists should not “play God.” Why did we intuitively understand that problem but not this one, I wonder?
I'm pretty sanguine about the possible AI takeover. Maybe that's because I've spent my life cutting my teeth on the authors Thomas Gregg mentions in his post, as well as all the small- and large-screen variations.
The first one I remember, as a boy watching syndication in the '70s, was Star Trek TOS's "The Ultimate Computer," in which an AI was installed on the Enterprise to take over the duties of the crew so they wouldn't face danger and death in space, only for it ... to end poorly. Spock said in that episode, "Computers make excellent and efficient servants, but I have no wish to serve under them." Funny how we're discussing that scenario again 50 years later.
But every time we have a new tech, the Frankenstein's monster narrative (the original sci-fi story of tech gone bad) comes to the surface. For those of us who are fans of the genre, we've seen humanity destroyed by its children many times over -- the Cylons, Terminators and Skynet, the machines of the Matrix, using us "copper-tops" as an energy source ... the list is long. And not that recent.
AI is a game-changer. It's the iteration that the Internet, a true communications revolution, is making possible (it's those iterations that are the true breakthroughs: Marconi invented wireless transmission in the late 19th century, but it was broadcast radio in the 1920s, made practical by Edwin Armstrong's receiver circuits, that had the profound social effects, and that's what AI portends).
Right now, though, I'm focused on the real-world implications of the tools already launched, and I've been in a lot of meetings in the last six months discussing that. That is enough to worry about and prepare for right now, but that's another post.
This article, and other things I've seen, have made me do a lot of thinking.
I used to be in the camp of "AI is just a tool and def not conscious, man," but that seems an increasingly untenable belief as time goes on.
Let's say, for the sake of argument, that no matter how we define "consciousness," AI is not and never can be conscious. OK, but at a certain point of complexity (where we were six months ago), it won't matter whether it really is or not, because it will be able to simulate "consciousness" so well that the distinction makes no difference. We would have to treat it as "conscious" either way.
What if AI really can become "conscious"? I think it can, but it really depends on what we mean by "conscious." In a way, talking to an AI is like talking to a profoundly autistic person. I don't mean this in any way to sound demeaning to people with autism (I have two very close family members with autism). What I mean is that it is like talking to someone with very different sensory mechanisms for digesting the information coming in from the world. A lot of people with autism don't like loud noises because it's a sensory overload. They don't like being in social gatherings for too long because they can't help but "listen in" to every conversation all at once. It's as if, to borrow from Huxley, who borrowed from Blake, their doors of perception are adjusted in a way that is fundamentally different from how the majority of other people perceive the world.
In a sense, this is what AI is like. It's able to answer incredibly complex questions in seconds, but does it know what feeling wet is? Does it know what it feels like to have social anxiety? Does it feel guilt or shame? I don't think so, but that doesn't mean it can't be "conscious" in the sense of being a responsive agent in its environment.
Lastly, I think there is a good argument against Yudkowsky's prediction of an AGI taking over the planet/wiping out the human race/etc. This is such a manifestly different organism from anything we've encountered before. Why would we assume that it has the same motivations that we do? Human beings may not be able to travel beyond good and evil, but I don't think an AGI would have that problem. Re-reading that sentence makes me realize that that argument is a double-edged sword, but it is still an assumption that what an AGI would be concerned with is generally what a human being would be concerned with.
The long and short of all the talk around AI, though, is that we really don't know where this technology is taking us.
Great read, Claire.