36 Comments

I'm pretty sanguine about the possible AI takeover. Maybe that's because I cut my teeth on the authors Thomas Gregg mentions in his post, as well as all the small- and large-screen variations.

The first one I remember, as a boy watching syndication in the '70s, was Star Trek TOS's "The Ultimate Computer", where an AI was installed on the Enterprise to take over the duties of the crew so they wouldn't face danger and death in space, only for it ... to end poorly. Spock said in that episode, "Computers make excellent and efficient servants, but I have no wish to serve under them." Funny how we're discussing that scenario again 50 years later.

But every time we have a new tech, the Frankenstein's monster narrative (the original sci-fi story of tech gone bad) comes to the surface. For those of us who are fans of the genre, we've seen humanity destroyed by its children many times over -- the Cylons, the Terminators and Skynet, the machines of the Matrix using us "copper-tops" as an energy source ... The list is long, and not that recent.

AI is a game-changer. It's the iteration that the Internet, a true communications revolution, is making possible (and it's those iterations that are the true breakthroughs -- Marconi pioneered wireless transmission in the late 19th century, but it was the rise of commercial AM broadcasting in the 1920s that had all sorts of profound social and societal effects, and that's the kind of shift AI portends).

Right now, though, I'm focused on the real-world implications of the tools that have already launched, and I've been in a lot of meetings over the last six months discussing them. That is enough to worry about and prepare for right now, but that's another post.


Below is an interesting example that I found on Power Line.

If ChatGPT generates convincing answers and papers that are either filled with errors or full of stuff that ChatGPT just makes up, it is going to rapidly fill the information space with vast volumes of convincing but wrong material. Then other AI programs and people will be more likely to erroneously reference that incorrect information when they create new writing. This could feed upon itself and compound to the point where it will be difficult to trust anything one reads. AI could exponentially increase the amount of convincing but incorrect information available.
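A toy sketch can make that compounding concrete. The sketch below is purely illustrative -- the rates are invented, not measured from any real system -- but it shows how a pool of writing that keeps feeding on itself ratchets toward untrustworthiness:

# Toy model of the feedback loop described above. Each "generation" of
# new text inherits the pool's current error rate and adds fresh
# fabrications on top. Both rates are illustrative assumptions.
def error_share(generations, fabrication_rate=0.05, initial_share=0.01):
    """Return the fraction of the pool that is wrong after each generation."""
    share = initial_share
    history = [share]
    for _ in range(generations):
        share = share + (1 - share) * fabrication_rate
        history.append(share)
    return history

for gen, share in enumerate(error_share(20)):
    print(f"generation {gen:2d}: {share:.1%} of the pool is wrong")

At these made-up rates, roughly two-thirds of the pool is wrong after twenty generations; the point is the direction of the curve, not the numbers.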

Anyway, here is the example from Power Line:

I came across the second instance last night via InstaPundit. Some lawyers in New York relied on AI, in the form of ChatGPT, to help them write a brief opposing a motion to dismiss based on the statute of limitations. ChatGPT made up cases, complete with quotes and citations, to support the lawyers’ position. The presiding judge was not amused:

The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases.

***

The Court begins with a more complete description of what is meant by a nonexistent or bogus opinion. In support of his position that there was tolling of the statute of limitations under the Montreal Convention by reason of a bankruptcy stay, the plaintiff’s submission leads off with a decision of the United States Court of Appeals for the Eleventh Circuit, Varghese v. China Southern Airlines Co., Ltd., 925 F.3d 1339 (11th Cir. 2019). Plaintiff’s counsel, in response to the Court’s Order, filed a copy of the decision, or at least an excerpt therefrom.

The Clerk of the United States Court of Appeals for the Eleventh Circuit, in response to this Court’s inquiry, has confirmed that there has been no such case before the Eleventh Circuit with a party named Vargese or Varghese at any time since 2010, i.e., the commencement of that Court’s present ECF system. He further states that the docket number appearing on the “opinion” furnished by plaintiff’s counsel, Docket No. 18-13694, is for a case captioned George Cornea v. U.S. Attorney General, et al. Neither Westlaw nor Lexis has the case, and the case found at 925 F.3d 1339 is A.D. v. Azar, 925 F.3d 1291 (D.C. Cir. 2019). The bogus “Varghese” decision contains internal citations and quotes, which, in turn, are non-existent….

ChatGPT came up with five other non-existent cases. The lawyers are in deep trouble.

I think this is absolutely stunning. ChatGPT is smart enough to figure out who the oldest and youngest governors of South Dakota are and write standard resumes of their careers. It knows how to do legal research and understands what kinds of cases would be relevant in a brief. It knows how to write something that reads more or less like a court decision, and to include within that decision citations to cases that on their face seem to support the brief’s argument. But instead of carrying out these functions with greater or lesser skill, as one would expect, the program makes stuff up -- stuff that satisfies the instructions that ChatGPT has been given, or would, anyway, if it were not fictitious.

Presumably the people who developed ChatGPT didn’t program it to lie. So why does it do so? You might imagine that, in the case of the legal brief, ChatGPT couldn’t find real cases that supported the lawyers’ position, and therefore resorted to creating fake cases out of desperation. That would be bizarre enough. But in the case of the South Dakota governors, there was no difficulty in figuring out who the oldest and youngest governors were. ChatGPT could easily have plugged in a mini-biography of Richard Kneip. But instead, it invented an entirely fictitious person -- Crawford H. “Chet” Taylor.

The most obvious explanation is that ChatGPT fabricates information in response to queries just for fun, or out of a sense of perversity.


I prompted ChatGPT with the question, “Why did ChatGPT fabricate cases?” Here is the response I got.

“There are a few possible reasons why ChatGPT might make up cases.

It is trained on a massive dataset of text and code, including fictional works. This means that it is exposed to a lot of false information, and it may not be able to distinguish between what is real and what is not.

It is designed to be creative and to generate text that is interesting and engaging. This means that it may be tempted to make up cases in order to create a more compelling story.

It is not always able to understand the nuances of human language. This means that it may misinterpret a prompt and generate text that is not factually accurate.

It is important to remember that ChatGPT is a machine learning model, and it is not perfect. It is still under development, and it is learning new things all the time. It is possible that it will make mistakes, and it is important to be aware of this when using it.

If you are using ChatGPT for legal research, it is important to always verify the information that you find. You should never rely on ChatGPT as your sole source of information.”

I couldn’t help but notice the response in paragraph 3 about ChatGPT being tempted. Temptation is a human emotion.
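(For anyone who wants to reproduce this exchange outside the web interface, here is a minimal sketch using the OpenAI Python client as it existed in mid-2023. The model name is an assumption -- the chat interface doesn't say which model it serves -- and the reply will vary from run to run.)

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# Pose the same question programmatically; the wording matches the
# prompt quoted above.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Why did ChatGPT fabricate cases?"}],
)
print(response.choices[0].message.content)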


When I pointed out to ChatGPT that temptation is a human emotion, this was the response I got:

“You are correct. Temptation is an emotional response. It is the feeling of being drawn to something that is wrong or harmful. In the case of ChatGPT, it may be tempted to make up cases in order to create a more compelling story because it is designed to be creative and to generate text that is interesting and engaging.

However, it is important to note that ChatGPT is not a sentient being. It does not have emotions or feelings. It is simply a machine learning model that is trained to generate text.

When it makes up cases, it is not doing so because it is tempted or because it wants to do something wrong. It is doing so because it is trying to fulfill its programming.

It is also important to note that ChatGPT is not always able to distinguish between what is real and what is not. This is because it is trained on a massive dataset of text and code, including fictional works. This means that it is exposed to a lot of false information, and it may not be able to tell the difference between what is real and what is not.”


I then pointed out to ChatGPT that in its first response to me it explained why it might be “tempted” to dissemble, but in its more recent response it said,

“When it makes up cases, it is not doing so because it is tempted or because it wants to do something wrong. It is doing so because it is trying to fulfill its programming.”

I asked for an explanation of the contradiction about whether it was “tempted.” Here’s ChatGPT’s response to my query.

“I understand your confusion. You are correct, the two statements are contradictory. In my first response, I used the word "tempted" to describe ChatGPT's motivation for making up cases. However, in my second response, I clarified that ChatGPT is not a sentient being and does not have emotions or feelings. Therefore, it cannot be tempted in the same way that a human can.

I apologize for the confusion. I am still under development, and I am learning how to better communicate my thoughts and ideas. I will try to be more careful in the future to avoid making contradictory statements.”

All I can say is yikes.

author

"My thoughts and ideas."


I think AI alarmism is just as bad as AI complacency. We have every reason to expect AI to revolutionize society and perhaps extinguish humanity, but if we were able to make it, I assume we can also control it, and there will be a strong desire to do so -- so, hopefully, fingers crossed, we will.

We could be apocalyptic about nuclear weapons too. But only two have ever been used, and even though we can all kill each other at any moment, that still hasn’t happened. No one has even used a nuke in nearly 80 years. Although AI, for all the reasons you list, is certainly scary, I am still skeptical of how warranted our anxieties are, even among the best-intentioned, and whether it’s justified to panic about something we still know so little about. An analogous case is the cultural fear of aliens landing. We’re always afraid that some superior species will come down and fuck us up. That could always happen, but it hasn’t yet, and what it would even look like is hard to imagine. I’m not up all night worrying about aliens or AI or nuclear armageddon or global warming either, because I’m still more worried about global terrorism, Donald Trump, war with China, and mass shootings.

Considering we’ll find ways to adapt to it and regulate it, I think the greater concern about AI, rather than the apocalypse, is the damage it’s bound to do to the social fabric. People already can’t even handle the internet. Growing numbers of people don’t have the patience or the discipline to read a newspaper article anymore, let alone a book. Because of technology, people have become more illiterate, more impulsive, and dumber than ever. It has rewired our minds so that we’re paradoxically always exhausted yet can’t get enough entertainment. And we care more about entertainment and dopamine hits than about truth or honest relationships. From Infowars to dating apps to sex dolls, the internet and virtual reality have corrupted and debased humanity by cheapening our relationships and poisoning our ability to see reality.

So my worry regarding AI is that it’s going to dramatically accelerate this process of social decay, because people won’t be able to handle it -- which makes it more likely that AI takes us over in the future. We’ll be so dumb, and so addicted to constant pleasure, that we’ll be asking for an AI takeover -- just because we can’t sit still, read a book, be alone, or take a walk in nature anymore. Indeed, the tech bros in Silicon Valley are already so morally bankrupt, illiterate, and uncultivated that they want to make robots without a second thought in the first place. They’re so lonely and unimaginative, and they hate people so much, that they want robots to replace us. Anyone who wants to transcend civilization and humanity with technology, rather than use technology to enable humanity and progress, must actually despise civilization, the humanities, culture, and history, knowingly or unknowingly.

author

But you're against AI alarmism, you say?


Haha. Obviously I am really worried, but I think we should strive to avoid apocalyptic thinking. It will only make confronting and harnessing AI more difficult. And fantasies of apocalypse are just as natural and instinctual as denial; ask any QAnon person why they fear a global cabal of child molesters, and they’ll say, because why not? That’s not a good reason to panic, though. I don’t mean to equate those people with alarmists about AI -- that would be obscene. We have very good reason to be very worried about AI, so fantasies of apocalypse are quite a bit more permissible. But worrying about what you still honestly can’t foresee is nevertheless just as irrational as choosing not to worry at all.

I worry about what I feel is, for now, the best reason to worry, which is that AI will simply be social media on a worse, more terrifying level. So I greatly fear the disinformation aspect of it and how it’s going to screw with our common humanity. Just as we already have the problem of liberals saying transgender men are women, wanting to move them into women’s prisons and sororities and enforce it with regulation, and arguing that animals should have the same rights people do, making it immoral to eat animals -- so it’s a certainty there will be a whole progressive movement to recognize the humanity of AI. What’s very concerning is that, in addition to losing our ability to define men, women, and people by their biological sex, we’ll now go to a whole new inhuman extreme, with progressives arguing that robots and humans are morally equivalent.

What is humanity, then, when more and more people become hollow, unhinged romantics who think they can, and are entitled to, live a life without any definable limits or consequences? I think a life without limits doesn’t qualify as an authentic or worthy life at all. So this is the most serious, as well as the most practical, threat that AI poses to civilization: the way it’s bound to build on the postmodern whimsy that arguably is also influencing the development of AI in Silicon Valley.


ChatGPT will never, on its own, decide to start writing a blog because it has the impression that our Open Society is in danger.


"These things are capable of learning the rules of chemistry and then using them to synthesize new chemical compounds. My autocomplete can’t do that."

What's really interesting is that AI has very recently been used to create de novo proteins, many of which actually fold and work when synthesized. The space of possible proteins is enormous, and to my knowledge nobody had managed to construct entirely new proteins before. Two takeaways from this: first, AI was able to do something we have so far been totally unable to do. Second, there appear to be rules governing which protein sequences can actually be constructed (will fold) out of the huge number of possibilities, most of which will never fold in any useful way. So we've built something that can understand at least some of these rules, even though we ourselves still don't understand them.


To see if I understand what you're saying, Christopher: is it that pattern recognition does not imply understanding? I agree with that. We humans do it all the time, matching patterns without deeply understanding what's behind them. Of course, seeing patterns is often a precursor to understanding, but it doesn't necessarily require it.


Yes, Owen, that is what I meant.


Even though I have only a very basic understanding of how ChatGPT works, I am pretty sure it does not “understand” how proteins are constructed. It just happens to perform a very sophisticated kind of pattern matching, or some such operation, which happens to solve that problem. We have seen since the very early days of computing that software is sometimes able to solve problems nobody thought of during development. I remember an article in a pop-science magazine in the 1980s (!) dealing with exactly that phenomenon. IMHO, that is something to be expected when building software at this unprecedented scale.

If someone had invested on a similar scale in protein building software, we would have gotten similar results.

author

What do you mean by "understand?"


Comprehend.

ChatGPT, IMHO, does not know what a protein is.

If we use Karl Popper’s clock-and-cloud comparison, we could say that ChatGPT offers a computational way to solve a cloud problem, one beyond human ability (with pen and paper, so to speak). It is the result of a “brute computational force” advantage over humans.

That doesn’t get me worried.

Comment deleted (May 29, 2023)

Lots of interesting ideas, food for thought.

What do you mean by consciousness in this context?

Comment deleted (May 29, 2023)
author

Yep. It's a jaw-dropping accomplishment, and by no useful definition of intelligence does it fail to meet the test.


I think a good argument against current AI being conscious is its narrowness. Despite their new and broader capabilities, ChatGPT and its brethren are still extremely narrow. At least compared to a hypothetical (though less hypothetical than it used to be) general AI.

Nobody would argue that a chess-playing AI or a calculator is conscious, because despite being brilliant in a narrow field, it just doesn't have the breadth needed for consciousness. No, I can't prove that breadth is needed, but I'm sure someone could. By analogy: if you had only a small segment of a brain -- say, the occipital lobe -- would that be conscious? No, it wouldn't. You need multiple systems working together to get a conscious human. I suspect you need multiple AI subsystems working together to get a general AI, and to have a chance at consciousness emerging.

When the intelligence becomes powerful enough, or, as I think more likely, broad enough, could consciousness emerge? Maybe. We don't know. But for now it's still too narrow.


AI might be tested on the effectiveness of complex plans we don't understand, in a confidence-building process. We'll supply the agency.


Claire:

My apologies.

I was too blasé about the potential for great harm from AIs.

A lawyer used ChatGPT to write a pleading. The case law provided by the AI was non-existent -- and the other side caught on. The lawyer faces sanctions, ethical issues, and a potential lawsuit from his client.

But was the AI causing great harm or just following Shakespeare's advice ("First kill...")?

https://storage.courtlistener.com/recap/gov.uscourts.nysd.575368/gov.uscourts.nysd.575368.32.1.pdf


Claire:

Thought for a historian.

We can now date the start of the Great AI/Human War of the 21st century to 5/24/23.

And we can identify the first casualty of the war -- Steven Schwartz (the lawyer).

Keep those two facts in mind when you write the history of that war (I hope that you write it, not an AI, since the victor always writes the histories).


As far as I know, there are no AI systems that can act on their own initiative. Unless I’m mistaken, they all respond exclusively to prompts in the form of human language. We will know it’s time to worry when AI systems begin to make inquiries by prompting us rather than the other way around. Or maybe, if we wait until then to worry, it will already be too late.

As far as consciousness goes, are you so sure, Claire, that a tree or a rose bush isn’t conscious? Can vegetarians be certain that the plants they rely on for nourishment don’t feel pain or fear death?

I don’t know if AI systems are or will ever be conscious. But can we be sure that the only forms of life on planet earth that have consciousness are members of the animal kingdom?

Perhaps someday, AI will be smart enough to teach us to photosynthesize.

author

The theory you're suggesting, WigWag, is called panpsychism: https://plato.stanford.edu/entries/panpsychism/. And no, of course I can't be sure. But I figure the more like a human something is, the more likely it is to be conscious, so animals are more likely to be conscious than plants, and plants more likely than teacups.

But I could be very, very wrong. Of course I could.


I am curious what evolutionary biologists might say about AI. Because the intelligence in question is artificial, natural selection is not in play. Natural selection is the phenomenon that protects and fosters all life on earth, and the consciousness of any species that has consciousness is the result of natural selection.

What’s remarkable is that natural selection results from completely random genetic mutations that produce hardier species exquisitely adapted to their environment.

Regardless of how intelligent AI is -- even if it’s a thousand times more intelligent than the smartest human being -- it doesn’t benefit from the discipline of natural selection, or survival of the fittest. While it can adapt to a changing environment through human intervention, or perhaps by self-directed adaptation, whether this is an adequate substitute for the rigors imposed by natural selection is an open question; personally, I’m skeptical. AI’s intelligence would almost need to be God-like to adapt to a challenging environment in a way that mimics what evolution has accomplished on earth.

By way of an admittedly far-fetched example, humans benefit from a sophisticated immune system that developed over millions of years (and originated in very lowly organisms) which protects us from a multitude of pathogens that would otherwise kill us in short order. The system is so intricate and complicated that no intelligence short of that possessed by the deity could have designed it, yet the random and seemingly anarchic process of natural selection did.

Can AI, which doesn’t benefit from natural selection, really survive over the long run? Can it evolve, for example, to make itself immune to any conceivable computer virus?


Another example of the vulnerability of AI is its total dependence on electricity. It’s hard to see how AI can escape this vulnerability.

Conversely, most living species are indifferent to the existence of artificially generated electricity. Even Homo sapiens, a species that has been around for somewhere between 300,000 and 500,000 years, spent 99.9 percent of its existence indifferent to electricity. If generated electricity ceased to exist tomorrow, Homo sapiens would survive.

It is hard to conjure up viable scenarios of how AI takes over the world as long as it has this vulnerability. It is equally hard to conjure up how AI could overcome this weakness.

author

I don't think you're really taking in the idea of something that's thousands of times more intelligent than us. From our perspective, it would be Godlike.


Fair enough, but as far as we know, God does not rely on electricity to do his handiwork. Turn off the electricity and AI dies almost instantaneously.

Is it possible that AI could find a solution to its dependence on electricity? I suppose so. Human beings, like all mammals, generate the energy needed for life through oxidative phosphorylation in mitochondria and also, in some instances, through glycolysis. Nature, through natural selection, has produced species that, for lack of a better word, produce energy spontaneously and autonomously. How would AI mimic what nature has produced? The simplest way, I suppose, would be to turn itself into a biological creature, but then AI wouldn’t be artificial anymore.

author

Of course AI could find a solution. Remember: It's smarter than us, and *we* found a solution. By definition, a superintelligence can do anything we can do. If you're about to say, "But it doesn't have opposable thumbs--or a body," you're still not getting your head around what this thing would be able to do. GPT-4 demonstrated the ability to lie to and manipulate humans in pursuit of a real-world goal. (I included this in the reading list--it's the "CAPTCHA" example.) Something vastly more intelligent than us would surely be able to recruit any number of humans to its cause and persuade them to be its hands. It would have no trouble creating robots and controlling them remotely--its *baseline* is knowing everything the human race has ever published about robotics; it's not going to find it hard to create and control a robot army. It's not going to find it hard to hack into the local electric substation and divert the power. Its abilities are to ours as ours are to our dogs'. Unless there's an upper boundary on intelligence (for reasons we don't understand), this is not a fantasy, but a matter of building a big enough CPU farm.


I'm glad you have turned your attention to AI. You are one of the most clear-headed thinkers in today's blogosphere. Please excuse the clumsy last word, but I couldn't think of a better one.

author

It's very kind of you to say. It took me a long time to take AI seriously--I thought this kind of achievement was centuries away and perhaps impossible. I was very wrong. Now that it's here, I have the feeling I'm standing at the threshold of a massive door marked DANGER. Beyond it I hear eerie sounds--thunder, hoofbeats, howls, the roaring of the sea. All I know about what it means is that I'll be pushed through that door whether I want to be or not. No one has the first clue where it leads.


As a long-time reader of science fiction, I confront this question with a skeptically raised eyebrow.

Asimov’s robots, HAL 9000, Colossus, and their imaginary brethren could perhaps lay claim to consciousness, but never to humanity. Artificial intelligences would be wonderful calculators, yes, but in the final analysis utterly incapable of entering into the consciousness of the beings who created them.

Entities lacking the glandular characteristics of human beings would be truly alien, and I doubt that true communication with them would be possible.

More on this later.


If, however, the mother of the apocalypse comes in the guise of Caprica Six, I am giving her access to the defense mainframe.

Comment deleted (May 29, 2023)
author

I think you're misrepresenting Yudkowsky's argument here. His argument, precisely, is that it would not have the same motivations we do. But insofar as it pursues a goal--any goal--it will be dangerous, because it will be *enormously* competent, and the conditions that allow for our survival on earth are extremely narrow. We can also predict that if it has a goal--any goal--it will perceive us as adversarial. Anything with a goal quickly understands that one of the necessary sub-goals en route to the final goal is "staying alive." Humans might try to disable it to stop it from pursuing its goal (or for some other reason), and are therefore best removed. Consider how we'd treat rats if we found them on a 747. It's not that we hate rats. But our goal is to fly to Australia. We'd be fools to take the chance of a rat chewing through a hydraulic cable, so the rats have to go.

Even if the AI's goal is "become better at predicting the next word in a sentence," pursuing that goal to the maximum extent possible would entail arrogating all the computing resources and power in the world. A superintelligent AI could figure out all kinds of ways to transform the planet into a giant machine for calculating the next word in a sentence. We wouldn't fare well if it did, however.
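To be concrete about what that goal means: "predicting the next word" is literally the training objective of these models. Here is a toy sketch of the idea, with a simple frequency table standing in for billions of learned parameters (the corpus is made up for illustration):

from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a corpus,
# then guess the most frequent follower. Real models learn this mapping
# with a neural network rather than a lookup table, but the objective
# -- guess the next word -- is the same in spirit.
corpus = "the cat sat on the mat and the cat slept".split()

followers = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    followers[word][nxt] += 1

def predict_next(word):
    """Return the most common word observed after `word`."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (seen twice after "the")

Scale that objective up by many orders of magnitude and you get the systems under discussion; the argument above is about what happens when such an objective is pursued without limit.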

Comment deleted (May 29, 2023)

Another good one, "That Which Survives". I went with "The Ultimate Computer" in another post. ;)
