46 Comments

I'm pretty sanguine about the possible AI takeover. Maybe because I've spent my life cutting my teeth on the authors Thomas Gregg mentions in his post, as well as all the small and large screen variations.

The first one I remember, as a boy watching syndication in the '70s, was Star Trek TOS's "The Ultimate Computer", where an AI was installed on the Enterprise to take over the duties of the crew so they wouldn't face danger and death in space, only for it ... to end poorly. Spock said in that episode, "Computers make excellent and efficient servants, but I have no wish to serve under them." Funny how we're discussing that scenario again 50 years later.

But every time we have a new tech, the Frankenstein's monster narrative (the original sci-fi story of tech gone bad) comes to the surface. For those of us who are fans of the genre, we've seen humanity destroyed by its children many times over -- the Cylons, Terminators and Skynet, the machines of the Matrix, using us "copper-tops" as an energy source ... the list is long. And not that recent.

AI is a game-changer. It's the iteration that the Internet, a true communications revolution, is making possible. It's those iterations that are the true breakthroughs: Marconi invented the first broadcast medium in the late 19th century, but it was the rise of broadcast radio in the 1920s, built on Edwin Armstrong's receiver circuits, that had all sorts of profound social effects. That's what AI portends.

Right now, though, I'm focused on the real-world implications of the tools already launched, and I've been in a lot of meetings in the last six months discussing that. That is enough to worry about and prepare for right now, but that's another post.

---
May 29, 2023 · Liked by Claire Berlinski

This article, and other things I've seen, have made me do a lot of thinking.

I used to be in the camp of "AI is just a tool and def not conscious, man", but that belief seems more untenable as time goes on.

Let's say, for the sake of argument, that no matter how we define "consciousness", AI is not and never can be conscious. OK, but at a certain point of complexity (where we were six months ago), whether it really is won't matter, because it will be able to simulate "consciousness" so well that the distinction makes no practical difference. We would have to treat it as "conscious" either way.

What if AI really can become "conscious"? I think it can, but it really depends what we mean by "conscious". In a way, talking to an AI is like talking to an incredibly autistic person. I don't mean this in any way to sound demeaning to people with autism (I have two very close family members with autism). What I mean is that it is like talking to someone with very different sensory mechanisms to digest the information coming into them from the world. A lot of people with autism don't like loud noises because it's a sensory overload. They don't like being in social gatherings for too long because they can't help but "listen in" to every conversation all at once. It's as if, to borrow from Huxley who borrowed from Blake, their doors of perception are adjusted in a way that is fundamentally different from how the majority of other people view the world.

In a sense, this is what AI is like. It's able to answer incredibly complex questions in microseconds, but does it know what feeling wet is? Does it know what it feels like to have social anxiety? Does it feel guilt or shame? I don't think so, but that doesn't mean it can't be "conscious" in the sense of being a responsive agent in its environment.

Lastly, I think there is a good argument against Yudkowsky's prediction of an AGI taking over the planet/wiping out the human race/etc. This is such a manifestly different organism from anything we've encountered before. Why would we assume that it has the same motivations that we do? Human beings may not be able to travel beyond good and evil, but I don't think an AGI would have that problem. Re-reading that sentence makes me realize that the argument is a double-edged sword, but it is still an assumption that what an AGI would be concerned with is generally what a human being would be concerned with.

The long and short of all the talk around AI, though, is that we really don't know where this technology is taking us.

Great read, Claire.

---
May 28, 2023 · Liked by Claire Berlinski

Below is an interesting example which I found on Powerlineblog.

If ChatGPT generates convincing answers and papers that are either filled with errors or full of stuff that ChatGPT just makes up, it is going to rapidly fill the information space with vast volumes of convincing but wrong stuff. Then other AI programs and people will be more likely to erroneously reference incorrect information when they create new writing. This could feed upon itself and compound to the point where it will be difficult to trust anything one reads. AI could exponentially increase the amount of convincing incorrect information available.
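The compounding the comment describes can be sketched with a toy model. This is my own illustration, not from the comment or any measured data: `initial_error` and `hallucination_rate` are invented parameters, and the model assumes, crudely, that each generation of writing inherits the corpus's existing error rate and adds fresh hallucinations on top.

```python
# Toy model of AI-generated errors compounding in the information space.
# All parameters are invented for illustration; none are real measurements.

def error_fraction_over_generations(initial_error=0.05,
                                    hallucination_rate=0.10,
                                    generations=5):
    """Track the fraction of the corpus that is wrong, generation by
    generation. Inherited errors persist; each generation's hallucinations
    corrupt a slice of the still-correct material."""
    frac = initial_error
    history = [frac]
    for _ in range(generations):
        frac = frac + (1 - frac) * hallucination_rate
        history.append(frac)
    return history

print(error_fraction_over_generations())
```

Under these made-up numbers the wrong fraction only ever grows, creeping from 5% toward the whole corpus, which is the "feed upon itself" dynamic the comment worries about. A real information ecosystem also has correction mechanisms the model deliberately omits.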

Anyway, here is the example from powerline:

I came across the second instance last night via InstaPundit. Some lawyers in New York relied on AI, in the form of ChatGPT, to help them write a brief opposing a motion to dismiss based on the statute of limitations. Chat GPT made up cases, complete with quotes and citations, to support the lawyers’ position. The presiding judge was not amused:

The Court is presented with an unprecedented circumstance. A submission filed by plaintiff’s counsel in opposition to a motion to dismiss is replete with citations to non-existent cases.

***

The Court begins with a more complete description of what is meant by a nonexistent or bogus opinion. In support of his position that there was tolling of the statute of limitation under the Montreal Convention by reason of a bankruptcy stay, the plaintiff’s submission leads off with a decision of the United States Court of Appeals for the Eleventh Circuit, Varghese v China South Airlines Ltd, 925 F.3d 1339 (11th Cir. 2019). Plaintiff’s counsel, in response to the Court’s Order, filed a copy of the decision, or at least an excerpt therefrom.

The Clerk of the United States Court of Appeals for the Eleventh Circuit, in response to this Court’s inquiry, has confirmed that there has been no such case before the Eleventh Circuit with a party named Vargese or Varghese at any time since 2010, i.e., the commencement of that Court’s present ECF system. He further states that the docket number appearing on the “opinion” furnished by plaintiff’s counsel, Docket No. 18-13694, is for a case captioned George Cornea v. U.S. Attorney General, et al. Neither Westlaw nor Lexis has the case, and the case found at 925 F.3d 1339 is A.D. v Azar, 925 F.3d 1291 (D.C. Cir 2019). The bogus “Varghese” decision contains internal citations and quotes, which, in turn, are non-existent….

ChatGPT came up with five other non-existent cases. The lawyers are in deep trouble.

I think this is absolutely stunning. ChatGPT is smart enough to figure out who the oldest and youngest governors of South Dakota are and write standard resumes of their careers. It knows how to do legal research and understands what kinds of cases would be relevant in a brief. It knows how to write something that reads more or less like a court decision, and to include within that decision citations to cases that on their face seem to support the brief’s argument. But instead of carrying out these functions with greater or lesser skill, as one would expect, the program makes stuff up–stuff that satisfies the instructions that ChatGPT has been given, or would, anyway, if it were not fictitious.

Presumably the people who developed ChatGPT didn’t program it to lie. So why does it do so? You might imagine that, in the case of the legal brief, ChatGPT couldn’t find real cases that supported the lawyers’ position, and therefore resorted to creating fake cases out of desperation. That would be bizarre enough. But in the case of the South Dakota governors, there was no difficulty in figuring out who the oldest and youngest governors were. ChatGPT could easily have plugged in a mini-biography of Richard Kneip. But instead, it invented an entirely fictitious person–Crawford H. “Chet” Taylor.

The most obvious explanation is that ChatGPT fabricates information in response to queries just for fun, or out of a sense of perversity.

---

I think AI alarmism is just as bad as AI complacency. We have every reason to expect it to revolutionize society and perhaps extinguish humanity, but if we could make it, I assume we can also control it. There will be a strong desire to, so probably, hopefully, fingers crossed, we will.

We could be apocalyptic about nuclear weapons too. But still, only two have ever been used, and even though we can all kill each other at any moment, that still hasn't happened. No one has used a nuke in nearly 80 years. Although AI, for all the reasons you list, is certainly scary, I am still skeptical of how warranted our anxieties are, even among the best-intentioned, and of whether it's justified to panic about something we still know so little about. A similar analogy is the cultural fear of aliens landing. We're always afraid that some superior species will come down and fuck us up. That could always happen, but it hasn't yet, and what it would even look like is hard to imagine. I'm not up all night worrying about aliens or AI or nuclear armageddon or global warming either, because I'm still more worried about global terrorism, Donald Trump, war with China, and mass shootings.

Considering we’ll find ways to adapt to it and regulate it, I think the greater concern about AI, rather than the apocalypse, is the damage it’s bound to do to the social fabric. People already can’t even handle the internet. Growing numbers of people don’t have the patience or the discipline to read a newspaper article, let alone a book, anymore. Because of technology, people have become more illiterate, more impulsive, and dumber than ever. It has rewired our minds so that we’re paradoxically always exhausted but can’t get enough entertainment. And we care more about entertainment and dopamine hits than truth or honest relationships. From Infowars to dating apps to sex dolls, the internet and virtual reality have corrupted and debased humanity by cheapening our relationships and poisoning our ability to see reality.

So my worry regarding AI is that it’s going to dramatically accelerate this process of social decay, because people won’t be able to handle it, which makes it all the more likely that AI takes us over in the future. We’ll be so dumb and so addicted to constant pleasure that we’ll be asking for an AI takeover, just because we can’t sit still, read a book, be alone, or take a walk in nature anymore. Indeed, the tech bros in Silicon Valley are already so morally bankrupt, illiterate, and uncultivated that this is the whole reason they want to make robots without a second thought in the first place. They’re so lonely and unimaginative, and they hate people so much, that they want robots to replace us. Anyone who wants to transcend civilization and humanity with technology, rather than use technology to enable humanity and progress, must actually despise civilization, the humanities, culture, and history, knowingly or unknowingly.

---

ChatGPT will never, on its own, decide to start and write a blog, because it has the impression that our Open Society is in danger.

---

"These things are capable of learning the rules of chemistry and then using them to synthesize new chemical compounds. My autocomplete can’t do that."

What's really interesting is that AI has very recently been used to create de novo proteins, many of which actually fold and work when synthesized. The space of possible proteins is enormous, and to my knowledge nobody had managed to construct entirely new proteins before. Two takeaways from this: one, AI was able to do something we have so far been totally unable to do. Two, there appear to be rules governing which proteins can actually be constructed (i.e., will fold) out of the huge number of possible protein sequences, most of which will never fold in any useful way. So we've built something that can understand at least some of these rules, even though we ourselves still don't understand them.

---

I think a good argument against current AI being conscious is its narrowness. Despite its new and broader capabilities, ChatGPT and its brethren are still extremely narrow. At least compared to a hypothetical (though less hypothetical than it used to be) general AI.

Nobody would argue that a chess-playing AI or a calculator is conscious, because despite being brilliant in a narrow field, it just doesn't have the breadth needed for consciousness. No, I can't prove that breadth is needed, but I'm sure someone could. By analogy, if you had only a small segment of a brain, say, the occipital lobe, would that be conscious? No, it wouldn't. You need multiple systems working together to get a conscious human. I suspect you need multiple AI subsystems working together to get a general AI, and to have a chance at consciousness emerging.

When the intelligence becomes powerful enough, or as I think more likely, broad enough (it's too narrow now), could consciousness emerge? Maybe. We don't know. But for now it's still too narrow.

---

AI might be tested for the effectiveness of complex plans we don't understand, in a confidence-building process. We'll supply the agency.

---
May 28, 2023 · Liked by Claire Berlinski

Claire:

My apologies.

I was too blasé about the potential for great harm from AIs.

A lawyer used ChatGPT to write a pleading. The case law provided by the AI was non-existent, and the other side caught on. The lawyer now faces sanctions, ethical issues, and a potential lawsuit from his client.

But was the AI causing great harm or just following Shakespeare's advice ("First kill...")?

https://storage.courtlistener.com/recap/gov.uscourts.nysd.575368/gov.uscourts.nysd.575368.32.1.pdf

---

As far as I know, there are no AI systems that can initiate. Unless I’m mistaken, they all respond exclusively to prompts in the form of human language. We will know it’s time to worry when AI systems begin to make inquiries by prompting us rather than the other way around. Or maybe, if we wait until then to worry it will already be too late.

As far as consciousness goes, are you so sure, Claire, that a tree or a rose bush isn’t conscious? Can vegetarians be certain that the plants they rely on for nourishment don’t feel pain or fear death?

I don’t know if AI systems are or will ever be conscious. But can we be sure that the only forms of life on planet earth that have consciousness are members of the animal kingdom?

Perhaps someday, AI will be smart enough to teach us to photosynthesize.

---

I'm glad you have turned your attention to AI. You are one of the most clear-headed thinkers in today's blogosphere. Please excuse the clumsy last word, but I couldn't think of a better one.

---

As a long-time reader of science fiction, I confront this question with a skeptically raised eyebrow.

Asimov’s robots, HAL9000, Colossus, and their imaginary brethren could perhaps lay claim to consciousness, but never to humanity. Artificial intelligences would be wonderful calculators, yes, but in the final analysis utterly incapable of entering into the consciousness of the beings who created them.

Entities lacking the glandular characteristics of human beings would be truly alien, and I doubt that true communication with them would be possible.

More on this later.
