What GPT-4 might mean. Part I of an interview with David Berlinski.
Reading Dr. Berlinski’s psychoactive reflections is always a great pleasure. One wonders what the chances might be of the Super becoming self-destructive.
I don't have anything intelligent to add to the discussion other than I'm looking forward to the next part!
Considering the fair amount you've written about AI, maybe you could give it its own space on the CG like how Global Eyes has its own section?
So your dad agrees with me. It’s excessive to speculate and entertain hypotheticals about what it explicitly might do. We have warrant only to be anxious about its revolutionary potential. We should be very cautious but open-minded, not assume the worst and predict the apocalypse like the person in the Time Magazine article. That’s baseless panic. We should dismiss all such calls to cancel AI. I’m even quite skeptical of imps like Elon Musk calling for an AI “pause.”

I think we should be less afraid of the robots all ganging up on and killing us on their own. What’s scarier is how the political class, entrepreneurs like Musk, Mark Zuckerberg, and Peter Thiel, and federal agencies can combine their insider knowledge to harness the powers of AI for antidemocratic ends. We’ve already seen this with social media in the way Twitter suppressed the NY Post story on the Hunter Biden laptop, under pressure from state actors, and in how social media sought to censor COVID disinformation. And now a bipartisan section of the political class seriously wants to give the Commerce Secretary the power to ban TikTok by classifying it as a national security risk.

Just imagine the possibilities for state manipulation and conditioning of public discourse--the opportunities with which AI presents the experts who develop it and the state that regulates it and will doubtless seek to wield it. We’re probably better advised to worry about the road to serfdom, not Armageddon, not Westworld.
A brilliant piece. The larger point to be made, though, is that after all is said and done, down the road we will come to understand that AI, like the automobile, for all its convenience, fascinating technology, and the powerful symbol it carries of an emerging modernity, will lead to a diminishing of our humanity.
I think your father proceeds from two artificial--and unsubstantiated--constructs: intelligence and consciousness. We've yet to arrive at anything more coherent than a sort of working characterization of intelligence, and we have no data off which to bounce our concept, because all we have for comparison examples are ourselves, the life forms extant on this planet. (It's breathtaking, the ignorance with which so many alleged scientists, much less science writers, speak of conditions on other planets as making life impossible on those planets. No. Those conditions seem likely to make life impossible for Earth-type life. But that's a separate, if related, argument.)
Similarly, we don't even know what consciousness, self-awareness (the two are closely enough related that in this context, the terms can be taken interchangeably), is. We think we think, we think we are aware of ourselves, but to do that, we must be provably aware of others, so we can claim a separateness.
Thus, the artificiality at the bottom of it all: what we know of our world, or presume to think we know, is a digitized simulation of...something, perhaps not real...built up from a digitized sensing. The simulation is constructed by a digitizing mechanism that is even more wholly enclosed than any software program. Digitization is nothing but an error-prone--because data-lossy--approximation of what is real, and since we process those incomplete data with that error-prone, sort-of computational puddle of water and chemicals, incapable even of randomizing for analysis, we can't even know what is real. Only what we think is real. If we think. Whatever thinking is.
To argue whether ChatGPT, or anything else we're pleased to call artificial intelligence, is, or is not, intelligent, or self-aware, or... is to argue whether something like an AI has characteristics we have yet even to define and of which we have only the haziest of understandings.
"What we know from the outside is only that it cannot be described from the inside. If it makes voluntary action possible, it cannot act voluntarily."
B.F. Skinner was equally confused. He began by showing that human behavior was strictly a product of the complexity of our stimulus-response development, and that we were powerless to step outside that. Then he closed by waving his magic wand (or, as a cartoon had it, "then a miracle occurred"), said, "Maybe not," and suggested ways of taking precisely those steps.
"This does not mean that the life that we live is an illusion. That would be absurd."
Not at all. It simply means we run away from the risk of absurdity by inventing something we call the life we live. Which might itself be absurd.
Separately, my first "thought" when I encountered "Foom" in this piece was of "FROOM."
This might be the scariest sentence about our future: "We know from both Gödel and Tarski’s theorems that at a certain level of complexity, systems capable of self-reference become incomplete; and if not incomplete then inconsistent." It leapt off the page when I read it.