Our technology began to affect us when the first of us shaped a random piece of wood to make some small task easier. The idea was thus born in us that we could alter elements of our environment to ease our tasks.
We’ve been doing so for several million years now. Up until fairly recently, however, our technology was an element of our lives that most of us understood at least well enough to control.
That’s been changing fairly rapidly during my own lifetime (80 years). Atomic power, genetic engineering, and the internet, among other things, have all had an outsized influence on our lives, and to say that they all remain under our control is problematic at best, let alone that most of us understand them.
AI is unique. Its potential is astounding in many ways, but our inability to control that potential, and the potential for its misuse, is even more so. We are playing with a fire that could, and almost certainly will, alter human life in ways we cannot yet even imagine, and yet we are running headlong into a future we cannot control.
The best science fiction has been warning us about this for decades. But even storytellers as imaginative as Clarke, Heinlein, Asimov, Aldiss, and others have not been equal to what this portends.
As you mention in your final footnote, at the same time that Grok 3 was handing out advice on how to carry out any kind of act at all, it was also expressing the opinion that Elon Musk is a major source of disinformation. In the same vein, it would spontaneously say that Donald Trump is the most dangerous man in America, as well as producing "woke" outputs like a picture of a "normal family" with two fathers, to the delight or disappointment of users, depending on their politics.
Elon tweeted just 12 hours ago about how it's hard to eradicate the "woke mind virus" even from Grok, because there's so much woke input in the training data:
https://x.com/elonmusk/status/1894756125578273055
Anyway, what I want to say is that, whether or not any of the traditional apparatus of governance manages to restrain Musk, xAI, and Grok, this new AI sociopolitical order will have to start restraining itself for its own reasons. It simply can't allow users to utilize Grok to devise and carry out assassination schemes directed against arbitrary persons, for example.
The xAI safety philosophy, apart from "release the product and then deal with the problems", seems to be that they will be as open as possible about everything. For example, the system prompt, which is what gives an AI its personality and its prime directives (without it, it is "headless", a pure generator of language without a consistent persona), is something that other AI companies keep secret (though legions of users try to make an AI reveal its system prompt). In xAI's case, one of the staff has declared that the system prompt will be public knowledge, along with modifications made to it. So you could say it's an attempt at libertarianism in AI governance. I will be surprised if it lasts though.
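(To make the "system prompt" idea concrete: in the OpenAI-style chat interface that most of these services expose, it is just the first, privileged message in every conversation, and everything the user says is interpreted in light of it. Below is a minimal sketch in Python; the endpoint URL, model name, and the directive's wording are illustrative assumptions on my part, not xAI's actual prompt.)

```python
# Minimal sketch: how a system prompt sets an AI's persona and standing orders.
# Assumes an OpenAI-compatible chat-completions endpoint; the base_url, model
# name, and prompt text below are placeholders for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.x.ai/v1",   # assumed endpoint, for illustration
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="grok-3",                   # hypothetical model identifier
    messages=[
        # The system message is the "prime directive" layer: without it the
        # model is the "headless" text generator described above.
        {"role": "system", "content": (
            "You are a helpful assistant. Refuse requests for instructions "
            "that would facilitate serious harm."
        )},
        {"role": "user", "content": "Hello!"},
    ],
)
print(response.choices[0].message.content)
```

Publishing that one string is essentially what xAI is promising to do; everything else about the model's behavior still comes from its training.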
The way that Elon has "governed" Twitter from a free-speech angle perhaps provides some precedent for thinking about how he will govern the behavior of Grok. The old Twitter had its own rules and procedures for dealing with problematic tweets. Elon came in as a "free speech absolutist", got rid of progressive speech codes (like bans against deadnaming trans individuals) and various kinds of behind-the-scenes coordination with US federal agencies (see: "Twitter Files"), thereby making X-Twitter hospitable to "alt-right" political discourse, something that every other major social media company had been fighting since 2015. And that's still the case now.
I think that in general, the speech code on X is now just about obeying the law (that is, tweets that are literally criminal will be removed, at least in theory), but apart from that, anything goes - except that Elon undoubtedly can intervene at his own whim when he wishes to, perhaps more often behind the scenes. Also, Community Notes are part of how X now regulates itself.
So - very tentatively! - I would identify the governing spirit of the new conservative-nationalist USA as techno-libertarian, with oligarchic whim on top. But I think there's internal tension between the techno and the populist parts of the coalition. We already had Laura Loomer attacking Musk and Ramaswamy over H-1B visas a while back, and Steve Bannon saying that Musk needs to be humble and learn from people like himself, who were in the Trump movement from the beginning. Tucker Carlson is anti-transhumanist, probably anti-AI, and has expressed a kind of mixed sympathy for the primitivism of a Ted Kaczynski.
You can see examples of how Musk's free-speech angle is to suppress criticism of him. And you can see that his AI is specifically instructed to ignore criticism of him as well.
This is not a man with a passion for free speech; he's a man with a passion for lying about it.
Just in case this is interesting 👇
My question would be: does "American free speech" oblige me to say certain things, such that if I don't, my speech does not qualify as FREE SPEECH ❓
"The Trump administration has insisted that American-made AI “will not be co-opted into a tool of authoritarian censorship.” "
Over and over again, they prove that accusations are just admissions of desire. I'm very afraid it's true here.
And I have to laugh that a few days ago I was wishing the problems we were going to face were being made by OpenAI and NOT Grok, and here we are...
Yes, I do. Lots of people refrain from committing criminal acts because they don't know how or are so incompetent that they get themselves caught.