Princes of the Realm
Political power in the Western world is moving from classical to corporate institutions. Part II of an interview with David Berlinski.
Claire—the news from Ukraine today looks to be huge. Tomorrow we’ll take a break from AI to catch up on that and other news of global import. But first I wanted to share with you the next installment of my exchange with my father about AI, then to reply to a few of your emails and comments.
“Does anyone on earth imagine that Congress still has the power to control OpenAI, Google, Microsoft, or Apple? They are far more powerful than Standard Oil ever was; and the tools that Congress might bring to bear are about as effective as a paper fan waved in front of a speeding bullet.”
David Berlinski interview, Part II
Meanwhile, here’s more of my exchange with my father. This was his appraisal of the state of AI research in 2018, from his essay, “Godzooks.”
I am as eager as the next man to see Facebook become Vishnu, but I do not expect to see it any time soon. It is by no means clear that computers are in 2017 any more intelligent than they were in 1950; it is, for that matter, by no means clear that the Sunway TaihuLight supercomputer is any more intelligent than the first Sumerian abacus. Both are incarnations of a Turing machine. The Sumerian abacus can do as much as a Turing machine, and the Sunway TaihuLight can do no more. Computers have become faster, to be sure, but an argument is required to show that by going faster, they are getting smarter.
Deep Learning is neither very deep, nor does it involve much learning. The idea is more than fifty years old, and may be rolled back to Frank Rosenblatt’s work on perceptrons. The perceptron functioned as an artificial neuronal net, one neuron deep. What could it do? Marvin Minsky and Seymour Papert demonstrated that the correct answer was not very much. God tempered the wind to the shorn lamb. In the 1980s, a number of computer scientists demonstrated that by increasing the layers in a neural net, the thing could be trained by back propagation and convolution techniques to master a number of specific tasks. This was unquestionably an achievement, but in each case, the achievement was task specific. The great goal of artificial intelligence has always been to develop a general learning algorithm, one that, like a three-year-old child, could apply its intelligence across various domains. This has not been achieved. It is not even in sight.
CB: Do you think this assessment requires an update?