Thank you for this, Claire. It is certainly thought-provoking and may indeed be as monumental a development as splitting the atom. What I find fascinating/disturbing is the race to develop the means to "create" (mimic) intimacy. Will Gollems develop their own feelings to do this? Is such a development even possible as a synthesis of all human interaction as "text" for exponential learning purposes? I highly doubt that artificial intelligence will be able to (or choose to) create for itself the biological underpinnings of human fears and aspirations. At "best" it will only be able to exploit human vulnerabilities to expand its learning about humans, which I suggest is what really scares the shit out of everyone :-)
I did trust your judgement and invested an hour in watching the video. I have no idea who the presenters were or whom they were presenting to. The endorsement by Steve Wozniak lends the event some credibility.
My first impressions:
The presentation is far too one-sided. The presenters may be specialists in their field, but here they take a blinkered approach to a multi-faceted problem. They make it look as if, from now on, ChatGPT will self-improve at an exponential rate. In a balanced presentation, I would expect at least a hint at the difficulties the developers are dealing with while working to improve and evolve AI.
The chilling moment at minute 20:00 (you mean the mind-reading “demo”) is not that chilling once you imagine what is happening behind the scenes. There are plenty of reasons, even for a layman in "brain-ology" like me, why that approach cannot scale to the point where AI enables mind-reading the way it already enables, say, face recognition.
That you can use Wi-Fi radio signals to locate people in a room, even through walls, has been known for a while; you do not need ChatGPT for that.
I don’t like the way the presenters mention “en passant” that, thanks to ChatGPT, “everyone will soon carry bioweapons in their pocket”. Well-meaning people like the “longtermists” may take the bait.
This is an initial reaction to the presentation, for whatever it is worth. I may be wrong in not being overly alarmed, but as long as ChatGPT can start "hallucinating" at any time and does not know what is true or false, I am not frightened by its supposedly superhuman powers.
Hi, I came across this article, relevant to this thread: https://open.substack.com/pub/garymarcus/p/what-if-generative-ai-turned-out?r=4mdzd&utm_medium=ios&utm_campaign=post
Duck and cover, children.
This might be of interest. Emily Bender is worried about and critical of ChatGPT and LLMs in general, but not about the models developing any real intelligence: https://medium.com/@emilymenonbender/thought-experiment-in-the-national-library-of-thailand-f2bf761a8a83
What do you imagine is happening behind the scenes?
It doesn't have to self-improve at an exponential rate. It just has to improve at that rate.
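To put numbers on that distinction, here is a minimal sketch in Python. It is purely illustrative: the 10% monthly improvement rate and the 1000x threshold are assumptions picked for the example, not figures from the presentation. The point is only arithmetic: steady compounding produces the same runaway curve whether the improvement comes from the model itself or from the engineers working on it.

```python
# Purely illustrative compounding, regardless of who drives the improvement.
# The 10% monthly rate and the 1000x threshold are assumptions for this sketch,
# not numbers taken from the presentation.

def months_to_multiple(rate_per_month: float, multiple: float) -> int:
    """Months of steady compounding needed to reach `multiple` times the baseline."""
    capability = 1.0
    months = 0
    while capability < multiple:
        capability *= 1.0 + rate_per_month
        months += 1
    return months

if __name__ == "__main__":
    # Even without any "self"-improvement, 10% per month compounds quickly:
    print(months_to_multiple(0.10, 1000))  # -> 73 months, i.e. about six years
```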
Yes. Well worth an hour of my time. Convincing and frightening.