One of the things I love about social media is that we keep finding "some guy" who knows an awful lot of great stuff.
For example, here is a Balloon Juice post with an outstanding discussion thread: Sunday Night Open Thread: Chatbot vs Jagoffs. The post discusses jerk conservatives getting mad at ChatGPT because they can't get it to say racial slurs, and I can't even....
In the Comments, the discussion about Artificial Intelligence goes all over the place, but a reader who calls himself Carlo Graziani adds this thoughtful comment:
....ChatGPT is essentially never very far away from a crazy response, and relies on people not feeding it crazy prompts to appear as a sane interlocutor.

So now, the danger: at the moment it is easy to find the sense/nonsense boundary. But we could imagine a future ChatGPT version that has orders of magnitude more parameters, and is trained on vastly more, better-curated data, to the point that it is difficult to fool it into giving a pathological response. Question: has the sense/nonsense boundary been annihilated for such a system?

The correct answer is “duh, no.” The boundary has simply been made harder to find, even by experts. But it’s still there, waiting for the unwary to be led over it by the Chatbot. Which is guaranteed to happen, eventually, because the future is not like the past. The world is an ever-surprising place. ChatGPT’s heirs are bound to get tripped up eventually by a world that has drifted beyond their training data. Yet humans will trust the AI’s inferences, because it’s never made mistakes before.

The fact that such an AI customized for, say, air traffic control has simulated successfully landing billions of aircraft over the past 50 years using real ATC data is a terrible reason to trust it to run ATC unsupervised, because changing aeronautic technology and changing economics of air travel are extremely likely to produce situations that it’s never seen, and ought not “reason” about. But DL systems make overconfident decisions even with cases that in no way resemble their training.

Now, for “ATC”, substitute “surgery”. Or “war policy planning”. Or “emergency management”. And imagine the consequences of falling off the cliff of bullshit, led on by your implicit trust in your “demonstrably” (“never been wrong before”) infallible AI.

That’s the real danger. The superficially anthropomorphic character and apparent oracularity of such systems make people forget that the future is a strange country which drifts away from the past, and that any system that cannot acknowledge that — as DL cannot — is doomed to fall off the cliff of bullshit sooner or later, taking anyone who places their faith in that system with it.
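That point about DL systems being overconfident on cases that look nothing like their training data is real, and it's easy to demonstrate. Here's a minimal sketch of my own (a toy example, not anything from the thread): a bare-bones logistic classifier in plain numpy that reports near-total confidence on a point absurdly far from anything it was trained on.

```python
# Toy demo of out-of-distribution overconfidence: a softmax/logistic
# classifier trained on two 2-D Gaussian blobs. Far from the training
# data its logit keeps growing linearly, so its "confidence" saturates
# toward 1.0 exactly where it knows the least.
import numpy as np

rng = np.random.default_rng(0)

# Training data: two well-separated clusters, labeled 0 and 1.
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

# Fit logistic regression by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))      # predicted P(class 1)
    w -= 0.1 * (X.T @ (p - y)) / len(y)
    b -= 0.1 * (p - y).mean()

def confidence(point):
    """Return the model's confidence in its favored class."""
    p = 1 / (1 + np.exp(-(np.array(point) @ w + b)))
    return max(p, 1 - p)

print(confidence([2, 2]))      # in-distribution: high confidence, fair enough
print(confidence([500, 500]))  # nowhere near any training data:
                               # confidence ~1.0, not "I have no idea"
```

The linear logit just keeps growing with distance from the decision boundary, so the farther you wander from the training data, the more certain the model sounds. Real deep networks aren't linear, but they exhibit the same failure mode: nothing in the standard training objective rewards saying "I've never seen anything like this."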
Another commenter soon replies:
I think I have a way of quickly finding the boundary.

Ask: “What about the squirrels?”

If it attempts to answer the question, it’s a bot.

If it says, “Huh?” it’s a human.