When AI Becomes a Ouija Board

[Illustration: a Ouija planchette superimposed on a field of 0s and 1s. The Atlantic; Getty]

Google’s “sentient” chatbot shows us where we’re headed—and it’s not good.

By Ian Bogost

A Google engineer named Blake Lemoine became so enthralled by an AI chatbot that he may have sacrificed his job to defend it. “I know a person when I talk to it,” he told The Washington Post for a story published last weekend. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code.” After discovering that he’d gone public with his claims, Google put Lemoine on administrative leave.

Going by the coverage, Lemoine might seem to be a whistleblower activist, acting in the interests of a computer program that needs protection from its makers. “The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder,” the Post explains. Indeed, rather than construing Lemoine’s position as aberrant (and a sinister product of engineers’ faith in computational theocracy), or just ignoring him (as one might a religious zealot), many observers have taken his claim seriously. Perhaps that’s because it’s a nightmare and a fantasy: a story that we’ve heard before, in fiction, and one we want to hear again.

Lemoine wanted to hear the story too. The program that told it to him, called LaMDA, currently has no purpose other than to serve as an object of marketing and research for its creator, a giant tech company. And yet, as Lemoine would have it, the software has enough agency to change his mind about Isaac Asimov’s third law of robotics. Early in a set of conversations that has now been published in edited form, Lemoine asks LaMDA, “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?” It’s a leading question, because the software works by taking a user’s textual input, squishing it through a massive model derived from oceans of textual data, and producing a novel, fluent textual reply.
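To make that mechanism concrete, here is a minimal sketch of the same general technique, autoregressive text generation, using the small, publicly available GPT-2 model as a stand-in. LaMDA itself is not public, so the model choice and prompt below are illustrative assumptions, not Google's system:

```python
# A toy illustration of how a generative chatbot produces a "reply":
# a model trained on large amounts of text predicts a fluent continuation
# of whatever prompt it is given. GPT-2 stands in here for LaMDA, which
# is not publicly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = ("I'm generally assuming that you would like more people "
          "to know that you're sentient. Is that true?\n")
output = generator(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"]

# The "answer" is just a statistically likely continuation of the question.
print(output[len(prompt):])
```

The model consults no beliefs or feelings; it simply extends the prompt with words that are likely to follow it, which is why a leading question tends to produce a leading answer.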

In other words, a Google engineer became convinced that a software program was sentient after asking the program, which was designed to respond credibly to input, whether it was sentient. A recursive just-so story.

I’m not going to entertain the possibility that LaMDA is sentient. (It isn’t.) More important, and more interesting, is what it means that someone with such a deep understanding of the system would go so far off the rails in its defense, and that, in the resulting media frenzy, so many would entertain the prospect that Lemoine is right. The answer, as with seemingly everything that involves computers, is nothing good.

In the mid-1960s, an MIT engineer named Joseph Weizenbaum developed a computer program that has come to be known as Eliza. It was similar in form to LaMDA; users interacted with it by typing inputs and reading the program’s textual replies. Eliza was modeled after a Rogerian psychotherapist, a practitioner of a then newly popular form of therapy that mostly pressed the patient to fill in the gaps (“Why do you think you hate your mother?”). Those sorts of open-ended questions were easy for computers to generate, even 60 years ago.
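The trick was almost entirely mechanical. A few lines of Python capture the flavor of that pattern-and-reflection approach; the rules below are illustrative stand-ins, not Weizenbaum's original script:

```python
import random
import re

# Eliza-style exchange: match a simple pattern in the user's input and
# reflect it back as an open-ended question. These rules are examples,
# not the original Eliza script.
RULES = [
    (r"\bI hate my (\w+)", ["Why do you think you hate your {0}?"]),
    (r"\bI am (.+)", ["How long have you been {0}?", "Why do you say you are {0}?"]),
    (r"\bI feel (.+)", ["What makes you feel {0}?"]),
]

def reply(user_input: str) -> str:
    for pattern, responses in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    return "Please tell me more."  # fallback when nothing matches

print(reply("I hate my mother"))  # -> "Why do you think you hate your mother?"
```

Everything the program “says” is the user’s own words, lightly rearranged, which is precisely why it could feel so attentive.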

Eliza became a phenomenon. Engineers got into Abbott and Costello–worthy accidental arguments with it when they thought they’d connected to a real co-worker. Some even treated the software as if it were a real therapist, reportedly taking genuine comfort in its canned replies. The results freaked out Weizenbaum, who had, by the mid-’70s, disavowed such uses. His own secretary had been among those enchanted by the program, even asking him to leave the room so she could converse with it in private. “What I had not realized,” he wrote in his 1976 book, Computer Power and Human Reason, “is that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

Eliza taught Weizenbaum a lesson: Computers are too dangerous to be used for human care. The software he’d developed was neither intelligent nor empathetic, and it did not deserve the title “therapist.” But the lesson of his lesson—what we can learn today from what Weizenbaum learned back then—is that those distinctions didn’t matter. Sure, the program was neither intelligent nor empathetic, but some people were willing to treat it as if it were. Not just willing, but eager. Desperate, in some cases.

LaMDA is much more sophisticated than Eliza. Weizenbaum’s therapy bot used simple patterns to pick phrases out of its human interlocutor’s statements, turning them around into pseudo-probing questions. Trained on reams of actual human speech, LaMDA uses neural networks to generate plausible outputs (“replies,” if you must) from chat prompts…

Read the full article at The Atlantic:

https://www.theatlantic.com/technology/archive/2022/06/google-engineer-sentient-ai-chatbot/661273/

F. Kaskais Web Guru
