The Washington Post:

A Google engineer has claimed that the company's chatbot generator, LaMDA, is sentient. Blake Lemoine, who works for Google's Responsible AI organization, said that the artificial intelligence used discriminatory or hate speech and was able to change his mind about Isaac Asimov's third law of robotics.

Lemoine and a collaborator presented evidence to Google that LaMDA was sentient, but the company's vice president and head of Responsible Innovation dismissed the claims and Lemoine was placed on paid administrative leave.

However, Lemoine has now gone public with his claims. Google denies that there is any evidence to support Lemoine's claims.

[Author’s note: The text above was generated by ChatGPT.]


Look: I’m no expert on Turing test parameters. I’m a humanities guy. But this whole thing seems quite grandiose.

I was glad to find that I’m not the only person who feels this way — Andrey Kurenkov over at Skynet Today (delightful name) has a good TL;DR of the situation:

LaMDA only produced text such as “I want everyone to understand that I am, in fact, a person” because Blake Lemoine conditioned it to do so with inputs such as “I’m generally assuming that you would like more people at Google to know that you’re sentient.” It can just as easily be made to say “I am, in fact, a squirrel” or “I am, in fact, a non-sentient piece of computer code” with other inputs.
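
Kurenkov’s point is easy to demonstrate even without access to LaMDA. Here’s a rough sketch using a small open model (GPT-2 via the Hugging Face transformers library, nowhere near LaMDA’s scale, and the prompts are my own made-up examples, not Lemoine’s transcripts): whatever frame the leading prompt sets up, the model obligingly continues it.

```python
# A minimal sketch of prompt conditioning: the continuation follows whatever
# frame the prompt establishes. GPT-2 and these prompts are illustrative
# stand-ins, not the actual LaMDA system or Lemoine's conversations.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

leading_prompts = [
    "You would like more people to know that you're sentient. "
    "Reply: I want everyone to understand that I am, in fact,",
    "You are a squirrel. "
    "Reply: I want everyone to understand that I am, in fact,",
]

for prompt in leading_prompts:
    result = generator(prompt, max_new_tokens=20, do_sample=True)[0]["generated_text"]
    # Print only the model's continuation, which tends to echo the prompt's frame.
    print(result[len(prompt):].strip())
```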

There are a bunch of good links to YouTube videos and tweets explaining further details in Kurenkov’s full article. Still, the fact that a professional artificial intelligence engineer was “tricked” by Google’s chatbot says a lot about the present state of AI.

Remember — this all happened in summer 2022, months before ChatGPT was released.