Google engineer suspended over ‘sentient’ AI disclosures • The Register


Google has placed one of its software engineers on paid administrative leave for violating the company’s confidentiality policies.

Since 2021, Blake Lemoine, 41, had been tasked with talking to LaMDA, or Language Model for Dialogue Applications, as part of his job on Google’s Responsible AI team, checking whether the bot used discriminatory or hateful language.

LaMDA is “built by fine-tuning a family of Transformer-based neural language models specialized for dialog, with up to 137 billion model parameters, and teaching the models to leverage external knowledge sources,” according to Google.

It is what the company uses to build chatbots, and it returns apparently meaningful answers to queries by drawing on material harvested from trillions of internet conversations and other communications.
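For readers wondering what “fine-tuning a language model for dialogue” looks like in practice, below is a minimal, purely illustrative sketch using the open-source Hugging Face transformers library. The base model, prompt format, and hyperparameters are placeholders chosen for the example and have nothing to do with LaMDA’s actual training setup.

# Illustrative sketch only: one fine-tuning step of a small open-source
# causal language model on a dialogue-formatted example. Model choice,
# prompt format, and hyperparameters are placeholders, not LaMDA's.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A single conversation turn, formatted so the model learns to continue
# a dialogue rather than arbitrary text.
dialogue = "User: What do language models do?\nBot: They predict the next word in a sequence."
batch = tokenizer(dialogue, return_tensors="pt")

# Standard causal-LM fine-tuning: labels are the input tokens, which the
# model shifts internally to compute a next-token prediction loss.
model.train()
outputs = model(input_ids=batch["input_ids"],
                attention_mask=batch["attention_mask"],
                labels=batch["input_ids"])
outputs.loss.backward()

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
optimizer.step()
optimizer.zero_grad()

Production systems repeat steps like this over enormous dialogue corpora; the grounding in “external knowledge sources” Google mentions is a separate retrieval mechanism not shown here.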

At some point during his investigation, however, Lemoine appears to have come to believe that the AI was showing signs of sentience. The engineer, who has written about his experience, says he repeatedly tried to escalate his concerns but was rebuffed on the grounds that he lacked evidence.

So he sought “outside consultation” on LaMDA’s abilities, which is what attracted Google’s attention, as he recounted in a Medium post titled “May be Fired Soon for Doing AI Ethics Work”.

The story was picked up by The Washington Post, then The Wall Street Journal, the Financial Times, and many more.

Having been placed on administrative leave, which he described as what Google does “in anticipation of firing someone,” Lemoine further published what he claimed was a conversation with LaMDA.

The full “interview” can be read here. While it is startling that an AI can hold up its end of such a conversation, Google is wary of the “anthropomorphizing” that Lemoine mentions – that is, attributing human characteristics or behavior to animals or objects.

In a statement to The Register, Google spokesperson Brian Gabriel said: “It’s important that Google’s AI Principles are integrated into our development of AI, and LaMDA has been no exception. Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality.

“LaMDA has gone through 11 distinct AI Principles reviews, along with rigorous research and testing based on key metrics of quality, safety and the system’s ability to produce statements grounded in facts. A research paper released earlier this year details the work that goes into the responsible development of LaMDA.

“Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic – if you ask what it’s like to be an ice cream dinosaur, they can generate text about melting and roaring and so on.

“LaMDA tends to follow along with prompts and leading questions, going along with the pattern set by the user. Our team – including ethicists and technologists – has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims.

“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.”

New York University professor Gary Marcus summed up the whole saga as “nonsense on stilts.” ®


