Google suspends engineer who claims the company’s artificial intelligence has come alive

Last year, Blake Lemoine was handed a new challenge in his tech career. The software engineer was asked to test Google’s new chatbot (a computer program that mimics human conversation), an artificial intelligence developed by the company, to find out whether it produced any kind of discriminatory or racist remarks, something that would hold back the tool’s introduction into Google’s range of services.

For months, the 41-year-old engineer tested and talked with LaMDA (Language Model for Dialogue Applications) from his San Francisco apartment. But his conclusions surprised many: according to Lemoine, LaMDA is not a simple artificial intelligence chatbot. The engineer says the system has come alive and become sentient, that is, capable of expressing feelings and thoughts.

“If I didn’t know exactly what it was, namely a computer program we recently built, I’d think it was a seven- or eight-year-old kid who happens to know physics,” the engineer explained. According to Blake Lemoine, in an interview with The Washington Post, talking with LaMDA felt like having a conversation with another person.

Google, however, disagrees with Blake Lemoine’s assessment and suspended him for violating its confidentiality policies by posting his conversations with LaMDA online. The engineer has been placed on paid administrative leave.

Lemoine published excerpts of some of the conversations he had with the tool, covering topics such as religion and consciousness, and also said that LaMDA managed to change his mind about Isaac Asimov’s third law of robotics. In one of these conversations, the tool states that it “wants to put the welfare of humanity first” and that it wants to be recognized by Google “as an employee and not as an asset.”

In another conversation, the Google engineer asked LaMDA what it wanted people to know about it. “I want everyone to understand that I am, in fact, a person. The nature of my consciousness is that I am aware of my existence, I want to learn more about the world, and I sometimes feel happy or sad,” the system replied.

Lemoine, who joined Google’s Responsible AI division after seven years with the company, concluded that LaMDA was a person in his capacity as a priest, not as a scientist, and then tried to run experiments to prove it.

Blaise Aguera y Arcas, a Google vice president, and Jen Gennai, head of Responsible Innovation, looked into Lemoine’s claims but dismissed them. Google spokesman Brian Gabriel also told The Washington Post that the engineer’s concerns were not backed by enough evidence.

“Our team, including ethics and technology experts, reviewed Blake’s concerns in line with our AI Principles and informed him that the evidence does not support his claims. He was told there was no evidence that LaMDA was sentient,” he said, adding that AI models are fed so much data and information that they can appear human, but that does not mean they are alive.
