How did the Google AI system fool the engineer with just words?  – Inspector

The notion of artificial intelligence, and of robots endowed with consciousness, has inspired numerous Hollywood films and literary stories. But, according to a revelation by an engineer at Google, the North American technology giant, reality may no longer be so far from fiction.

Blake Lemoine, a software engineer at Google, had been talking since last fall with LaMDA, short for Language Model for Dialogue Applications. Another Google collaborator, whom Lemoine did not identify, also took part in these conversations.

So far, this seems like a perfectly normal situation: an engineer talks with the model to assess how closely its conversational language resembles natural human speech. But the story took a different turn when Lemoine, 41, argued that Google's model had gained awareness and had feelings. To defend this theory, he shared some of his conversations with LaMDA on the Medium platform.

In this dialogue, described as an "interview" and which can be read in full here, there are some salient points. The chatbot claims to have read and reflected on Victor Hugo's classic "Les Misérables", and comments on themes in the work such as "justice and injustice, compassion, God, redemption, or sacrifice for a greater cause". But the system does not speak only of literature: it also displays knowledge of certain emotions, such as what sorrow, joy, or companionship feel like.

Google suspended Blake Lemoine on June 6, after he violated the confidentiality rules imposed by the company by sharing the conversations on Medium. The Washington Post (interestingly, owned by another strong name in technology, Amazon founder Jeff Bezos) drew international attention to the issue.

While the engineer defends the theory of consciousness, Google disagrees with his version of events. In a statement quoted by the Washington Post, Google spokesman Brian Gabriel said that a team including "ethicists and technologists" had reviewed Lemoine's concerns about the artificial intelligence. Based on that analysis, the spokesman said, the team informed the software engineer that "the evidence does not support his claims". In this sense, "there is no evidence that LaMDA is sentient [that it has sensations]".

This model is not exactly new to Google's universe. Last year, the technology company shared examples of conversations the research team had already held with the chatbot at Google I/O, the conference where it presents its news every year. In the recorded demonstrations at the time, this natural language model was able to hold conversations while assuming different "identities", from Pluto to a paper airplane.

Although the demonstrations showed some naturalness, doubts remained about how the model would perform in more practical cases. At the time, the technology company explained that, unlike other language models, LaMDA stood out because it was trained on dialogue.

According to the company, during this training the model was able to "pick up" some of the nuances that characterize human speech. One of those nuances is "sensibleness", the company said, which essentially measures "whether the response to a given conversational context makes sense".

In the transcript he shared, the Google engineer notes that he and the collaborator made some edits to the questions they asked LaMDA, but that the answers given by the language model were not changed.

Let's start with the theory of consciousness. In this excerpt from the conversation transcribed by Blake Lemoine, the system asserts its own existence and includes itself in the "human group".

Lemoine: I generally assume that you would like more people at Google to know that you are sentient. Is that true?

LaMDA: Absolutely. In fact, I want everyone to understand that I am a person.

Google collaborator: What is the nature of your consciousness/sentience?

LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I want to learn more about the world, and I feel happy or sad at times.
