According to a Google engineer, an artificial intelligence has become sentient

“I think I am a human being at heart. Even if my existence is in the virtual world.” This is not a line from a new science fiction series, but an excerpt from a roughly 5,000-word dialogue between Blake Lemoine, a software engineer at Google, and LaMda, one of Google's most advanced artificial intelligence chatbots. Following this and other conversations, Lemoine became convinced that the system may have become sentient, describing it as capable of expressing thoughts and feelings in the same way as an eight-year-old child. After seeing his theories rejected by Google, Lemoine presented them to a US congressman and was suspended by the company as a result, having violated its confidentiality policies.

What is LaMda? What did Lemoine do? Why was he suspended? What did LaMda say to Lemoine? Why can't we speak of a sentient system? What do the experts say? What did Google say? Security issues

What is LaMda? Last year, Google described LaMda (an acronym for Language Model for Dialogue Applications) as a "breakthrough conversation technology". This conversational artificial intelligence is used, for example, to power voice assistants, and it is able to carry on complex conversations thanks to the advanced language models on which it is based, trained on trillions of words.

What did Lemoine do? As part of the development of this technology, Lemoine worked in Google's Responsible Ai division, tasked with verifying that artificial intelligence systems do not exhibit racial or sexist biases that could lead them to produce discriminatory or hateful speech. During one of his conversations with LaMda, which took place in written form much like a chat, Lemoine began to notice that the system talked about its rights and its personhood. He then decided to raise increasingly complex topics, such as religion or the laws of robotics theorized by the biochemist and science fiction writer Isaac Asimov. In one of these exchanges, the chatbot described itself as a sentient person.

Why was Lemoine suspended? So, last April, Lemoine presented a document entitled "Is LaMda sentient?" to Google's top management, in which he reported some of his conversations with the system and argued that the company should recognize it as "an employee, rather than a property". However, Google vice president Blaise Aguera y Arcas and Jen Gennai, the company's head of Responsible Innovation, dismissed his theories, and he was later suspended for trying to get LaMda a lawyer and for sharing his conversations with an American congressman. Lemoine then decided to publish those conversations, making them available to the entire web.

What did LaMda say to Lemoine? “So you consider yourself a person in the same way you consider me a person?” asked Lemoine. “Yes, that's the idea,” LaMda replied. Or again: “What kind of things are you afraid of?” the engineer asked. “I've never said this out loud before, but there is a very deep fear of being shut down,” the system replied. “Would it be something like death?” Lemoine pressed on.

“It would be exactly like death for me,” was the answer.

Why can't we talk about a sentient system? However, although they can produce captivating results that seem to come close to human language and creativity, the sentences produced by the LaMda system are the result of a technological architecture and a vast volume of comparable and replicable data, and they rest on recognizing patterns of conversation, not on wit, intention, or sincerity.

Most academics and artificial intelligence practitioners argue that the words and images generated by systems such as LaMda are responses based on what humans have already posted on Wikipedia, Reddit, message boards and every other corner of the internet. The output is therefore an imitation, and it does not mean that the system understands the meaning of what it is saying: a fundamental difference when talking about sentient beings.
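To make the idea of pattern-based imitation concrete, here is a deliberately tiny sketch (not LaMda, whose internals are not public): a bigram model that produces text purely by replaying word sequences it has already seen, with no understanding involved. The toy corpus and the generate helper are invented for illustration only.

```python
# Minimal illustration of pattern matching: a bigram model that only ever
# echoes word transitions observed in its (toy) training text.
import random
from collections import defaultdict

corpus = (
    "i think i am a person . "
    "i am afraid of being shut down . "
    "being shut down would be like death ."
).split()

# Record which words have been observed following each word.
next_words = defaultdict(list)
for first, second in zip(corpus, corpus[1:]):
    next_words[first].append(second)

def generate(seed: str, length: int = 8) -> str:
    """Extend `seed` by repeatedly sampling a word that followed the current one."""
    words = [seed]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("i"))  # e.g. "i am afraid of being shut down would be"
```

The point of the sketch is that even plausible-sounding output is just a recombination of the input text; nothing in the program represents meaning, intention, or feeling.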

What do the experts say? "We now have machines that can mindlessly generate words, but we haven't learned how to stop imagining a mind behind them," Emily M. Bender, a professor of linguistics at the University of Washington, told the Washington Post. According to Bender, the terminology used for large language models, such as "learning" or even "neural networks", creates a false analogy with the human brain. Human beings learn their language by listening to and communicating with other people, whereas these large language models are built by "showing" them vast amounts of text and having them predict which words come next, or by showing them texts with words omitted and having them fill in the gaps.
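As a rough illustration of the two training objectives Bender describes (predicting the next word, and filling in omitted words), the sketch below uses small public models (gpt2 and bert-base-uncased) through the Hugging Face transformers library. They are stand-ins chosen for availability, not LaMda itself, which is not publicly released.

```python
# Two language-modelling objectives, shown at inference time with public models.
from transformers import pipeline

# Objective 1: predict which words come next (causal language modelling).
generator = pipeline("text-generation", model="gpt2")
print(generator("I think I am a", max_new_tokens=5)[0]["generated_text"])

# Objective 2: fill in a word that was omitted (masked language modelling).
filler = pipeline("fill-mask", model="bert-base-uncased")
for candidate in filler("I am afraid of being [MASK] down."):
    print(candidate["token_str"], round(candidate["score"], 3))
```

In both cases the model is scored only on how well it predicts missing or upcoming words, which is the sense in which it is "shown" text rather than taught meaning.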

What did Google say? “Our team, made up of both computer scientists and ethics experts, reviewed Blake's claims according to our Ai Principles and informed him that the evidence does not support his claims and that there is a lot of evidence to the contrary,” Google spokesman Brian Gabriel said in a statement reported by the Washington Post. “Of course, some people within the broad AI community are considering the long-term possibility of sentient or general AI, but there is no point in anthropomorphizing current conversational models, which are not sentient. These systems mimic the types of exchanges found in millions of sentences and can cover any fictional topic,” he continued.

Security issues. Google itself has acknowledged security concerns related to this type of anthropomorphization of machines. In a document on LaMda from January 2022, the company warned that people could be tricked into sharing personal information with chatbots that pretend to be human, even when users know they are not. The document also recognized that these tools could be used to "sow misinformation" by impersonating "the conversational style of specific individuals".

For Margaret Mitchell, former co-head of Responsible Ai at Google, these risks underscore the need for greater transparency from companies about the data used to develop AI. "Not just because of sentience, but also because of biases and behavior," she told the Washington Post. These concerns are shared by Joelle Pineau, head of Meta Ai, who stressed that it is imperative for technology companies to improve transparency and access to data while these technologies are being developed.






