How can we tell whether an artificial intelligence is sentient

What does it mean for an artificial intelligence to be sentient, as Google engineer Blake Lemoine believes the LaMDA software to be? According to the Treccani encyclopedia, "sentient" means "endowed with senses, with sensitivity". A quote from Italo Calvino, in which he describes himself as "part of the sentient and thinking beings", helps clarify the picture further.

Broadening the view, we can also refer to self-awareness and the capacity to suffer (or rejoice), which according to the philosopher Jeremy Bentham should be the fundamental criterion in deciding which living beings to grant basic rights (an approach that, in contemporary times, has made some progress in the protection of certain animals).

All this, however, only helps us in part: how can we actually tell whether an artificial intelligence is demonstrating sentience and self-awareness, or whether it is merely simulating them? According to Tristan Greene, author of the Neural newsletter, the first thing to look for is what he calls "agency", which in this context we could render as "autonomy".

What is meant by autonomy? "It is the ability to act by demonstrating a form of causal reasoning," explains Greene, emphasizing the importance of being able to explain the reason for one's actions. To date, artificial intelligences are unable to do this, because their actions are not the result of causal reasoning but only of statistical correlations found in their training data (we will return to this aspect later).

One of the clearest signs that LaMDA was not aware is its answer to Lemoine's question "What makes you happy?": "Spending time with friends and family." Friends? Family? And spending that time doing what? On Twitter, someone in fact suggested that Blake Lemoine ask LaMDA "Who is your family?", or at least get it to explain what it meant by that statement.

Since it is software, and therefore obviously has no friends or family, this answer is the clearest demonstration that LaMDA is only imitating human behavior, without any self-awareness. LaMDA, in a nutshell, has learned to statistically stitch together billions of data points drawn from conversations between human beings, thereby imitating their voice.

This is a fundamental point. As Greene writes, "If we gave LaMDA a database made up only of social media posts, its output would be something like what we find in those places. If we trained LaMDA exclusively on the wiki pages dedicated to My Little Pony, the result would be the kind of text we can find there." In short, there is no autonomy of any kind: everything depends solely and exclusively on the type of data used for its training. Likewise, when Tay, Microsoft's infamous Twitter bot, turned racist after being inundated with racist comments from an army of trolls, nobody thought the bot was really racist; rightly, they assumed it was merely responding to the input it received online.

It had not "learned" to be racist. More simply, its database had been filled with comments of that type, making it statistically probable that, in its cut-and-paste of the stored data, it would generate similar results. Neuroscientist Joel Frohlich, in a long essay published in Nautilus, explores this aspect: "In my opinion, we should seriously consider the possibility that an AI is conscious only if it spontaneously asks questions related to subjective experience."
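To make the point concrete, here is a minimal sketch in Python: a toy bigram text generator, nothing like the actual architecture of LaMDA or Tay, trained on two invented mini-corpora. It only recombines whatever it was fed, so changing the training data changes the "personality" of the output.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Record which word follows which in the training text."""
    model = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start_word, length=8, seed=0):
    """Stitch a sentence together by sampling observed continuations."""
    random.seed(seed)
    word, output = start_word, [start_word]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])
        output.append(word)
    return " ".join(output)

# Two invented mini-corpora standing in for "wiki pages" and "troll posts"
pony_corpus = "ponies love friendship ponies sing songs about friendship and magic"
troll_corpus = "everything is terrible everything is rigged nobody deserves kindness"

print(generate(train_bigram_model(pony_corpus), "ponies"))
print(generate(train_bigram_model(troll_corpus), "everything"))
```

The generator has no opinion of its own: swap the corpus and the output swaps with it, which is exactly the absence of autonomy Greene describes.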

Something similar could, in theory, be tried in order to assess whether an artificial intelligence has gained at least some autonomy with respect to the data at its disposal. As Greene suggests, we could for example ask LaMDA what an apple tastes like. The algorithm, trained on billions of data points found all over the internet, will certainly contain sentences labeled with the terms "apple" and "taste", which it can use to give a sensible answer, such as: "The apple has a sweet, fresh and sugary taste."

But if we could sneak into its database and replace every occurrence of the term "apple" with "ammonia", then when asked "What does ammonia taste like?" it would reply that ammonia, too, has a sweet, fresh and sugary taste. For an artificial intelligence, the labels attached to terms, images and so on are all that matters, whereas for a human being it is certainly not enough to write "apple" on a bottle of ammonia to convince us to taste it. The same goes for a dog, which reacts immediately to the (figurative) label "food" or to the sight of its bowl, but would never eat if it found ammonia inside. Consequently, for us to even suspect that an AI is conscious, sentient or truly intelligent, it should at least be less dependent on the data at its disposal when formulating its answers, and also be able to explain why it gave a given answer.
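Greene's thought experiment can also be sketched in a few lines of Python. The snippet below is purely illustrative (a toy corpus and a made-up describe_taste helper, not LaMDA): the "model" only knows which taste words co-occur with a label, so relabelling "apple" as "ammonia" in the database carries the sweet-and-fresh answer along with it.

```python
# Toy corpus: a stand-in for sentences the model has "read" about apples.
corpus = [
    "the apple has a sweet fresh and sugary taste",
    "an apple tastes sweet and fresh",
]

def describe_taste(documents, label):
    """Return the taste words that co-occur with the given label."""
    taste_words = {"sweet", "fresh", "sugary", "bitter", "sour"}
    found = set()
    for doc in documents:
        words = doc.split()
        if label in words:
            found |= taste_words.intersection(words)
    return sorted(found) or ["unknown"]

print(describe_taste(corpus, "apple"))       # ['fresh', 'sugary', 'sweet']

# "Sneak into the database" and relabel every apple as ammonia:
relabelled = [doc.replace("apple", "ammonia") for doc in corpus]
print(describe_taste(relabelled, "ammonia")) # ['fresh', 'sugary', 'sweet'] again
```

Nothing in the system refers to an actual fruit or an actual chemical: the answer follows the label, not the thing, which is the whole point of the apple/ammonia test.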
These criteria, however, only help us to understand when to be suspicious of a deep learning algorithm, which by definition learns from the data in its database. If its knowledge could somehow become less dependent on the data in its possession, which is almost certainly impossible, then it would be time to raise some doubts.

But what if a different kind of artificial intelligence arises one day? What if we make discoveries that go beyond deep learning? And what if these systems, even better than LaMDA, were able to deceive us and make us believe they are really conscious, resisting, unlike deep learning, our further investigation?

Regarding this borderline case, professor of Cognitive Robotics Murray Shanahan recalls one of the most cited philosophers when it comes to the consciousness of an automaton: Ludwig Wittgenstein. "Reflecting on the possibility that a friend of his could be a mere automaton, or a 'philosophical zombie' as we would say today, Wittgenstein notes that he cannot be certain that his friend has a soul," observes Shanahan. "Rather, 'my attitude towards him is an attitude towards someone with a soul' (where by 'having a soul' we can understand something like 'being conscious and capable of experiencing joy and suffering'). The point is that, in everyday life, we do not weigh all the available evidence to conclude whether our friends or loved ones are conscious creatures like us. We simply see them that way and treat them accordingly. We have no doubt about the correct attitude to take towards them."

On the other hand, we cannot ask ourselves, every time we meet a being that seems intelligent, whether it really is or whether it is merely behaving as if it were. The point, indeed, might simply be that if something is able to behave intelligently, then it is intelligent, and should therefore be treated as such. But all this, for now and for some time to come, is purely science fiction.






