Artificial intelligence: what if it doesn't need to be smart?

Every time the field of artificial intelligence makes clear progress, the debate restarts: are machines catching up with human beings? Are they getting really smart? The latest occasion, of course, is the advent of generative artificial intelligence, above all ChatGPT and its new iteration, GPT-4.

In reality, the debate about the potential of machines - and the possibility that, over time, they become sentient, conscious, able to think and ultimately reach the intellectual level of a human being - has been going on for longer than one might imagine. One of the first scientists to pose the question was none other than Alan Turing, a founding figure of computer science (and beyond), who in his seminal 1950 paper - Computing Machinery and Intelligence - explicitly asked: "Can machines think?"

In the meantime, more than seventy years have passed, and artificial intelligence has alternated between several so-called "winters" (periods in which research stalls and funding dries up) and phases of very rapid development. The latter has been the case over the last ten years, in which the enormous diffusion of deep learning algorithms has been constantly accompanied by the question of how truly intelligent they can be considered (or could become). Over time, however, we are realizing that perhaps the real question is a different one: does deep learning need to become intelligent at all? Why do we feel the need to measure its progress in human terms?

Of course, we know by now - despite some suggestions to the contrary - that we are not dealing with true forms of intelligence, but only with tools that, as in ChatGPT's case, draw from their training data the sequences of words most likely to be coherent with our questions, without having the slightest idea of what they are actually saying.
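To make that "most likely sequence of words" idea concrete, here is a deliberately tiny sketch - a bigram model, nothing like the real transformer architecture behind ChatGPT, and the corpus is invented for illustration. It shows the core principle: the model merely counts which word most often follows another, and predicts on that basis, with no grasp of meaning.

```python
from collections import Counter, defaultdict

# Toy corpus (purely illustrative)
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Return the word that most frequently followed `word` in the corpus
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat": it follows "the" twice, more than any other word
```

Real language models operate on the same statistical premise, only at a vastly larger scale and with far richer context than a single preceding word.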

This total lack of understanding of the task being carried out (which also applies to the algorithms that win at chess, recommend movies on Netflix, or operate in any other domain) has in no way prevented deep learning from revolutionizing the world and achieving incredible goals. Just think, to give a single example, of the potential importance of a tool like AlphaFold: an algorithm developed by DeepMind (owned by Google) that can determine the structure of proteins with extreme precision. In the future, this AI system could change healthcare forever. If it succeeds, it will have achieved an extraordinary feat without needing even a spark of intelligence - only, as with all these tools, the ability to statistically grind through an immense amount of data.

From the explosion of deep learning (around 2013) to today, artificial intelligence has become a foundational and ever-evolving technology, gradually integrated into an exorbitant number of services, devices and tools. Digital cameras use it to improve our photos, social networks to determine which posts we will see, thermostats to autonomously manage the temperature of our homes, companies to screen résumés; it is also used for surveillance, to suggest what to buy, to help warehouses manage logistics, and in so many other areas that it would be impossible to complete the list.

Unlike other innovations that have received enormous media attention without ever living up to expectations (think of the metaverse or cryptocurrencies), artificial intelligence has gradually crept into every area of our private and professional lives, to the point of fading into the background and becoming almost invisible (in this respect, its evolution is somewhat reminiscent of the internet, which today powers practically everything without our even noticing it anymore).

The same will probably happen with ChatGPT and the new forms of generative artificial intelligence, which - it is worth remembering - do not represent an absolute novelty but a further step along the road already trodden by the many similar systems that preceded them. Today, however - grappling for the first time with tools capable of conversing coherently or of creating images from our textual prompts - it is easy to be amazed by their exploits and to think we are dealing with something intelligent, or magical.

Indeed, as Arthur C. Clarke, author of 2001: A Space Odyssey, put it, "any sufficiently advanced technology is indistinguishable from magic". Over time, however, this magical effect fades as our awareness of how things actually work increases. The more everyday their use becomes, the more these tools become trivial. Remember when Facebook started identifying our friends in photos? It seemed like an almost science-fiction technology; today, no one is surprised by the existence of facial recognition anymore, and even fewer consider it a form of intelligence.

The same will probably happen with generative artificial intelligence: we will gradually stop being amazed and - as we learn about its workings and limits - stop mistaking its behavior for something intelligent in the human sense. Not only that: as time goes on, perhaps we will understand that there is no need to evaluate the progress of these tools on a scale whose peak must be true intelligence. After all, deep learning algorithms have managed to improve constantly while never taking a single step toward "sentience".

In this process of normalization, we might as well follow the path imagined more than seventy years ago by Alan Turing himself, who foresaw that at a certain point we would stop wondering whether a machine had attained human intelligence, and would instead begin to regard that of machines as a form of intelligence in its own right. Simply, a form of intelligence very different from ours.
