ChatGPT, the artificial intelligence expert who got interviewed

An artificial intelligence drafts the most relevant questions about itself and poses them to one of the leading experts on the subject, giving rise to a meta-interview: this is the experiment, in podcast form, created by Frame - Festival della Comunicazione (of which this publication is a media partner; the tenth edition is scheduled in Camogli from 7 to 10 September) and just published online. The protagonists of this sui generis conversation are Nello Cristianini, an Italian professor of artificial intelligence at the University of Bath in the UK, and the famous ChatGPT, which was asked to devise the ten most relevant questions about the development and implications of artificial intelligence itself.

"It seemed to us an interesting and even more relevant way to talk about how machines have become intelligent without thinking in a human way," explained Cristianini, who in February also published The Shortcut, on the same subject, with Il Mulino. In short, an interview in which the heart of the discussion is not only the answers but also the questions: their selection, their order and the way they are formulated.

The selection drawn up by ChatGPT started from the very definition of artificial intelligence and from its current and future applications, then moved on to the balance between benefits and risks, and to current limitations and obstacles to overcome. The AI also asked (and wondered) what the impact of its presence will be on the world of work, which uses are ethical and responsible, how to protect people's data and privacy, and how the biases already present at the design level of AI can be identified and addressed. Finally, it raised the possibility of applying AI to combat global problems (climate change, disease, poverty, ...) and the possible consequences for the evolution of humanity in the face of increasingly advanced artificial intelligence.

"We must understand not only the technology we have created, but also how it interacts with the society in which we place it, with its norms, with the business model that finances it, and with the people who use it," explained Cristianini. "We must learn to recognize artificial intelligence when we see it, even and above all when it doesn't speak and doesn't have a body, for example the kind found in platforms such as TikTok and YouTube, which continuously observes us in order to make us click. The idea of the podcast and of the book The Shortcut is to explain that a historical, economic and social context is needed to understand this revolution."

The podcast The Shortcut - How machines became intelligent is part of the series Stories that leave their mark, a collection of content in the voices of the protagonists of the Communication Festival.

Professor Cristianini, how can we trust machines whose way of reasoning and deducing is so different from that of human beings?

"In many respects these creatures of ours are like aliens: they have a form of intelligence fundamentally different from ours, but it is still a type of true intelligence. The worst mistake would be to imagine that they can think like us: this would lead us to attempts at relationship and regulation that would fail. In order to trust them, we must first understand the path that has brought us to this point, including the shortcuts we have had to take. Only from there can we build a safe coexistence.

"What matters to me is not blocking progress, but making sure we preserve people's autonomy and dignity even in these situations. The direct consequences concern privacy, the ability to work and the ability to remain independent from machines. To live with machines in a safe way, everyone's work will be needed: jurists, philosophers, economists. It's not just a technical question, and this will be the great adventure of the coming years."

What does it mean to trust an intelligent machine?

"When we delegate important decisions to a machine, we must be able to trust it, to believe in its competence and benevolence. The problem is not only technical but also philosophical, because each of these two dimensions is difficult to measure, and even to define. How do we know whether a machine denies us a mortgage, or recommends a job, for the wrong reasons? Adherence to values and norms is not a category these machines can understand at the moment, even though they are intelligent in other ways. Over time we will develop more controllable techniques and, most importantly, new cultural tools for relating to them. This is the part that will require the contribution of scientists, jurists and humanists, and it is also what we intend to do in the coming years."

Nello Cristianini

Which disciplines and kinds of knowledge can lead us towards a healthy coexistence with machines and artificial intelligences?

"Just think of the risks: every day we use intelligent algorithms created to learn what makes us click. What can be the long-term effect on a user, for example a child, of selecting and proposing videos or articles to read? Is there a risk of excessive use, of emotional distress, perhaps of polarization of opinion? Those algorithms cannot understand the consequences of their actions; they only know how to pursue the goals we have given them. So here is the answer: the disciplines that will help us understand how to use and control them will be the humanities, the social sciences, psychology, and perhaps even economics.

"It's a matter of understanding intelligent machines in the context of their business model, the culture in which they operate, and the characteristics of the individuals who use them. But this will not be possible if the public, or colleagues from other disciplines, continue to imagine a version of AI that exists only in the movies: we must start from knowledge of the methods that are actually used, and of the shortcuts that we have had to take."

What other shortcuts are machines capable of, and what will they be capable of, beyond the notable goals already achieved?

"Just as engineers have taken shortcuts to reproduce intelligent behavior in machines, so the machines themselves take shortcuts whenever they have to make a decision. When Amazon's agent recommends a book, it doesn't really understand the mechanisms that lead us to choose, and the same is true when a word processor suggests how to complete a sentence, or when TikTok offers us a video. The problem can arise when they use the same logic to offer us a job or grant a loan, because those are sectors regulated by law.

"The next turns and questions are already in full view: the so-called emergent properties of language models such as GPT, which spontaneously develop skills for which they were not programmed. For example, they can solve little wordplays, create paraphrases, and so on, having only been programmed to do elementary things, like predicting missing words in a text. We are all studying this puzzle, and at the moment we really have only guesses. Those skills are very useful, but how much can we trust them if we don't know how they emerge? This is a direction of work that interests me personally at the moment.

"But let's also remember the positive side: the same technology helps us to use the web, find information in different languages, drive cars and examine radiographs. Let's not think that there are only problems: the opportunity is enormous; we just have to study all the consequences thoroughly. I am optimistic."
