Artificial intelligence: why it's not the new atomic bomb

The current pace of evolution in generative artificial intelligence (AI) is dizzying. OpenAI launched ChatGPT to the public just four months ago, and the chatbot took only two months to reach 100 million users (TikTok, the previous internet phenomenon, had taken nine). In an effort to keep up with the competition, Google released Bard. Several ChatGPT clones are already in circulation, along with new plug-ins that integrate the bot with popular websites such as Expedia and OpenTable. GPT-4, the new version of the OpenAI model presented last month, is more accurate and "multimodal": it can handle text, images, video and audio simultaneously. AI image generation is advancing at an equally frantic pace: Midjourney's latest release gave us viral deepfakes of Donald Trump's "arrest" and of the Pope in an improbable puffer jacket, which showed unequivocally how soon we will have to treat every single image we come across online with suspicion.

Not to mention the headlines in newspapers and on news sites announcing the arrival of artificial intelligence in schools, in science fiction books, in the legal industry and in gaming. Or the articles on AI that is now capable of producing videos, fighting cybersecurity breaches, fueling the culture wars, creating black markets, unleashing a gold rush for startups, revolutionizing online search, selecting music for us, and threatening our jobs.

A new Manhattan Project?

In the midst of this frenzy, I have twice come across comparisons between the birth of generative AI and the creation of the atomic bomb. The remarkable thing is that the comparison comes from people who attach diametrically opposite meanings to it.

One of them is the closest thing the generative AI revolution has to a lead architect: Sam Altman, CEO of OpenAI, who in a recent interview with the New York Times pointed to the Manhattan Project (the secret program that produced the atomic bomb) as "the level of ambition we aspire to". The others are Tristan Harris and Aza Raskin of the Center for Humane Technology, who gained some notoriety a few years ago by arguing that social media was destroying democracy. The two now argue that generative AI could go as far as annihilating civilization, by putting tools of staggering and unpredictable power into anyone's hands.

To be fair, Altman agrees with Harris and Raskin that AI could destroy civilization. But he stresses that his intentions are better than other people's, and that he will try to make sure the new tools ship with protections. Furthermore, Altman is convinced he has no choice but to press on with the development of AI, since the technology would be unstoppable in any case. A disconcerting mix of faith and fatalism.

For the record, I share Altman's position on this last point. But I also believe that the protections in place today, such as the filtering of incitement to hatred or crime from ChatGPT's responses, are laughably weak. For example, it would be fairly straightforward for companies like OpenAI or Midjourney to embed hard-to-remove digital watermarks in all the images their artificial intelligence systems generate, to make deepfakes like the Pope photos easier to detect. A coalition called the Content Authenticity Initiative is trying to do exactly that, albeit on a limited scale; its protocol allows artists to voluntarily attach metadata to AI-generated images. As far as I know, however, none of the major generative AI companies have joined these efforts.
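
To make the distinction concrete, here is a minimal sketch in Python, using the Pillow imaging library, of what voluntarily attaching provenance metadata to an image might look like. The function names and metadata keys are hypothetical, and this illustrates only the general idea: the Content Authenticity Initiative's actual protocol relies on cryptographically signed, tamper-evident manifests rather than plain text fields like these.

    # A minimal sketch of voluntary provenance metadata, NOT the Content
    # Authenticity Initiative's real protocol (which signs its manifests).
    # Function names and metadata keys here are hypothetical.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def tag_as_ai_generated(src_path, dst_path, generator):
        """Re-save a PNG with simple provenance fields in its text chunks."""
        image = Image.open(src_path)
        metadata = PngInfo()
        metadata.add_text("ai_generated", "true")
        metadata.add_text("generator", generator)
        image.save(dst_path, pnginfo=metadata)

    def read_provenance(path):
        """Return any text metadata found in a PNG (empty if stripped)."""
        return dict(Image.open(path).text)

    # Example: tag_as_ai_generated("pope.png", "pope_tagged.png", "some-image-model")

The sketch also makes the weakness obvious: metadata like this vanishes with a screenshot or a simple re-save, which is exactly why watermarks that are genuinely difficult to remove would be the stronger protection.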

An out-of-focus comparison

Whether one takes it as praise or as warning, I believe the parallel between generative AI and nuclear weapons is more misleading than useful. Nuclear weapons could literally wipe out most of humanity in minutes, but relatively few people are capable of getting their hands on them. AI, on the other hand, will be available to almost everyone, and it cannot annihilate most of mankind in one fell swoop. Of course, a model like GPT-4 (stripped of its protection systems) or one of its successors could perhaps be asked to "design a superbug that is more contagious than Covid-19 and that kills 20 percent of the people it infects". But humanity hasn't gone extinct yet, despite the fact that the formulas for deadly toxins and the genetic code of virulent diseases have been available to anyone online for years.

What makes AI scary, rather, is that no one can predict most of the applications users will come up with in the future. Some of those uses could be the equivalent of an atomic bomb for very specific fields, such as the college essay, which could become obsolete. In other cases the harmful effects of the technology will be slower and harder to predict: for example, while ChatGPT has proven to be an incredibly effective tool for writing code, some fear that the chatbot will hollow out the communities where human beings share programming knowledge, destroying the very foundation on which future human developers, and future AI models, are trained.

At any rate, the analogy with the Manhattan Project seems to me on target in one respect: there is a world before mass access to generative AI and a world after, and they are not the same.







