ChatGPT: why connecting it to the internet is dangerous

ChatGPT has amazed everyone with its poetry, prose, and academic test scores. Now OpenAI's precocious chatbot will also be able to find your next flight, recommend a restaurant, and even deliver you a sandwich.

Last week OpenAI announced that a number of companies – including Expedia, OpenTable, and Instacart – have developed plugins that allow the chatbot to access their services. Once one of these plugins is activated, users will be able to ask ChatGPT to perform tasks that would normally require using the web or opening an app.
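The mechanics are roughly that the model emits a structured request and the host application forwards it to the external service. Below is a minimal sketch of that round trip in Python; the `find_restaurant` tool, its argument schema, and the stub standing in for a real booking API are all illustrative assumptions, not OpenAI's actual plugin interface:

```python
import json

# Hypothetical plugin registry: maps a tool name to a local stub that stands
# in for a real service API (e.g. a restaurant-booking endpoint).
PLUGINS = {
    "find_restaurant": lambda args: {"name": "Trattoria Roma", "cuisine": args["cuisine"]},
}

def handle_model_output(model_output: str):
    """If the model emitted a structured plugin call, execute it;
    otherwise pass the output through as an ordinary chat reply."""
    try:
        call = json.loads(model_output)
    except json.JSONDecodeError:
        return model_output  # plain text, no plugin involved
    tool = PLUGINS[call["tool"]]
    return tool(call["arguments"])

# Simulated model output requesting a plugin call:
result = handle_model_output(
    '{"tool": "find_restaurant", "arguments": {"cuisine": "italian"}}'
)
print(result)  # the stub's structured response, fed back to the model
```

The key design point is that the language model never touches the network itself; it only produces a request, and the surrounding application decides whether and how to execute it.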

The move could herald a transformation in the way people use computers, apps, and the web, with AI programs completing tasks on behalf of users. Until now, without access to the internet, ChatGPT was unable to search for recent information or interact with websites. The change could help solidify OpenAI's position at the center of what may quickly become a new era for AI and personal computing.

“I think it's a brilliant move,” says Linxi “Jim” Fan, a scientist at Nvidia who researches AI. Fan says that ChatGPT's ability to read documentation and interpret code should make the process of integrating new plugins much easier, and believes the move could help OpenAI compete with Apple and Google: “The next generation of ChatGPT will be like a meta-app, an app that uses other apps,” he adds.

The industry's fears

But some are worried by the prospect of ChatGPT and OpenAI acquiring an increasingly dominant position in artificial intelligence. If other companies come to rely too heavily on OpenAI's technology, the company could benefit financially and exert enormous influence over the technology industry. And if ChatGPT becomes a mainstay of the industry, OpenAI will bear a huge responsibility to ensure that this rapidly evolving technology is used carefully and responsibly.

" There is a certain unease facing the steamroller of OpenAi in the startup ecosystem among companies that were only raising pennies,” says investment group Conviction VC co-founder Sarah Guo, referring to companies that are building technologies similar to ChatGpt. According to Guo, OpenAi's latest maneuver "improves the company's resilience and strategic position" in the consumer sector.

OpenAI has captured the public's imagination with ChatGPT, a chatbot far more capable, coherent, and creative than its predecessors. The company has also prompted dozens of startups to build products on top of its AI. Microsoft, which has invested ten billion dollars in OpenAI, has added ChatGPT to its search engine, Bing, and is now rushing to integrate the chatbot into other products, including the Office suite.

ChatGPT is based on an algorithm called GPT, which OpenAI began developing several years ago. GPT predicts which words should follow one another in a message, based on statistical analysis of trillions of lines of text collected from web pages, books, and other sources. While at heart little more than an autocomplete program, the latest version, GPT-4, offers some notable capabilities, including the ability to score highly on many academic tests.
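The "autocomplete" analogy can be made concrete with a toy next-word predictor. The sketch below uses raw bigram counts over a dozen words, rather than a neural network trained on trillions of tokens, but the training objective is the same one behind GPT: predict the most likely next token given what came before.

```python
from collections import Counter, defaultdict

# Tiny stand-in for a training corpus.
training_text = "the cat sat on the mat and the cat slept on the mat"

# Count which word follows each word.
successors = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    successors[current][following] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the training text."""
    return successors[word].most_common(1)[0][0]

print(predict_next("on"))  # prints "the": "on" is always followed by "the" here
```

GPT replaces the frequency table with a neural network that generalizes across contexts, but the core loop of "read the text so far, emit the most plausible continuation" is the same.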

Several open source projects, such as LangChain and LlamaIndex, are also exploring ways to build applications that use the capabilities of large language models. According to Guo, the launch of OpenAI's plugins risks derailing these efforts.

Security risks

The new plugins could also introduce the risks that already plague complex AI models. According to Emily Bender, a professor of linguistics at the University of Washington, members of the ChatGPT team have themselves identified the possibility of “sending fraudulent or spam emails, bypassing security restrictions, or misusing the information sent to the plugin.”

Dan Hendrycks, director of the nonprofit Center for AI Safety, believes plugins make language models more dangerous at a time when companies like Google, Microsoft, and OpenAI are lobbying hard in the US to limit their legal liability. For Hendrycks, the launch of the ChatGPT plugins sets a bad precedent and could lead other makers of large language models down a similar path.

And while the selection of plugins is currently limited, competition could push OpenAI to expand its offering. Hendrycks also sees a difference between ChatGPT plugins and previous efforts by tech companies to grow developer ecosystems around conversational AI, such as Amazon's voice assistant Alexa.

GPT-4 can, for example, run Linux commands, and it can explain how to make biological weapons, synthesize bombs, or buy ransomware on the dark web. Hendrycks suspects that extensions inspired by ChatGPT's plugins could make tasks like phishing much easier.

Moving from generating text to performing tasks on a person's behalf closes a gap that has so far kept language models from taking action: “We know that models can be jailbroken [a process that bypasses a system's restrictions, ed.], and now we are connecting them to the internet so that they can potentially take actions,” says Hendrycks. “This is not to say that ChatGPT will build bombs of its own volition, but it makes doing this sort of thing much easier.”

Ali Alkhatib, interim director of the Center for Applied Data Ethics at the University of San Francisco, also agrees that language model plugins could make it easier to strip away the limitations of these systems. Because users interact with the AI using natural language, there are potentially millions of undiscovered vulnerabilities.

“Things are moving fast enough that they are not only dangerous, but downright harmful to many people,” says Alkhatib, who worries that companies eager to use the new AI systems could push plugins into sensitive contexts such as consulting services.

Adding new capabilities to AI programs like ChatGPT could also have unintended consequences, says Kanjun Qiu, CEO of Generally Intelligent. A chatbot could, for example, book an overpriced flight or be used to distribute spam; in cases like these, Qiu says, we will also have to work out who is responsible for the incorrect behavior.

However, Qiu also adds that the usefulness of internet-connected AI programs makes the technology all but unstoppable: “In the coming months and years, we can expect much of the internet to be connected to large language models,” she says.

This article originally appeared in the US edition.
