Stop ChatGPT, the legal reasons of the Guarantor

Artificial intelligence (AI) has rapidly become an ever more present reality in our daily lives. One of the best-known and most widely used AI systems is ChatGPT, a conversational language model developed by the American company OpenAI (whose founders include Elon Musk). At the moment, however, it cannot be used in Italy.

Let's see why.

The Guarantor's provision: the reasons for the stop

On 30 March of this year, the Guarantor for the protection of personal data (the Italian Data Protection Authority) ordered, against OpenAI, the temporary limitation of the processing of Italian ChatGPT users' personal data, reserving the right to take further decisions (including, possibly, a monetary fine) at the outcome of the investigation currently in progress.

The day after the provision was published, OpenAI decided to suspend the ChatGPT service throughout Italy, offering users of the paid "Plus" tier a refund of their monthly subscription fee. The Guarantor's concerns are justified for a number of reasons.

On 20 March, due to a bug in an open-source library, some users were able to see the personal data of ChatGPT Plus subscribers, although OpenAI reported that only a small percentage of users (1.2%) were affected. This event, which legally qualifies as a data breach, brought the security of a system widely used in Italy to the fore, and probably prompted the Guarantor to open the investigation that has culminated, for now, in the provision in question.

The provision lists a series of violations of European legislation (EU Regulation 2016/679, the "GDPR"), including the lack of an appropriate legal basis for the processing of personal data and the absence of any age verification for users.

Although a privacy notice is published on the OpenAI website, it does not meet the formal requirements of the GDPR: in addition to the legal basis, the retention periods for personal data are not indicated, no safeguards are specified for the transfer of European citizens' personal data to the USA (an issue common to all American companies, which will probably be resolved with the arrival of the so-called Privacy Shield 2.0), and no information is provided about automated decision-making processes (e.g. profiling). While we wait to see how the matter develops, several murky aspects of the use of technologies like ChatGPT remain to be analysed.

The protection of personal data and security

This decision of the Guarantor, which has dominated social media and national and international newspapers, should be considered only the first step towards a necessary, in-depth analysis of the legal and ethical implications of artificial intelligence systems. The European Commission is of the same opinion: with the AI Act (expected by the end of the year), it is attempting to regulate what is currently unregulated, ensuring that European citizens can benefit from new technologies developed and operated in accordance with the values, fundamental rights and principles of the Union.

The proposal for a regulation on AI, published by the Commission in April 2021, underlines that the evolution of artificial intelligence in Europe should be anchored to the fundamental values of the EU and the GDPR, imposing a series of rules that companies must follow before designing and developing new AI systems.

The document establishes harmonized rules for the development, placing on the market and use of AI systems, while ensuring a high level of protection of public interests such as health, safety and fundamental rights, as recognized and protected by Union law, following a proportionate risk-based approach. In particular, under the new Regulation, artificial intelligence systems used to manipulate human behavior, to exploit information about individuals or groups of individuals, to perform social scoring (as in China) or to carry out mass surveillance (as in the USA) will be prohibited, except where such systems are authorized by law or are used to safeguard public safety, and then only with adequate guarantees for the rights and freedoms of the individual.

The artificial intelligence applications deemed most harmful will be banned, while "high-risk" AI systems (in areas such as critical infrastructure, education, employment and essential public services) will be subject to rigorous controls.

ChatGPT vs. copyright

To generate content, ChatGPT uses machine learning models that predict the next word based on the previous sequence of words, until a complete text is produced in response to the input provided by the user. The model has no ability to evaluate or verify sources while it operates: its job is to generate responses based on its training data, which includes content extracted from web pages, books, essays and other publicly available text sources. But who checks whether the data used to train the model belongs to someone else?
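To make the mechanism concrete, here is a deliberately tiny, hypothetical sketch in Python: a bigram model that picks the next word based only on the one before it. Real systems such as ChatGPT use vastly larger neural networks conditioned on the whole preceding sequence, but the generation loop, predict a likely next word and append it, is the same basic idea. The corpus and names here are invented for illustration only.

```python
# A minimal, illustrative sketch of next-word prediction.
# Hypothetical toy example; not OpenAI's actual implementation.
import random
from collections import Counter, defaultdict

# A tiny stand-in for the "training data" (real models train on
# billions of words scraped from web pages, books, essays, etc.).
corpus = (
    "the model predicts the next word based on the previous words "
    "the model generates text one word at a time"
).split()

# Count how often each word follows each preceding word (a bigram model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(seed: str, length: int = 8) -> str:
    """Extend `seed` by repeatedly sampling a likely next word."""
    words = [seed]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no observed continuation: stop generating
        options, counts = zip(*candidates.items())
        words.append(random.choices(options, weights=counts)[0])
    return " ".join(words)

print(generate("the"))
```

The point of the sketch is that the model can only recombine statistical patterns found in its training data, which is precisely why the ownership of that data matters.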

ChatGPT's terms of use state that OpenAI assigns to the user the rights to the content produced ("Output") as well as to the data entered to create it ("Input"), but with an important caveat: it is up to the user, as the rights holder, to verify that the content does not violate any law. This means that content created with ChatGPT can also be used commercially, provided the user has verified that no identical or similar content has already been placed on the market.

This is a significant responsibility placed on the user, who is often not even aware of having it. It should also be noted that, to be protected by copyright, the content produced must be a "work of the mind of a creative nature" (Article 1 of the Italian Copyright Law). So, paraphrasing the OpenAI terms of use, if the input entered in the chat is trivial ("what color is the sky?"), it is highly probable that the generated response will be the same for hundreds (or thousands) of users, and it will not reflect any creativity on the user's part.

The future of AI in Europe

As mentioned, the proposed regulation on artificial intelligence dates back two years. So why is it still stalled? As is easy to understand, the regulation will put a brake on the spread of such systems in Europe, cutting off a large slice of the Big Tech market. Investments in artificial intelligence by American companies have increased by about 30% compared to 2020, and those investments must have a return. Over these two years there has been intense lobbying by companies over the AI Act, aimed at reaching compromises acceptable to both sides (EUobserver reports that at least 565 meetings between MEPs and companies have taken place).

In the coming weeks, the European Parliament will vote on the roughly 3,000 amendments to the proposed regulation, and the AI Act could enter into force as early as the end of 2023, although companies that develop artificial intelligence systems will have the usual two years to comply with the new legislation. We hope the final result will be a fair political and economic compromise that does not, however, undermine the fundamental rights of European citizens.





