ChatGPT, what the Privacy Guarantor and OpenAI said to each other



It is still too early to untangle the knots around ChatGPT's data management. But the first meeting between the Italian Privacy Guarantor, which on Friday 31 March temporarily suspended the processing of personal data by the powerful conversational chatbot, and OpenAI, the startup that develops it and that responded by blocking access to the service for Italian users, marks the start of negotiations to find a solution that safeguards user data in the process of training the algorithm. The company, as stated in a note from the Guarantor, "undertook to strengthen transparency in the use of the personal data of the people concerned, the existing mechanisms for exercising their rights and the safeguards for minors, and to send the Guarantor, by the end of the day, a document indicating the measures that respond to the authority's requests". A sign that it wants to close the matter quickly.

The CEO of OpenAI, Sam Altman, also took part in the opening of the meeting. President Pasquale Stanzione and board members Ginevra Cerrina Feroni, Agostino Ghiglia and Guido Scorza spoke for the Guarantor's board. Representing the ChatGPT startup were Che Chang, deputy general counsel of the US company; Anna Makanju, head of public policy; and Ashley Pantuliano, associate general counsel.

The meeting

In the hours immediately following the stop, OpenAI said it was willing to cooperate with the Italian Data Protection Authority. What founder Sam Altman and his colleagues have grasped, after all, is that the move by the authority in Piazza Venezia could trigger a domino effect, in Europe and beyond. So much so that after the halt imposed by the Italian Guarantor, the first in the world to challenge ChatGPT's lack of consent for the use of personal data in artificial intelligence training, other authorities have taken notice. Starting with Canada, which in turn opened an investigation into OpenAI over the collection, use and disclosure of personal data without consent. In France, Ireland and Germany the data protection authorities are studying the Italian dossier, as is Japan.

The conversation between OpenAI's top management and the board of the Italian privacy Guarantor took place by videoconference. It produced no practical conclusions, and it was reasonable not to expect anything more: as noted, this is an interlocutory phase. OpenAI has twenty days from notification of the 31 March provision (a standard window provided for by law) to respond to the objections raised by the authority and submit its counterarguments or possible remedies. Otherwise it risks a fine of up to 20 million euros or up to 4% of annual global turnover, as provided for by the GDPR. But it wants to move quickly, as shown by its willingness to send, within the day, a document setting out its commitments, which the Guarantor will evaluate.

The company said it is convinced that it complies with the rules on the protection of personal data, but confirmed its willingness to cooperate with the Italian authority with the aim of reaching a positive resolution of the critical issues identified by the Guarantor. "The Authority, for its part, stressed that it has no intention of putting a brake on the development of AI and technological innovation, and reiterated the importance of respecting the rules aimed at protecting the personal data of Italian and European citizens", reads a note.

The objections

The Italian Data Protection Authority, led by President Pasquale Stanzione, imposed on OpenAI a temporary block on data processing (which led the company to suspend the chatbot for users located in Italy) on the basis of four main objections: the lack of information on data processing; the absence of consent for the training of the algorithm; inaccurate results; and the absence of a filter to prevent anyone under the age of 13 from accessing ChatGPT.

The block concerns the use of the chatbot and, in particular, all the personal data that we hand over, even involuntarily, when we query it. If I ask ChatGPT "Recommend me a recipe for a romantic dinner", for example, I am giving away personal information without having granted explicit consent, and that information is processed, at least for now, only in the United States.

Another matter concerns GPT-3, the large language model underlying ChatGPT, launched in 2020. It is a neural network which, drawing on 175 billion parameters, generates text word by word from the instructions it is given. In this case Microsoft, which backs the project, provides the service through servers located in Europe.
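For readers wondering what "word by word" means in practice, here is a minimal, purely illustrative Python sketch. The tiny probability table and the words in it are invented for the example and have nothing to do with OpenAI's code or data; the point is only the principle that each new word is sampled conditioned on what has already been written, which GPT-3 does at a vastly larger scale with its billions of learned parameters.

import random

# Toy next-word probability table -- invented for illustration only,
# not OpenAI's model. A real large language model learns these
# probabilities from enormous amounts of text.
next_word_probs = {
    "Recommend": {"me": 0.9, "a": 0.1},
    "me": {"a": 1.0},
    "a": {"recipe": 0.6, "menu": 0.4},
    "recipe": {"for": 1.0},
    "menu": {"for": 1.0},
    "for": {"two": 1.0},
    "two": {"<end>": 1.0},
}

def generate(first_word, max_words=10):
    """Generate text one word at a time, each choice conditioned on the previous word."""
    words = [first_word]
    for _ in range(max_words):
        options = next_word_probs.get(words[-1])
        if not options:
            break
        choices, weights = zip(*options.items())
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("Recommend"))  # e.g. "Recommend me a recipe for two"

Running the sketch prints a short phrase built one word at a time; swapping in a different probability table changes what gets generated, which is, in miniature, why the training data matters so much for the behaviour of the full-scale model.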
