In Europe, the legal battle against the world's most talked-about facial recognition startup begins


Complaints and reports against Clearview AI have been filed with the privacy authorities of Italy, France, Greece, Austria and the United Kingdom to halt the startup's activities

The complaints have been filed with the privacy authorities of Austria, France, Greece, Italy and the United Kingdom. The target is Clearview AI, a US company that trains facial recognition algorithms by trawling the web for photos of faces and then sells them to police forces and private companies. A group of associations working to protect digital rights signed the complaints and reports: in Italy the Hermes Center, together with Privacy International, Homo Digitalis and Noyb, the organization founded by the Austrian activist Max Schrems, whose lawsuits have twice struck down data-transfer arrangements between Europe and the United States. The goal: to block Clearview AI's operations on the old continent.

The controversial company has long been in the crosshairs. It claims to have a database of 3 billion images, which it uses to refine the algorithms it then resells to its clientele, ranging from US police forces (although some, such as the Los Angeles Police Department, are starting to back away) to private companies, large retail chains and casinos. In Italy, the Hermes Center's substantial dossier lands on the desk of a data protection authority that has already been asked to look into the matter.

The Guarantor's investigation

In March, the New York company received a request for clarification from the authority's offices in Piazza Venezia. Under scrutiny are the technical methods by which the data are managed and protected. In particular, the authority led by Pasquale Stanzione wants to know whether biometric data are extracted from the photos being processed, whether they are adequately protected, and whether automated decision-making lies behind the analysis of the faces.

The investigation stems from a request for intervention filed in February by Privacy Network, an Italian organization for the defense of fundamental rights. In the meantime there have also been complaints from citizens who found themselves in the Clearview AI database without their knowledge, including the author of this article (as recounted in Wired). A few months later, however, a second request to the company to find out whether personal photographs were in its databases came back negative, although one has to take the company's word for it, since users have no other means of verification or appeal.

Continental barrier

Now the new front of complaints at the European level is pushing for an EU-wide barrier against the company's invasive technologies. Clearview AI "has never had contracts with any customer in the European Union and is not available in the Union at the moment," its CEO, Hoan Ton-That, told Wired, but it has certainly tried, Italy included. The Italian Guarantor has taken a clear stance against these technologies, also rejecting the real-time facial recognition system adopted by the police (Sari). "Facial recognition technologies endanger our lives both when we are on the internet and when we are on the street," commented Fabio Pietrosanti, president of the Hermes Center. "By secretly collecting our biometric data, these technologies introduce constant surveillance of our bodies."

Heavy rulings against Clearview AI have already arrived. Canada has banned it. In Germany there was a first, partial victory, won to protect a single user. The four associations are now raising the bar in Europe, aiming to show the controversial startup the door. "European data protection laws are very clear when it comes to the purposes for which companies can use our data," says Ioannis Kouvakas, legal officer at Privacy International. "Extracting our unique facial features, or even sharing them with the police and other companies, goes far beyond what we could ever expect as online users."

How to defend yourself

The General Data Protection Regulation (GDPR) is clear: informed consent is required when handling personal information, both for the photos the startup collects from the web and for those it is asked to search against in its archives. Instead, in the first case users are not even aware that they are in the database, while in the second they have to make do with a simple "I agree." The authorities now have three months to respond to the complaints. And citizens? As the Hermes Center recalls, "they can ask Clearview whether their face is contained in the database and request that their biometric data no longer be included in searches, by sending a request to privacy@clearview.ai or by following the procedures offered by the platform My Data Done Right."
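For readers who would rather script such a request than write it by hand, the sketch below assembles a minimal GDPR access-and-erasure email addressed to privacy@clearview.ai. It is only an illustrative sketch, not an official template from Clearview AI or the Hermes Center: the sender address, SMTP details and wording are placeholders you would replace with your own.

```python
# Minimal sketch of a GDPR Articles 15/17 request email to Clearview AI.
# Sender address, SMTP host and wording are illustrative placeholders.
import smtplib  # only needed if you actually send the message
from email.message import EmailMessage


def build_request(full_name: str, sender: str) -> EmailMessage:
    """Compose a plain-text access-and-erasure request."""
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = "privacy@clearview.ai"
    msg["Subject"] = "GDPR data access and erasure request"
    msg.set_content(
        f"Dear Clearview AI,\n\n"
        f"Under Articles 15 and 17 of the GDPR, I, {full_name}, request:\n"
        f"1. confirmation of whether you process images or biometric data relating to me;\n"
        f"2. a copy of any such data;\n"
        f"3. erasure of that data and its exclusion from future searches.\n\n"
        f"Please reply within the one-month deadline set by Article 12(3).\n\n"
        f"Regards,\n{full_name}\n"
    )
    return msg


if __name__ == "__main__":
    message = build_request("Jane Doe", "jane.doe@example.org")
    print(message)  # review the request before sending it
    # To actually send it, supply your own email provider's SMTP details:
    # with smtplib.SMTP("smtp.example.org", 587) as server:
    #     server.starttls()
    #     server.login("jane.doe@example.org", "app-password")
    #     server.send_message(message)
```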

If you want to send reports to Wired, to help us understand how many Italians found themselves in the Clearview AI database after submitting a request, you can contact our editorial staff through various channels, securely and anonymously, via the Wired Leaks platform.








