Artificial intelligence: the 10 still-unresolved points of the European regulation


After years of negotiations, approval of the European Union's regulation on artificial intelligence is expected by 2023. But on many points of the Artificial Intelligence Act (AI Act) there is still no agreement. The Council holds positions opposite to Parliament's on real-time facial recognition, while within the European Parliament itself there are conflicting views on emotion recognition systems. sportsgaming.win spoke to Brando Benifei, AI Act co-rapporteur for the European Parliament, and Patrick Breyer, MEP of the German Pirate Party, to identify the 10 most controversial points, which will be at the heart of the trilogue negotiations, i.e. the meetings between Parliament, the Council and the Commission that will have to reach a compromise.

1. Real-time facial recognition
2. Analysis of emotions
3. Polygraphs and lie detectors
4. Fundamental rights impact assessment
5. Social scoring
6. A de facto social score
7. Repression of a free and diverse society
8. Risk of disinformation with ChatGPT
9. Risk of supporting regimes that use AI for repression
10. Risk of stopping a large number of innocent people (especially migrants and minorities)

Real-time facial recognition

Surveillance systems that identify people as they move through public places, for example while climbing the stairs of a subway station, are prohibited in the regulation proposed by the Commission. But there are exceptions, such as the fight against terrorism and the search for missing persons, that would allow a judge to activate them. "In the information systems of the member states there are, at any moment, hundreds of thousands of people wanted for terrorism," Breyer points out. "The courts would probably order them to be identified, and that would mean permanent biometric mass surveillance." "We have no data showing that real-time facial recognition improves security," Benifei confirms, "but we do know that it creates security problems." Most of the political groups in the European Parliament have been won over by the Reclaim Your Face campaign for a total ban on mass surveillance, but the Council has added "national security" as an exception for its use.

Analysis of emotions

The biometric analysis of movements to identify emotions is not prohibited by the AI Act, but only classified as a "high-risk" technology. This means that systems using this application of artificial intelligence are listed in an annex to the regulation (which will need to be periodically updated) and are subject to specific certification procedures. "In my opinion, emotion recognition should be banned, with the sole exception of medical research," says Benifei, "but Parliament does not have a majority on this, because the right-wing liberals (EPP) and the conservatives are against banning these technologies, believing they can be used for security."

The "lie detector" at the borders that Europe preferred not to talk about

Polygraphs and lie detectors

Among the biometric emotion analysis technologies considered high-risk but not prohibited by the AI Act are products that promise to identify people moving suspiciously in a crowd (for example, those who leave luggage unattended), as well as polygraphs, that is, actual lie detectors. Among these is the iBorderCtrl system: based on an algorithm that analyzes facial micro-movements, it was tested at Europe's borders to identify suspected terrorists. Despite giving wrong answers to those who tested it, its trial was described as a success story by the European Commission.

Fundamental rights impact assessment

The subject of heated discussion between Parliament and the Council is the impact assessment required of users of artificial intelligence systems classified as high-risk. "Currently the regulation provides only for certification by the producers of these systems. These are self-assessments of data quality and risks of discrimination, which will be supervised by the national authority of each member state and by the European office on artificial intelligence," explains Benifei. "We want to add a further obligation of control on the part of users, i.e. the public administrations and companies that deploy these systems, but the Council's text does not envisage this mechanism."

Social scoring

The use of artificial intelligence to score people based on their behavior is prohibited in the proposed regulation, with an exception for small businesses contained in the draft approved by the Council but deleted in the compromise text drawn up by the European Parliament's Justice Committee: "It is appropriate to exempt AI systems intended for the assessment of creditworthiness where they are put into service by micro or small enterprises for their own use."

A de facto social score

"There is a risk that emotion recognition technologies will be used to monitor minorities in train stations, at borders against migrants, in prisons and even at sporting events," adds Breyer, "in all the places where these technologies have already been tested." The Pirate Party MEP then points out that "many of the cameras used for recording and monitoring movements are technically capable of recognizing faces, especially if purchased from Chinese manufacturers." Furthermore, "it would be very easy for the police to activate the facial recognition function," even where this is not allowed by European legislation.

Repression of a free and diverse society

Despite the ban on social scoring, Breyer sees a danger that information from emotion recognition systems deployed for security purposes could be used to identify those who behave differently from the crowd, constituting a de facto social credit system that represses anyone wishing to deviate from mass behavior, for example by taking part in political demonstrations.

Risk of disinformation with ChatGPT

In the European Parliament's compromise proposal, content generated by artificial intelligence that appears to have been written by a person, as well as deepfake images, is subject to a transparency obligation towards users: whoever deploys the system must inform users, at the moment they are exposed to the content (chatbot or deepfake), that it was generated by an algorithm. "This transparency obligation is foreseen in the European Parliament's draft but not in the Council's position," Benifei underlines.

Risk of supporting regimes that use AI for repression

"Iran has announced that it will use facial recognition to report women who do not wear the headscarf properly; Russia, to identify people to arrest. Large-scale use of this technology in Europe would push companies to step up its production, and that would also have an impact on authoritarian regimes outside the continent," warns Breyer.

Risk of stopping a large number of innocents

"Even if facial recognition technologies reached 99% accuracy, when applied to thousands of people they would risk flagging an enormous number of innocent citizens," Breyer recalls. "A study by the US National Institute of Standards and Technology (NIST) found that many biometric facial recognition technologies on the market are unreliable with non-white people," the MEP notes, "probably because the algorithms' training data were flawed: these technologies tend to be used in areas with high crime rates, where mainly ethnic minorities live."
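Breyer's arithmetic is worth making explicit. The following is a minimal back-of-the-envelope sketch, not drawn from the article: the figures (a 1% false positive rate standing in for "99% accuracy", and a notional 100,000 people scanned per day) are hypothetical, as are the names false_alarms, daily_passengers and fpr.

```python
# Illustration of the base-rate problem Breyer describes: even a
# "99% accurate" face recognition system produces large absolute
# numbers of false matches when applied to a whole population.
# All figures below are hypothetical.

def false_alarms(people_scanned: int, false_positive_rate: float) -> int:
    """Expected number of innocent people wrongly flagged."""
    return round(people_scanned * false_positive_rate)

daily_passengers = 100_000  # hypothetical daily footfall at one station
fpr = 0.01                  # 1% false positives, i.e. "99% accuracy"

# 100,000 scans at a 1% error rate -> about 1,000 innocent people
# flagged every day, at a single location.
print(false_alarms(daily_passengers, fpr))  # -> 1000
```

The point of the sketch is that accuracy expressed as a percentage hides the absolute scale: the number of wrongly flagged people grows linearly with the number of scans, regardless of how small the error rate sounds.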





