What's in the European Commission's plan to regulate artificial intelligence


The proposal establishes four levels of risk to determine whether a technology can be adopted in Europe and sets sanctions for those who violate the rules

The Berlaymont building in Brussels, headquarters of the European Commission (photo: Luca Zorloni for Wired). The European Commission has presented its first package of rules to regulate the use of artificial intelligence. Presented by Vice President Margrethe Vestager, responsible for the Union's digitization strategy, and by Thierry Breton, European Commissioner for the Internal Market, the text puts the protection and security of people's fundamental rights at its center. "The coordinated plan," reads a note, "outlines the strategic changes and investments required at Member State level to strengthen Europe's leading position in the development of human-centric, sustainable, secure, inclusive and trustworthy AI".

The Commission segments AI applications according to the level of risk of abuse and harm and, on the basis of this classification, establishes the rules to be adopted. Minimal, low, high and "unacceptable" risk: these four levels determine both the intensity of the restrictions that will be imposed and the way in which a technology that uses artificial intelligence may be employed.

Levels of risk

At the lowest level of risk are systems that have no impact on people's rights and lives, such as filters that block spam in emails or systems that sort waste. These technologies can be used without limitations beyond those already in force. At the second level of risk are systems that interact directly with humans, such as bots used in support chats. In this case, the new regulation requires that it be made explicit that the user is interfacing with an artificial intelligence and not with a human being.

The third level of risk is the heart of the regulatory intervention. It covers all systems that can interfere with important aspects of human life, such as AI that filters and selects candidates' résumés for a job, or systems used in healthcare. Companies that supply these technologies will be obliged to use data in a way that avoids any discrimination against individuals and to adopt adequate risk assessment and mitigation systems. They will have to regularly provide transparent and complete documentation to the competent authorities so that the correct use of data can be verified, and keep a detailed record of activities to guarantee the traceability of results. They will also have to inform users about how these technologies work and how they are used. In addition, the legislation requires companies to guarantee constant human oversight over the construction and deployment of AI and to follow the highest cybersecurity standards.

The last level comprises technologies considered extremely dangerous to people's lives or contrary to the fundamental values of the European Union. These include systems that evaluate and categorize people based on their social behavior and technologies capable of subliminally influencing people's behavior.

Biometric technologies

A separate chapter concerns the use of biometric technologies. Vice President Vestager pointed out that these systems can fall into both the third and fourth levels of risk. Their use will therefore be strictly regulated and allowed only when it directly supports human personnel in certain types of operations, as in the case of border controls with the scanning of fingerprints or faces. Systems that operate remotely, in real time and on a mass scale will instead be prohibited. "There is no room for mass surveillance in our societies," Vestager said. However, these systems may be used by the police in particular circumstances, in cases of extreme necessity. To allow this type of use, Member States will have to adopt ad hoc rules.

National authorities will be responsible for supervising compliance with the new legislation. In case of violations, they will apply graduated sanctions based on how persistent the violation is. For a first violation, for example, authorities will be able to request the withdrawal from the market of a product that is not in line with the regulatory framework, while for repeated violations they will be able to fine companies up to 6% of their global turnover. In addition, a European Artificial Intelligence Board will be established to facilitate the implementation of the regulation and stimulate the development of further AI standards.

The proposal will now need to be scrutinized by member state governments and the European Parliament. The European Commission plans to guarantee financial support for the development of AI through Community research programs.




