Italy receives millions from Europe to test artificial intelligence at the borders

Italy ranks third in the amount of European research funding earmarked for artificial intelligence for border control. Behind only France and Spain, Italy has received 38 million euros, 12% of the 341 million euros distributed by the European Union over the last fifteen years.

These figures emerge from A clear and present danger, a study by the British organization Statewatch, which has long promoted digital rights, and they paint a picture of a limping Europe. According to the organization, which worked with European Digital Rights and Access Now, among others, the current European regulatory proposal on artificial intelligence, the AI Act, does not provide adequate safeguards on the use of this technology in the field of migration. The proposal, says the European Commission, "is based on EU fundamental rights and values with the aim of giving people and other users the confidence to adopt AI-based solutions, while encouraging companies to develop them".

A double standard

However, the proposal imposes no particular obligations on the use of artificial intelligence at the borders: a choice that will in effect allow, as the European Commission has stated, the development of technologies that put certain categories of people at risk without any regulatory protection. On this point the organization states that "it is vital that the legislator, and the law itself, recognize the problem and consequently impose limits and safeguards on the design and use of AI in a high-risk context, in this case migration control".

According to the Union, the unacceptable risks of this technology, beyond real-time surveillance in public spaces, are exemplified by systems that employ "subliminal techniques", that is, systems that exploit vulnerable groups or that assign people a score by judging their behavior and physical characteristics. Border control and the denial of access to welfare are examples of how these techniques and judgments are nonetheless considered justified when the people subjected to them are migrants.

AI and law enforcement

On this point, Statewatch writes that "any practice that undermines the essence of the autonomy of a person causes harm". The same goes for the issue of vulnerability, defined in the proposal solely through factors such as age and physical or mental disability. "The law should consider all sensitive characteristics as potential indicators of inequality (and therefore of a higher risk of vulnerability)", the report reads: not only age and sex assigned at birth, but also ethnic origin, health status, sexual orientation and socio-economic status.

The launch of the European campaign Reclaim Your Face two years ago has not led to a real ban on the use of biometric technologies in public spaces, as requested by the more than 60 associations supporting the campaign. In some cases, such as searching for victims of crimes or for suspects, biometric identification is considered "strictly necessary" and could be used by police authorities for security purposes. It is therefore clear, says Statewatch, that "the use [of these systems, ed.] for the control of migration and asylum [...] is allowed", and this puts the rights to non-discrimination and privacy at risk.

The European Union's position on research in this field is that technologies of this kind may continue to be studied in all their applications; for Statewatch, "it is not clear what need there is to do research when these systems are prohibited in reality". More generally, as numerous digital rights associations have already pointed out, the proposed regulation prohibits only the artificial intelligence systems explicitly mentioned in the law, considerably limiting the impact a rule of this kind could have in the European context.

Procurement in Italy

The European Union funding monitored by the NGO appears to have ended up mostly in the pockets of private companies, and in Italy Leonardo takes the lion's share. Marisa, Promenade and Ranger are the three projects the former Finmeccanica won under Horizon 2020, receiving more than 3 million euros from the European Union. The Marisa project, which also involves the University of Bologna, aimed to create a tool for exchanging data and information collected on the internet or on social media. "The proposed solution will provide mechanisms to obtain information from any big data source, perform analysis of a variety of data based on geographical and spatial representation, and use techniques to search for typical and new patterns that identify possible connections between events", reads the project sheet.

When it comes to maritime surveillance, Promenade, running until 2023, and Ranger are the most illustrative. The first aims to apply artificial intelligence and big data to the monitoring of ships in the Mediterranean (with automatic detection of any anomalies), facilitating the exchange of this information between international authorities. Leonardo, 30% owned by the Ministry of Economy and Finance, takes part in this project together with six European ministries. In the second, Ranger, a maritime surveillance project that aims to create radar technologies capable of alerting the authorities when a suspicious ship is detected in the Mediterranean, Leonardo holds the largest share of funding after the French CS Groupe.

The European push for datasets and technologies that increase member states' power at the borders affects migrants, a category of people who risk being caught in the mechanisms of a dangerous control system with no concrete way out, as also emerges from a report by the Hermes Center for Transparency and Digital Human Rights*. The Frontex pushbacks of recent years are certainly a first wake-up call about the impact that a lack of regulation in this sector could have in the future. An important issue to consider, says Statewatch, if the future of artificial intelligence is to be regulated for everyone.

* The author of this article contributed to the study.





