Artificial intelligence, who pays for the damages?

Artificial intelligence

Alongside the traditional forms of civil liability for human conduct, we must now contend with the legal questions of liability arising from the use of artificial intelligence systems. These systems are reaching ever higher levels of autonomy, especially in the case of machine learning, where suitably trained "machines" acquire the ability to learn and develop solutions in an (almost) completely autonomous way.

But what is meant by artificial intelligence? The European Union was the first to offer a definition, in the Coordinated Plan on Artificial Intelligence, COM(2018) 795 final: "By 'artificial intelligence' (AI) we mean those systems that display intelligent behaviour by analysing their environment and taking actions, with a certain degree of autonomy, to achieve specific objectives."

A similar definition appears in the 2020 White Paper on Artificial Intelligence and in the subsequent European Union communication COM(2021) 205. In the proposed Regulation on artificial intelligence, COM(2021) 206, artificial intelligence is defined as "software developed […] which can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with."

The proposal also envisages the regulation of so-called "high-risk" artificial intelligence systems, i.e. those whose use may entail risks for "fundamental rights".

At the Italian level, lawmakers have so far limited themselves to transposing the principles established at European level, as emerges from the Strategic Programme on Artificial Intelligence 2022-2024.

Absence of a specific regulation

In the absence of specific rules governing liability arising from the use of artificial intelligence systems, which provisions can, or should, we refer to in Italy today? Civil liability is traditionally divided into non-contractual (tort) liability and contractual liability. Leaving aside liability for defective products (in the abstract, also applicable to artificial intelligence systems), the first rules to consider in tort are Articles 2050 and 2051 of the Italian Civil Code, which provide for liability for "dangerous activities" and for "things in custody", respectively.

These provisions, however, may not be entirely adequate for the new scenarios. An artificial intelligence activity is not necessarily a "dangerous activity", i.e. one involving a significant probability of causing damage to third parties. On the other hand, even the traditional notion of "custody" could prove inadequate for a system capable of making decisions or expressing opinions on its own. Moreover, these provisions do not in any event exempt the injured party from proving the damage suffered and the causal link between that damage and, respectively, the dangerous activity or the thing in custody. The general tort rule of Article 2043 of the Civil Code, for its part, requires the injured party (here, hypothetically, the party harmed by the artificial intelligence system) to prove the fault of the tortfeasor.

Contractual liability can help only where a relationship actually exists between the provider of the artificial intelligence service and the user. If a product or service were supplied using an artificial intelligence system, one could hypothesize the application of Article 1228 of the Civil Code on liability for the acts of auxiliaries, assuming a relationship between a third party (for example, the artificial intelligence system) and the debtor (for example, the person who uses it to supply a product or service), as Article 1228 requires. The proposed Regulation mentioned above has attempted to address some of these problems: in the event of damage caused by artificial intelligence, it places on the producer the burden of demonstrating that everything possible was done to avoid the damage. Amendments to the Machinery Directive are also planned, with the aim of introducing specific compliance requirements for artificial intelligence systems.

Some insights from recent experience

In some cases brought before the courts, liability for damage caused by artificial intelligence has been established under the rules on manufacturer's liability, while in other cases liability has been attributed to the person who in any event controlled the use of the machine (see Brouse v. United States).

Interesting, from a different perspective, is the decision of the Australian Federal Court in Thaler v. Commissioner of Patents, which denied the possibility of patenting an invention created by an artificial intelligence system, since such a system lacks legal personality, i.e. the capacity to hold subjective legal positions.

From yet another perspective, the Italian Court of Cassation recently decided a dispute concerning liability for damage caused by an artificial intelligence system used for reputational ratings, arising from the unlawful processing of personal data. In that case, the cause of the damage was identified in the lack of transparency of the algorithm used by the system to determine the rating.





