The EU puts limits on artificial intelligence but overlooks autonomous weapons
Facial recognition, autonomous driving, network surveillance, robotics, research into new drugs, systems that calculate the probability that a loan will be repaid… Artificial intelligence (AI) has many faces. The most feared is its Orwellian version: this group of technologies (its growth has been such that it is no longer possible to speak of just one) has proven very useful for monitoring citizens and influencing their decisions. It is already happening in China and, according to the mathematician Cathy O'Neil, also in the West, albeit in a veiled way. The European Union does not want to let the dark side of these systems flourish. The Commission today presents a regulation, whose draft was leaked last week, that will lay the foundations for the development of artificial intelligence on the continent.
The text now begins its legislative passage through the European Parliament, and may therefore still be altered. Brussels' approach to the matter is the same one it applied to the General Data Protection Regulation (GDPR), in force since 2018: it evaluates the risk of the different applications of artificial intelligence and restricts or prohibits those it considers most dangerous. The preamble of the regulation makes clear that its objective is for artificial intelligence in the EU to take a "human-centered" approach. Translation: in Europe, companies will not be left to do as they please.
Restrictions and prohibitions
"Indiscriminate surveillance" falls into the "high risk" category and is therefore prohibited. This covers systems that track citizens in physical environments, placing them at an exact location at a given time, or that extract aggregated data about them.
Also high risk are the so-called "remote biometric identification systems", a term that refers to facial recognition, a technology whose use has drawn angry complaints from academia. The regulation establishes legitimate exceptions to its prohibition: these systems will only be allowed if authorized by the EU or the Member States, if they are used for "the purpose of prevention, detection or investigation of serious crimes or terrorism", or if their use is limited to a specified period of time and the data are subsequently erased.
Social credit scoring systems, which calculate an individual's trustworthiness from a series of variables, are also outlawed. The Chinese authorities operate one, whose inner workings are unknown, which can have serious consequences for those who lose points.
Just as the GDPR requires those who want to process an individual's data to obtain their consent, the artificial intelligence regulation establishes that individuals must be notified when they interact with an AI system. Unless, the regulation says, that is "obvious from the circumstances and the context of use."
Surveillance and sanctions
Likewise, it specifies that systems considered high risk must be supervised, a category that, the text clarifies, will be continually updated. In addition to the applications already mentioned, this definition includes AI systems designed to decide how and where to allocate resources in an emergency, those used to grant or deny admission to educational institutions, to evaluate candidates in personnel recruitment processes, to calculate the creditworthiness of individuals, or to assist judges, among others.
The regulation also establishes the creation of a "European Artificial Intelligence Board", whose main task will be to decide which applications are considered high risk.
Companies that fail to comply with the regulation can face fines of up to 20 million euros or 4% of their turnover, a figure that in the case of the large technology companies can be very large indeed.
The beginning of the road
Spain's Secretariat of State for Digitalization and Artificial Intelligence views the regulation positively. "It is a great step forward for the European Union in designing the new digital reality," say sources from the department headed by Carme Artigas. "It is a balanced proposal, which provides an environment of trust and guarantees to citizens, but at the same time does not limit the innovative capacity of a technology with as many opportunities as AI."
Borja Adsuara, a consultant and expert in digital law, disagrees on this last point. Brussels' attempt to foresee how the set of technologies we call AI may develop will, he believes, constrain innovation. "Europe, from Justinian to Napoleon, has always had a problem: it is a closed legal system, in which everything that is not expressly permitted is prohibited. That is not very conducive to innovation, because innovating is precisely doing what has not been foreseen," he reflects. In the United States, by contrast, the approach is the reverse: whatever is not expressly prohibited is allowed. "That is why the great startups are born there," he adds.
For this jurist, the regulation is a good starting point, although he wonders whether it would not make more sense to amend existing laws, from the Penal Code to the Administrative Procedure Law, than to create a new one for a particular technology. "What matters about a technology are its applications, and specifically its misuses. The law is there to prevent the latter," he notes.
A notable absence
"I have been very disappointed that lethal autonomous weapons have been left out," says Ramón López de Mántaras, director of the Artificial Intelligence Research Institute at the CSIC. "The draft talks about high-risk applications of artificial intelligence; I don't know what could be riskier than a weapon that makes the decision to kill autonomously," he adds. The document states verbatim that "this regulation does not apply to AI systems used exclusively for the operation of weapons or for other military purposes."
Professor López de Mántaras also believes that the regulation is a good starting point for regulating AI, something he considers necessary. "But I think it will be extremely difficult to comply with this regulation. In some cases for technical reasons, such as achieving the required transparency of data and algorithms, and also for practical reasons, such as the cost of doing so."
There are other factors that will make compliance difficult. The draft says that high-risk systems must use high-quality data so that the artificial intelligence does what it is supposed to do and is not biased. "That is all very well, but obtaining training databases that are relevant, representative and error-free is very complex," explains López de Mántaras.
Then there are the systems that are not closed, the ones that keep learning. Autonomous cars, for example. "How do you ensure that a system which, when it was put on the market, had unbiased, well-assembled and controlled databases does not subsequently go off the rails as it learns?"