A group of academics asks the Government for a moratorium on the use of facial recognition
The use of facial recognition techniques appears regularly in the media. The latest controversy in the US has been Amazon’s demand that its delivery drivers give “biometric consent” to cameras inside their vans and trucks: in addition to verifying drivers’ identities and the position of the camera, the software can detect yawning or inattention. Now a paper signed by 70 figures with different relationships to the consequences of biometrics asks the Government not to allow its use until a commission investigates its problems. Among the signatories are Adela Cortina, Emeritus Professor of Ethics and Political Philosophy at the University of Valencia; Pilar Dellunde and Ramón López de Mántaras, of the CSIC’s Artificial Intelligence Research Institute; Txetxu Ausín, tenured scientist at the CSIC’s Institute of Philosophy; and Ofelia Tejerina, president of the Internet Users Association.
“The concern that motivates this letter has to do with the potentially pernicious effects that these systems can have on the well-being, interests and fundamental needs and rights of the Spanish population,” they write. Their concern is less about the police or security use of facial recognition than about its use to analyze and interpret behaviors, reactions or attitudes: “Systems for the recognition and analysis of images of people (of their faces, gestures, hairstyles, body postures and movements, clothing, and skin textures and/or colors) and, by extension, the machine learning algorithms that computationally support them, have serious problems that have been widely documented,” they add.
The authors cite several examples of the problematic use of this technology: assigning a negative category to someone (cheat, delinquent, irresponsible) based on the population statistics of a group; associating postures, gestures, hairstyles or clothing with negative behaviors; and problems of bias and lack of representation. “Due to the serious deficiencies and risks that these systems present, the possible benefits they could offer do not in any way outweigh their potential negative effects, especially for groups and collectives that tend to suffer injustice and discriminatory treatment,” they write.
To deepen this debate, the promoters of the letter ask for an “independent” investigative commission made up of “scientists, jurists, experts in ethics and artificial intelligence, and members of civil society, especially those groups that may be prima facie affected by these systems.” The promoters are Ujué Agudo and Karlos G. Liberal, heads of Bikolabs; David Casacuberta, Professor of Philosophy at the Autonomous University of Barcelona; and Ariel Guersenzvaig, professor at the Elisava Faculty of Design and Engineering.