Policy strikes hard: facial recognition in public places may be banned outright. What "crimes" has AI-based biometric technology committed?

If we call this an era in which face recognition technology is applied everywhere, few people would stand up to dispute the claim.

Face recognition technology has passed through several stages: initial introduction, market education, and technical refinement. It has been adopted across many industries and is now relatively mature, serving smart security, smart office, e-commerce, and other applications on streets and in office buildings, coal mines, commercial complexes, banks, and beyond.

However, the law of the "negation of the negation" has long taught that things develop in a spiral or wave-like motion rather than in a smooth, overnight ascent. Just recently, the European Parliament voted to pass a resolution calling for a total ban on large-scale surveillance based on AI biometric technology, further tightening the application of facial recognition at the regulatory level.

European Parliament member Petar Vitanov issued a statement after the resolution passed: "Fundamental rights are unconditional. For the first time in our history, we are calling for a moratorium on the deployment of facial recognition systems for law enforcement, as the technology has proven to be ineffective and often leads to discriminatory results. This report is a huge victory for all European citizens."

From the "black technology" once hyped by the outside world to a "plague" that the public now shuns, what mistakes has AI-based face recognition made to draw the attention of regulators, and what lessons does this hold for domestic regulators and enterprises as the technology develops?

Several "crimes" show that face recognition is not as harmless as it seems

During the epidemic of the past year and more, contactless epidemic-prevention applications backed by face recognition became deeply familiar to the public, who enjoyed the dividends the technology brought. That made it easy to label face recognition as simply "good" and to overlook its double-edged nature.

In January 2020, the first known wrongful arrest involving facial recognition technology occurred in the United States. According to an administrative complaint the American Civil Liberties Union (ACLU) filed against the Detroit Police Department in June, Robert Williams of Detroit was accused of theft by local police at the beginning of that year and arrested in front of his wife and children; his detention lasted up to 30 hours.

According to the ACLU's complaint, Detroit police sent surveillance video of a Black man stealing a watch from a Shinola store to the Michigan State Police, who ran it through a facial recognition system that flagged Williams' photo as a potential match. Even though the police had little other evidence that Williams was the suspect, the arrest went ahead, and the absurd scene above played out.

Robert Williams' experience is not unique. Michael Oliver, also from Detroit, was detained for ten and a half days after facial recognition wrongly tied him to a theft; Nijeer Parks was held for ten days in Woodbridge over a theft accusation. Notably, in all of these U.S. cases the people mistakenly treated as suspects were Black, which drew media attention to algorithmic discrimination.

As early as 2018, Joy Buolamwini, a researcher at the MIT Media Lab, and Timnit Gebru, then a scientist at Microsoft, published the research paper "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification", in which the authors compared face-analysis systems from Microsoft, IBM, and Megvii. The results showed error rates as high as 21%-35% for Black women, versus less than 1% for white men, a large accuracy gap across demographic groups.
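To make that kind of disparity concrete, here is a minimal sketch, not code from the study, of the basic comparison behind those figures: computing a misclassification rate per demographic group and putting the rates side by side. The group labels and numbers below are invented for illustration only.

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Compute the misclassification rate for each demographic group.

    `records` is an iterable of (group, true_label, predicted_label) tuples.
    Returns a dict mapping group -> error rate in [0, 1].
    """
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, truth, prediction in records:
        totals[group] += 1
        if prediction != truth:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Toy data with made-up numbers; the wide gap between groups is the kind
# of disparity the Gender Shades audit reported, not its actual dataset.
sample = (
    [("darker-skinned female", "female", "male")] * 30
    + [("darker-skinned female", "female", "female")] * 70
    + [("lighter-skinned male", "male", "male")] * 99
    + [("lighter-skinned male", "male", "female")] * 1
)
print(error_rates_by_group(sample))
# {'darker-skinned female': 0.3, 'lighter-skinned male': 0.01}
```

Reporting a single overall accuracy would hide this gap entirely; breaking the error rate out by group is what exposes it.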

Years later, this problem has not been resolved; if anything, it has become more prominent.

In China, no wrongful arrest caused by face recognition in police work has been reported, but problems in other applications are not uncommon. In 2018, for example, the Ningbo traffic police's jaywalker-shaming display suffered an embarrassing mix-up: the face recognition system mistook a portrait of Gree chairwoman Dong Mingzhu printed on the side of a passing bus for a pedestrian crossing against the light, spawning jokes that "Mingzhu ran a red light."

At this year's 3·15 Consumer Rights Gala, the illegal collection of personal facial data was among the first cases exposed. As an important biometric technology, face recognition is contactless and non-intrusive, so people's data can be collected without their knowledge, and they show the least psychological resistance to it.

However, facial information is unique biometric information that an individual cannot change, and many users now rely on it in place of payment passwords, account passwords, and the like. The Personal Information Security Specification issued by the State Administration for Market Regulation clearly stipulates that facial information is biometric information and also sensitive personal information, and that the authorization and consent of the data subject must be obtained when personal information is collected.

Protecting personal privacy has become a shared concern at home and abroad

The European Parliament's resolution is aimed mainly at the use of AI by police and judicial authorities in criminal matters. On remote facial recognition, it calls on the European Commission to prohibit, "through legislative and non-legislative means, and where necessary through infringement procedures," the processing of biometric data for law enforcement purposes, including facial image recognition and large-scale surveillance in public places.

For example, the resolution opposes law enforcement agencies using artificial intelligence to predict the behavior of individuals or groups from historical data, past behavior, group membership, location, or similar characteristics in an attempt to identify people likely to commit crimes. It also stresses the role of human oversight, warns against placing excessive faith in the seemingly "objective" and "scientific" nature of AI tools, and emphasizes that algorithms should be transparent, traceable, and adequately documented.

In fact, regulators at home and abroad had already been paying close attention to the application of AI-based face recognition before this resolution.

As early as April 21 this year, the European Commission published a draft regulation, "Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts", which assesses the risks of artificial intelligence applications. In it, face recognition, as a form of real-time remote biometric identification based on AI, is among the technologies the Commission singles out for restriction.

The draft classifies AI application scenarios into four risk levels: low, limited, high, and unacceptable. The higher the level, the stricter the restrictions.

Because face recognition can heavily infringe on privacy, all remote biometric identification systems are classed as high-risk in the draft, and law enforcement agencies are in principle prohibited from using the technology in public. The draft also proposes that, apart from exceptions permitted by law such as searching for missing children, averting terrorist threats, and tracing specific criminal suspects, indiscriminate large-scale surveillance would be completely banned in the EU.

Across the ocean in the United States, facial recognition is also treated with great caution. In 2019, nine U.S. states passed bills restricting facial recognition, barring police, government departments, or public venues from using the technology. San Francisco fired the first shot among major U.S. cities to ban face recognition, and cities including Portland, Oakland, and Boston have since banned their governments from using it. In July this year, New York, the country's largest economic center, joined their ranks.

American tech giants have also pulled back. In June last year, "Big Blue" IBM announced unequivocally that it was exiting the face recognition business, would no longer offer any face recognition or face analysis software, and would no longer conduct research and development on the technology. Other giants such as Amazon and Google have likewise said they would suspend or abandon facial recognition-related businesses.

In China, legislation on personal data security has also been put on the agenda, and supervision of artificial intelligence technologies such as face recognition has grown stricter. The Civil Code, which took effect on January 1 this year, stipulates that processing personal information requires the consent of the natural person or their guardian. On April 23, the National Standardization Administration released a draft for comment of the "Information Security Technology: Face Recognition Data Security Requirements", which sets standards for the application scenarios, security requirements, and data collection and processing of face recognition technology.

The Personal Information Protection Law, passed on August 20, further clarifies that image collection and personal identification equipment may be installed in public places only as necessary to maintain public safety, must comply with relevant national regulations, and must be accompanied by prominent notices. The personal images and identification information collected may be used only for maintaining public safety and not for other purposes, unless the individual gives separate consent.

Face recognition cannot be banned "one size fits all"; the double-edged sword should be used to advantage

As for how to judge the European Parliament's resolution, opinions will surely differ. The problems of face recognition have indeed been exposed in recent years, but a "one size fits all" ban is clearly not the right treatment for an advanced technology.

The crux of the problem is that, until technology can resolve its current shortcomings, it remains a double-edged sword: on one side it makes life more convenient and society more efficient; on the other, its sharp edge can wound.

As the European Parliament itself stated: "The application of artificial intelligence in law enforcement offers huge opportunities, in particular for effectively combating financial crimes such as money laundering and terrorist financing, and cyber crimes such as child abuse, and contributes to the safety of EU citizens. At the same time, however, data show that AI identification has higher error rates for ethnic minorities, LGBT people, the elderly, and women, which may pose major risks to people's fundamental rights."

In deciding whether AI-based face recognition should be banned outright or allowed to continue, the real question is whether its risks are proportionate to its benefits, so that the harm it can cause is kept to a minimum.

