Human Rights in AI Facial Recognition: A Look into China's Abuse of AI Technology
Author: Caroline Adams
AI Facial Recognition
AI facial recognition has become one of the most controversial topics of the digital era. It is a powerful and effective biometric technology, yet it raises serious ethical concerns and dangers to human rights, above all the right to privacy. Even when the technology is used to track criminals or people deemed a threat to society, it remains an invasion of privacy. One of its most detrimental uses is China's deployment of the technology against the Uyghur community.
The Uyghur crisis in China has become one of the most alarming atrocities since the Second World War (Lu, 2021). As millions of Uyghurs suffer some of the gravest human rights violations, what is most troubling is the lack of coverage, accountability, and policy enforcement. The voices of the Uyghur community have been silenced and oppressed by the Chinese government. AI facial recognition technology carries tremendous power behind it. The international community must uphold its commitment to human rights and ensure that AI is used to strengthen those rights, not weaken them.
Facial recognition technology is a system of algorithms that identifies people from images or video. Although the underlying idea is not new, the technology has advanced dramatically in recent years, and artificial intelligence (AI) has made it one of the most controversial technologies of our time. AI systems can compare many faces against one another or detect multiple faces within a single scene.
Facial recognition operates in three steps: the system captures a photo or video, measures "landmarks" such as key facial features, and converts those measurements into a mathematical representation that can be compared against other faces stored in a database (The Economist, 2018). Leading systems report accuracy rates as high as 99.5%. AI facial recognition software does offer advantages, including real-time identification, anti-spoofing measures, reduced racial and gender bias through model training across millions of faces, and deployment across multiple cameras. It is used in health care, security, airport boarding, and exam proctoring (RecFaces, 2021).
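To make those three steps concrete, the sketch below shows steps two and three in miniature: turning a face image into a numeric "faceprint" and comparing it against a database. It is a minimal illustration under stated assumptions, not the software of any system discussed here; the embed function stands in for a trained neural network (a random projection is used purely for demonstration), and the names, image sizes, and 0.6 similarity threshold are invented for the example.

```python
import numpy as np

# A fixed random projection stands in for a trained face-embedding model.
# Real systems learn these weights from millions of labeled face images.
RNG = np.random.default_rng(0)
PROJECTION = RNG.standard_normal((128, 32 * 32))

def embed(face_image):
    """Step 2 (stand-in): convert a face image into a 128-number vector
    (a 'faceprint'), normalized to unit length so that a dot product
    between two faceprints equals their cosine similarity."""
    vec = PROJECTION @ face_image.ravel()
    return vec / np.linalg.norm(vec)

def best_match(probe, database, threshold=0.6):
    """Step 3: compare the probe faceprint against every enrolled
    faceprint and return the best match above the threshold, if any."""
    best_name, best_score = None, threshold
    for name, stored in database.items():
        score = float(probe @ stored)  # cosine similarity
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Enroll two synthetic 'faces' (random arrays standing in for photos),
# then identify a fresh capture of the first one.
faces = {name: np.random.default_rng(seed).standard_normal((32, 32))
         for seed, name in enumerate(["person_a", "person_b"])}
database = {name: embed(img) for name, img in faces.items()}
print(best_match(embed(faces["person_a"]), database))  # prints: person_a
```

The threshold is the crucial design decision: set it too low and innocent people are matched to watchlists; set it too high and the system misses its targets.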
There is a gray area in the legality and ethics of AI facial recognition. The majority of the images collected are gathered without permission. In 2016, the University of Washington in Seattle collected 3.3 million photos of faces from Flickr without consent and posted them to a database. "Currently, there are no clear legal safeguards regarding the gathering of facial recognition training data – but, recently, Facebook paid a $650 million settlement for harvesting facial data" (RecFaces, 2021).
Companies have come forward claiming to take more responsibility with facial recognition technology, for example by not reinforcing existing biases, not violating internationally accepted ethical norms, and protecting privacy through user control and transparency (RecFaces, 2021). Amid this explosive growth in artificial intelligence, the global facial recognition market is expected to reach $12.92 billion by 2027. The technology has stepped beyond the boundaries of security and safety into retail, transportation, hospitality, and banking (The Economist, 2018).
The international conversation around this technology varies widely and has become deeply controversial. Figures such as Elon Musk and Stephen Hawking have publicly voiced concern that AI could ultimately destroy humanity. China, by contrast, the leading country in the industry, uses it daily in consumer shopping, airport check-in, and banking. The Chinese company YITU is designing a "smart city" whose blueprint is built on AI facial recognition, and the technology could eventually extend to reading everything from people's emotions to their sexuality. The Chinese government plans to monitor and control citizens through an authoritarian lens using this technology. China currently has 170 million CCTV cameras, a number estimated to reach 400 million within the next three years.
While China claims this technology will improve society, others see it as profoundly dangerous for human rights. The authoritarian government has developed ways to abuse AI, including naming and shaming citizens on billboards for minor offenses identified through facial recognition. Ethnic minorities are the most threatened, especially the Uyghur community. At least one million Uyghurs are detained in secret camps where abuse is prevalent, the largest incarceration of a minority group in the world today (The Economist, 2018). "China's goal is to establish industrial standards now, so that they can have a hand in shaping the development and implementation of worldwide standards" (RecFaces, 2021). A technology this powerful demands deep ethical scrutiny.
China’s Surveillance and Security
The lockdown and authoritarian surveillance in China have reached levels that dramatically infringe on the human rights of the Uyghurs. Through financial, human, and technical investment, China has increased security with police checkpoints and police stations. Monitoring the region includes scrutinizing residents' familial and social networks, behavior, beliefs, and relations to other family members. "Xinjiang authorities conduct compulsory mass collection of biometric data, such as voice samples and DNA, and use artificial intelligence and big data to identify, profile, and track everyone in Xinjiang" (HRW, 2020). Through this high-level technology, the government can single out those whose behavior it deems potentially threatening to the state. People labeled as threats have been detained or harshly punished, justified as eliminating the "problematic ideas" of Turkic Islam.
China has built algorithms to track the Uyghur community and has installed advanced facial recognition systems. These systems enable racial profiling of the Uyghurs: cameras pick out their appearance and keep recognition records. This makes "China [is] a pioneer in applying next-generation technology to watch its people, potentially ushering in a new era of automated racism" (Mozur, 2018). Authorities maintain a "surveillance net" that collects and stores people's DNA to track the Uyghurs. The technology has spread beyond the Xinjiang region into cities such as Hangzhou and Wenzhou.
Because Uyghurs look distinct from the majority Han population, the technology has been effective at monitoring and controlling the community. CloudWalk, a Chinese surveillance start-up, stated directly on its website that "if originally one Uighur lives in a neighborhood, and within 20 days six Uighurs appear, it immediately sends an alarm to law enforcement" (Mozur, 2018). Combined with racist laws passed in the name of counterterrorism, this advanced technology has fostered unsafe environments inside and outside the camps, creating a new era of racial profiling and fueling one of the worst genocides the world may ever witness.
China's use of AI facial recognition technology has crossed a wide range of ethical boundaries. The Xinjiang Uyghur Autonomous Region (XUAR) has been turned into a surveillance state, enforced by Party Secretary Chen Quanguo. As in his earlier efforts in Tibet, he has deployed a "massive system of electronic surveillance, which included an extensive database on Uyghur residents' habits, relations, religiosity, and other traits that are used to assess their 'loyalty to the state'" (Roberts, 2022). This suspicion built over decades, culminating in the creation of incarceration camps in 2017; by 2018, the camps held one to two million Uyghurs and other Muslim minorities. The PRC is enforcing a cultural genocide through policies that demolish the community's identity and restructure its landscape.
Xi Jinping has revived Mao-era belief systems to orchestrate political control, with technology underpinning that power (Roberts, 2022). The United Nations High Commissioner for Human Rights, Michelle Bachelet, called for a moratorium on the sale and use of artificial intelligence systems until their "negative, even catastrophic" risks can be properly assessed. Her call followed the UN report Urgent action needed over artificial intelligence risks to human rights and the revelations about Pegasus spyware, which infiltrates smartphones and gains access to all of their contents. "The Pegasus revelations were no surprise to many people," Ms. Bachelet told the Council of Europe's Committee on Legal Affairs and Human Rights, referring to the widespread use of spyware commercialized by the NSO Group, which affected thousands of people in 45 countries across four continents (United Nations, 2021). Concern for the right to privacy weighs heavily on the use of this technology.
The United Nations Human Rights Office of the High Commissioner produced a report analyzing how AI technology affects the right to privacy as well as the rights to health, education, freedom of movement, peaceful assembly, association, and freedom of expression. Tim Engelhardt, a Human Rights Officer in the Rule of Law and Democracy Section, voiced deep concern about AI technology: while welcoming "the European Union's agreement to strengthen the rules on control" and "the growth of international voluntary commitments and accountability mechanisms," he warned that "we don't think we will have a solution in the coming year, but the first steps need to be taken now or many people in the world will pay a high price" (Neuman, 2021). The report found that states and businesses often fail to carry out due diligence when deploying such software, and it documented numerous cases of mistreatment, such as people denied social security benefits by faulty AI tools or wrongly arrested because of flawed facial recognition. Most importantly, it concluded that the technology's discriminatory risks are the most detrimental to human rights (Neuman, 2021).
While responsibility and accountability rest with states and governments, public and private technology companies also have an enormous role in preventing human rights violations committed with their products. In 2020, MIT cut ties with iFlytek, a Chinese artificial intelligence company that supplied technology used to surveil Muslims in Xinjiang. Maria Zuber, Vice President for Research at MIT, stated, "We take very seriously concerns about national security and economic security threats from China and other countries and human rights issues" (Knight, 2020). In 2019, the US government banned Chinese AI companies from doing business with American companies because their technology was used to oppress the Muslim minority. Human Rights Watch reported that iFlytek supplied the Xinjiang authorities with technology used to identify members of the Muslim community by their voiceprints. The surveillance has contributed to the detention and disappearance of one million Uyghurs (Knight, 2020).
Ethical Framework
On the other side of the ledger, China's use of AI facial recognition has produced significant successes in fighting crime. In Zhengzhou, a police officer spotted a heroin smuggler at a train station; in Qingdao, the technology led to the capture of two dozen criminal suspects during a festival. These programs have caught murderers, drug dealers, and other criminals. Beijing is embracing technology that would track all 1.4 billion of its people and is assembling a national surveillance system; train stations can now scan for China's most wanted criminals (Mozur, 2018). "Already, China has an estimated 200 million surveillance cameras — four times as many as the United States" (Mozur, 2018).
One of the major necessities in the use of AI technology is transparency from companies, states, and international organizations. The world must be able to fully grasp the development, usage, and intentions behind the technology. The OHCHR report states: "The complexity of the data environment, algorithms and models underlying the development and operation of AI systems, as well as intentional secrecy of government and private actors are factors undermining meaningful ways for the public to understand the effects of AI systems on human rights and society" (United Nations, 2021). Responsibility for using facial recognition to enhance human rights is shared across the public and private sectors, technology companies, academic research, state obligations, international obligations, and NGO commitments.
Amnesty International has launched a global campaign to ban facial recognition technology because it enables racist policing and threatens the right to protest. Matt Mahmoudi, AI and Human Rights Researcher at Amnesty International, stated: "Facial recognition risks being weaponized by law enforcement against marginalized communities around the world. From New Delhi to New York, this invasive technology turns our identities against us and undermines human rights" (Amnesty International, 2021). Mahmoudi and the organization argue that the technology can function as a weapon of racial oppression. A UK court likewise ruled that a police facial recognition deployment violated human rights, infringed personal freedom, invaded privacy, and was discriminatory in practice, although the ruling stopped short of eliminating the technology, leaving room for its use toward greater goods (Fernandez, 2020).
Discrimination is an even deeper flaw in the technology. "In fact, Asians and African Americans are 100 times more likely to be misidentified than white men, with Native Americans having the highest rates of false identifications" (Fernandez, 2020). The technology's performance reflects its training data: a system trained primarily on white male faces will recognize white men far more reliably than anyone else. When the South Wales Police used facial recognition to scan for known offenders at the Champions League Final, 92% of the matches were false positives. Detroit police ran into the same problem, with 96% false positives in a search that led to the arrest of the wrong man (Fernandez, 2020). Before this technology is deployed, it must be far more reliable and structured to protect human rights, given how flawed it has proven in past cases.
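The South Wales figure illustrates a base-rate problem worth working through: even a system with the 99.5% accuracy cited earlier produces mostly false alarms when it scans a large crowd for a small watchlist. The numbers below are assumptions chosen for illustration, not figures from the cited cases.

```python
# Why a "99.5% accurate" scanner still produces mostly false alarms when
# scanning a crowd. All numbers here are illustrative assumptions.
crowd_size = 170_000         # faces scanned at a large event
watchlist_in_crowd = 20      # people present who are actually wanted
true_positive_rate = 0.995   # chance a wanted face is correctly flagged
false_positive_rate = 0.005  # chance an innocent face is wrongly flagged

true_alerts = watchlist_in_crowd * true_positive_rate
false_alerts = (crowd_size - watchlist_in_crowd) * false_positive_rate
share_false = false_alerts / (true_alerts + false_alerts)

print(f"{true_alerts + false_alerts:.0f} alerts, {share_false:.1%} false")
# prints: 870 alerts, 97.7% false
```

Because only a tiny fraction of any crowd is on a watchlist, even a small per-face error rate swamps the genuine matches, which is consistent with the 92% and 96% false positive rates reported above.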
Conclusion
AI facial recognition technology has become one of the most powerful tools for enhancing human rights and, at the same time, one of the most powerful weapons against humanity. The technology sits on a wide spectrum between these two poles. Responsibility for its ethical use falls on states, governments, the international community, the private sector, public companies, and educational institutions. To use this technology ethically, transparency must be communicated and intentions must be founded on the enhancement of human life.
List of References
RecFaces. (2021, August 23). AI facial recognition technology overview 2021. Retrieved November 8, 2021, from https://recfaces.com/articles/ai-facial-recognition
Amnesty International. (2021, August 17). Ban facial recognition technology. Retrieved November 9, 2021, from https://www.amnesty.org/en/latest/press-release/2021/01/ban-dangerous-facial-recognition-technology-that-amplifies-racist-policing/
Fernandez, E. (2020, August 12). Facial recognition violates human rights, court rules. Forbes. Retrieved November 9, 2021, from https://www.forbes.com/sites/fernandezelizabeth/2020/08/13/facial-recognition-violates-human-rights-court-rules/?sh=44d625675d44
Interesting Engineering. (2020). How does facial recognition work? [Video]. YouTube. Retrieved November 10, 2021, from https://www.youtube.com/watch?v=YX8BzK_LU0E
Knight, W. (2020, April 21). MIT cuts ties with a Chinese AI firm amid human rights concerns. Wired. Retrieved November 9, 2021, from https://www.wired.com/story/mit-cuts-ties-chinese-ai-firm-human-rights/
Lu, C. (2021, March 16). Painting Xinjiang’s brutal camps in 3D. Foreign Policy. Retrieved December 22, 2021, from https://foreignpolicy.com/2021/03/16/xinjiang-reeducated-documentary-uyghur-china-vr/.
Mozur, P. (2018, July 8). Inside China’s dystopian dreams: A.I., shame and lots of cameras. The New York Times. Retrieved November 8, 2021, from https://www.nytimes.com/2018/07/08/business/china-surveillance-technology.html
Neuman, S. (2021, September 16). The U.N. warns that AI can pose a threat to human rights. NPR. Retrieved November 9, 2021, from https://www.npr.org/2021/09/16/1037902314/the-u-n-warns-that-ai-can-pose-a-threat-to-human-rights
Roberts, S. R. (2022). The war on the Uyghurs: China's internal campaign against a Muslim minority. Princeton University Press.
United Nations. (2021, September 15). Urgent action needed over artificial intelligence risks to human rights. UN News. Retrieved November 9, 2021, from https://news.un.org/en/story/2021/09/1099972
The Economist. (2018). China: Facial recognition and state control [Video]. YouTube. Retrieved November 10, 2021, from https://www.youtube.com/watch?v=lH2gMNrUuEY&t=273s