[De]Coded Discrimination: A Human Rights Based Approach to Automated Immigration Decision Systems
Authors: Leah Durst-Lee, Christina Martinez, Montserrat Picado Campos
Automated immigration decision systems are increasingly used around the world despite serious concerns about human rights violations. Two such countries, the United States and Canada, have employed these systems to speed immigration claim determinations and to assist with management and administrative tasks at the border, but ethical and human rights concerns arise when States replace human beings in decision-making processes. This policy brief identifies the context of the use of automated decision systems in State immigration systems, the rights holders and duty bearers, the applicable human rights legal framework, and, finally, best practice recommendations.
Automated decision systems rely on algorithms programmed to analyze data, predict an individual’s risk to national security, and alert officials to potential fraud. Multiple issues surround this technology. Firstly, border officials already exercise high levels of discretion over the immigration process, so when human decision-making is replaced with automated systems, that discretionary power effectively transfers to the system. This endangers refugee and immigration claims that are nuanced and complex and that require human rights considerations a system may misunderstand or miss entirely, because technology is ill-equipped to assess credibility and truthfulness. Even where the technology is used only as an aid, problems can arise because border officials may engage in automation bias, deferring to the system’s assessments and recommendations over their own judgment.
Secondly, automated systems carry input and output bias. Input bias consists of the personal prejudices, biases, and values of the programmer that are built into the automated system through the type, quality, and amount of input data. To predict future risk and identify red flags, the system typically relies on data that has already undergone analysis and on trends established by previous decision-making, and it therefore risks reproducing data that encodes racial or gender stereotypes. Output bias results when the decisions generated by the system have disproportionate impacts on certain groups, leading to both direct and indirect forms of discrimination.
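To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch in Python; the training records, group labels, and threshold are invented for illustration and do not describe any real system. It shows how a risk model trained on historically skewed decisions reproduces that skew as output bias, even though the scoring rule itself looks neutral.

```python
# Hypothetical illustration of input bias propagating to output bias.
# The "historical" records below are invented: they mimic past decisions
# in which applications from group B were denied more often for reasons
# unrelated to individual merit.
historical_decisions = [
    # (applicant_group, was_denied)
    ("A", False), ("A", False), ("A", True), ("A", False),
    ("B", True), ("B", True), ("B", False), ("B", True),
]

def learn_group_risk(decisions):
    """A naive 'model' that learns each group's historical denial rate."""
    totals, denials = {}, {}
    for group, was_denied in decisions:
        totals[group] = totals.get(group, 0) + 1
        denials[group] = denials.get(group, 0) + int(was_denied)
    return {g: denials[g] / totals[g] for g in totals}

risk_by_group = learn_group_risk(historical_decisions)

# Output bias: a neutral-looking threshold reproduces the historical
# disparity, flagging every future applicant from group B as high risk.
RISK_THRESHOLD = 0.5
for group, risk in sorted(risk_by_group.items()):
    label = "high risk" if risk >= RISK_THRESHOLD else "low risk"
    print(f"group {group}: learned risk {risk:.2f} -> {label}")
```

Nothing in the scoring rule mentions race, religion, or nationality, yet the disparity in the training data fully determines the outcome: this is the sense in which discrimination can be coded into an algorithm.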
Discrimination can be coded into an algorithm through the bias of the developer or of the data
Thirdly, when dealing with technology there is always the potential that system or technical errors will materialize. For example, some facial recognition technologies cannot effectively analyze the faces of women or of individuals with darker skin complexions, raising questions about how those situations are managed and indicating the need for improved algorithms and better quality input data. In some cases these errors occur for unexplained reasons, and given the sensitive circumstances and the nature of refugee and migrant claims, a system that malfunctions, lacks oversight, and can lead to arbitrary denials of immigration petitions is bound to have serious implications for human rights.
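Disparities of this kind are typically surfaced by auditing a system’s error rates per demographic group. The sketch below is a minimal, hypothetical audit; the log entries and group labels are invented for illustration. It computes the false non-match rate, the rate at which genuine matches are missed, for each group.

```python
# Hypothetical per-group error audit for a face-matching system.
# Each record: (group, ground_truth_match, system_said_match).
from collections import defaultdict

audit_log = [
    ("lighter-skinned men", True, True), ("lighter-skinned men", True, True),
    ("lighter-skinned men", True, True), ("lighter-skinned men", True, False),
    ("darker-skinned women", True, True), ("darker-skinned women", True, False),
    ("darker-skinned women", True, False), ("darker-skinned women", True, False),
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, predicted in audit_log:
    if truth:  # genuine matches only: a miss here is a false non-match
        totals[group] += 1
        errors[group] += int(not predicted)

for group, total in totals.items():
    print(f"{group}: false non-match rate {errors[group] / total:.0%}")
```

An audit like this makes the disparity measurable, but it cannot by itself explain why the errors occur, which is precisely the oversight gap described above.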
Automated identification systems have been employed or are being pursued around the world, for example in Afghanistan, Australia, Bangladesh, China, Denmark, the European Union, Germany, Guatemala, Jordan, Malaysia, New Zealand, the United Arab Emirates, and the United Kingdom. The United States and Canada are pertinent case studies: despite differences in how each has developed and implemented automated technology within its immigration system, the effects on human rights are undoubtedly similar.
Facial recognition software struggles to correctly identify women and people with darker skin complexions
In the United States, automated systems were introduced by Congress after 9/11 in an effort to protect national security and reduce the threat of terrorism. The State implemented lie detection technology that analyzed changes in individuals’ voices and body language, as well as biometric screenings of behavioral and physiological data such as fingerprints, ear shapes, and gait.
Officials then used this data to verify identity and to predict whether an individual posed a threat to the country and was likely to commit a crime. This technology led to a travel ban on seven Muslim-majority countries, as well as to the creation of the “Extreme Vetting Initiative,” which tracked individuals on social media throughout their stay in the country in order to identify them for deportation and to justify visa denials.
Reliance on a system that utilizes data markers that are clearly cultural or religious in nature risks serious human rights violations, increases profiling, and promotes the stereotype that Muslims are terrorists. The effects of automated immigration decision systems are wide-ranging, encompassing a move towards detention bias, historically large migration raids, and even an increased number of deaths among migrants.
In Canada, automated decision systems have been utilized in various government sectors and are now being tested in immigration processes to assess whether applications can be streamlined and backlogs effectively reduced. Machine learning technology has been used in Canada in two specific ways: to triage cases of differing complexity, and through a tool called Chinook. As a triage mechanism, the technology reviews resident visa applications from China and India and classifies them by risk level. Those deemed low risk are automatically approved, while those labeled medium or high risk are routed to an immigration officer for review.
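As a rough illustration only, the routing rule just described might look like the sketch below; the tiers, threshold values, and auto-approval behavior are assumptions for exposition, not details of Canada’s actual system. The concern is visible in the first branch: cases the model scores as low risk are approved with no human review, so the model’s mistakes there are never seen by an officer.

```python
# Hypothetical sketch of risk-based triage with automatic approval.
# Thresholds and tiers are invented; they do not reflect any real system.
from dataclasses import dataclass

@dataclass
class Application:
    applicant_id: str
    risk_score: float  # produced upstream by a machine learning model

def triage(app: Application) -> str:
    """Route an application according to its model-assigned risk tier."""
    if app.risk_score < 0.33:
        # Low risk: approved automatically, with no human review at all.
        return "auto-approve"
    elif app.risk_score < 0.66:
        return "officer review (medium risk)"
    else:
        return "officer review (high risk)"

for app in [Application("A-001", 0.12), Application("A-002", 0.48),
            Application("A-003", 0.91)]:
    print(app.applicant_id, "->", triage(app))
```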
Chinook, by contrast, is a software tool that allows immigration officers to process multiple applications at the same time. Canada’s implementation of both technologies has raised concerns that the ability to bulk-review applications has increased the number of cases denied, that automation has decreased the care and consideration given to each case, and that the lack of transparency behind the automation process inhibits efforts to eliminate racist or discriminatory aspects of the data that may have been incorporated into the design of the technology.
Other criticisms of Canada’s use of these technologies include the near-unlimited collection of data; engagement in religious and ethnic profiling and Islamophobia; the use of secret and discriminatory no-fly lists; and reliance on faulty systems that operate with significant margins of error.
The fundamental question raised by the use of automated systems is how success is measured. For these systems to produce a determination, they must be told how to think, so the criteria they apply must be neutral. But can technology ever truly be neutral if it lacks the ability to consider human dimensions?
Serious human rights violations result from the use of automated decision systems in State immigration processes. This is not to say that technology can never be used, but human rights considerations must be taken into account when these programs are developed and deployed.
Rights Holders & Duty Bearers
Under international law, duty bearers are actors with an obligation to respect, promote, and realize human rights and to abstain from violations. These can be States or non-State actors, such as private sector businesses. Rights holders are the individuals or groups who are entitled to have their rights upheld. Automated immigration decision systems are created and used by duty bearers and applied to rights holders, who in this context are migrants.
Can technology be nuanced enough to uphold the human rights legal obligations of duty bearers?
States, as duty bearers, must uphold and perform in good faith the human rights obligations they have assumed through ratified international treaties and customary law. As Article 26 of the Vienna Convention on the Law of Treaties states explicitly: “Every treaty in force is binding upon the parties to it [States parties] and must be performed by them in good faith.”
Private sector businesses are also duty bearers. Because these technologies are not always developed by governments, they can be treated as proprietary, confidential business assets, a status that shields them from public scrutiny.
However, the Guiding Principles on Business and Human Rights state that businesses are “required to comply with all applicable laws and to respect human rights.” Under Principle 11, this responsibility is limited to respecting human rights, and under Principle 12 it covers, at a minimum, the rights expressed in the International Bill of Human Rights and the International Labour Organization’s Declaration on Fundamental Principles and Rights at Work. Businesses’ obligations, therefore, do not extend to protecting or fulfilling human rights.
States and private businesses are obligated under international law to respect human rights
The rights holders in cases of automated immigration decision systems are the immigrants whose cases are being evaluated. The rights these immigrants are guaranteed face numerous potential violations if their immigration cases are decided by automated systems.
Human Rights Legal Framework
The discrimination experienced by migrants during their immigration processes falls under multiple realms of international and human rights law. No internationally regulated governance framework for the use of automated technologies currently exists, and there is thus a lack of mechanisms and accountability.
However, this gap can be addressed through binding international conventions that States have ratified, as well as at the regional level within the Organization of American States, the Council of Europe, and the African Union. States parties are obligated under the International Covenant on Civil and Political Rights, the International Covenant on Economic, Social and Cultural Rights, and the International Convention on the Elimination of All Forms of Racial Discrimination to uphold every person’s right to equality and to engage in no act or practice of racial discrimination in any of its forms: “any distinction, exclusion, restriction or preference based on race, colour, descent, or national or ethnic origin” (Arts. 1-2).
Discrimination occurs in the absence of an internationally regulated governance framework of automated technologies
Discrimination is closely tied to other human rights, such as the freedoms of association, religion, and expression and the right to privacy, which are protected in the International Covenant on Civil and Political Rights. These rights are to be free from interference, a guarantee that is challenged when governments employ invasive technology capable of identifying migrants from particular religious, cultural, or political backgrounds. Migrants must not be forced to refrain from sharing, dressing, or posting online in accordance with their religious, cultural, or political views out of concern that governments will use that information against them in their immigration processes.
The rights to life, liberty, and security of the person are also placed at risk by automated immigration decision systems, particularly where the right to asylum is concerned. This right exists for refugees: individuals who are unable or unwilling to return to their country of origin owing to a well-founded fear of being persecuted for reasons of race, religion, nationality, membership of a particular social group, or political opinion. The right to asylum was first codified in the 1948 Universal Declaration of Human Rights and has been developed more fully through the 1951 Refugee Convention and its 1967 Protocol.
Central to the right of asylum is the principle of non-refoulement (non-return), which requires that an asylum seeker not be returned to a country in which they face threats to their life or freedom. Non-refoulement is considered to hold jus cogens status, meaning that returning an asylum seeker to danger is prohibited as absolutely as slavery, genocide, and human trafficking. The rights of refugees and asylum seekers to seek protection in a country are seriously endangered if their immigration decisions are made by automated decision processes, which have been empirically shown to rely on biased, flawed technology. If the technology sends refugees and asylum seekers back to a country in which they face threats to their life or freedom, the State has violated the principle of non-refoulement through its automated decision processes.
As with many human rights violations, an intersectional analysis must identify additional violations experienced by women, children and persons with disabilities, for example in cases concerning the right to protection of the family.
Conclusion
Automated immigration decision systems pose grave risks to multiple human rights, and despite any potential benefits to the State, their use must never come at a human cost.
The input and output biases inherent in such systems demonstrate an inability to produce unbiased results on complex human issues: reliance on technology and algorithms has shown a propensity to incorporate system errors and individual prejudices, leading to increased discrimination.
States and private companies, as duty bearers under human rights law, are obligated to ensure ongoing respect for the rights of migrants in the development and utilization of automated immigration decision systems, as mandated throughout the human rights legal framework. These technologies can succeed only if they assist human immigration officials in making decisions in an unbiased manner that fully respects human rights. Until that standard is met, their use cannot be recommended.
Recommendations
General
- Involve migrants affected by the use of these technologies in all stages of the development processes, including decision-making power on when, how, and where they should be used.
For States
- Conduct independent human rights impact assessments of automated immigration decision systems.
- Perform ongoing monitoring and evaluation, led by human rights legal experts, on automated immigration decision systems to prevent human rights violations.
- Establish a human rights mechanism for external and independent monitoring and evaluating of the development and utilization of automated immigration decision systems.
- Train immigration officials in human rights law and the principles of non-discrimination, and the best practices for their efficient and comprehensive application.
- Conduct reviews of, and appropriately update, the data utilized to construct automated immigration decision systems to avoid input bias.
- Eliminate the exclusive use of automated immigration decision systems, so that they never replace human judgment but are used only in conjunction with human assessment.
- Create spaces, such as meetings and conferences, to foster dialogue between different stakeholders (including but not limited to policy makers, technology developers, and civil society actors) to evaluate the possible risks and benefits of automated immigration decision systems.
- Develop binding legislation to ensure that businesses operating within their jurisdiction respect human rights throughout their operations (in compliance with Guiding Principle 2).
For Private Sector Businesses
- Establish internal policies to fulfill their responsibility to respect human rights (in compliance with Guiding Principle 15).
- Carry out internal human rights assessments (in compliance with Guiding Principle 17).
- Subject their operations to external and independent human rights assessments.
Authors’ Short Bios
Christina Alejandra Martinez was born in the United States and is a Returned Peace Corps Volunteer who served in China from 2015 to 2017. She has over 10 years of experience advocating on behalf of immigrant communities, with a special focus on upholding children’s rights. She is a recipient of the Heller Diversity and Inclusion Scholarship from Brandeis University and the UCLA Achievement Scholarship. She holds a B.A. in Political Science from UCLA and is currently pursuing an M.A. in International Law and Human Rights at the University for Peace, as well as an M.A. in Conflict Resolution and Coexistence at Brandeis University.
Montserrat Picado Campos was born and raised in Costa Rica. She has been interested in human rights from an early age and has been privileged to study them at a series of outstanding academic institutions. She holds a Master of Arts in International Relations and Comparative Literature and a Master of Letters in Legal and Constitutional Studies, both from the University of St Andrews, and is en route to graduating with a Master’s in International Law and Human Rights from the University for Peace. She is currently interning with the OHCHR on the right to development and will begin a position with the IOM in September.
Leah Durst-Lee is an International Law and Human Rights M.A. student at the University for Peace and also holds an M.A. in Advanced Migration Studies from the University of Copenhagen, Denmark. Before graduate school, Leah worked with a nonprofit law firm representing detained asylum seekers in their deportation cases. She is a Refugee Resettlement Expert awaiting deployment and is writing her thesis on the refoulement of refugees. Leah is originally from the United States but is fortunate to also call Costa Rica, Denmark, and Mexico home.