AI and human rights
Rights to privacy and data protection
The right to freedom of movement
The rights to freedom of expression, thought, and religion
Rights to equality and non-discrimination
Rights to political participation and self-determination
The prohibition on propaganda
The right to work and an adequate standard of living
The right to health
The right to education
The right to cultural participation and scientific benefits
Addressing AI-related human-rights harms
So why do human rights matter in the context of AI?
Along with the promises of technological advancements, AI has also ushered in a new era of challenges, giving rise to various forms of oppression that tend to disproportionately affect those who are already marginalized and vulnerable. The fundamental concept of human rights serves as a crucial framework for addressing power imbalances, equipping individuals and their representative organizations with the necessary language and procedures to challenge the actions of powerful entities, such as governments and corporations.
The universality and binding nature of human rights are protected in international law, demanding adherence from both governmental bodies and corporate entities. While governments bear the additional responsibility of safeguarding and fulfilling human rights, a well-established system of regional, international, and domestic institutions provides comprehensive frameworks for addressing grievances and adapting human rights law to evolving circumstances, including the impact of technological advancements.
In situations where domestic legislation falls short, the moral authority of human rights remains a potent force, carrying significant normative weight. The violation of human rights induces global reputational and political consequences, often leading to the effective strategy of exposing and condemning those responsible for such transgressions. Human rights law emerges as a powerful tool capable of mitigating some of the most blatant societal harms caused by AI, serving as a preventive measure against future occurrences.
Similar to the human rights challenges posed by other technological applications leveraging data, the deployment of AI introduces a set of harms that disproportionately affect marginalized communities. Women, children, specific ethnic, racial, or religious groups, the economically disadvantaged, differently-abled individuals, and members of the LGBTQ community are particularly vulnerable. The historical marginalization of these groups is not only reflected in the underlying data but also perpetuated in the outcomes produced by AI systems, reinforcing entrenched patterns of discrimination and inequality. Addressing these challenges becomes important to ensure that AI contributes positively to societal progress without increasing existing disparities.
The role of AI in facilitating discrimination is well documented, and is one of the key issues in the ethics debate today. However, the right to non-discrimination is not the only human right impacted by AI. Because human rights are interdependent and interrelated, AI affects nearly every internationally recognized human right. Let’s discuss a few in detail.
Rights to privacy and data protection
The concept of privacy is a fundamental right that lies at the core of human dignity. It not only stands independently but also serves to fortify other essential rights, such as freedom of expression and association. In recent years, numerous governments and regions have acknowledged the intrinsic value of a fundamental right to data protection. This right is centred on safeguarding any personal data associated with an individual and forms an integral part of the broader right to privacy within the UN human rights framework.
The advent of AI systems has significantly impacted privacy and data protection rights, particularly through the use of vast datasets for training. Data collection serves multiple purposes, including the establishment of feedback mechanisms and the continual refinement of AI models. However, it also raises serious concerns, because the analysis AI systems perform can reveal sensitive information about individuals, even when derived from publicly available data, in ways that infringe the rights to privacy and data protection.
For instance, researchers have successfully developed machine learning models capable of accurately estimating a person's age, gender, occupation, and marital status using nothing but their cell phone location data. Furthermore, these models can predict a person's future location based on past history and the location data of their friends. It is important to treat such information with the same level of protection as any other personal data to uphold human rights.
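As a hypothetical illustration of how accessible standard tooling makes this kind of inference, the sketch below trains a classifier on synthetic mobility features. Every feature name, data point, and label here is invented for illustration; real studies used far richer cell-phone location traces.

```python
# Hypothetical sketch: predicting a demographic attribute from coarse,
# synthetic mobility features. All feature names, data, and labels are
# invented for illustration and do not come from any real study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
# Invented features: night-time radius of gyration, share of weekday trips
# to business districts, number of distinct places visited per week.
X = rng.random((n, 3))
# Synthetic label loosely driven by one feature plus noise, standing in
# for an attribute such as occupation category.
y = (X[:, 1] + 0.3 * rng.standard_normal(n) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
accuracy = model.score(X_test, y_test)  # noticeably above the ~0.5 chance level
```

Even this toy setup recovers the hidden attribute well above chance, which is the core privacy concern: seemingly innocuous location-derived features can carry demographic signal.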
A striking illustration of the blurred boundary between public and private data is the growing prevalence of government social media monitoring programs. In these initiatives, law enforcement agencies acquire extensive social media information and subject it to AI-powered programs for the detection of potential threats. While targeted scrutiny of public social media posts may seem like a legitimate policing strategy, these programs often involve indiscriminate, massive data intake that violates human rights: bulk collection of this kind has been regarded as inherently violative of privacy rights.
Moreover, the transparency and accountability of the systems processing this data play a significant role in safeguarding the right to data protection. Insufficient transparency can result in ambiguity regarding the decision-making process, violating key elements of data protection rights. Therefore, to preserve and protect human rights, it is important to treat personal information derived from AI analysis and social media monitoring programs with the same level of scrutiny and confidentiality as any other form of sensitive data.
The perils emerging from the capabilities of AI to track and scrutinize our digital activities are magnified by the staggering volume of data we generate in today's internet-centric world. With the increasing spread of Internet of Things (IoT) devices and the push towards creating "smart cities," individuals are unknowingly leaving a digital breadcrumb trail for almost every facet of their existence. While each individual data point may appear harmless on its own, their amalgamation exposes details about our personal lives. AI is set to play a crucial role in processing and dissecting this vast data landscape, from tailoring micro-targeted advertisements to optimizing public transportation and even facilitating government surveillance of citizens. In such a landscape, not only does privacy face substantial risks, but the very feasibility of safeguarding data becomes a pressing question.
The expansion of government surveillance has accompanied the internet's growth and the evolution of new technologies, with AI ushering in more intrusive surveillance tools than ever before. For instance, while a fully centralized government facial recognition system has yet to materialize, China's concerted efforts to proliferate CCTV cameras in public spaces and centralize its facial recognition infrastructure hint at the potential for a shift in this direction. In the United States, a staggering half of all adults are already included in law enforcement facial recognition databases. This widespread use poses a direct threat to anonymity, instilling a fear of constant observation that could prevent individuals from exercising fundamental rights, such as the freedom of association. The adverse effects of AI-powered surveillance are most acutely felt by marginalized populations, who bear the disproportionate brunt of security force targeting. Moreover, the 24/7 monitoring of the general populace, which lacks necessity and proportionality in achieving the goals of public safety or crime prevention, almost certainly violates the fundamental right to privacy.
The right to freedom of movement
The potential consequences of AI on reducing freedom of movement are closely linked to its application in surveillance. By incorporating data from sources like satellite imagery, facial recognition cameras, and cell phone location data, AI can meticulously track and predict individuals' movements. This capability raises concerns about governments using AI to impose more precise limitations on both individual and group mobility.
As the Internet of Things (IoT) expands its reach into infrastructure and transportation systems, encompassing smart highways and biometrically tagged public transportation, AI's role in real-time location tracking becomes more widespread. This could empower governments to impose further restrictions on freedom of movement. For example, imagine an AI system entrusted with automating decisions about individuals' travel permissions, such as maintaining a "Do Not Fly" list; errors in that system could lead to unjust restrictions on individuals' freedom of movement.
The rights to freedom of expression, thought, and religion
Internet companies are known for using AI to identify and remove posts that violate their terms of service. Government pressure on companies to address issues like alleged terrorist content and hate speech has led to increased automation. For example, a recent law in Germany mandates the removal of flagged content from social media sites within 24 hours. However, rapid removal driven by imperfect AI and regulatory pressure often results in errors. Authoritarian governments, such as China's, are adopting AI for censorship, replacing human censors. In countries where freedom of religion is threatened, AI could help officials monitor and target religious groups and restrict their online activities. AI-enabled censorship may also limit freedom of association by removing content that facilitates organization, affecting global protest movements reliant on social media.
Indirect threats to our fundamental rights, particularly the right to privacy, can also chill free expression. The mere perception of being under surveillance, or of lacking the shield of anonymity, has been shown to induce self-censorship and behavioural changes in individuals. The rise of AI-powered surveillance heightens this effect, posing a significant risk to freedom of expression.
Take facial recognition technology, for instance. If deployed in public spaces to identify individuals participating in a protest, it could instil a palpable sense of fear, discouraging people from exercising their right to assemble. In countries where the freedom to gather is already restricted, implementing such technology would effectively stifle this essential right, as individuals rely on the security of anonymity to openly express their views.
Another, subtler threat emerges from AI-powered social media and search algorithms. Platforms like Facebook, whose algorithms dictate users' newsfeeds, and Google, which determines search result rankings, play a significant role in creating and reinforcing echo chambers. This not only jeopardizes media pluralism but also narrows the diversity of perspectives, impacting freedom of thought. In environments with limited access to alternative sources, or constrained by the lack of a free press, social media becomes the primary, albeit algorithmically curated, source of information.
A more immediate and direct menace to free expression arises from online harassment orchestrated by AI-driven bots. While harassment itself is not a novel concept, the increasing prevalence of bots engaging in such behaviour poses a new and daunting challenge. These bots, disguised as real users, unleash relentless automated responses, creating a chilling effect on free expression. This is particularly pronounced among marginalized populations, who often bear the disproportionate brunt of such harassment. As bot designers incorporate sophisticated natural language processing, the detection, reporting, and elimination of these bots become even more challenging.
Looking ahead, the predictive capabilities of AI, initially harnessed for preventing armed conflicts, could be wielded by governments to foresee and suppress public demonstrations or protests. This potential misuse underscores the need for vigilance in ensuring that AI technologies are deployed responsibly, preserving the balance between security and the safeguarding of fundamental rights.
Rights to equality and non-discrimination
AI systems, designed for the purpose of sorting and filtering, play a crucial role in various aspects of our lives, from ranking search results to categorizing individuals into different groups. While discrimination within these systems can sometimes serve positive social goals, such as promoting diversity, it often poses a threat to human rights when it results from biased algorithms. This bias is particularly evident in the criminal justice system, where AI can perpetuate historical injustices, influencing outcomes in areas ranging from prison sentencing to loan applications.
The impact of AI on people's lives extends to the seemingly harmless domain of online advertisements. Research indicates that the online ad space can contribute to discrimination and reinforce historical biases. For instance, a 2013 study revealed that Google searches for names that sound stereotypically African American often generated ads implying arrest records. Similarly, a 2015 study found that Google displayed fewer ads for high-paying executive jobs to women. Google's personalized ad algorithms, powered by AI, learn from user behaviour, creating a feedback loop in which discriminatory preferences and behaviours are reinforced.
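To see how such a feedback loop can arise mechanically, consider the toy simulation below. All numbers are invented, and this is not any platform's actual algorithm: two groups click at the identical true rate, but a biased initial estimate starves one group of impressions, so the system never gathers the data that would correct its own belief.

```python
# Toy feedback-loop simulation (invented numbers, not a real ad system):
# a greedy delivery loop updates a group's click-rate estimate only when
# it actually serves that group, so a biased prior can become self-sealing.
import random

random.seed(42)

true_ctr = {"group_a": 0.05, "group_b": 0.05}  # identical real behaviour
# Biased starting beliefs, encoded as pseudo-counts of clicks / impressions.
clicks = {"group_a": 6.0, "group_b": 2.0}
shows = {"group_a": 100.0, "group_b": 100.0}
served = {"group_a": 0, "group_b": 0}

def estimate(g):
    return clicks[g] / shows[g]

for _ in range(10_000):
    g = max(true_ctr, key=estimate)  # greedily serve the "best-looking" group
    served[g] += 1
    shows[g] += 1
    if random.random() < true_ctr[g]:
        clicks[g] += 1

# group_b receives essentially no impressions, so its pessimistic
# estimate is never corrected, even though its true click rate is equal.
```

The disfavoured group is locked out not because users behave differently, but because the system only learns about the groups it already serves.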
AI-powered surveillance software, when intentionally designed for discrimination, enables governments to identify, target, and deny services to individuals based on their affiliation with different groups. A controversial 2017 study claimed that a machine learning system could accurately predict someone's sexual orientation based solely on facial photos. Despite criticisms of the study's methodology and potential biases, the model demonstrated an 81% accuracy rate for men and 74% for women. Governments could exploit such systems to discriminate against LGBTQ individuals in regions where homosexuality and gender nonconformity are either illegal or socially unacceptable. The ethical concerns surrounding the scientific validity of facial recognition studies may not deter those in control of such technology, posing significant risks to marginalized communities.
Rights to political participation and self-determination
The impact of AI on the spread of misinformation poses a significant threat to the fundamental principles of fair elections, thereby jeopardizing the right to political participation and self-determination. The 2016 U.S. presidential election serves as a stark example, demonstrating how external entities can exploit AI tools such as bots and social media algorithms to amplify false narratives, potentially swaying the opinions of voters. Despite ongoing efforts by platforms to combat such manipulative activities, the evolving landscape of AI, with the emergence of advanced chatbots and deep fakes, could further enhance the persuasiveness of deceptive content, making it more challenging for companies to identify and mitigate.
As we look ahead, the integration of AI-powered surveillance introduces another layer of concern, as it could be employed to restrict and hinder political engagement. By identifying and discouraging specific demographics from participating in the electoral process, AI surveillance has the potential to undermine the inclusivity of democratic practices. An alarming prospect arises with the utilization of facial recognition technology in polling places or voting booths, as it may compromise the confidentiality of the ballot. Governments, seeking to manipulate election outcomes, might not even need to directly observe the act of voting; the mere implication of surveillance could be enough to instil doubt in voters about the secrecy of their ballots, potentially influencing their decisions and eroding trust in the legitimacy of the electoral process. The cumulative effect of these AI-related challenges could lead to a chilling effect on political participation, as citizens may become apprehensive about engaging in a system where the genuineness of their vote is in question.
The prohibition on propaganda
Just as individuals leverage AI-driven technology to amplify the spread of misinformation or sway public discourse, there exists a dangerous potential for its utilization in generating and disseminating content aimed at provoking war, fostering discrimination, spreading hostility, or inciting violence. Notably, governments worldwide have employed the services of "troll armies" to fuel conflicts, exploiting these tactics for political gains. Looking ahead, we face the ominous prospect of chatbots being employed to spark racial and ethnic violence in regions already on the brink of tension. Additionally, the deployment of deep fakes looms as a menacing tool, enabling the simulation of world leaders declaring war or provoking armed conflict with alarming authenticity.
The right to work and an adequate standard of living
While the right to work doesn't guarantee a job outright, it does necessitate that states actively strive for full employment. The looming presence of AI in job automation poses a genuine threat to this right, potentially hindering individuals from entering the labour market. The prevailing consensus anticipates that AI will accelerate the ongoing trend of job displacement, with differing opinions on the magnitude of its impact. Regardless, the influence of AI on the labour market will undoubtedly bring about both job creation and destruction.
The consequences of job automation extend beyond mere employment concerns, giving rise to multifaceted challenges that governments must confront to safeguard an acceptable standard of living. In the United States, the government employs automated decision-making systems in various programs addressing poverty, spanning eligibility assessments for government-funded healthcare to food assistance initiatives.
However, the potential shift towards machine learning (ML) introduces a concerning element due to its inherent lack of transparency and explainability. If ML becomes the norm in automated decision-making for public services, there is a risk that both the government agencies responsible for these decisions and the public may find themselves in the dark, lacking a full understanding of the automated processes shaping crucial aspects of society.
The right to health
AI holds immense promise in revolutionizing healthcare, presenting opportunities to enhance disease diagnosis accuracy, provide personalized treatment recommendations, and increase accessibility to specialist medical advice. Despite these advancements, it is crucial to acknowledge the potential risks that could compromise the right to health.
One significant concern is the possibility of AI-driven systems contributing to discrimination or prioritizing outcomes, such as cost reduction, over patient well-being. For instance, an AI system might be programmed to suggest different treatments based on a patient's insurance status or financial capacity, potentially leading to the denial of life-saving care to individuals due to their socioeconomic status. This poses a significant threat to marginalized groups already grappling with inadequate access to quality healthcare.
Furthermore, the inherent error rate of any AI system poses a challenge. Even if a system like IBM's "Watson" surpasses human doctors in diagnostic accuracy, occasional misdiagnoses or incorrect treatment recommendations are inevitable. This raises questions about accountability when life-and-death medical decisions are made by machines rather than by human doctors.
The ethical dilemmas extend to AI systems predicting disease outbreaks and recommending responses. Automation in resource deployment, driven by AI predictions, could lead to critical disparities. Deciding where to allocate resources based on AI suggestions involves complex considerations, prompting a broader discussion on the extent of automation, the necessity of a "human in the loop," and the division of responsibility between human doctors and AI systems.
Moreover, there is a growing concern about the use of AI in determining healthcare access and insurance pricing. Health insurance providers may leverage AI for profiling individuals based on behaviours and medical history, potentially leading to discriminatory practices. The use of data points, such as family illness history, sedentary lifestyle, dietary habits, and smoking, could result in personalized health insurance rates, posing a threat to privacy and fairness in the healthcare system.
In navigating these challenges, it becomes essential to establish ethical guidelines, regulatory frameworks, and a balance between AI-driven innovation and human oversight to ensure that healthcare remains a right accessible to all, free from discrimination and undue influence.
The right to education
AI has the potential to undermine the principle of equal access, especially in the context of university admissions in the United States. Many American universities currently employ deterministic algorithmic systems to provide recommendations on which applicants to admit. These algorithms are often tailored to suit the specific preferences of each institution, but they come with a range of issues that can lead to discriminatory outcomes. One major concern is the use of historical data from past admitted students, which, given the historical demographic makeup of elite universities, tends to favour wealthy white males. This reliance on biased data risks perpetuating historical trends and reinforcing existing inequalities.
As we look to the future, it is highly likely that these systems will incorporate machine learning (ML) techniques, making it even more challenging to detect and address biases. The danger lies in the potential for universities to discriminate against certain groups under the guise of objectivity, as the algorithms become more complex.
Moreover, if AI is applied to monitor and predict student performance in a way that limits access to specific subjects or educational opportunities, it poses a significant threat to the right to education. The increasing focus on early childhood predictors of success raises the possibility of implementing such systems at progressively younger ages, leading to substantial discrimination. Students from underprivileged backgrounds may find their opportunities limited based on statistical predictions that may not account for their individual potential or resilience.
This system risks overlooking and disregarding the achievements of students who overcome adversity to excel academically and professionally. Ultimately, it has the potential to solidify existing educational inequalities rather than addressing them. The right to education should be safeguarded against the unintended consequences of AI applications, ensuring that all individuals, regardless of background, have equal access to educational opportunities and the chance to pursue their aspirations.
The right to cultural participation and scientific benefits
The utilization of AI technologies by governments for the identification and suppression of cultural groups poses a significant threat to people's engagement in cultural activities, either through direct repression or indirect influence. Surveillance mechanisms instil fear of identification and potential reprisals, discouraging individuals from actively participating in cultural expressions. This raises the alarming prospect of AI being employed to effectively "criminalize" specific cultures.
When law enforcement disproportionately targets members of a particular culture, AI systems, particularly machine learning algorithms analysing visual data, may inadvertently associate elements like clothing, speech patterns, or gestures with criminal activity. This could then provide a pretext for the unjustified targeting of these cultural groups under the guise of crime prevention. The risk is that AI systems perpetuate biased narratives, linking cultural behaviours with criminality and contributing to the marginalization of specific communities.
In the context of the global AI race, concerns are growing in the developing world about the potential for being left behind in the transformative wave of economic change accompanying AI advancements. There is a genuine fear that developing countries may become passive consumers of AI systems primarily developed in technologically advanced regions like China or the West. This dependency on foreign-developed AI poses a dual risk of heightening existing inequalities and deepening social divisions.
The unequal distribution of internet access and technology, predominantly favouring the affluent and urban populations, further compounds the issue. If access to AI and technology remains restricted to the wealthy, it could widen existing social disparities. Additionally, the looming threat of job automation displacing traditional manufacturing roles raises concerns about job loss and economic inequalities, particularly in regions where these industries play a crucial role in economic development. The potential consequences include not only economic disparities but also social and political aftermaths, as the transformative power of AI is not evenly distributed across different sectors of society.
So how do we address the AI-related human-rights harms discussed so far?
Taking swift action to address the human rights risks associated with AI is crucial to mitigating the harms that may arise in the future. By proactively tackling these issues, it is possible to create a foundation that not only prevents foreseeable problems but also establishes a framework for addressing unforeseen challenges.
AI, being a vast and diverse field, requires a sector-specific approach to effectively manage its human rights implications. While recognizing this diversity, there are four broad policy approaches that could play a pivotal role in addressing many of the human rights risks posed by AI.
1. Comprehensive Data Protection Legislation: Implementing reliable data protection legislation is a fundamental step in anticipating and mitigating human rights risks associated with AI. However, it's important to acknowledge that data protection measures alone may not be sufficient, and additional complementary measures are necessary to ensure a comprehensive approach.
2. Government Governance Standards for AI: The use of AI by governments should adhere to high standards, encompassing open procurement practices, human rights impact assessments, transparency, and processes that ensure explainability and accountability. This approach aims to foster responsible and ethical use of AI in governmental operations.
3. Private Sector Responsibility: Acknowledging the private sector's duty to respect and uphold human rights, companies should go beyond merely establishing internal ethics policies. They should actively develop and implement transparency, explainability, and accountability processes to ensure that their AI technologies align with human rights principles.
4. Research and Investment: To better understand and address potential human rights harms arising from AI systems, there is a need for significantly more research. Investing in comprehensive studies will contribute to identifying risks and developing strategies to mitigate them. Additionally, creating structures dedicated to responding to these risks is crucial for effective and informed decision-making.
By adopting these policy approaches, we can proactively manage the human rights implications of AI, ensuring that the development and deployment of AI technologies align with ethical standards and respect for human rights across various sectors.
Conclusion
Human rights matter profoundly, especially in the face of technological advancements like artificial intelligence (AI), which bring both promise and challenges. AI introduces new forms of oppression that often affect marginalized and vulnerable communities disproportionately. Human rights provide a crucial framework for addressing power imbalances, enabling individuals and organizations to challenge actions by governments and corporations. These rights are enshrined in international law, demanding adherence from both governmental bodies and corporate entities. When domestic legislation falls short, human rights law serves as a powerful tool with global consequences for violators.
The impact of AI on human rights spans various domains, from privacy and freedom of movement to equality and non-discrimination. AI systems, fuelled by vast datasets, can infringe upon privacy and data protection rights. Governments' increasing use of AI-powered surveillance raises concerns about freedom of movement and association, particularly for marginalized groups. Additionally, AI algorithms can perpetuate discrimination, affecting outcomes in areas like employment and healthcare. The potential for AI to influence political participation through misinformation and surveillance poses threats to democracy and self-determination.
Addressing AI-related human rights challenges requires multifaceted approaches. These include comprehensive data protection legislation, high standards for government AI governance, corporate responsibility, and increased research and investment. By adopting these strategies, societies can proactively manage the human rights implications of AI, ensuring that its development and deployment align with ethical standards and respect for human rights across various sectors.