The Discriminatory Side Effects of Artificial Intelligence

Introduction

The benefits of Artificial Intelligence (AI) are monumental. From self-driving cars to proof-reading contracts, AI has embedded itself into our daily lives as an invaluable tool because its capabilities extend beyond those of humans. Yet as the benefits of AI to both work and personal life become increasingly evident, so do its discriminatory side effects: AI has a tendency to discriminate against women, ethnic minorities (discussed here with specific regard to Black people), and neurodivergent people. Peaceful cohabitation between humans and AI is not possible whilst these discriminatory side effects prevail. The question is: can we foster an environment in which AI can flourish whilst simultaneously preventing discrimination against marginalised groups?

For the purposes of this article, the author has decided to discuss the impact on a different group in society for each form of AI that is discussed. First, this article will consider the impact of deepfakes on women. It will then discuss the impact of facial recognition technology on Black people. Finally, it will engage with the impact of AI-assisted recruitment systems on neurodivergent people. Whilst this article focuses on three demographics, it is worth noting that AI can also adversely impact other groups in society. Similarly, although only three types of AI will be discussed, a variety of other types of AI have the potential to discriminate against marginalised groups. Additionally, due to the intersectionality between the three demographics, many of the issues discussed overlap 'in ways that generate distinct forms of structural discrimination that cannot be reduced to their component parts'.

This article will also briefly consider the compatibility of AI with the European Convention on Human Rights (ECHR) and suggest ways in which the discriminatory side effects of AI can be minimised. Although the focus of this article is on discrimination, it also aims to address the positive aspects of AI, the author's view being that AI is an incredibly useful tool for society, but one that requires regulation to prevent discrimination. A statement from Sam Altman, CEO of OpenAI, the developer of the AI chatbot ChatGPT, effectively sums up the main argument of this article: 'we want to maximize the good and minimize the bad, and for [Artificial General Intelligence] to be an amplifier of humanity.'

Impact of Deepfakes on Women

The increased circulation of AI-generated work has eroded trust in what we see and hear online, and deepfakes are largely to blame as the 'harbingers of an unprecedented epistemic apocalypse.' Deepfake technology is deceptive by nature: it facilitates the creation of images, audio or video that impersonate another person, typically a well-known individual. The impact of this technology cannot be overstated, and its scope is extremely wide. Whilst humorous deepfakes circulated to a small following on Instagram may cause little harm, of greater concern are deepfakes produced with the intent to damage another's reputation and mislead others. The non-consensual manipulation of images of women using deepfake technology raises serious concerns for individual data and privacy and could amount to a breach of Article 8 of the ECHR, the right to respect for private and family life. In addition to Article 8, the production of pornographic deepfakes conflicts with Article 14 of the ECHR, the prohibition of discrimination. Non-consensual deepfake pornography can cause significant psychological harm to victims and is an infringement of their private life which can also adversely affect their professional life.

The origins of deepfakes can be traced back to 2017, when a Reddit user popularised the term by posting pornographic material made using 'open source face-swapping technology.' A study conducted by Deeptrace in 2019 found that 'non-consensual deepfake pornography' accounted for 96% of the deepfakes analysed, and that pornographic deepfakes typically targeted female celebrities. Since 2019, pornographic deepfakes have expanded beyond the realm of celebrities and now have the capacity to affect all women through 'revenge pornography'. As deepfake technology has become more accessible, apps have been developed that facilitate the production of pornographic material of women. In 2019 an app called DeepNude was released which allowed users to 'synthetically remove clothes from images of women, and generate naked parts of their body that were previously covered' within 30 seconds. Whilst the generated image contained a large watermark, users could pay $50 to remove it. After receiving over 545,162 visits in June 2019, the website's servers were overwhelmed and the site was taken down, but the network's algorithm is difficult to remove from circulation. DeepNude is an alarming example of how deepfake technology has been misused to sexualise women.

It is important to note that DeepNude was unable to carry out the same process on men, as the algorithm was 'specifically trained on images of women.' As a result, the production of pornographic deepfakes disproportionately affects women and can be considered a form of 'hate speech'. Hate speech can be defined as 'the deliberate and often intentional degradation of people through messages that call for, justify and/or trivialise violence based on a category (gender, phenotype, religion or sexual orientation)'. On this definition, the deliberate, non-consensual generation of pornographic images of women constitutes a form of hate speech.

Given the discriminatory side effects of deepfakes, one might question why they are not banned altogether. Yet deepfakes, like many forms of AI, are a 'double-edged sword'. In the field of healthcare, deepfakes can provide a sense of comfort to grieving individuals by recreating the likeness of the deceased. In the entertainment industry, deepfakes can enhance visual effects and alter the appearance of actors. Outlawing deepfakes altogether would therefore limit technology with genuine potential to benefit humanity. It must nevertheless be recognised that this technology requires regulation due to its discriminatory impact on women. The most plausible way to regulate deepfakes may be to recognise that they are a problem for society as a whole and to combat them through 'community norm policing': challenging those who develop harmful deepfakes. However, such a policy would require the cooperation of the whole community, which may not be feasible.

Impact of Facial Recognition Technology on Black People

The second sub-category of AI that this article will consider is facial recognition technology (FRT), 'a biometric technology powered by machine learning algorithms which can measure, analyse, as well as identify or classify people's faces.' FRT has many uses, particularly in the realm of security, as evidenced by its integration into mobile phones to authorise payments securely. However, FRT has an innate tendency to discriminate against Black people: if the data fed into the algorithm is biased, the output will also be biased. A study conducted by the US National Institute of Standards and Technology (NIST) found that African American faces were more likely to produce false positives (whereby 'the software wrongly considered photos of two different individuals to show the same person') than Caucasian faces. As a result, FRT poses a threat to Article 14 of the ECHR, the prohibition of discrimination.
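To make the notion of a demographic disparity in false positives concrete, the following minimal sketch (in Python, using entirely invented data rather than the NIST results) shows how a false match rate can be computed separately for each demographic group from the outcomes of one-to-one verification checks; an unequal rate across groups is the kind of disparity the NIST study identified.

```python
# Minimal sketch using invented data (not the NIST dataset): computing a
# false positive ("false match") rate per demographic group from the results
# of one-to-one face verification checks.
from collections import defaultdict

# Each record: (demographic group, were the two photos really the same person?,
#               did the software declare a match?)
results = [
    ("group_a", False, True),    # false positive: different people, declared a match
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", False, False),
    ("group_b", False, False),
    ("group_b", True,  True),
]

def false_match_rates(records):
    """Per group: share of genuinely different pairs wrongly declared a match."""
    counts = defaultdict(lambda: {"false_matches": 0, "different_pairs": 0})
    for group, same_person, declared_match in records:
        if not same_person:
            counts[group]["different_pairs"] += 1
            if declared_match:
                counts[group]["false_matches"] += 1
    return {g: c["false_matches"] / c["different_pairs"]
            for g, c in counts.items() if c["different_pairs"]}

print(false_match_rates(results))   # e.g. {'group_a': 0.5, 'group_b': 0.0}
```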

In 2020 Uber introduced a form of FRT into its app which required drivers to take a photograph of themselves, which was then matched against an existing photograph, in order to access the app. Mr Manjang, a Black African man and Uber Eats courier, was suspended from the platform after the technology could not verify that the photograph he submitted was of him. After he continued to fail facial recognition checks, Mr Manjang was removed from the platform and subsequently brought a successful claim for indirect race discrimination under section 19 of the Equality Act 2010, for which he received a financial settlement. Mr Manjang's case demonstrates the direct impact FRTs can have on Black people, as well as the indirect impact the failure of the technology had on the claimant's ability to work and receive an income.

Under section 149 of the Equality Act 2010, the public sector equality duty (PSED), public authorities are required to have due regard to the need to 'eliminate discrimination'. In R (Bridges) v Chief Constable of South Wales Police (SWP), it was held that 'SWP have not done all they reasonably could to fulfil the PSED' due to a lack of consideration of the potential racial and gender biases of automated facial recognition (AFR) technology. Dr Anil Jain, a Computer Science and Engineering professor at Michigan State University, noted that 'AFR systems can suffer from training "bias"': if one demographic is underrepresented in the training data, the AFR system may have a high false alarm rate for that demographic. Dr Jain further noted that 'it would be difficult for SWP to confirm whether the technology is in fact biased', which led the Court of Appeal to allow the appeal on the PSED ground.

A further issue regarding the use of FRTs to combat crime is that minority groups are overrepresented in police databases. This is of particular concern in the United States, where 'more than three-quarters of the black male population is listed in criminal justice databases.' This increases the likelihood of Black people being identified by FRTs and may mean that FRTs 'perpetuate racial inequality.' Given the overwhelming evidence that FRTs are less accurate for Black people, AI developers must work towards training FRTs on samples that are representative of the whole population. In the meantime, the deployment of FRTs should be restrained to prevent further discrimination against Black people.

Impact of AI-Assisted Recruitment Systems on Neurodivergent People

The final sub-category of AI that this article will consider is AI-assisted recruitment systems. Employers are increasingly using AI in the hiring process, claiming it will 'streamline and debias recruitment' by decreasing HR workload and providing a solution 'to fulfill corporate diversity, equality, and inclusion (DEI) goals.' However, the claim that AI-assisted recruitment systems 'debias recruitment' is contestable. Whilst there may be less chance of unconscious bias compared to the traditional recruitment process, Amazon's experimental AI hiring system is a prime example of how AI has failed to 'debias recruitment'. Amazon's system was developed by analysing patterns in previous recruitment decisions, yet most previous applicants were male, so the system came to favour men over women. The system learnt that male applicants were preferable to female applicants, reflecting the gender divide in the tech industry. Whilst Amazon's hiring system was not used to assess real candidates, it raises concerns over the compatibility of AI recruitment systems with marginalised groups. Although this example concerns discrimination against women, it demonstrates how AI systems trained on historical data can entrench existing discrimination. Such discrimination in the recruitment process is not limited to women; it also heavily disadvantages neurodivergent people.
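The mechanism by which a system 'learns' such a preference can be illustrated with a minimal, hypothetical sketch. The code below does not reproduce Amazon's actual system; it simply trains a standard classifier on invented historical hiring data skewed towards men and shows that the resulting model rates a male and a female applicant of identical skill differently.

```python
# Minimal, hypothetical sketch (not Amazon's system): a classifier trained on
# historical hiring decisions that favoured men will reproduce that preference.
# All data below is synthetic and invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(0, 1, n)            # an ability measure, identical in meaning for everyone
gender = rng.integers(0, 2, n)         # 1 = male, 0 = female
# Historical decisions: partly skill, partly an unjustified preference for men.
hired = ((skill + 1.5 * gender + rng.normal(0, 0.5, n)) > 1.2).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, gender]), hired)

# Two hypothetical applicants with identical skill, differing only in recorded gender.
for g, label in [(1, "male"), (0, "female")]:
    p = model.predict_proba([[0.5, g]])[0, 1]
    print(f"{label} applicant, same skill: predicted hire probability {p:.2f}")
# The male applicant receives a higher probability purely because the
# historical data the model learned from favoured men.
```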

Neurodivergent people may be underrepresented in the training data for AI hiring systems if companies have little history of employing neurodivergent people, which raises concerns about the compatibility of AI-assisted recruitment systems with Article 14 of the ECHR. As a result, AI hiring systems may determine that neurodivergent applicants are 'unlikely to be a good 'fit' for the company.' Additionally, AI may misinterpret behaviours of autistic people, such as 'avoiding eye contact or having different facial expressions.' If AI is integrated into the interview process, this misinterpretation raises discriminatory issues: AI models are trained to look for features of an employable candidate, such as good posture and eye contact, which may result in the algorithm selecting against neurodivergent applicants. One response to this problem has been offered by Nvidia, which developed deepfake technology for use in video calls that makes it appear as if the individual is making direct eye contact even when they are looking elsewhere. However, such a development can be criticised 'for attempting to make autistic people conform to neurotypical conversational norms'.
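The interview-analysis concern can be illustrated with a deliberately simplified, hypothetical scoring rule (not any vendor's actual product): if eye contact is treated as a signal of employability, a candidate who avoids eye contact is penalised even where their answers are of identical quality.

```python
# Deliberately simplified, hypothetical interview-scoring rule (not any
# vendor's product): rewarding eye contact penalises candidates who avoid it
# for reasons unrelated to their ability to do the job.
def interview_score(answer_quality: float, eye_contact_ratio: float) -> float:
    """Toy score: answer quality and time spent making eye contact, both 0-1."""
    return 0.7 * answer_quality + 0.3 * eye_contact_ratio

# Two candidates giving answers of identical quality.
print(round(interview_score(answer_quality=0.9, eye_contact_ratio=0.8), 2))  # 0.87
print(round(interview_score(answer_quality=0.9, eye_contact_ratio=0.1), 2))  # 0.66
# The gap reflects nothing but eye contact, so an autistic candidate who
# avoids eye contact is ranked lower despite equally good answers.
```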

Since the failure of Amazon's AI hiring system was reported in 2018, there have been further developments in AI-assisted recruitment technology. Censia, a developer of AI hiring systems, offers an 'anonymous mode' which allows recruiters to alternate between an individual's full profile and an anonymised version. However, this system can be criticised because it suggests that, simply by removing identifying factors such as race and gender, the hiring process becomes free from bias. Moreover, focusing solely on an individual's personality in an application demonstrates the 'colorblind logic of AI hiring technologies'. The desire to assess candidates equally cannot be achieved by removing race and gender from the equation, because in doing so, disadvantaged groups are positioned as 'divergences from the white male norm.' In relation to neurodivergent applicants, the algorithm may also neglect the individual differences between neurodivergent applicants that would give an employer the necessary context of how their disability affects their life.
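The reason that removing race and gender does not, by itself, debias a model can be shown with a further hypothetical sketch: if another feature in the data correlates with gender (a 'proxy' variable), a model trained without the gender column can still produce a gendered disparity in outcomes. All data and features below are invented for illustration.

```python
# Hypothetical sketch with invented data: removing the gender column does not
# debias a model if another recorded feature acts as a proxy for gender.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
gender = rng.integers(0, 2, n)                  # 1 = male; never shown to the model
skill = rng.normal(0, 1, n)
# An invented feature that correlates strongly with gender, standing in for
# things like gendered hobbies, societies or gaps in employment history.
proxy = (gender + rng.normal(0, 0.3, n) > 0.5).astype(float)
# Historical decisions that favoured men.
hired = ((skill + 1.5 * gender + rng.normal(0, 0.5, n)) > 1.2).astype(int)

# "Anonymised" training data: gender removed, proxy feature retained.
model = LogisticRegression().fit(np.column_stack([skill, proxy]), hired)

probs = model.predict_proba(np.column_stack([skill, proxy]))[:, 1]
print("mean predicted hire probability, men:  ", round(float(probs[gender == 1].mean()), 2))
print("mean predicted hire probability, women:", round(float(probs[gender == 0].mean()), 2))
# A gap persists even though gender itself was never an input to the model.
```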

Given the positive impact of AI-assisted recruitment systems on efficiency, it is unlikely that this technology will be phased out. However, the discriminatory side effects of AI hiring systems need to be addressed to prevent further human rights abuses against neurodivergent applicants. The Council of Europe recently announced the 'HUDERIA methodology', which aims to provide an 'evidence-based and structured approach to carrying out risk and impact assessments for AI-systems.' Notably, if an AI-assisted recruitment system displays bias, the methodology may recommend measures such as 'adjusting the algorithm or implementing human oversight.' Whilst only recently adopted, this new framework represents a promising move towards reducing discrimination in AI hiring systems, as well as in other AI systems that negatively impact human rights.

Conclusion

Today's AI systems are considered 'Artificial Narrow Intelligence', but with further development 'Artificial General Intelligence' may emerge, with a level of knowledge 'comparable to, and ultimately perhaps greater than, that of human beings.' As AI grows more sophisticated and human-like, it is essential that organisations intervene in and regulate the development of AI to prevent discrimination. To strike a balance between allowing AI to flourish and reducing discrimination, companies should hire AI ethics experts who can promote responsible business and increase awareness of how AI can discriminate against certain demographics. Additionally, the establishment of a mechanism for individuals to seek a remedy would help mitigate the impact of biased facial recognition or recruitment systems. However, such a remedy should be a last resort; the focus of organisations should be on preventative measures, such as educating the public on AI. On a larger scale, international organisations, such as the Council of Europe and the United Nations, should establish a forum to discuss technological developments such as AI and continue to establish treaties and deliver guidance for nations to adhere to.

The Council of Europe acknowledges the risks that AI presents and endeavours to minimise its threat to human rights. However, the intersection between AI and human rights is an unwieldy topic. On the one hand, AI acts as a facilitator for discrimination; on the other, it can propel society forward in this new era of technological revolution. Looking forward, legislative bodies must take great care not to constrain the future development of AI whilst implementing measures to mitigate the risk of discrimination against marginalised groups. The task of preventing abuses remains difficult. As AI is so readily available and developing at an unprecedented speed, can human rights organisations keep up with such rapid technological developments?


Freya Bover

Bibliography

Cases

Mr P E Manjang v Uber Eats UK Ltd and others 3206212/2021

R (Bridges) v Chief Constable of South Wales Police [2020] EWCA Civ 1058, [2020] 1 WLR 5037

Legislation

Equality Act 2010

European Convention on Human Rights (ECHR)

Books

Ahmed S, On Being Included: Racism and Diversity in Institutional Life (Duke University Press 2012)

Golunova V, 'Artificial Intelligence and the Right to Liberty and Security' in Alberto Quintavalla and Jeroen Temperman (eds), Artificial Intelligence and Human Rights (OUP 2023)

Jefferson B, Digitize and Punish: Racial Criminalization in the Digital Age (University of Minnesota Press 2020)

Michałkiewicz-Kądziela E, 'Deepfakes: New Challenges for Law and Democracy' in Michał Balcerzak and Julia Kapelańska-Pręgowska (eds), Artificial Intelligence and International Human Rights Law: Developing Standards for a Changing World (Edward Elgar Publishing 2024)

Quintavalla A and Temperman J (eds), Artificial Intelligence and Human Rights (OUP 2023)

Smith M and Mann M, 'Facial Recognition Technology and Potential for Bias and Discrimination' in Rita Matulionyte and Monika Zalnieriute (eds), The Cambridge Handbook of Facial Recognition in the Modern State (Cambridge University Press 2024)

Sponholz L, Hate Speech in den Massenmedien: Theoretische Grundlagen und Empirische Umsetzung (Springer Fachmedien Wiesbaden 2018)

Journal Articles

Altman S, 'Planning for AGI and Beyond' (2023) <https://openai.com/index/planning-for-agi-and-beyond/> accessed 26 February 2025

Chauhan PS and Ahmad N, 'Deepfake: Risks and Opportunities' (2024) 57(6) Computer 141, 141 <https://ieeexplore-ieee-org.libproxy.york.ac.uk/stamp/stamp.jsp?tp=&arnumber=10547080> accessed 2 March 2025

Chen Z, 'Ethics and Discrimination in Artificial Intelligence-enabled Recruitment Practices' (2023) 567 Humanities and Social Sciences Communications 1, 1 <https://www.nature.com/articles/s41599-023-02079-x> accessed 2 March 2025

Citron DK, 'Sexual Privacy' (2019) 128(7) The Yale Law Journal 1870, 1926 <https://www.yalelawjournal.org/article/sexual-privacy> accessed 2 March 2025

Drage E and Mackereth K, 'Does AI Debias Recruitment? Race, Gender, and AI's "Eradication of Difference"' (2022) 35(4) Philosophy and Technology 1, 2 <https://pmc.ncbi.nlm.nih.gov/articles/PMC9550152/pdf/13347_2022_Article_543.pdf> accessed 2 March 2025

Giri D and Brady E, 'Exploring Outlooks Towards Generative AI-Based Assistive Technologies for People with Autism' (2023) Association for Computing Machinery 1 <https://arxiv.org/abs/2305.09815> accessed 2 March 2025

Goertzel B, 'Artificial General Intelligence: Concept, State of the Art, and Future Prospects' (2014) 5(1) Journal of Artificial General Intelligence 1, 1 <https://sciendo.com/article/10.2478/jagi-2014-0001?tab=references> accessed 2 March 2025

Habgood-Coote J, 'Deepfakes and the Epistemic Apocalypse' (2023) 201(103) Synthese 1 <https://eprints.whiterose.ac.uk/196368/9/s11229-023-04097-3.pdf> accessed 19 February 2025

Hao K, 'Deepfake Porn is Ruining Women's Lives. Now the Law May Finally Ban It.' [2021] MIT Technology Review <https://www.technologyreview.com/2021/02/12/1018222/deepfake-revenge-porn-coming-ban/> accessed 2 March 2025

Pawelec M, 'Deepfakes and Democracy (Theory): How Synthetic Audio-Visual Media for Disinformation and Hate Speech Threaten Core Democratic Functions' (2022) 1(19) Digital Society 1, 22 <https://link.springer.com/article/10.1007/s44206-022-00010-6#citeas> accessed 2 March 2025

Pereira R, Min J and Nayak N, 'Improve Human Connection in Video Conferences with NVIDIA Maxine Eye Contact' (2023) <https://developer.nvidia.com/blog/improve-human-connection-in-video-conferences-with-nvidia-maxine-eye-contact/> accessed 2 March 2025

Ryan C, 'Facial Recognition Technology and a Proposed Expansion of Human Rights' (2024) 76(1) Federal Communications Law Journal 87 <https://heinonline-org.libproxy.york.ac.uk/HOL/Page?collection=journals&handle=hein.journals/fedcom76&id=103&men_tab=srchresults> accessed 2 March 2025

Tilmes N, 'Disability, Fairness, and Algorithmic Bias in AI Recruitment' (2022) 24(21) Ethics and Information Technology 1 <https://link-springer-com.libproxy.york.ac.uk/article/10.1007/s10676-022-09633-2> accessed 2 March 2025

Williams RM and Gilbert JE, 'Perseverations of the Academy: A Survey of Wearable Technologies Applied to Autism Intervention' (2020) 143(2) International Journal of Human-Computer Studies <https://www.sciencedirect.com/science/article/abs/pii/S1071581920300872> accessed 2 March 2025

Zhang X and Zhang Z, 'Leaking My Face via Payment: Unveiling the Influence of Technology Anxiety, Vulnerabilities, and Privacy Concerns on User Resistance to Facial Recognition Payment' (2024) 48(3) Telecommunications Policy <https://www.sciencedirect.com/science/article/abs/pii/S0308596123002148> accessed 2 March 2025

Reports 

Ajder H and others, 'The State of Deepfakes: Landscape, Threats and Impact' (2019) <https://regmedia.co.uk/2019/10/08/deepfake_report.pdf> accessed 2 February 2025

Research Papers 

Jones K, 'AI Governance and Human Rights: Resetting the Relationship' (Chatham House, January 2023) 50-51 <https://www.chathamhouse.org/2023/01/ai-governance-and-human-rights> accessed 25 February 2025

Websites

Autism Europe, 'Advocating for Autism in AI Regulation: Navigating Europe's New AI Act' (Autism Europe, 17 October 2024) <https://www.autismeurope.org/blog/2024/10/17/advocating-for-autism-in-ai-regulation-navigating-europes-new-ai-act/> accessed 28 February 2025

Council of Europe, 'HUDERIA: New Tool to Assess the Impact of AI Systems on Human Rights' (Council of Europe, 2 December 2024) <https://www.coe.int/en/web/portal/-/huderia-new-tool-to-assess-the-impact-of-ai-systems-on-human-rights> accessed 23 February 2025

Council of Europe, 'Human Rights and Artificial Intelligence (CDDH-IA)' (Council of Europe) <https://www.coe.int/en/web/human-rights-intergovernmental-cooperation/intelligence-artificielle#{%22269581064%22:[]}> accessed 2 March 2025

Dastin J, 'Insight - Amazon Scraps Secret AI Recruiting Tool that Showed Bias Against Women' (Reuters,  11 October 2018) <https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/> accessed 23 February 2025

Equality and Human Rights Commission, 'Uber Eats Courier Wins Payout with Help of Equality Watchdog, after Facing Problematic AI Checks' (26 March 2024) <https://www.equalityhumanrights.com/media-centre/news/uber-eats-courier-wins-payout-help-equality-watchdog-after-facing-problematic-ai> accessed 2 March 2025

LexisPlus UK, 'Claims Regarding Facial Recognition and the Use of AI in the Workplace Generally' (6 December 2021) <https://plus.lexis.com/uk/document/?pdmfid=1001073&crid=54f2b156-fa52-4dec-bb57-3d7d73313f72&pddocfullpath=%2Fshared%2Fdocument%2Fnews-uk%2Furn:contentItem:647N-44X3-GXF6-817C-00000-00&pdcontentcomponentid=184200&pdteaserkey=&pdislpamode=false&pddocumentnumber=3&pdworkfolderlocatorid=NOT_SAVED_IN_WORKFOLDER&ecomp=5t5k&earg=sr2&prid=6facf38c-ed54-407d-ad95-6545227eb320&federationidp=KCFX2659464&cbc=0> accessed 2 March 2025

National Institute of Standards and Technology, 'Facial Recognition Technology (FRT)' (6 February 2020) <https://www.nist.gov/speech-testimony/facial-recognition-technology-frt-0#:~:text=For%20most%20algorithms%2C%20the%20NIST,rates%20across%20these%20specific%20demographics.> accessed 27 February 2025

Spindlow S, 'Council of Europe adopts Turing-developed Human Rights Risk and Impact Assessment for AI Systems' (The Alan Turing Institute, 5 December 2024) <https://www.turing.ac.uk/news/council-europe-adopts-turing-developed-human-rights-risk-and-impact-assessment-ai-systems> accessed 23 February 2025
