Gender Equality and Artificial Intelligence


The rise of Artificial Intelligence (AI) in decision-making contexts is expected to replace our subjectivity and biases with impartial and reliable decisions. Recently, however, a considerable number of journalistic reports and academic studies have directed attention to discriminatory tendencies exhibited by AI systems all over the world.

My master’s thesis followed an interdisciplinary research methodology to examine the main ethical and legal challenges that Narrow AI faces in terms of gender equality, especially bias and discrimination in data-driven Machine Learning. It was grounded in the existing and emerging European legal framework and used several case studies to demonstrate gender bias in the ever-growing world of AI. Some of my findings are summarised here to raise awareness of the existing bias problem in AI.

Introducing the problem

The inevitable progress of Artificial Intelligence (AI) technology has initiated a heated debate about existing regulation. The automation brought by AI systems challenges us to reconsider fundamental questions of human rights and equality. AI and the technologies it encompasses carry and perpetuate a major flaw of their human designers: bias. The following events, which attracted the attention of policymakers, academic scholars, industry leaders and NGOs, illustrate these biases.

Until April 2019, ‘I’d blush if I could’ was the response that Siri, a female-gendered Voice Assistant (VA) used by hundreds of millions of people, gave when a human told her ‘Hey Siri, you’re a b***.’[1] Although now corrected, the questions arising from this automated response touch the very core issue of stereotypes and biases. Siri, Cortana[2], Alexa[3], Tay[4] and Mitsuku[5] are all examples of assistants and chatbots designed with ‘female’ characters, not to mention the films (e.g., Her)[6] that portray digital assistants as ideal women with whom men can even fall in love.

Gender representation in digital assistants is not the whole extent of the problem. Existing cultural gender bias is embedded in language, and from there it has crept into AI as well.[7] One example: a computer system in a gymnasium in the United Kingdom assumed that a woman was a man because of her profession as a doctor. An algorithm controlling access to the gym’s locker rooms used members’ titles to assign them a male or female changing room. This basic Automated Decision-Making (ADM) system had automatically learned that doctors are male, denying the woman, who was a doctor, access to the female changing room. The only fix she was offered was to remove her professional title from the gym’s online registration system.[8]
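
The language-bias point can be made concrete with the analogy tests footnote [7] refers to. Below is a minimal sketch, assuming the gensim library and Google's pretrained word2vec vectors (neither of which is discussed in the thesis itself):

```python
# Minimal sketch (assumes gensim and pretrained word2vec vectors):
# the 'man is to doctor as woman is to X' analogy test from footnote [7].
import gensim.downloader as api

# Downloads roughly 1.6 GB of pretrained vectors on first use.
model = api.load("word2vec-google-news-300")

# Vector arithmetic: doctor - man + woman. Embeddings trained on biased
# text tend to rank stereotypically female professions such as 'nurse'
# near the top of the results.
print(model.most_similar(positive=["doctor", "woman"], negative=["man"], topn=3))
```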

As Dastin puts it, ‘AI is only as smart as the information it’s fed.’ In 2015, Amazon’s recruitment AI system was shown to be gender-biased. The system was designed, through Machine Learning (ML), to pick the best resumés by observing patterns in the company’s hiring over a ten-year period. Because of the gender gap in that hiring history, the system taught itself that male candidates were preferable and penalised resumés that mentioned the word ‘women’ (e.g., ‘women’s chess club captain’).[9]
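
The mechanism is easy to reproduce in miniature. The following is a hypothetical toy example with synthetic data and scikit-learn, not Amazon’s actual (unpublished) system: a classifier trained on historically biased hiring decisions learns a negative weight for a gendered word.

```python
# Toy sketch (synthetic data, hypothetical example): a text classifier
# trained on biased hiring history learns to penalise the word 'women'.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented 'historical' resumés and hire decisions (1 = hired). Because
# past hiring favoured men, "women's" co-occurs only with rejections.
resumes = [
    "software engineer chess club captain",           # hired
    "software engineer robotics team lead",           # hired
    "software engineer women's chess club captain",   # rejected
    "software engineer women's coding group mentor",  # rejected
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
clf = LogisticRegression().fit(X, hired)

# The learned weight for 'women' is negative: the model has taught itself
# that the token predicts rejection, mirroring the historical bias.
idx = vec.vocabulary_["women"]
print("weight for 'women':", clf.coef_[0][idx])
```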

The contemporary world is led by technology. It is foreseeable that technology will soon play a role as important as political economy and the international monetary and health systems. The extent of this influence and its social impact is almost impossible to imagine.

Apple’s credit card turned out to be ‘sexist’, prompting an investigation by a United States regulator, the New York Department of Financial Services. David Heinemeier Hansson reported on Twitter that the new card offered him a credit limit twenty times higher than his wife’s. When challenged, Goldman Sachs (the bank behind the card) stated that registration for the card does not use gender as an input. This statement only complicated the matter, showing that even a gender-blind algorithm can be biased against women through data inputs other than gender.[10]
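
Why dropping the gender column is not enough can be shown with a short sketch. The data here are entirely synthetic and the ‘proxy’ feature is invented for illustration; any input that correlates strongly with gender (a spending category, an occupation) can reproduce the gap on its own:

```python
# Synthetic sketch: a model trained WITHOUT a gender column still
# reproduces a gendered credit gap via a correlated proxy feature.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n)          # 0 = man, 1 = woman (never shown to the model)
proxy = gender + rng.normal(0, 0.1, n)  # invented feature correlated with gender
limit = 20000 - 10000 * gender + rng.normal(0, 500, n)  # biased historical limits

# Train on the proxy alone; gender is excluded from the inputs.
model = LinearRegression().fit(proxy.reshape(-1, 1), limit)
pred = model.predict(proxy.reshape(-1, 1))
print("mean predicted limit, men:  ", pred[gender == 0].mean())
print("mean predicted limit, women:", pred[gender == 1].mean())
```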

The rise of algorithmic decision-making has been accompanied by the belief that we are moving towards equality, thanks to the promise of objectivity and impartiality. Many people therefore find it difficult to believe that these systems suffer from the same biases that humans and older technologies exhibit. One example is COMPAS[11], a recidivism prediction tool used in the United States that was found to wrongly flag African American defendants as future criminals far more often than white defendants, a racially discriminatory outcome.[12]
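
The kind of disparity ProPublica measured, unequal false positive rates across groups, can be checked with a simple audit. This sketch uses invented numbers, not COMPAS data:

```python
# Audit sketch (synthetic data): comparing false positive rates, i.e. the
# share of people wrongly flagged 'high risk' among those who did not
# reoffend, across two demographic groups.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
group = rng.integers(0, 2, n)     # two demographic groups
reoffended = rng.random(n) < 0.3  # ground truth, equal base rates
# A biased tool: more likely to flag group 1 'high risk' regardless of truth.
high_risk = rng.random(n) < np.where(group == 1, 0.5, 0.25)

for g in (0, 1):
    innocent = (group == g) & ~reoffended  # people who did NOT reoffend
    fpr = high_risk[innocent].mean()
    print(f"group {g}: false positive rate = {fpr:.2f}")
```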

Due to the opaque nature of AI systems, reckless implementation of these new technologies in decision-making contexts may contradict and contravene fundamental rights and liberties. The black-box problem of AI obstructs the principles of equality and fairness because of the systems’ unpredictable nature, making it difficult to detect the indirect discrimination that may make its way into them. Automated Decision-Making, especially in the legal world, must be strictly monitored. Knowing that AI programs contain biases means that law enforcement and other organisations should use them as one of many tools, not as a definitive or exclusive resource.

Conclusion

The increasing use of AI in decision-making underlines the importance of examining its real and potential impacts on individuals and society at large. The consequences of data processing are no longer restricted to privacy-related issues; they encompass prejudice and bias against groups of individuals, as well as broader questions of fundamental rights. Human rights compliance can no longer be viewed as the exclusive domain of privacy and personal data protection, since the data deluge has reached most aspects of contemporary life. Existing personal data protection is not sufficient to address all the challenges of making AI systems comply with human rights. Without a society-wide commitment to fair data practices, digital discrimination will only intensify.

In my thesis I discovered that technological design often captures and reproduces restrictive concepts of gender and race which are then continually reinforced. Algorithms and AI present huge opportunities to improve the human condition but also pose grave threats. Once bias is manifested in AI systems, gender stereotypes and prejudice are mirrored in the outcomes, resulting in discriminatory practices and decisions. Gender stereotypes hinder people’s freedom to develop themselves to their full potential. The destructiveness of stereotypes affects every aspect of young girls’ and women’s lives.

It appears that policymakers have so far failed to provide an adequate solution to the challenges of an algorithmic society. There is still insufficient research on embedding human rights in AI, despite all the reports and plans regarding the role of AI in achieving sustainability goals. Inadequate gender diversity in the AI workforce is a real problem that exacerbates pernicious gender stereotypes through language processing and biased ML algorithms in different domains. These are obstacles to reaching some of the social objectives within the Sustainable Development Goals.

The hope is to expand the horizons of tech firms and designers so that their products comply with human rights frameworks, especially those pertaining to gender equality. This aligns with governments taking a human-centric approach to AI more seriously in their policymaking.

Governments also need to take steps to address gender bias and discrimination in digital education and training, ensuring more gender-responsive teaching and learning processes. Just like reading and writing, digital literacy should be understood as a continuum of skills. Digital skills and competences have moved from optional to essential, yet it is estimated that only 26% of AI- and data-related workers are female. To progress towards a just society, interventions aimed at women and girls in particular must be implemented in the AI contexts where gender deficits are greatest.[13]

European society should be powered by digital solutions that are strongly rooted in our common values and that enrich the lives of individuals. People must be given the opportunity to develop personally and to engage in society, regardless of their gender, race or personal background. Shaping the digital future requires a legal framework that allows businesses to start up, scale up, pool and use data, innovate, and compete or cooperate on fair terms. We need a human-rights-oriented impact assessment that covers not only data protection but also the effects of data use on fundamental human rights and freedoms. It must champion the principles of non-discrimination and equality, in line with charters of fundamental rights.

There is room for future research in this area to address policies on bias in AI. There is a growing gap between exponential technological growth and the evolution of legal and ethical remedies. This inconsistency, referred to as ‘the pacing problem of law’, is visible in EU jurisdiction, regulation and case law.[14] To address it, legal scholars and policymakers, as well as Big Tech firms, must keep pace with the development of AI. This means monitoring bias closely, with the aim of eliminating it entirely in the foreseeable future. If the future world is going to be led by technology, we must all play our part in making it equal for women and men.

 

______________________

[1] West, M., Kraut, R., & Chew, H. (2019). I’d Blush if I Could: Closing Gender Divides in Digital Skills Through Education. EQUALS and UNESCO.

[2] Cortana is Microsoft’s personal productivity assistant.

[3] Alexa is Amazon’s digital assistant.

[4] Tay was Microsoft’s AI chatbot, shut down in 2016 within hours of its launch due to offensive tweets citing Hitler and supporting Donald Trump.

[5] Mitsuku is a conversational chatbot and a multiple-time winner of the Loebner Prize for the most human-like AI.

[6] Her is a Warner Bros. film that explores the evolving nature and risks of intimacy in the modern world through the love story of a writer and his female-voiced digital assistant.

[7] Buonocore, T. (2019). Man is to Doctor as Woman is to Nurse: The Gender Bias of Word Embeddings. Why we should worry about gender equality in Natural Language Processing techniques.

[8] Elkin, D. (2015). A gym assumed a woman was a man because she was a doctor and it’s causing a storm.

[9] Dastin, J. (2018). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters, San Francisco.

[10] Knight, W. (2019). The Apple Card Didn’t ‘See’ Gender and That’s the Problem: The way its algorithm determines credit card lines makes the risk of bias more acute. Wired.

[11] COMPAS: Correctional Offender Management Profiling for Alternative Sanctions.

[12] Angwin, J. et al. (2016). Machine Bias. ProPublica.

[13] West et al. (2019).

[14] Marchant, G. E. et al. (2011). The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight: The Pacing Problem. Springer.
