Bias has become one of the key issues in debates about Artificial Intelligence (AI). Several well-publicized cases demonstrate the problematic impacts of gender and racial biases embedded in sentencing algorithms, facial recognition systems and search engines. In our recent article, we analyse the power and politics of framing bias in AI policy (Ulnicane and Aden 2023). To do so, we examine narratives about bias, framings of its causes and impacts, as well as recommendations to tackle it. We focus on the intersectional characteristics of bias, namely how multiple identities such as gender, race and ethnicity converge and reinforce each other to exclude and marginalize certain social groups.

In debates about bias in AI, we identify two competing narratives. Some suggest that AI will help to eliminate human bias. Others, however, call attention to the dangers of such a 'technological fix' approach to a complex societal problem like bias. Perceiving AI as more objective, while it actually reinforces and amplifies pre-existing gender and racial biases, can be particularly harmful. These competing narratives also imply different ways of dealing with bias. The technological fix approach suggests that bias in AI can be solved with technical means such as larger datasets and better design of algorithms. Critics of the technological fix call for a more holistic and ambitious approach that would address the historical, political and social aspects of bias, existing power asymmetries and structural injustice. They suggest including a broader range of social groups, disciplines and sectors in decisions about designing and deploying AI.

Our article is part of a new special issue, 'Politics and policy of Artificial Intelligence', which I edited together with Tero Erkkilä. This special issue brings together nine articles that study topics such as digital public service delivery, the framing of food couriers, gender equality, socio-technical imaginaries, policy paradigms and narratives, and global standards and rankings. All articles address a number of overarching themes, including power, the role of ideas, and the co-shaping of AI and society. This special issue is part of my broader research programme on the politics, policy and governance of AI.

Ulnicane, I., & Aden, A. (2023). Power and politics in framing bias in Artificial Intelligence policy. Review of Policy Research, 40(5), 665–687.

Ulnicane, I., & Erkkilä, T. (2023). Politics and policy of Artificial Intelligence. Review of Policy Research, 40(5), 612–625.

Dr. Inga Ulnicane. Photo credit: HBP Education
