The ethical and social implications of using artificial intelligence in the life sciences – and in society at large – have become a major hurdle for implementation. The debate has intensified in recent years, producing a long list of proposed solutions in the form of approaches, tools and initiatives. One of the more prominent examples is the European Union’s Assessment List for Trustworthy AI (ALTAI), developed by the EU’s High-Level Expert Group on Artificial Intelligence. But how does it work in practice? A recent paper published in AI and Ethics presents a first empirical test in a live setting: examining neuroinformatics in the Human Brain Project, the authors demonstrate both the effectiveness and the limitations of the ALTAI in practice.
Ex-ante impact assessments like the ALTAI are designed to identify issues in the early stages of development. According to Bernd Carsten Stahl, Professor of Critical Research in Technology at the School of Computer Science, University of Nottingham, and Tonii Leach, research assistant at the Centre for Computing and Social Responsibility at De Montfort University, their article shows that ex-ante impact assessment has the potential to help identify and address ethical and social issues. However, for this kind of assessment to be useful, it needs to be understood as part of a much broader socio-technical ecosystem of artificial intelligence.
According to Bernd Stahl, who is also Ethics Director of the Human Brain Project, the work shows that the ALTAI can highlight potential ethical and social aspects of AI. This raises the question of whether it is appropriate to frame AI itself as an ethical issue.
“When we do an ex-ante assessment of AI, perhaps what we are looking at is not ethical problems, but rather a list of consequences of AI use”, says Bernd Stahl.
In the article, the authors argue that AI ethics needs to be considered within the broader AI ecosystem, where impact assessments such as the ALTAI are interlinked with other mechanisms for addressing ethical and social concerns.
Another challenge for ex-ante impact assessments is the intended end result. In industrial R&D, the end goal is usually clear, but that is not always so in research – particularly in large collaborative projects, or in research endeavours where the aim is to discover what a technology can be used for rather than to develop a particular product.
“Our assessment demonstrates the limitations of the ALTAI, in particular when it is applied in research where it is not clear what the eventual outcomes and products will be,” says Tonii Leach.
The paper is published open access in AI and Ethics:
Stahl, B. C., & Leach, T. (2022). Assessing the ethical and social concerns of artificial intelligence in neuroinformatics research: An empirical test of the European Union Assessment List for Trustworthy AI (ALTAI). AI and Ethics. https://doi.org/10.1007/s43681-022-00201-4