Bernd Carsten Stahl

The discussion of ethical aspects of artificial intelligence (AI) covers a broad set of issues and many different application areas. One frequently named application area where AI may have a revolutionary potential is that of healthcare. In addition to being able to improve specific medical processes, such as disease classification, there is a hope that using public health data for AI analysis may help public health processes and interventions. At the same time, there are significant worries. One of these relates to the impact of new technologies on equity in public health. As a report published by the Wellcome Trust puts it: “Will these technologies help eradicate or exacerbate existing health inequalities?” (Fenech et al. 2018)

While equity is well established as a potential issue arising from the use of AI in health, it is not clear what exactly it means. In order to provide more clarity, a group of Canadian investigators, led by Max Smith (Western University) and Renata Axler (University of Toronto), organised a two-day event on Equitable AI in Public Health on 7th-8th November 2019 in Toronto, Canada, to discuss AI's implications for equity in public health. This event was supported by a Canadian Institutes of Health Research (CIHR) Planning Grant.

Data & AI ethics in public health 

The event started with three keynote presentations on the first day that were open to the public and that were meant to set the scene. The first presentation was given by Rumi Chunara (Assistant Professor, NYU Department of Computer Science and Engineering & College of Global Public Health). Rumi focuses in her research on the use of unstructured data to gain insights into population-level health. She gave a number of interesting examples of this work and emphasised the importance of collaboration and stakeholder engagement, which is required to understand problems and move forward.

I gave the second keynote presentation, drawing on work I have been doing in the SHERPA project and the Human Brain Project to suggest ways of classifying ethical issues arising from AI and applying this classification to public health. An important question that I think should be addressed is whether, and to what degree, existing remedies, ranging from codes of conduct and standardisation to legislation and regulation, already address these issues. Only a clear answer to this question will allow us to determine the steps that are required in the future.

Professor Bernd Carsten Stahl presenting during the event

The final presentation of the first day was given by Andreas Reis, who serves as Co-Lead of the Global Health Ethics Unit at the World Health Organization. Andreas gave an overview of the type of issues that the WHO is aware of and aims to address.

Following the three presentations there was an open discussion with the audience of around 120 people. During the discussion a focus on economic questions, control and ownership quickly emerged. There was noticeable unease in the room with regard to the fact that most AI systems, and the data needed to train them, are owned by private companies, often by the dominant internet giants. Worries were expressed that the current ownership and incentive structures were likely to exacerbate existing inequalities, with few policies or initiatives in place to stop such developments.

Bringing developers, researchers & patients together

Welcome screen of the 2nd day of the event

The second day of the event was open only to invited participants. The organisers tried to bring three communities together: AI developers and specialists; public health researchers and practitioners; and civil society, i.e. patients and users of public health services.

The day began with two more presentations. The first of these, by Frank Rudzicz (University of Toronto and the Vector Institute for Artificial Intelligence), explored how equity can be modelled in AI. Frank gave interesting insights into the question of how complex social issues can be implemented in algorithms. One example was that of understanding equitability in terms of envy-freeness. In an envy-free division, every agent believes that their share is at least as good as that of every other agent. There is an interesting question of whether and to what degree envy-freeness and equitability are identical or overlap. In addition, Frank pointed out that the choice of parameters and interpretation of equitability can lead to vastly different outcomes. A technical implementation of equitable AI is therefore not guaranteed to lead to social agreement that its outcomes are indeed equitable.
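To make the envy-freeness criterion concrete, here is a minimal sketch (not taken from Frank's talk; the agent names, valuations and function are my own illustration) of how one might check whether an allocation of items among agents is envy-free: each agent must value their own bundle at least as highly as they value any other agent's bundle.

```python
from typing import Dict, List

def is_envy_free(valuations: Dict[str, Dict[str, float]],
                 allocation: Dict[str, List[str]]) -> bool:
    """Return True if the allocation is envy-free.

    valuations[agent][item] is how much `agent` values `item`;
    allocation[agent] is the bundle of items assigned to `agent`.
    Envy-freeness: every agent values their own bundle at least as
    highly as they value every other agent's bundle.
    """
    def value(agent: str, bundle: List[str]) -> float:
        return sum(valuations[agent][item] for item in bundle)

    return all(
        value(agent, allocation[agent]) >= value(agent, allocation[other])
        for agent in valuations
        for other in allocation
        if other != agent
    )

# Hypothetical example: Ann strongly prefers item "a", Bob prefers "b" and "c".
valuations = {
    "Ann": {"a": 5, "b": 1, "c": 1},
    "Bob": {"a": 2, "b": 3, "c": 3},
}
print(is_envy_free(valuations, {"Ann": ["a"], "Bob": ["b", "c"]}))  # True
print(is_envy_free(valuations, {"Ann": ["b"], "Bob": ["a", "c"]}))  # False
```

Even this toy version shows Frank's point about parameter choice: the verdict depends entirely on whose valuations are used and how they are measured, so two reasonable parameterisations can disagree about whether the same allocation is equitable.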

The second plenary presentation was given by Jennifer Gibson (University of Toronto Joint Centre for Bioethics). Jennifer approached the topic from a philosophical perspective and asked what constitutes equity and what AI can be expected to contribute to it.

The main activity of the second day was conducted using the fishbowl methodology. This is an engagement method that aims to involve numerous participants in a conversation: a small group of initial respondents discusses a topic, and the discussion is then broadened out to the wider audience that listened to the initial conversation. In our case this translated into three rounds of conversation, each starting with a set of experts who discussed the topic from their perspective, prompted by a moderator, which then led to a plenary discussion. The first of the three rounds focused on people working on AI, the second on public health, and the final one involved community stakeholders, notably patients and representatives of groups with strong public health needs, such as homeless people.

Perceptions of worthwhile research & investments 

I cannot even try to do these three overlapping conversations any justice, so I will just pick out a few themes that I found interesting. One theme that emerged from the AI panel was the problem of determining whether a particular AI approach, e.g. deep neural networks, is useful for addressing a particular problem. This is difficult enough to determine within a particular discipline, such as AI or computer science. When trying to apply AI to interdisciplinary issues, such as public health, a further challenge arises: what counts as research, and what is perceived to be worthwhile research. An interesting public health benefit may be achieved by applying well-established AI approaches to public health data. This may be of interest to public health practitioners but lacks the novelty and excitement that are required to incentivise technical researchers. This is one of a number of problems arising from multi-disciplinary research that boundary-spanning work, such as the application of AI to public health, encounters.

A second fundamental issue, which linked back to the discussion on the first day, is the question whether applications of AI are the best investment that public health research and practice can undertake. There is much existing knowledge about effective interventions that could be promoted and implemented to improve public health. It is not clear that investments in emerging technologies such as AI will have a comparable public health benefit. This is particularly pertinent in resource-poor environments, e.g. in low-income countries. What advice should such countries be given? Should they be encouraged to invest in AI research and be part of novel developments, or would it be more appropriate to focus resources on urgent tasks, such as primary care provision or basic public health measures?

This point about the needs of patients and the promises of AI research was pertinent for the discussion of the third group, the representatives of patients, users of public health services and civil society. The problems discussed during this panel mostly concerned basic needs in accessing healthcare. Where technology was discussed, it was mostly in terms of access to basic computing and networking tools and the lack of awareness and competence in using them. There seemed to be a large distance between the needs of the users of public healthcare systems and what cutting-edge AI systems can offer.

Overall, this was an interesting two-day event that, perhaps not surprisingly, raised more questions than it answered. There are numerous types of issues, problems and challenges one faces when looking at AI, equity and public health. These range from conceptual questions to institutional incentives and societal questions of equity. I would suggest that AI is neither the core of the problem nor will it be the magic bullet that solves the problem of equity. Future research on AI in health needs to remain vigilant with regard to equity issues. But equity is not primarily a technical problem, neither in public health nor elsewhere. And one should ask more often whether investment in AI is the optimal use of research and other funds, or whether societal concerns, such as equity in public health, are better served by promoting other means.

Bernd Carsten Stahl is Professor of Critical Research in Technology and Director of the Centre for Computing and Social Responsibility at De Montfort University, Leicester, UK. His interests cover philosophical issues arising from the intersections of business, technology, and information. This includes the ethics of ICT and critical approaches to information systems. He is Ethics Director and Ethics Support work-package leader in the Human Brain Project.


Fenech, M., Strukelj, N., Buston, O., 2018. Ethical, Social and Political Challenges of Artificial Intelligence in Health. Wellcome Trust, London.
