Rowena Rodrigues and Anaïs Rességuier, Trilateral Research
Advances in AI will have serious and lasting consequences for human autonomy. Does the increasing autonomy of machines necessarily imply a decreasing human autonomy?
The human and the ‘system’
In a Pew Research Center report on “Artificial Intelligence and the Future of Humans” (2018), Thomas Schneider, head of the International Relations Service and vice-director at the Federal Office of Communications (OFCOM), Switzerland, points out: “The biggest danger in my view is that there will be a greater pressure on all members of our societies to live according to what ‘the system’ will tell us is ‘best for us’ to do and not to do, i.e., that we may lose the autonomy to decide ourselves how we want to live our lives, to choose diverse ways of doing things”. This is a critical point that needs further exploration given its profound implications for individuals and society at large. It is all the more critical given that impacts on autonomy are often invisible and little is known about them or their consequences.
Human autonomy is an individual’s capacity for self-determination or self-governance. It describes a person’s ability to make her or his own rules in life and to make decisions independently. Autonomy is a fundamental human value and an ethical principle. As a principle, it means people must be free to shape their own lives. Respect for autonomy is also enshrined in law in various ways, e.g., under Articles 2 (right to life), 3 (prohibition of torture, inhuman and degrading treatment), and 8 (the right to respect for private life) of the European Convention on Human Rights.
Every technology impacts human autonomy. AI and its applications have the potential to amplify such impacts, whether positively or negatively. The EU-funded H2020 SIENNA project carried out a socio-economic impact assessment as part of its research into the state of the art in AI and robotics. One of the key points that emerged was the potential diminishment of individual autonomy due to increased use of, and reliance on, AI technology.
Impacts of artificial intelligence on human autonomy
Human autonomy might be compromised when it competes with other values that are gaining importance in modern society, such as safety, security, convenience, and access to services and/or products. Trade-offs and serious consequences emerge. Take the example of AI risk assessment tools and techniques used in the criminal justice system. One news report, citing sociologist Robert Werth, suggests these “can reduce discrepancies in how individuals are assessed and treated” but “can also exacerbate existing inequalities, particularly on the basis of socioeconomic status or race”. Similarly, the Partnership on AI Report on Algorithmic Risk Assessment Tools in the U.S. Criminal Justice System highlights concerns about the validity, accuracy, and bias of the tools themselves; issues with the interface between the tools and the humans who interact with them; and questions of governance, transparency, and accountability.
Here, human autonomy is affected in two key ways. The first concerns the autonomy of judges and the degree of freedom left to them in their decision-making once the AI system has provided its assessment: they might feel strongly pressured to comply with the judgment proposed by the system. The second concerns the individual autonomy of defendants: they will not be judged on a strictly individual basis but rather on the basis of what other people with profiles similar to theirs have done, and what was decided about them.
Let’s also consider some fictional examples. A security robot tells a human it is unsafe to venture into a park. The human does not go to the park, missing out on fresh air, exercise and socialising. The human does not question the robot but relies on its advice without double-checking, assuming there must be some security issue. Here, the human places trust in, and relinquishes decision-making to, a non-human entity without verification. A digital butler may help manage our household tasks, but this may come with a relinquishment of privacy and an opening of access to sensitive personal information, which in turn poses a threat to our autonomy. There are many examples of how we already do this and have handed such decisions over. As humans increasingly rely on robots, how much of our decision-making and ability to think for ourselves are we giving up?
Particular groups, such as the elderly and children, are highly vulnerable to adverse effects on their personal autonomy, owing to their needs and dependence on technologies, their lack of choice or ability to consent, and their limited resources to mitigate negative impacts. The elderly stand to have their autonomy and privacy diminished by certain applications of AI and robotics, e.g., through remote electronic surveillance of their daily habits, including while they bathe or change. At the same time, however, AI might increase their autonomy by reducing their dependence on a human carer; it could help them continue living independently in their own home rather than in a care home. AI is also affecting the way children behave, their values, and their relationships with other humans and with their environment. What future generation will AI bring forth?
Does the increasing autonomy of machines necessarily imply a decreasing human autonomy?
In an AI society, is it truly possible for individuals to make their own rules about what they want to achieve and how they want to live? Does the environment they live in support them in realising their potential, given the changes being brought about? Will there be a further tug of war as technology advances? Will humans push back? Or will this create a more limited meaning of human autonomy as we knew it, or the advancement of hu-mechanised autonomy, i.e., human autonomy that is assisted or shaped to a greater or lesser extent by machines?
The risk of losing the capacity to exercise autonomy
As AI systems become more autonomous and supplant humans and human decision-making in more and more ways, there is a risk that we will lose the ability to make our own rules and decisions and to shape our lives in concert with other humans, as has traditionally been the case. For example, we may increasingly consult an AI medical advisor (see Ada, for example). We might choose to give up our ability and willingness to know and understand our own bodies and ailments. We may choose to let an automated system that replaces our (inaccessible or costly) general physician or other medical services manage such choices and make critical decisions. An AI dietician might pose a similar threat, dictating to humans what they should eat and what they should avoid. Furthermore, AI technology may have the power to actually enforce this diet, through smart fridges, for instance. Humans might relinquish the management of their diet to such a system or app and, through increasing reliance on it, lose the ability to care for themselves or manage their own diet. This example implies a profound transformation of who we are, what we do, and how we relate to ourselves.
Beyond these deep changes affecting our autonomy, there are also great risks of growing human passivity and ignorance, which, in turn, will further jeopardise our autonomy. It is a vicious circle, one that started well before AI entered our lives, but one that AI is most likely exacerbating.
Not without us
We have always been subject to social structures and environments that shape who we are, what we want, and how we make decisions. What we need to pay heed to is how AI is not only destabilising our institutional and human relations and arrangements but also becoming a critical part of our social environment (sometimes very subtly, sometimes forcefully and pervasively). We need to make sure these profound changes do not happen without us.
About the authors:
Rowena Rodrigues is deputy coordinator of the SIENNA Horizon 2020 project and research manager at Trilateral Research, working on the ethics and governance of new and emerging technologies. Anaïs Rességuier is a Research Analyst at Trilateral Research, working on the ethics of new technologies with a focus on artificial intelligence and human-machine interaction.
Acknowledgement:
The SIENNA project – Stakeholder-informed ethics for new technologies with high socio-economic and human rights impact – has received funding from the European Union’s H2020 research and innovation programme under grant agreement No 741716.
Disclaimer: This blogpost reflects only the authors’ views. The European Commission is not responsible for any use that may be made of the information it contains.