
Artificial Intelligence (AI) has achieved remarkable results in recent decades, especially thanks to the refinement of an old and long-neglected technology called Deep Learning (DL), a class of machine learning algorithms. Some achievements of DL had a significant impact on public opinion thanks to extensive media coverage, as in the cases of the program AlphaGo, which defeated the Go world champion Lee Sedol, and of its successor AlphaGo Zero, which then surpassed AlphaGo itself by learning through self-play alone.

This triumph of AlphaGo was a kind of profane consecration of AI’s operational superiority in a growing number of tasks. This manifest superiority gave rise to mixed feelings in human observers: pride at being its creator; admiration for what it was able to do; fear of what it might eventually learn to do.

AI research has generated a linguistic and conceptual process of rethinking traditionally human features, stretching their meaning or even reinventing their semantics in order to attribute these traits to machines as well. Think of how learning, experience, training and prediction, to name just a few, are attributed to AI. Even if these terms have a specific technical meaning among AI specialists, lay people tend to interpret them within an anthropomorphic view of AI.

One human feature in particular is considered the Holy Grail when AI is interpreted according to an anthropomorphic pattern: consciousness. The question is: can AI be conscious? It seems to me that we can answer this question only after considering a number of preliminary issues.

First we should clarify what we mean by consciousness. In philosophy and in cognitive science, there is a useful distinction, originally introduced by Ned Block, between access consciousness and phenomenal consciousness. The first refers to the interaction between different mental states, particularly the availability of one state’s content for use in reasoning and rationally guiding speech and action. In other words, access consciousness refers to the possibility of using what I am conscious of. Phenomenal consciousness refers to the subjective feeling of a particular experience, “what it is like to be” in a particular state, to use the words of Thomas Nagel. So, in what sense of the word “consciousness” are we asking if AI can be conscious?

To illustrate how the sense in which we choose to talk about consciousness makes a difference in assessing the possibility of conscious AI, let us take a look at an interesting article written by Stanislas Dehaene, Hakwan Lau and Sid Kouider. They frame the question of AI consciousness within Global Neuronal Workspace Theory, one of the leading contemporary theories of consciousness. As the authors write, according to this theory, conscious access corresponds to the selection, amplification, and global broadcasting, to many distant brain areas, of particular information, selected for its salience or relevance to current goals. More specifically, Dehaene and colleagues explore the question of conscious AI along two lines within an overall computational framework:

  1. Global availability of information (the ability to select, access, and report information);
  2. Metacognition (the capacity for self-monitoring and confidence estimation).

Their conclusion is that current AI might implement the first of these, global availability, while it still lacks the architecture needed for the second, metacognition.
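To make these two ingredients concrete, here is a deliberately minimal sketch, in Python, of what they might look like as computations. It is a toy illustration only, not an implementation of Global Neuronal Workspace Theory or of anything Dehaene and colleagues propose: the module names, the salience values, and the margin-based confidence heuristic are all invented for the example. “Global availability” appears as winner-take-all selection and broadcasting of one signal to every module; “metacognition” appears as the system’s estimate of how reliable its own selection was.

```python
# Toy sketch only: hypothetical modules and numbers, not a model from the paper.
import random
from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # which module produced the content
    content: str     # the information itself
    salience: float  # how strongly it competes for conscious access

class GlobalWorkspace:
    """Cartoon of the two ingredients: select-and-broadcast (global
    availability) and a crude self-monitoring score (metacognition)."""

    def __init__(self, modules):
        self.modules = modules  # names of the consumer modules

    def select(self, signals):
        # Winner-take-all competition on salience.
        return max(signals, key=lambda s: s.salience)

    def broadcast(self, winner):
        # Global availability: every module receives the same content,
        # so it can be used for reasoning, report, and action.
        return {module: winner.content for module in self.modules}

    def confidence(self, winner, signals):
        # Metacognition as self-monitoring: how far ahead of the
        # runner-up was the winner? A close race means low confidence.
        rest = [s.salience for s in signals if s is not winner]
        margin = winner.salience - max(rest) if rest else winner.salience
        return min(1.0, max(0.0, margin))

if __name__ == "__main__":
    random.seed(0)  # reproducible toy run
    signals = [
        Signal("vision", "red light ahead", random.random()),
        Signal("audition", "siren to the left", random.random()),
        Signal("memory", "usual route home", random.random()),
    ]
    workspace = GlobalWorkspace(["speech", "planning", "motor"])
    winner = workspace.select(signals)
    print("broadcast:", workspace.broadcast(winner))
    print("confidence:", round(workspace.confidence(winner, signals), 2))
```

Even this cartoon makes the asymmetry in their conclusion visible: the select-and-broadcast part is trivial to realize in software, whereas the confidence score monitors nothing but a single number, and neither part says anything about what, if anything, the system experiences.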

As mentioned, the premise of their analysis is a computational view of consciousness. In other words, they choose to reduce consciousness to specific types of information-processing computations. We can legitimately ask whether such a choice covers the richness of consciousness, particularly whether a computational view can account for the experiential dimension of consciousness.

This shows that the main obstacle in assessing whether AI can be conscious is the lack of an agreed-upon theory of consciousness in the first place. For this reason, rather than asking whether AI can be conscious, it may be better to ask what might indicate that AI is conscious. This brings us back to the indicators of consciousness that I wrote about in a blog post some months ago.

Another important preliminary issue to consider, if we want to seriously address the possibility of conscious AI, is whether we can use the same term, “consciousness,” to refer to a very different kind of entity: a machine instead of a living being. Should we expand our definition of consciousness to include machines, or should we rather coin a new term for what machines display? I personally think that the term “consciousness” is too charged, from several different perspectives, including the ethical, social, and legal ones, to be extended to machines. Using it to qualify AI risks stretching it so far that it eventually becomes meaningless.

If we create AI that manifests abilities that are similar to those that we see as expressions of consciousness in humans, I believe we need a new language to denote and think about it. Otherwise, important preliminary philosophical questions risk being dismissed or lost sight of behind a conceptual veil of possibly superficial linguistic analogies.

Written by…

Michele Farisco, Postdoc Researcher at Centre for Research Ethics & Bioethics, working in the EU Flagship Human Brain Project.

This post was originally published on the Ethics Blog.


One thought on “Can AI be conscious? Let us think about the question”

  1. It’s becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create an adult-human-level conscious machine? My bet is on the late Gerald Edelman’s Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with primary consciousness will probably have to come first.

    The thing I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows, in a parsimonious way, for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions. No other research I’ve encountered is anywhere near as convincing.

    I post because, in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there’s lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep it in front of the public. And obviously, I consider it the route to a truly conscious machine, both primary and higher-order.

    My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and to proceed from there, perhaps by applying to Jeff Krichmar’s lab at UC Irvine. Dr. Edelman’s roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461
