How can we be sure that a person in front of us is conscious? This might seem like a naïve question, but it actually gave rise to one of the trickiest and most intriguing philosophical problems, classically known as “the other minds problem.”
Yet this is more than just a philosophical game: reliable detection of conscious activity is among the main neuroscientific and technological enterprises today. Moreover, it is a problem that touches our daily lives. Think, for instance, of animals: we are (at least today) inclined to attribute a certain level of consciousness to animals, depending on the behavioural complexity they exhibit. Or think of Artificial Intelligence, which exhibits astonishing practical abilities, even superior to humans in some specific contexts.
Both examples above raise a fundamental question: can we rely on behaviour alone in order to attribute consciousness? Is that sufficient?
It is now clear that it is not. The case of patients with devastating neurological impairments, like disorders of consciousness (unresponsive wakefulness syndrome, minimally conscious state, and cognitive-motor dissociation), is highly illustrative. A number of these patients might retain residual conscious abilities even though they are unable to show them behaviourally. In addition, subjects with locked-in syndrome have a fully conscious mind even though they do not exhibit any behaviours other than eye movements and blinking.
We can conclude that absence of behavioural evidence for consciousness is not evidence for the absence of consciousness. If so, what other indicators can we rely on in order to attribute consciousness?
The identification of indicators of consciousness is necessarily both a conceptual and an empirical task: we need a clear idea of what to look for in order to define appropriate empirical strategies. Accordingly, we (a group of two philosophers and one neuroscientist) conducted joint research, eventually publishing a list of six indicators of consciousness. These indicators do not rely on behaviour alone, but can also be assessed through technological and clinical approaches:
- Goal-directed behaviour (GDB) and model-based learning. In GDB I am driven by the expected consequences of my action, and I know that my action is instrumental in obtaining a desirable outcome. Model-based learning depends on my ability to form an explicit model of myself and of the world surrounding me.
- Brain anatomy and physiology. Since the consciousness of mammals depends on the integrity of particular cerebral systems (i.e., thalamocortical systems), it is reasonable to think that similar structures indicate the presence of consciousness.
- Psychometrics and meta-cognitive judgement. If I can detect and discriminate stimuli, and can make some meta-cognitive judgements about perceived stimuli, I am probably conscious.
- Episodic memory. If I can remember events (“what”) I experienced at a particular place (“where”) and time (“when”), I am probably conscious.
- Acting out one’s subjective, situational survey: illusion and multistable perception. If I am susceptible to illusions and perceptual ambiguity, I am probably conscious.
- Acting out one’s subjective, situational survey: visuospatial behaviour. If I can perceive objects as stably positioned even while I move through my environment and scan it with my eyes, I am probably conscious.
This list is conceived as provisional and heuristic, but also operational: it is not a definitive answer to the problem, yet it is sufficiently concrete to help identify consciousness in others.
The second step in our task is to explore the clinical relevance of the indicators and their ethical implications. For this reason, we selected disorders of consciousness as a case study. We are now working together with cognitive and clinical neuroscientists, as well as computer scientists and modellers, in order to explore the potential of the indicators to assess the extent to which consciousness is present in affected patients, and eventually to improve diagnostic and prognostic accuracy. The results of this research will be published in what the Human Brain Project Simulation Platform defines as a “live paper”: an interactive paper that allows readers to download, visualize, or simulate the presented results.
This text was originally published on the Ethics Blog.
Pennartz CMA, Farisco M and Evers K (2019) Indicators and Criteria of Consciousness in Animals and Intelligent Machines: An Inside-Out Approach. Front. Syst. Neurosci. 13:25. doi: 10.3389/fnsys.2019.00025