A Short Potential Solution to the Hard Problem of Consciousness
Updated: Nov 7, 2018
Dr. David Chalmers writes, “A mental state is phenomenally conscious when there is something it is like to be in that state.” He has also coined the phrase “The Hard Problem of Consciousness,” so called because, in his view, there is seemingly no reason why a system that processes information should have phenomenally conscious experiences rather than unconscious ones.
Regardless of what any system’s or life form’s conscious experiences of gravity or air pressure are like, it is thought to be a mystery why there should be any experience in the first place. After all, we can build satellites and drones that measure and detect gravity and air pressure automatically, without any phenomenal sensation. What advantage does consciousness confer?
In 2018, we find neural networks approximating complex tasks previously thought to be the exclusive domain of conscious human information processing: winning at Go, holding (text-based) conversations, labeling images, recommending music, and generating psychedelic art. Neural networks are a clear example that complex computational tasks can be performed without conscious experience.
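To make the point concrete, here is a minimal sketch (my own illustration, not from the original essay) of the kind of computation a neural network is built from: a single artificial neuron that “detects” a pattern by pure arithmetic, with no observer anywhere in the loop.

```python
# A single artificial neuron: weighted sum plus a threshold.
# It classifies its inputs, yet nothing in the computation
# observes or experiences anything along the way.

def neuron(inputs, weights, bias):
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Toy "detector": fires only when both sensor readings are high.
weights = [1.0, 1.0]
bias = -1.5

print(neuron([1, 1], weights, bias))  # fires: prints 1
print(neuron([1, 0], weights, bias))  # does not fire: prints 0
```

Stacking millions of such units yields systems that win at Go or label images, but the arithmetic at every layer is just as observer-free as this one neuron.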
In more traditional accounts of the Hard Problem, this idea is illustrated with reference to “philosophical zombies”: unconscious creatures that do just about everything you or I can do, but purely through automatic responses to data collected by their senses.
I suspect that the Hard Problem is hard because the very referent of the word “consciousness” implicitly contains two semi-separable concepts, rather than just one, as many people have assumed. I will do my best to disentangle them; since the two are linked, please forgive some circularity of definition while I build to the conclusion.
[Image: Qually The Qualified California Quail, whose likeness some would call Qualeish. Photo from Wikipedia.]
The first part is the “something it is like”-ness. Though it goes by many names (including phenomenal experience and qualia), it is relatively straightforward and has been discussed by many. To give a concrete example, the feeling of hunger is one such conscious state with a distinct “something it is like” subjective flavor: we know we are hungry when our conscious experience is colored by a distinct overtone that grows more salient the longer we go without food, and which abates only when we eat, are injected with the right fluids, or are temporarily distracted.
Crucially, the second part is that conscious experiences cannot be conceived of without some thing which observes them. It seems to me that you need both of the words in the phrase “conscious observer” to get an idea of what’s going on when you say the words "consciousness" or "phenomenally conscious experience."
Certain wavelengths of light, for example, after they are run through a bunch of unconscious neural processing, are at some point consciously perceived to be red by humans. But what is “redness” without some thing that observes it as such?
Consider that whenever we imagine what it would mean for a system (like a life form or neural network) to have phenomenal states, either we imagine that the system has an “I” to consciously observe the data it operates with, or we use our own “observing I” to perceive our conscious imagining of the system’s “something it is like to be” states. This leads me to state the following proposition:
1. It is by definition impossible to conceive of a conscious state divorced from something observing it.
In other words, a quail on the dark side of the moon only “has” qualia when we read or think about this sentence. Or, as the neuroscientist, psychiatrist, and creator of “Integrated Information Theory” Giulio Tononi (who holds, more or less, that anything that deals with information is conscious to some infinitesimal degree) almost put it: consciousness is data, observed. It is the very act of observing an experience in the first place that makes that experience, by definition, conscious.
Assuming that the “something that observes” lies in the system we are imagining, and not in us, it must form the basis of the “I”-ness of that system. This leads me to a second proposition:
2. It is impossible to conceive of any creature’s or system’s sense of “observing I” that doesn’t have any phenomenally conscious experiences.
A moment’s thought about philosophical zombies, comatose patients, future neural-network-driven recommendation systems, and so on reveals that these things lack a sense of “observing I,” though comatose patients recover an “observing I” when they regain consciousness.
Allow me to make two clarifications. First, in humans, ego, identity, and habits all grow around the observing I, and when comatose patients recover they regain some or all of these things. Second, by "observing I" I mean the part of you that's left over if you get rid of all of the following:
your entire past,
what you are thinking about now,
what you are planning for in the future,
the content of your working, short-term, and long-term memory,
what you are expecting,
any ideas you have about yourself,
any mental models you've formed from being in the world,
and anything that isn't a sensation that is registering right now.**
We thus have a functional account of the purpose of consciousness: phenomenally conscious experiences are impossible to divorce from a system’s “consciously observing I.” And if you happen to be in the business of self-preservation, having a sense of “I” or “I am”-ness might make feedback, rewards, and punishments for various goal-selection and goal-pursuit strategies more effective.
Taken together, the two propositions imply that Tononi’s panpsychism, while a laudable commitment to consistency, is not correct. A photodiode has no more a sense of “I” than the banana I just ate, even though both are capable of responding to changes in their environments (concentrations of light and ethylene gas, respectively). Because these things have no “I” that observes the data they operate with, by definition there is nothing to generate conscious experiences in them.
** Various writers, sages, and gurus make this point; it is well known in meditative traditions. Alan Watts writes about it in The Book, Sam Harris writes about it in Waking Up, and for someone a tad less mainstream, the spiritual teacher Mooji also talks about it.