Harnad, S. (1998) Review of J. Shear (Ed.) Explaining Consciousness (MIT/Bradford 1997)
Trends in Cognitive Sciences
2(6): 234-5.

The Hardships of Cognitive Science.

[Book Review of: Explaining Consciousness: The Hard Problem.
J. Shear (Ed.) Bradford/MIT Press 1997
ISBN 0-262-19388-4]

Stevan Harnad 
Cognitive Sciences Center 
Southampton University 
Highfield, Southampton 
SO17 1BJ United Kingdom 
harnad@soton.ac.uk harnad@princeton.edu 

To aspire to do justice to the problem of consciousness in 900 words would be unconscionable, so all I can offer is clipped koans and aphorisms: This book is edited by Jonathan Shear, but reprinted from the Journal of Consciousness Studies and focused on a "target article" by David Chalmers, followed by 26 commentaries and the author's response. Chalmers sets the agenda with what he has singled out as the "hard" problem of consciousness, not to be confused with the easier problems, such as modelling it and finding its basis in the brain.

Chalmers is right: the hard problem is indeed that of explaining why and how it feels like something to be conscious [1]. I'll say that again, backwards: If we were unconscious Zombies, but otherwise identical to the way we are now in deed, word, and (unconscious) thought [2, 3], then the hard problem of consciousness would vanish, leaving only the "easy problems" of reverse-engineering our remarkable capacity for thought, word and deed (including, just to set your scale: chess playing, novel writing, and "worrying" -- unconsciously, but verbally -- about the hard and easy problems of consciousness).

Some idea of what those Zombies would be saying about all this is conveyed by the contributors themselves: Pat Churchland thinks the hard problem will evaporate as we find and piece together the easy bioengineering solutions; those will be all there is or needs to be known about why and how we are not Zombies; the rest is just philosophers giving us a hard time. But although Churchland -- and perhaps Dan Dennett, for whom whatever distinction we make between ourselves and real Zombies is at best just a theoretical convenience, so that we can predict and explain one another better -- may sound like model Zombies, duly denying the existence of the hard problem, or at least its hardness, other contributors do not, instead reminding us vehemently that feelings do exist, that we are not Zombies, and that the easy answers (when they come) will not solve the hard problem. In a Zombie world, their curious locutions -- Zombies with delusions of grandeur -- would somehow have to be explained too.

But of course we are not Zombies, as Chalmers, the champion of the hard problem, reminds us. Does he think the hard problem can be solved? At bottom, I think not, but he does insist that the easy solutions will go a long way: After all, there is an undeniable correlation between our feelings and the doings of our brains and bodies. If that correlation is tight enough -- Chalmers calls it an "isomorphism" -- then even the most subtle bits of our consciousness, the minutiae of feeling like this rather than like that, will have brain engineering correlates that are fully analysable and explicable -- except for the fact that they feel like anything at all.

Can't we just take that as a brute fact, a law of nature? Chalmers, quite common-sensically, says we must: There must be something about those engineering features of ours that inexorably carries feelings with it, for whatever reason. Chalmers vacillates between declaring this primary law of nature to be peculiar to living creatures (which could mean that, unlike other laws of nature, which are manifest all over the universe, this one is local to little more than a handful of entities in one thin layer of one minute planet), and endorsing what he evidently finds a more plausible alternative: that everything in the universe may be conscious, including galaxies, solar systems, meteorites and electrons.

But, regrettably (though it does not seem to trouble Chalmers), any arbitrary part or combination of the things in the world or their respective parts could then have feelings too. And worse (though Chalmers seems to welcome this consequence, seeing it as a manifestation of the causal character of the correlation in question): every computer-simulation of any of those entities, or their parts, or combinations of their parts, could likewise have feelings (thereby demonstrating the ultimate form of dualism, with (1) the consciousness of the computer itself, as a member of the universe like the rest of us, plus piggy-backing on top of that, (2) the consciousness of all the virtual entities in the piece of reality that the computer happens to be simulating [4]).

Perhaps this would put too much emotion in the world (leaving vegetarians like myself, who try to avoid ingesting sentient creatures, in rather more of a Zeno paradox than the one already posed by our own intestinal fauna and flora). Other contributors have other radical solutions, including (1) assigning to consciousness one of the most fundamental and mysterious causal roles in physics, that of "collapsing the wave packet" in physical measurement (Stapp); or (2) assigning to quantum mechanics a causal role in brain function (Hameroff & Penrose).

For what it's worth, I rather hope none of these heroics proves necessary, for they would solve the hard problem of cognitive science only at the cost of creating all sorts of hardship for other sciences that really should not have to be worrying about feelings.

Last, some niggles: Abstracts of the target article, 26 commentaries and Response would have been helpful in navigating this volume. Addresses and affiliations of the contributors would have given us a better idea of who they are and what disciplines they represent. An index is never a bad idea either. And the many typos -- most of which appear to have been introduced in scanning the pages of the prior journal version -- would have profited from another round of proofreading.

1. Nagel, T. (1974) What is it like to be a bat? Philosophical Review 83: 435-451.

2. Harnad, S. (1995) Why and How We Are Not Zombies. Journal of Consciousness Studies 1: 164-167. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad95.zombies.html

3. Harnad, S. (1998) Turing Indistinguishability and the Blind Watchmaker. In: Mulhauser, G. (ed.) "Evolving Consciousness" Amsterdam: John Benjamins (in press) http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad98.turing.evol.html

4. Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium on Virtual Mind. Minds and Machines 2: 217-238. http://www.cogsci.soton.ac.uk/~harnad/Papers/Harnad/harnad92.virtualmind.html