School of Computer Science THE UNIVERSITY OF BIRMINGHAM CoSy project CogX project

Comments on Susan Blackmore
Zen and the Art of Consciousness
Paperback: Oneworld Publications, 2011,
Previously published as: Ten Zen Questions

(DRAFT: Liable to change)

Aaron Sloman
School of Computer Science, University of Birmingham.
(Philosopher in a Computer Science department)

Installed: 31 Jul 2011
Last updated: 31 Jul 2011; 1 Aug 2011; 14 Jan 2012; 5 Nov 2014 (re-formatted)
This paper is available online. A PDF version may be added later.

A partial index of discussion notes is available online.

There are several online presentations related to the nature of consciousness and how to explain the phenomena (including the existence of qualia) in terms of operations of virtual machinery.


The book is an excellent read if you have the right kind of interest and the right kind of patience. I regard it primarily as an essay on the phenomenology of certain kinds of puzzlement about consciousness, freedom, self, the relationships between mind and brain (or more generally mind and matter).

That is, it gives a detailed account of what it's like for an intelligent, well-informed, and honest scientist/philosopher/psychologist to be puzzled about various aspects of what it is like to be alive, thinking, trying to understand, and asking hard questions.

What it fails to present is an account of what it is like to be a designer of such a system, i.e. a designer of future intelligent robots, including robots who will be capable of having many of the same experiences and puzzles as Sue Blackmore (SB).

My own work is all about that shift of viewpoint and how it sheds new light on old philosophical puzzles.

The designer standpoint is summarised briefly here:

However there is much more to be said about the variety of possible designs, many of which are to be found in biological organisms.

By exploring both the space of possible sets of requirements for animals and intelligent machines (niche space) and the space of possible designs (design space), we can gain new insights that are not achievable simply by trying to work things out by looking inside ourselves -- no matter how hard we look.

Some of the designs, and that includes most of the animal designs, do not support the kind of information processing that allows an individual to monitor, think about, and remember some of the contents of its own information processing.

However the fact that some individuals can do that to some extent (and I don't know how many species can do it to any extent) does not imply that they have full access to all their own information processing.

Human infants appear not to be able to do it at all, presumably because relevant parts of the information-processing architecture have not yet developed. The ability does not come fully developed at a certain age, suggesting that it is not one ability but a collection of different abilities that develop over time. In Chapter 10 of The Computer Revolution in Philosophy (1978), I have a section that starts:

There are several different sorts of reasons why information about a complex system may be inaccessible to the central processes. Here are some, which might not occur to someone not familiar with programming.
The list there probably needs to be extended. But the main point is that it indicates reasons why the attempt to understand consciousness simply by gazing inwardly at it can have limited success.

Compare trying to understand water simply by gazing intently at it.

We may gain some insight by trying to understand, through both empirical research and trying to build working systems, what the design differences are that may distinguish different species, and different developmental stages within an individual, e.g. a developing human.

Transitions in Niche space and in Design space are characteristic features of biological evolution and also development of biological individuals.

Biological change is a web of concurrent interactions between such transitions going on simultaneously in many species and many individuals.

Many of the transitions involve virtual machinery and cannot be discovered by the methods characteristic of the physical sciences, any more than those methods can be used to work out what is going on in complex virtual machinery in modern multi-processing computing systems.

I am not opposed to looking inside oneself (which is very different from looking inside other people's brains): I use examples of phenomenology of perception, motivation, emotions, mathematical discovery and scientific thinking/explaining in many of my own papers and talks, e.g.
     Ways of making mathematical discoveries by contemplating triangles!

But, as the history of philosophy and psychology shows, no amount of close attention to the phenomena as experienced can provide insight into the mechanisms underlying those phenomena and why they do not fit easily into a physicalist-mechanist world view, and why that world view needs to be supplemented by what we have learnt about virtual machinery from half a century of science and engineering.

There are now many human-created (not always human-designed) information processing systems that have some kinds of self-monitoring and self-modifying capabilities, for instance operating systems that attempt to ensure that they allocate resources fairly between users, or attempt to ensure that they do not get into deadlocked situations (e.g. Component A can't finish some task until B has finished what it is doing, and B can't finish till A has finished). But none of them is able to work out what all its capabilities and limitations are nor how it was designed or implemented.
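The deadlock pattern just described (A waits for B while B waits for A) can be sketched in a few lines of Python. The example below is purely illustrative (the locks, workers, and names are invented, not drawn from any real operating system); it shows the standard remedy an operating system's designers might enforce: every component acquires shared resources in one fixed global order, which makes the circular wait impossible.

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def worker(first, second, name):
    # The classic deadlock arises when component A holds lock_a while
    # waiting for lock_b, and component B holds lock_b while waiting
    # for lock_a. Acquiring locks in one fixed global order (lock_a
    # before lock_b, for every worker) rules that circular wait out.
    with first:
        with second:
            results.append(name + " finished")

# Both workers take the locks in the SAME order, so neither can block
# the other indefinitely.
t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "A"))
t2 = threading.Thread(target=worker, args=(lock_a, lock_b, "B"))
t1.start(); t2.start()
t1.join(); t2.join()
```

If one worker were instead given the arguments `(lock_b, lock_a, ...)`, the program could hang forever, which is exactly the situation such systems try to prevent.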

There's no reason to believe that the very sophisticated biological information-processing mechanisms produced by evolution and development, even those with some self-monitoring and self-control capabilities, will necessarily also be able to inspect all the details of their own operation. On the contrary, attempts to implement that will produce systems that get stuck in infinite levels of observation, like trying to make the trace-printing portion of a debugging package trace itself.
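The infinite regress of self-observation can be made concrete with a toy sketch (the trace routine below is invented for illustration, not taken from any real debugger). If the trace routine naively reported its own activity it would call itself forever; the usual engineering escape is a re-entrancy guard that simply refuses to observe the observer.

```python
depth = 0   # how many levels of tracing are currently active
log = []

def trace(message):
    """A toy trace routine that tries to report every call, including
    its own. Without the guard below, the self-observation step would
    recurse without limit; the guard cuts the regress off at level one."""
    global depth
    if depth > 0:
        # Already inside trace: do NOT trace the tracer. This is the
        # point at which full self-inspection is deliberately given up.
        return
    depth += 1
    log.append("TRACE: " + message)
    trace("trace was called")   # the attempted self-observation: silently dropped
    depth -= 1

trace("computing x")
trace("computing y")
```

The guard means the system's record of its own activity is necessarily incomplete: the act of recording is itself never recorded, which is the structural point being made about self-monitoring minds.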

So gaining the required understanding must involve a different process.

Such insight requires first learning about or discovering ways of designing, making, testing, debugging, extending, explaining, and combining various kinds of virtual information-processing machinery.

Then it becomes possible to use experienced phenomena to help provide requirements for future explanations and models of human-like mental phenomena.

My tutorial on Philosophy and AI for AAAI-11 is partly about this:


How to think clearly about yourself and other selves

Free will
I think Sue's conclusion that she lacks free will could have been avoided, as explained in these two papers. SB decides she does not have free will, and she's right as regards the two incoherent notions of free will (e.g. the theological and the romantic notions).

But there are two useful and coherent notions of freedom: one associated with the differences between what you do and what's done to you, between what you might not have done and what you couldn't help doing; the other associated with the differences between what you are legally and morally held responsible for and what you are not held responsible for. (Explained in the "Four notions" paper, which also mentions a fifth notion of a different sort -- being allowed to do....)

Part of the answer to free-will deniers is the fact that events in virtual machinery can be causes, even if the whole virtual machine is implemented in physical machinery. How that can be is a complex story, partly told here

Virtual machinery

Unnoticed qualia
One of the recurring themes in the book is puzzlement about things that seem to exist in consciousness before they are noticed.

An experimental demonstration of an example (which does not work for everyone, though I have used it in many lectures where it works for between about 30% and 70% of the audience) is here:

The book lists various common assumptions about consciousness that SB concludes are erroneous, for example on page 162. But I think SB's philosophical conclusions about being in error on such points are mistaken, because she has not considered the right sorts of explanations, and she has been too influenced by Dennett (and possibly Hume, though he is not mentioned in the book).

Part of the problem is that, like many others, she is searching for something inside her -- a thing that experiences, a thing that decides, a thing that observes itself -- which she fails to find. She fails because SHE, SB, is that thing: it is not something else inside her, though there are many physical components and information-processing sub-systems within her. She rightly concludes, for example, that contents of consciousness are not in her brain. They are in her. So is her brain.

It is worth remembering the suggestion made by Peter Strawson in his book Individuals: An Essay in Descriptive Metaphysics (1959): namely, that persons are things that have both bodies and minds, so that both P (physical) predicates and M (mental) predicates are applicable to them.

That will also be true of future robots, though to a limited extent it is already true of computing systems running complex virtual machines. We can say that the machine is playing chess, that it detects a threat, and that it explores alternative defences against the threat. We can also say that the machine is 50 cm wide, that it weighs 6 kilograms, that its power consumption is 2 watts, that it contains several billion transistors, and so on. Some of those statements use M predicates, some use P predicates.
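The point that one and the same whole machine is the subject of both kinds of predicate can be shown with a toy sketch. Everything here (the class, its attributes, and its simplistic threat rule) is invented purely for illustration:

```python
class ChessMachine:
    """One whole machine to which both kinds of predicates apply."""

    # P (physical) predicates: facts about the hardware as a physical object.
    width_cm = 50
    weight_kg = 6
    power_watts = 2

    # M (mental-like) predicate: something the whole machine does.
    def detects_threat(self, board):
        # Toy rule, invented for illustration: a threat exists whenever
        # some attacked square is the square our king stands on.
        return any(sq == board["king"] for sq in board["attacked"])

m = ChessMachine()
# The SAME subject, m, takes both kinds of predicate:
physical_facts = (m.width_cm, m.weight_kg)                                # P predicates
mental_fact = m.detects_threat({"king": "e1", "attacked": ["e1", "d2"]})  # M predicate
```

Note that there is no inner homunculus object that "does the detecting": `detects_threat` is predicated of the whole machine `m`, just as its width and weight are.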

The common philosophical mistake is to look inside the entity for the subject of all the M predicates -- it's the whole person, or machine, that experiences, thinks, notices, learns, decides, acts, etc.

But, as in the computing case, there are also parts that have access to different subsets of the information being processed. And some parts of a human (e.g. your low-level speech-understanding mechanisms) have access to information (e.g. about acoustic features of the auditory signal) that you do not have access to.

There need not be a sharp boundary between what you do and what you don't have access to, because having access is a dispositional state, and dispositions can be conditional on multiple other dispositions. (How high you can jump can vary from time to time; even a peak height that you usually fail to reach is nevertheless a height that you can reach.) Similar things can be said about what information is accessible to you, even if for part of the time you fail to access it, as demonstrated in:

So the things that are on the fringe of consciousness -- the things that are noticed, but which must have been going on before they were noticed -- may be contents of information processing that at different times satisfied different subsets of the conditions for being noticed by the whole person. When they are noticed, that can also make records of previous states and processes become accessible, so that it is clear that the sound you have noticed was (sort of) heard before you (fully) heard it. Perhaps within the next 100 years we'll be able to demonstrate such states and processes in information-processing machines, though whether current computer designs will need to be replaced by something more biological (e.g. making more use of chemistry) is an open question.

Finding the right way to say all that is tricky. But that does not mean ordinary experiences are illusory in the way SB suggests -- if I've read her correctly.

There's more to be said about the book. I think it may become a classic because of the unique combination of qualities the author brings to it.

It's a pity she is not also a designer of working information processing systems! I think she might then have been able to add many additional insights, throughout the book.

Maintained by Aaron Sloman
School of Computer Science
The University of Birmingham