by Drew V. McDermott
MIT Press, 2001
Review by Marcel Scheele on Mar 27th 2002
McDermott has written an excellent essay on
the subject of mind. It covers the whole of what nowadays is understood to be
the subject matter of philosophy of mind. Roughly, one can say that this
consists of the question as to what thinking is and the question as to
what consciousness is. Philosophy of mind construed more broadly as the
question about the relation between mind and world is not the subject of the
book (although any theory of the mind does treat it, if only implicitly, by
giving a theory of what the mind is). Questions about something like a soul,
and similar questions, are also not treated.
McDermott is not a philosopher, but an Artificial Intelligence (AI) specialist. That
means that he is very knowledgeable about one approach to the topic of cognition.
Very often, non-philosophers who talk in a broader or philosophical vein
about their own topic think it quite easy to do philosophy of their
discipline. Usually I am not very impressed by the things they write: being
good at a subject does not imply being good at the philosophy of that subject.
McDermott definitely is an exception to that rule. He has understood many
philosophical intricacies of the problems he treats and I think he advances
some of the subjects as well, especially in the part about consciousness.
Let us therefore see what he does in the book.
The first chapter introduces the problems the book deals with and shows some of the difficulties anyone
thinking about the mind encounters. I said above that the main problem is the
question of what thinking consists in, but the so-called hard problem of
consciousness is to explain how a physical object can have experiences. This is
the problem of phenomenal consciousness. (p. 20) The second chapter
takes on the problem of thinking. For philosophers and cognitive
scientists in general it does not contain many novel insights. It is however
interesting to read about the problem from the particular viewpoint of an AI
researcher. In AI it is quite common to conceive of thinking, or of the cognitive apparatus, as a
kind of formal symbol manipulating machine. Philosophers tend to call this a functional
system; AI experts tend to call it a computational system.
However, both conceive of such a system roughly as something that takes input
(such input can also be the output of another part of the system) and
manipulates it according to certain (formal) rules (such rules are not
necessarily deterministic). This results in a certain output, which might
be behaviour, or serve as input to another part of the system. The AI approach
tries to model such processes in order to see whether kinds of intelligent
behaviour can be modelled. In that sense the approach McDermott takes in this
book is quite interesting, because he has access to particular experiments and
AI systems that have actually been built, which provide insightful examples.
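The input-rules-output picture sketched above can be made concrete with a toy example. The following sketch is purely my own illustration, not from the book; the rule table and symbol names are invented:

```python
# Toy illustration of a formal symbol-manipulating system:
# it takes symbolic input, applies (formal) rules to it,
# and produces output that could be behaviour or could serve
# as input to another part of the system.

RULES = {
    ("hungry", "food_visible"): "approach_food",
    ("hungry", "no_food"): "search",
    ("sated", "food_visible"): "ignore_food",
}

def step(internal_state: str, percept: str) -> str:
    """Apply the rule table to a (state, input) pair."""
    return RULES.get((internal_state, percept), "idle")

print(step("hungry", "food_visible"))  # -> approach_food
print(step("sated", "no_food"))        # -> idle
```

Nothing hinges on the details; the point is only that "manipulating symbols according to formal rules" is a mechanically realizable notion.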
For me chapter 3 contains the core of the
book. Here the author goes beyond state-of-the-art theories of cognition
and tries to give a theory of phenomenal consciousness. The problem is a hard
problem because of the following: "(...) it looks at first as if computational
models could never explain phenomenal consciousness. They just don't trade in
the correct currency. They talk about inputs, outputs, and computations, but
sensations of any of these things don't enter in. Hence it seems logically
possible that there could be a computer that made exactly the same sensory
discriminations as a person and yet experienced nothing at all." (p. 94)
In this review, I will limit myself to stating and explaining some of the core
ideas of his theory, without pretending to be able to do full justice to it.
In the first
place, McDermott believes that the (kind of) theory contained in chapter two provides
enough material for a theory of consciousness. Chapter 3, therefore, is called:
"A computational theory of consciousness."
In the second place, he argues that computational systems can exhibit models
that represent states of affairs. A very complex system (or robot) might have
models that represent its surroundings in detail, but a very simple system can
also have such models, for instance in the case where a computer that keeps
inventory of an office building has a model of many of the objects inside the
building. Both kinds of system can also have models of themselves. Again, a complex system
might have a model that pictures the self as an autonomously acting agent (we
humans are of such complexity). But our inventory computer can also have a model
of itself, being part of the inventory of the building.
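McDermott's inventory example can be sketched in a few lines. Again this is my own illustration, with invented names, not code from the book:

```python
# A simple system whose world-model includes a model of itself:
# an inventory computer that lists the building's objects and,
# among those objects, an entry representing the computer itself.

class InventorySystem:
    def __init__(self):
        # Model of the building's contents.
        self.model = {
            "desk-101": {"location": "room 1"},
            "chair-17": {"location": "room 2"},
            # The self-model: the system figures in its own inventory.
            "inventory-computer": {"location": "room 1", "is_self": True},
        }

    def self_model(self):
        """Return the part of the world-model that represents the system itself."""
        return {k: v for k, v in self.model.items() if v.get("is_self")}

system = InventorySystem()
print(system.self_model())  # the system's (very minimal) model of itself
```

The self-model here is trivial, of course; the claim in the book is only that self-models come in degrees of complexity, from this kind of bookkeeping entry up to the rich self-model of a human agent.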
Much of what can be said about consciousness can be put in terms of these self-models.
Self-models are the "I" in the sentence: "I am experiencing pain."
In short, McDermott now argues that this notion of self-model can be explained in
terms of, and built with, the help of purely computational systems. In
that sense a computational system can exhibit a self-model that says: "I am
experiencing pain."
This is an
extremely important first step. A second step needs to be taken, though. Is
this pain experience a kind of "virtual" experience or a "real"
(conscious) experience? McDermott first shows in what sense it is a simulated
or virtual experience: he shows that it has to do with beliefs about
one's experiences, in casu one's self-model. So the conscious experience
of McDermott's pain in his knee consists of a model of himself in which a pain
in the knee figures, but also of a belief about that pain, namely that it is a
pain within his self-model.
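The two-layered structure just described, a state figuring in the self-model plus a belief about that state, can be sketched as follows. This is an illustrative toy of my own, not McDermott's formalism:

```python
# Sketch of the two ingredients of the account:
# (1) a self-model in which a pain state figures, and
# (2) a second-order belief *about* that state, i.e. a record
#     that the pain occurs within the self-model.

self_model = {"states": {"pain_in_knee": True}}

# The second-order layer: the system represents its own representation.
beliefs = {("self_model", "pain_in_knee"): "I am experiencing pain"}

def has_virtual_experience(state: str) -> bool:
    """A state counts as (virtually) experienced when it figures in the
    self-model AND the system holds a belief about that very state."""
    return (self_model["states"].get(state, False)
            and ("self_model", state) in beliefs)

print(has_virtual_experience("pain_in_knee"))  # -> True
print(has_virtual_experience("itch"))          # -> False
```

The philosophical work, of course, is done by the argument that this virtual experience just is real experience; the sketch only shows that the two-layer structure itself is computationally unproblematic.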
It is, I believe, relatively easy to accept that this is a necessary condition
for having experiences, but is it also sufficient? McDermott believes
this is sufficient (in principle at least). He argues that sufficiently complex
systems having beliefs about their own phenomenal experiences (i.e. a system
exhibiting virtual consciousness) are thereby really experiencing (i.e. exhibit
real consciousness). If the arguments he gives are correct, then he has shown that virtual and real
consciousness are the same and he has shown how a computational system can
exhibit consciousness. If human minds are such computational systems, then he
has given (at least partly) an explanation of consciousness. I do not know
whether the theory is viable in the end, but it is one of the best attempts I
have read up till now, and it at least deserves further research by scientists and
philosophers.
After this discussion of his theory of
consciousness, he discusses, in chapters four and five, many possible
objections to his theory and he works out several details concerning it.
The final chapter ends with some
consequences of the theory. What does this theory (or better: the truth
of this theory) mean for morality, for our worldview in general,
and for religion? Much of what is said there is not very shocking or new,
and I will leave it to the reader to decide on the merits.
However, it seems that the pen of the author began to live a life of its own in the
last ten or so pages of the book. Suddenly he says that it is indeed hard or
impossible to see a God or morality in such a view of the world, but still it
is reasonable to believe in God. This seems very strange to me. I'm not
arguing that one should not believe in a God, but, given what he wrote
before, it seems strange to me to call that belief reasonable.
But no book is perfect, I guess. This last point does not alter my opinion that
the book is very good and a must-read for anyone wanting to know something about a
state-of-the-art theory of the mind.
© 2002 Marcel Scheele
Marcel Scheele is a philosopher. He received his master's degree in the Philosophy
of Mind at Leiden University (Netherlands). His thesis was on the functionalist
theory of mind. Currently he is doing Ph.D. research at the Delft University of
Technology on the philosophy of technology. The research concerns the nature of
technical artifacts. It is especially concerned with the question of how users
bestow different functions on these objects and how this relates to 'the'
function of an artifact. The main areas of inquiry to this effect are the
notion of function, social ontology, collective intentionality, and meaning. He
is also still working in the philosophy of mind.