Date: Mon, 3 Feb 2003 12:14:59 +0000
Reply-To: "PSYCHE Discussion Forum (Biological/Psychological emphasis)"
<[log in to unmask]>
Sender: "PSYCHE Discussion Forum (Biological/Psychological emphasis)"
<[log in to unmask]>
From: Aaron Sloman <[log in to unmask]>
Subject: Re: Memories are made of this (and a virtual machine question)
Apologies for not responding sooner to various interesting
reactions to my earlier posting, available here:
In part I've been delayed because a hard drive broke the week
before last and had to be replaced with a new one. The linux
virtual machine is wonderful to work with, but even it cannot
prevent faults in physical machines. It took me ages to
reconstitute my working environment on account of having been too
lazy about a backup policy.
On 22 Jan 2003 20:57:32 Jeff Dalton <[log in to unmask]>, a very
experienced and expert software developer, raised some questions and
objections relating to what I had written about the relationship
between virtual machines and the physical machines on which they are
implemented.
I claimed that understanding this relationship can help biologists
(i.e. not just philosophers) understand the relationship between
animal minds and their brains.
> What is more, we can use these simple cases to refute incorrect
> theories that have confused thinking about the more complex
> virtual machines, e.g. theories which claim that every component
> of a virtual machine must correspond to a physical part of the
> implementing physical machine,
> I'm not quite sure what that means. That there isn't a part-to-part
> correspondence (but instead a correspondence of some other sort), or
> that some components of the VM are not implemented, or that they are
> implemented but their implementation somehow remains nonphysical?
Well, as you know the possible mappings between components of a
virtual machine and the physical components can vary enormously
depending on such things as
- whether there's a virtual memory system (which switches
fragments of the implementation of data-structures between
fast central memory and slower secondary memory, invisibly
to processes in the virtual machine),
- whether there's a garbage collector (which can reclaim space by
shuffling things round in virtual memory and consequently also
in physical memory),
- whether information in a module is stored explicitly or lazily
evaluated/computed on demand (an implementation difference that
may be undetectable by other modules),
- whether a distributed implementation is used (as in some kinds
of neural nets),
- whether adjacency in a virtual data-structure maps onto
physical adjacency or is implemented by 'pointers', or is
implemented by use of adjacent coordinates in some vector,
- whether an 'interpolating memory mechanism' is used (like some
neural nets) which can return values that are not explicitly
stored but computed by some sort of interpolation between
values that are stored.
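One of these points -- that adjacency in a virtual data-structure need not map onto physical adjacency -- can be sketched in a few lines of Python (my illustration, not from the original post). Logically adjacent links of a list are, at the implementation level, objects scattered wherever the allocator happened to put them:

```python
# Sketch: VM-level adjacency vs physical adjacency.
# The names and example are illustrative assumptions.

class Node:
    """One link in a singly linked list."""
    def __init__(self, value, next_node=None):
        self.value = value
        self.next = next_node

# Build a list whose elements are logically adjacent: 1 -> 2 -> 3
tail = Node(3)
middle = Node(2, tail)
head = Node(1, middle)

# At the virtual-machine level, 2 immediately follows 1 ...
assert head.next.value == 2

# ... but the objects' addresses (as reported by id()) are in general
# neither adjacent nor ordered the same way as the list:
addresses = [id(head), id(middle), id(tail)]
print(addresses)  # arbitrary, allocator-dependent numbers
```

The same list could equally be implemented in a contiguous vector; nothing in the VM-level behaviour would reveal the difference.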
Since evolution has got so much further than human engineers in
so many other ways, we should not be surprised if we find that
there is even greater variety and sophistication in the mappings
between virtual and physical machines in biological systems than
exists so far in man-made virtual machines.
[But we won't find them if biologists don't look for them -- a
task that requires specialist training, far beyond looking for
things like wiring diagrams, or correlations between externally
observable behaviours and brain events.]
My original comment was probably too brief, because it referred
to a variety of different sorts of false assumptions sometimes
made about the connection between virtual and physical entities:
(a) there *must* be a regular (fixed) correlation -- one
interpretation of the search for NCCs (neural correlates
of consciousness), though not the only one,
(b) that part-whole relationships *must* be preserved in the
implementation,
(c) that for every identifiable VM entity there *must* exist a
corresponding physical entity (which is weaker than the
assumption that the correspondence is fixed).
The point is that we already know that those assumptions are all
false in some cases where a virtual machine *as a whole* is
completely physically implemented.
E.g., as you well know, if a huge sparse array or infinite lazily
evaluated list is fully implemented in a physical mechanism that
does not imply that every component of the array or every link in
the list corresponds to some portion of the physical machine.
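The infinite lazily evaluated list can be sketched in Python (an illustrative assumption on my part; the original post names no language). The list as a whole is fully implemented, yet almost none of its links ever corresponds to any portion of the physical machine -- a link acquires a physical realisation only when demanded:

```python
# Sketch: a conceptually infinite lazily evaluated list.
# Only the links actually demanded are ever physically realised.

import itertools

def squares():
    """Conceptually infinite list: 0, 1, 4, 9, 16, ..."""
    for n in itertools.count():
        yield n * n

# Demand the first five links; the remaining infinity of links
# exists in the virtual machine but nowhere in physical memory.
first_five = list(itertools.islice(squares(), 5))
print(first_five)  # [0, 1, 4, 9, 16]
```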
In short: implementation is a relation between *whole* systems
(whole ontologies) not a piecemeal relation between parts.
Partial analogy: the claim that socio-economic systems are fully
implemented in the physical world does not imply that the rate of
inflation in the USA corresponds to some identifiable enduring bit
of the physical world that changes exactly as the rate of inflation
changes, or that confidence in the stock markets, which can be a
very powerful force in the modern world, has some clearly
identifiable physical correlate that changes whenever the level of
confidence changes.
> A simple example is the implementation, using sparse array techniques,
> of an array containing more components than there are physical particles
> in the universe. This can be done on your desktop computer.
> But the array does not contain that many components. As soon as
> you describe it in more implementational terms (as a hash table or
> whatever), that becomes clear.
Here you are mixing up two descriptions: the description of what
is in the array and the description of what is in its
implementation.
The array DOES contain as many components as the software
specification states. Its implementation does not.
The whole point of a sparse array is that as far as the operation
of the *virtual* machine is concerned it is correct to describe
it as containing for instance 1000000x1000000x1000000 locations
each of which holds a value (which can change over time), even
though the *physical* machine in which it is implemented has far
fewer memory locations than that.
The array in the virtual machine is essentially a function which
when given three integers each between 1 and 1000000 (or between
0 and 999999) returns a result and which can also be run in
reverse, i.e. it can be told to update the value associated with
any particular triple of integers in that bounded 3-D space.
That's what defines the number of locations in the VM array.
(Some languages make this relationship between arrays and functions
explicit by using the same syntax for both.)
How that virtual array is implemented is of no concern when
defining what the virtual array is and does.
The sparse implementation (where only values that differ from some
default are stored explicitly and indexed in a much smaller array in
a manner based on their coordinates in the larger array) works as
long as the vast majority of the array cells hold the same default
value. So only cells whose values differ from the default need to
have their values explicitly stored in the low level implementation.
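The sparse-array technique just described can be sketched as follows (a minimal illustration, assuming a dict-backed store; the class and names are mine, not from the post). The VM-level array has 10^6 x 10^6 x 10^6 = 10^18 locations -- vastly more than the physical memory holding it -- but only cells whose values differ from the default consume any physical storage:

```python
# Sketch of a sparse 3-D array: 10**18 virtual locations,
# physical storage only for non-default cells.

class SparseArray3D:
    def __init__(self, size=10**6, default=0):
        self.size = size
        self.default = default
        self._cells = {}  # physical store: non-default cells only

    def _check(self, i, j, k):
        for index in (i, j, k):
            if not 0 <= index < self.size:
                raise IndexError(index)

    def __getitem__(self, ijk):
        # The array run "forwards": triple of integers -> value.
        self._check(*ijk)
        return self._cells.get(ijk, self.default)

    def __setitem__(self, ijk, value):
        # The array run "in reverse": update the value at a triple.
        self._check(*ijk)
        if value == self.default:
            self._cells.pop(ijk, None)  # reclaim physical storage
        else:
            self._cells[ijk] = value

a = SparseArray3D()
a[123, 456, 789] = 42
print(a[123, 456, 789])  # 42
print(a[0, 0, 0])        # 0 -- default; no physical cell exists for it
print(len(a._cells))     # 1 physical entry for 10**18 virtual locations
```

Note how the class also makes concrete the earlier point that the virtual array is essentially a function that can be queried and updated: `__getitem__` and `__setitem__` are that function and its "reverse".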
This is different from an interpolating implementation which
stores a *representative sample* of the whole virtual array
and computes the rest on demand.
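An interpolating implementation, by contrast, might look something like this sketch (my illustration; a one-dimensional toy, with linear interpolation standing in for whatever a neural net would actually do). Only a representative sample of values is stored; the mechanism can return values that were never explicitly stored anywhere:

```python
# Sketch: an "interpolating memory" storing only sample points
# and computing intermediate values on demand.

def make_interpolating_memory(samples):
    """samples: sorted list of (x, value) pairs that ARE stored."""
    xs = [x for x, _ in samples]
    vs = [v for _, v in samples]

    def recall(x):
        if x <= xs[0]:
            return vs[0]
        if x >= xs[-1]:
            return vs[-1]
        # Find the surrounding stored samples and interpolate.
        for i in range(1, len(xs)):
            if x <= xs[i]:
                frac = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
                return vs[i - 1] + frac * (vs[i] - vs[i - 1])

    return recall

recall = make_interpolating_memory([(0, 0.0), (10, 100.0)])
print(recall(0))   # 0.0  -- explicitly stored
print(recall(5))   # 50.0 -- never stored, computed on demand
```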
It may well be the case that aspects of human memory work something
like these mechanisms -- as indicated by the work on memory by F.
Bartlett about 70 (?) years ago.
His theories implied that instead of all the 'remembered' details
being stored explicitly a method of computing details when
required was stored, which was subject to influences over time
that could cause the 'virtual details' to drift over time, as
revealed in Bartlett's experiments.
(This can't be true of ALL human memory: e.g. many people learn
arithmetical tables, algebraic formulae, poems, piano sonatas,
historical dates, and can reproduce them exactly, even decades
after they were first learnt.)
Some theories of visual consciousness have already moved in the
direction of virtual machines with properties something like
sparse arrays or lazily evaluated data-structures.
The popular, untutored, view is that your current huge array of
visual qualia as you survey a large and complex and rapidly
changing scene (trees waving in the breeze, waves pounding a
rocky shore) is full of detail at every location.
In contrast there are theorists who claim that this is an
illusion, and that instead the contents of individual locations
are 'computed' on demand as attention shifts, sometimes with
paradoxical consequences. (O'Regan? Dennett? and probably many
others.)
A more accurate view may be that there is a virtual visual buffer
which somehow presents a lot of meta-information about what
information is available at every 'location', which can drive
processes that rapidly produce details on demand.
[Note that the notion of 'every location' is an ill-defined
notion requiring more detailed analysis: it's part of the
"hallucination" mentioned below.]
(Arnold Trehub's book 'The Cognitive Brain' MIT Press, 1991 (out
of print, alas) presents an interesting neurally inspired variant
of this which appears not to have received much attention.)
Even that idea of the virtual visual buffer is too simplistic: there
could be multiple collections of such 'virtual arrays' on various
scales, with spatial resolution varying across them, some indexed by
retinal location, some by physical location, some by location on a
larger object, all simultaneously available to a collection of
different sorts of cognitive, affective and action-control
mechanisms whose relative importance switches rapidly as tasks
change.
If, in addition, this largely externally driven perceptual system
is combined with an internally directed self-monitoring and
self-evaluating mechanism (sometimes called reflection, or
meta-management) it is to be expected that that 'internal
perceptual system' will also have a mixture of ways of
representing information about what's going on in the externally
directed perceptual system and other sub-systems.
Some of the contents of the self-monitoring mechanisms may be more
or less directly driven by (internal) data, some of them only
available on demand, some inferred by interpolation and other
methods.
Most experiments on human consciousness depend on the ability of
humans to communicate what's going on within them: and will use the
combined effects of the above along with effects of remembering and
reporting mechanisms. (Experiments require this, but the systems
being reported on need not have fully reportable contents. Some
animals will have very limited reporting mechanisms.)
Thus what we intuitively think of as the contents of visual or other
sensory experience to which we have direct and infallible access may
be a complex virtual data-structure produced by the combined
- physical and virtual machines involved in perception of the
environment
- self-perception of internal perceptual processes
- memory mechanisms
- reporting mechanisms.
(To say nothing of the influences of wishful thinking,
philosophical fashion, and the pressures of scientific or
philosophical communities.)
Helmholtz claimed that human perception is controlled hallucination.
That's not necessarily a criticism: controlled hallucination may be
a brilliant solution to a very hard biological engineering problem.
It may also apply to the self-perception which occurs when people
think about their own mental processes.
> and theories that claim that
> virtual machine entities and events cannot have physical effects
> since they assume that only physical causes can have physical
> effects.
> But VMs don't show that non-physical causes can have physical effects.
> When a VM is physically implemented, the causation of the implemented
> VM entities is all physical. (So that assumption "that only physical
> causes can have physical effects" seems safe.)
The 'all physical' claim is a philosophical claim that
presupposes an answer to the hard question 'what is causation?'
(One of the hardest unsolved problems in philosophy.)
This is a topic of much philosophical debate. Jeff's view fits
with some of the standard philosophical views according to which
only physical mechanisms can have physical effects.
To make this work one either has to deny that non-physical things
(like anger, relative poverty, economic inflation) can have
effects (i.e. they are purely epiphenomenal) or adopt an
'identity' theory (e.g. the virtual machine entities ARE just the
physical entities that implement them, or some such thing).
Like many philosophers, I've argued against this elsewhere.
E.g. identity is a symmetric relation, which would imply that if
virtual entities are implemented in physical entities then
physical entities are implemented in virtual entities!
More importantly, the argument makes incorrect assumptions about
causality, as if it were a kind of stuff obeying some kind of
conservation law. If instead we regard all causal relations as
amounting to the truth of some complex set of counterfactual
conditionals, which answers some context-specific question,
then we can show how statements about physical causes and
statements about VM events as causes of the same physical event
can both be true without assuming virtual/physical identity, or
denying that VMs are fully implemented in physical systems.
A sketch of the argument is in this slide presentation (pdf and
ps), though it needs to be filled out in more detail:
also in this online tutorial, in the latter half:
The 'reductive' analysis of VM events and causal powers assumes
that physics has some well defined bottom level. Is that obvious?
Chemistry is implemented in physics but that does not prevent
chemists discussing chemical virtual machines as having causal
powers. Likewise genes, selection pressures, biological niches
are implemented in chemistry and physics: but that does not mean
that the study of biology is just the study of chemistry and
physics.
> For instance, putting a program into execution just changes the
> state of the hardware.
If that were the only thing that happened, how come software
designers and people trying to debug complex programs don't
"just" think about states of hardware? (In fact many of them
don't need to think about hardware at all: the hardware keeps
changing anyway, while the virtual machines remain the same
apart from getting faster, and maybe bigger.)
These engineers have learnt to think about complex interacting
non-physical entities in virtual machines which are *implemented*
in hardware (different hardware at different times) because
thinking about the virtual machines instead of the hardware has
led to great advances in our ability to design and build ever
more sophisticated and useful systems.
(How this happened is a long and interesting story connected with
the development of new languages, partly analogous to the
development of new or extended languages in the history of
physics, chemistry, biology, geology, cosmology, etc.)
The history of science and mathematics is full of examples of the
huge increase in understanding and power that comes from our
ability to understand different levels without having to be
committed to reductive identity theories.
> The "virtual machine", once compiled etc
> is then a physical machine.
By using the word "is" you are expressing the identity theory. It
can't simply be regarded as axiomatically true, in view of all
the objections to it, even though many philosophers (mostly
ignorant of software engineering) accept it.
Incidentally self-modifying *interpreted* programs are even more
resistant to this kind of reductive analysis than *compiled*
programs. Use of incremental compilers where the compiler is part
of the run time system is an intermediate case.
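A toy self-modifying interpreted program can make the point concrete (my own minimal sketch; the interpreter and instruction names are invented for illustration). The running program is a mutable data structure inside the VM, and one of its instructions rewrites another, so there is no fixed program text to which the running behaviour straightforwardly reduces:

```python
# Sketch: a self-modifying *interpreted* program. The program is a
# mutable list of instruction closures that the interpreter walks;
# an instruction may edit the program itself while it runs.

def run(program, env):
    """Tiny interpreter: each instruction maps (program, env, pc)
    to the next program counter, and may mutate either argument."""
    pc = 0
    while pc < len(program):
        pc = program[pc](program, env, pc)
    return env

def set_x(value):
    def instr(program, env, pc):
        env["x"] = value
        return pc + 1
    return instr

def rewrite_next(new_instr):
    def instr(program, env, pc):
        program[pc + 1] = new_instr  # self-modification
        return pc + 1
    return instr

env = run([set_x(1), rewrite_next(set_x(99)), set_x(2)], {})
print(env["x"])  # 99 -- the original third instruction never ran
```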
> I suspect that the deepest theory we'll ever have about qualia will be a
> theory that tells us how to build robots with virtual machines so like
> those of humans that the robots go through the very same virtual machine
> states and processes that led philosophers to invent the notion of
> qualia, feels, what it is like to experience something,
I am inviting consideration of a new paradigm of philosophical
explanation, where philosophy advances through deep advances in
science and engineering, leading to the replication of
philosophers' thought processes.
I.e. sometimes you can solve a philosophical problem by moving
outside the realm of philosophy and showing how the discovery of the
problem is a biological event which can be explained biologically.
(Not all philosophical problems are solvable that way: though
maybe more than we think.
E.g. is the notion of 'cause' best thought of as a tool developed
by biological organisms for dealing with a complex world. I don't
know. How many organisms have some grasp of causation which
they use in dealing with their environment? How many different
such biological implementations of a notion of causation are there?)
> That begs the question of whether it was virtual machine states
> that led philosophers to invent the notion.
I was stating a conjecture. Conjectures can be rebutted or supported
if we learn something new that contradicts them or supports them,
though scientific theories are never conclusively proved or refuted.
I am trying to get people to think about possibilities most people
don't consider, so that they can be investigated fully.
VIRTUAL MACHINES AND PROGRAMS
In his response to George McKee Jeff wrote
> Aaron is using VM in a general way, roughly how I would use "program",
That's a common gloss but it is seriously misleading. A program is
strictly a piece of text, or some other static structure (e.g. a
compiled program may be a collection of bit patterns in a computer
memory, or in a stored file). A program could also be a
data-structure operated on by an interpreter, but is at best a part
of the larger virtual machine that includes the interpreter and the
rest of the run-time system.
A virtual machine, unlike a program, is usually a complex collection
of interacting entities with changing states and processes in which
there are many causal relations not found in a static program.
Of course virtual machines are often produced by running a program.
But there are other kinds, e.g. the virtual machine in a trained
neural net that's controlling some machinery may not be best viewed
as a running program especially if the net is implemented in
hardware. The virtual machine is not just that hardware net but is
something produced by training.
I don't know if any of that helps.
Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs/ )
School of Computer Science, The University of Birmingham, B15 2TT, UK