Date: Mon, 15 Dec 1997 00:54:59 +0000
Reply-To: "PSYCHE Discussion Forum (Biological/Psychological emphasis)"
<[log in to unmask]>
Sender: "PSYCHE Discussion Forum (Biological/Psychological emphasis)"
<[log in to unmask]>
From: Aaron Sloman <[log in to unmask]>
Subject: Re: More on consistency and consciousness: reasons for view B
Date: Wed, 10 Dec 1997 15:57:55 +0000
From: Peter Cariani <[log in to unmask]>
Peter made some useful critical comments on my list of arguments for
view B --- the view that sometimes the contents of visual and other
types of consciousness are consistent and sometimes they are not. I.e.
arguments for the view that internal consistency is not an *absolute*
requirement for the contents of consciousness, but may often be a
consequence of mechanisms serving other requirements.
> > I said there were three types of reasons for preferring B:
> > 1. No known type of mechanism could implement the general consistency
> > constraint.
> > 2. Empirically there are counter examples.
> > 3. There are more biologically plausible and biologically useful,
> > mechanisms whose side-effects would include achieving consistency
> > most of the time, without totally ruling out inconsistency.
> While I generally agree with View B (as it seems most of us here do),
> that an internally consistent experiential world is usually,
> but not always constructed, ....
Interesting. I had formed the impression that it was a minority view.
Otherwise I'd not have gone to such lengths to defend it!
> ...I think some
> of the proposed arguments against View A have problems.
I think you interpret View A more narrowly than the view I challenged,
namely that contents of consciousness (including at least conscious
beliefs and percepts) CANNOT be inconsistent because of some constraint
imposed by our brains.
If I misunderstood the claim, then my arguments are irrelevant. But
your arguments don't seem to be saying that I've got the claim wrong.
> > 1. No known mechanisms could do it.
> > ...in the most general
> > case it is undecidable ... and even
> > decidable cases ... are combinatorially
> > explosive, and therefore intractable in general.
> I've said this before, but Godel's incompleteness theorem
> (and the related Halting problem) only applies to notational
> systems that are unbounded.
The main fact I was (perhaps carelessly) referring to is that we are
able to think about the natural number system which is an unbounded
system: we can very easily formulate (and even believe) propositions
that we understand clearly and which, for all we know, may not be
decidable, e.g. "There is no largest pair of twin primes".
Contrast: "There is no largest prime number", which is also easy to
understand but was proved long ago by the ancients. Someone who has
never seen the proof might believe correctly that prime numbers must get
scarcer as you go along the number sequence and believe incorrectly that
eventually there comes a point beyond which they don't exist.
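The contrast can be made concrete with a short Python sketch (purely illustrative): testing any single number for primality, or enumerating twin primes up to any finite bound, is mechanical, but no finite search settles whether the supply of twin pairs ever runs out.

```python
def is_prime(n):
    """Decidable by brute force: trial division up to sqrt(n)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def twin_primes_below(bound):
    """Enumerate twin prime pairs (p, p+2) with p + 2 < bound.
    Any finite bound can be searched; whether the pairs ever
    run out is exactly the open question."""
    return [(p, p + 2) for p in range(2, bound - 2)
            if is_prime(p) and is_prime(p + 2)]

print(twin_primes_below(50))
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43)]
```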
Although my comments are trite, they have implications for the power of
the mechanisms that would be required for a brain to be able to rule out
all inconsistent conscious belief contents of which humans are capable.
I don't think your remark about bounded systems has any bearing on this.
It's important to distinguish the boundedness of the brain (on which I
agree with you, obviously) from the boundedness of notational systems
used by brains. You and I use unbounded notational systems of various
kinds including natural and formal languages and most obviously the
everyday notation for numbers.
[[This is related to Chomsky's admittedly controversial distinction
between competence and performance (e.g. in Aspects of the Theory of
Syntax, 1965). I think Chomsky got this right and most of his
critics simply failed to understand what he was saying.
This is also loosely related to the fact that a computer can have the
(generative) capacity to construct symbols which just happen to be too
large to fit into its memory. By giving such a machine more memory,
without extending its basic abilities, you change its upper bound. The
*virtual machine* implemented in a computer may have bounds which are
larger than the limits actually supported by the implementation. In some
cases the implementation limits keep changing, e.g. when a process is
sharing physical memory with other processes whose requirements change
over time. Some virtual machines support indefinite precision rationals
and "bigintegers" which have no upper bound (e.g. Common Lisp).
However, they are typically implemented in systems which do have size
limits. E.g. most CPUs, unlike Turing machines, have an upper limit to
the amount of memory they can address: that follows from the requirement
for direct "random" access on the basis of *explicit* addresses held in
address registers of fixed size. Extending the addressing power of such
a machine requires not only extra memory but additional general
mechanisms, usually involving a mixture of hardware and software, which
would change the nature of the machine.
I have no idea whether brains use fixed size addressing mechanisms at
some important level. Whether they do or not, at another level we, like
the Common Lisp virtual machine, have the *potential* to construct
arbitrarily large numerals, e.g. in our case with the aid of external
memory. I don't know how many other animal brains have this feature, nor
how exactly it evolved. I have some speculations, but, that's a topic
for another occasion.]]
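Python behaves like the Common Lisp case just described: the virtual machine's integers are unbounded even though the implementation is finite. A tiny illustration:

```python
# Python's integers, like Common Lisp's bignums, are unbounded at
# the virtual-machine level; only the implementation's memory limits
# them, not the notation or any fixed register width.
n = 2 ** 64                # already beyond a 64-bit address register
cube = n ** 3
print(cube.bit_length())   # 193: no fixed word size in sight
# Doubling any numeral, however large, provably yields an even number,
# whether or not the result could ever be written out explicitly.
assert (cube + cube) % 2 == 0
```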
Anyhow, even if it is true that there is a biggest explicit numeral that
can fit into my brain (or any human brain, or even into the universe)
that does not imply any upper limit to the size of numeral about which I
can have a belief. E.g. I believe (and I expect you believe) that the
result of adding that numeral to itself is an even number which cannot
be explicitly encoded in my brain as a bit pattern.
For all these reasons, the boundedness of the brain proves nothing about
decidability of belief contents. Any adequate theory of how the brain
works must explain our undoubted ability to think about infinite sets of
various kinds -- discrete, continuous, linear, multi-dimensional,
tree-structured, network-structured, static, changing, etc.
Perceptual contents may be more limited! Why that's so is an interesting
topic for another occasion.
(We could easily get into a discussion of logical and semantic paradoxes
here, but I suspect the psyche-b list members would not welcome that.
A discussion of the biological advantages of having the abilities that
make possible thinking about infinite sets, and the developmental
processes that produce them in young children could be suitable topics
for another thread.)
> If one has bounded string lengths, then the set of all operations on
> strings can be surveyed, and consistency can be assessed one way or the
> other.
That is perfectly correct, but if you think you can use this to prove
that arithmetic as you and I understand it is decidable, then I look
forward to seeing the proof, and, if possible, an answer to the question
whether there's a largest pair of twin primes!
(Of course there's a largest pair of twin primes that can be represented
in my brain. But that's irrelevant.)
I hope it's clear that there's an ambiguity in the quoted antecedent
("If one has bounded string lengths"): for it could refer to the
syntactic properties of an engine,
i.e. it has a maximum memory capacity, or it could refer to the semantic
properties: i.e. there's a largest string it can refer to.
The latter does not follow from the former if you have a sufficiently
powerful language, including quantifiers.
Even with my poor, tired, finite brain I have no difficulty thinking
about the set of all even numbers, and noticing that only "half" of them
are divisible by four.
However, I agree with your implicit claim that many people misuse
Godel's theorem in philosophical discussions about minds and machines:
I've argued against people like Lucas and Penrose elsewhere, for
instance, including a long critical review of Penrose's Emperor's New
Mind in the AI Journal, 1992 Vol 56 (pp 355-396). I don't understand all
the issues I discuss and I may have got some details wrong: a correction
to part of my critique will be appearing soon in AIJ!
> It's true that the number of alternatives increases
> combinatorically, but this is a matter of computational complexity
> rather than some inherent, absolute computability barrier.
> Ditto for the Halting problem if one has a finite length tape.
That's correct. The complexity/tractability argument is different from
the computability/decidability one.
Apologies if I did not make that clear. I was presupposing that the
difference was understood.
My claim was simply that *even* for decidable/computable cases
tractability can still raise its ugly head, and often does. There are
far cleverer mathematicians and computer scientists than I am who have
been trying with only very limited success to reduce the combinatorial
complexity of some quite important practical problems, e.g. deciding
whether a given number is prime.
Consider these numbers: 7 59 557 3343 10007 1000033 7777801 10000019
Someone who had made a mistake could easily believe that one or more of
these numbers was divisible by 131. To claim that human brains have some
sort of inconsistency detector which would always discover the
inconsistency in such a belief and prevent it consciously being held
would be very rash.
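A sketch of how mechanical (and how unavoidable) the check is, using plain trial division and nothing cleverer: the inconsistency in such a mistaken belief is only discoverable by actually doing the arithmetic.

```python
def is_prime(n):
    """Plain trial division -- decidable, but the work must be done."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

numbers = [7, 59, 557, 3343, 10007, 1000033, 7777801, 10000019]

# No built-in "inconsistency detector" reveals whether any of these
# is divisible by 131; only performing the divisions settles it.
divisible_by_131 = [n for n in numbers if n % 131 == 0]
print(divisible_by_131)        # []
print(is_prime(10007))         # True
```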
Actually I am coming round to the view that Baars really intended only
to rule out a very *simple* class of inconsistencies, e.g. where the
contradiction is made explicit because there are two percepts, two
sentences or whatever, whose syntactic form makes the inconsistency
obvious. I suspect he simply did not notice that what he wrote, if taken
literally, referred to a much larger class of inconsistencies which no
brain mechanisms could exclude.
That's an understandable mistake in this multidisciplinary morass.
The task of specifying precisely what claim he was trying to make
remains open. I don't think it's a simple matter, since as I
showed previously it's easy to ask questions which sound empirical but
actually are not because a positive answer is incoherent. (E.g. "Can
anyone see a cube and a sphere occupying the same region of space at the
same time?" The question is incoherent, and therefore not empirical.
Likewise Stan Klein's question whether it is possible to see my two
intersecting pencils at the same time in the same place without either
being "transparent" or "diaphanous".)
I suspect that there's no implementation-independent way of specifying
what sorts of inconsistencies the brain excludes. To see what I mean
consider a particular implementation which physically prevents certain
states, e.g. a bistable network allowing only group G1 of neurons all to
be active or group G2 of neurons, but not both groups. Then IF those
groups are used by the brain to represent particular semantic contents,
then those contents cannot be represented simultaneously. However, that
does not rule out group G1 and another group G3 being simultaneously
active even if that represents a semantic inconsistency.
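A toy simulation of this point (group names, weights and update rule are purely illustrative assumptions, not claims about real neurons): G1 and G2 inhibit each other, so only one ends up active; G3 receives no inhibition, so it can remain active alongside either group, even if the combination happens to encode a semantic inconsistency.

```python
# Toy bistable network: G1 and G2 are wired into mutual inhibition;
# G3 is not, so no "compiled" incompatibility excludes it.
def settle(g1, g2, g3, steps=20, inhibition=0.6):
    for _ in range(steps):
        g1, g2 = (max(0.0, g1 - inhibition * g2),
                  max(0.0, g2 - inhibition * g1))
        # g3 is outside the inhibitory circuit: it just persists
    return g1, g2, g3

g1, g2, g3 = settle(1.0, 0.8, 1.0)
print(g1 > 0 and g2 == 0.0)   # True: the pair is winner-takes-all
print(g3 == 1.0)              # True: G3 active regardless
```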
In other words, certain mechanisms (e.g. low level sensory mechanisms)
may compile some semantic inconsistencies into physical
incompatibilities. Then THOSE semantic inconsistencies will be ruled
out, but not others, which have not (yet) been compiled into inhibitory
links.
Anyhow, until the precise type of inconsistency allegedly ruled out has
been specified, apparently empirical questions remain too unclear to be
capable of being settled by evidence.
Perhaps someone with more patience and tact than I have can finish the
task of clarification.
From now on the discussion becomes more obscure. I am not sure exactly
what Peter is arguing for or against, especially as we both reject
View A. So I don't know if anyone will find it worth reading on.
Here goes anyway.
> In general, the kinds of consistencies (or inconsistencies) that
> we observe regarding our own perceptions and concepts of the
> real-world do not involve huge numbers of properties and ...
NB: just because I say that issues about decidability or tractability
are relevant to *some* contents of consciousness (e.g. beliefs about
number theory or perception of complex Penrose/Escher pictures) it does
not follow that I am claiming that they are relevant to *all* such
contents.
I was arguing against a theory which as actually expressed was very
broad and covered many cases, without qualification. My
counter-arguments dealt differently with different cases, though I
probably did not make this clear enough.
> ...(impossible figures are not very complex -- if they
> were it would take us much longer to recognize their global ...)
Why do you say they are "not very complex" ??? As I suggested in a
previous message, they can be made as complex as desired. With patience,
you can create a Penrose triangle, square, pentagon, hexagon, etc...
Detecting the inconsistency gets harder and harder.
So I don't know what point you are making. I've always allowed that we
can detect *some* inconsistencies. I even allowed that where small
inference chains were sufficient, layered neural nets could do the
propagation required to find the inconsistency quickly.
My argument was only against a view that the brain can prevent *all*
inconsistencies which seems to imply the ability to detect all of them.
A separate argument is that even when inconsistencies are detected they
are not always eliminated (e.g. by suppression of one corner of the
Penrose triangle, or interpretation of the edges as curved.) That was
part of my argument 2 (the empirical argument) against view A. (Baars
notes the phenomenon, but for some reason doesn't regard it as a counter
example, which is why I now think he is using "inconsistent" in a very
restricted sense.)
> This again is a limited argument for view B, but it is at the
> same time also another argument against the relevance of computability.
No. It's irrelevant to the relevance of computability!
Just because *some* examples don't involve computability/decidability
issues, it doesn't follow that *none* do!
I was writing in a context in which the discussion (and the words in
Baars's books and papers) had already broadened the context beyond visual
perception.
Until someone convinces me that all possible conscious human beliefs
about numbers are decidable, I'll stick by my arguments against view A
in its unrestricted form. Even then I'll need convincing that the
decision task is tractable given available mechanisms in the brain.
> > Even if the brain does not use propositional or predicate logic it seems
> > (in the case of humans) to have combinatorial capabilities both as
> > regards the variety of 2-D images that can be "parsed" and the variety
> > of 3-D percepts that can be formed. Motion merely adds to the
> > combinatorial complexity.
> There are analog mechanisms that can handle the
> simultaneous satisfaction of huge numbers of constraints.
Agreed. Using soap bubbles stretched over a wire frame to find the
minimum stress shape for a roof over an irregular building is an
example.
Another nice example avoiding combinatorial search is the use of a
network made of bits of string to find the shortest route from A to B in
a network of roads: just build the network in the obvious way, then pull
the knots representing A and B apart as far as possible. The tautest
bits of string will then determine the shortest route. That's a nice
fast parallel analog computation that may be quite difficult to
implement using discrete symbol manipulation (though I expect it's
possible using discrete token-passing mechanisms in a concurrent
system).
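One discrete counterpart of the taut-string computation (offered as an assumption about what a token-passing implementation would amount to, not as the only possibility) is uniform relaxation of distances, i.e. Dijkstra's algorithm:

```python
import heapq

def shortest_route(roads, start, goal):
    """Dijkstra's algorithm: a discrete analogue of pulling the
    string network taut. The 'tight' edges are exactly those
    lying on a shortest path."""
    graph = {}
    for a, b, length in roads:
        graph.setdefault(a, []).append((b, length))
        graph.setdefault(b, []).append((a, length))
    best = {start: 0}
    frontier = [(0, start)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if node == goal:
            return d
        if d > best.get(node, float("inf")):
            continue                     # stale queue entry
        for nbr, length in graph.get(node, []):
            nd = d + length
            if nd < best.get(nbr, float("inf")):
                best[nbr] = nd
                heapq.heappush(frontier, (nd, nbr))
    return None                          # goal unreachable

roads = [("A", "C", 2), ("C", "B", 2), ("A", "D", 1), ("D", "B", 5)]
print(shortest_route(roads, "A", "B"))   # 4, via C
```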
However, from the fact that there are lots and lots of cases where
special purpose devices can avoid combinatorial explosions you cannot
conclude that there's anything wrong with my argument. What you need to
show is that
(a) ALL decidable problems can be solved that way without explicit
combinatorial searching (perhaps using new kinds of quantum gravity
mechanisms?), or
(b) All the possible contents of consciousness (percepts, beliefs,
intentions, plans...?) are constrained to lie in the class for which
such analog mechanisms are available.
Either (a) or (b) would be a very interesting claim, and I would like to
see the arguments. Till then I remain sceptical.
> Any large scale system that can come to
> equilibrium is performing something like this. The problem is that when
> we deal with information, we first think of encoding it symbolically
> and computing on symbol strings,
"we first think"? Who exactly?
I for one have spent many years thinking and writing about other
kinds of representation and mechanism.
> ...but in general,
> this is not how biological evolution tends to solve these problems --
> things are much more like analog relaxation processes.
"...in general..." "...tends to..." ???
I expect there are *specific* classes of problems for which this is
*always* done. (In fact my previous discussion of winner-takes-all
networks was meant to be an example).
But that still leaves open the possibility that another type of engine
has also evolved in human brains which, for certain purposes (e.g.
planning), uses discrete, inherently sequential, mechanisms.
E.g. unless I've missed something, not all plans can be constructed from
scratch via continuous deformations of some initial physical
configuration. E.g. plans for proving mathematical or logical theorems
don't inhabit a continuous space. Likewise (I think) plans for building
houses.
I conjecture that some of the problems that require discrete, largely
sequential, resource-limited mechanisms depend on the use of memory that
can answer questions like: "if the situation part way through is like
this then what will happen if I act like that?". I don't see how an
associative memory that can answer questions like that can avoid working
in discrete question-answer steps (even if it is implemented using
highly parallel continuous low level mechanisms).
Perhaps I am ignorant of some important types of problem solving
mechanisms or associative memory mechanisms.
Maybe you know how to design a house-building planner that gets round
such constraints: I've been thinking about such issues ever since I
attacked logicist AI at a conference in 1971, but I still think that
different sorts of mechanisms are needed for different sorts of
problems. Some of them are approximately like AI symbol manipulators,
while others use different mechanisms.
If that's what you are saying, then we are in complete agreement: and we
need to find out how the different sorts of (virtual) machines are
implemented in brains.
> Motion is only a problem if you are trying to encode things like images
> into pixel arrays.
No. I was not thinking of pixel arrays.
I was making the trivial point that sequences of complex structures at
any level of description typically have more complexity than individual
structures at that level. This is relevant both to planning processes
and perception of moving objects, e.g. assembly of a machine. (Did you
ever play with Meccano? It's one thing to see a structure built from a
kit. It's another to find a sequence of operations which will produce
it.)
Your comments are concerned only with the *disambiguation* sometimes
facilitated by motion, which I don't dispute, though it's not easy to
design mechanisms with these properties:
> .... if our perceptual systems are set up
> to register ongoing and stable patterns of spatial cross-correlations
> (i.e. sets of spatial relations), then motion actually helps simplify
> the visual world, by segmenting it into stable "objects" (whose
> internal correlations are stable). Relations between moving objects
> are in constant flux, but relations within objects are constant.
Yes, and I hope you are trying to implement these ideas in a working
system. The results could be interesting.
However, detecting constant relations between objects of various sorts
moving with various kinds of motion (rotation, translation, flexing,
etc) is easier said than done.
> When you have invariances like this, (the formation of stable objects),
> then a huge reduction in the complexity of the representation is
> achieved (data reduction).
Yes. But that's just a special case of the general point I made in a
previous message about the brain's ability to use schemata (Kant and
Hebb; I mistakenly wrote "Bartlett").
Even though complexity gets reduced by the use of subsumptive schemata
(abstraction), the combinatoric issues remain. The problems are reduced
but not eliminated. This does not contradict anything I wrote.
> > ...Finding a
> > non-exponentially explosive (in space or time) way to check consistency
> > of an arbitrary propositional expression could bring fame and fortune
> > and answer one of the open questions in the theory of computation.
> I actually believe that there might be ways to do this using analog
> electrical circuits that either settle down into a stable state or
> alternatively, oscillate, blow up or go chaotic.
I have an open mind. If you can do it for propositional calculus I'll be
happy to applaud. Maybe you'll get a Nobel prize?
Of course one can do it in parallel trivially by building 2^N networks
for dealing with a formula containing N propositions, and trying each
combination of values in parallel.
This produces a space explosion instead of a time explosion.
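For concreteness, a sketch of the sequential twin of that construction: consistency (satisfiability) checked by exhaustively trying all 2^N truth-value combinations, which trades the space explosion back for a time explosion.

```python
from itertools import product

def consistent(formula, n_vars):
    """Consistency (satisfiability) by exhaustive search: try all
    2**n_vars truth-value combinations. This is the time-explosion
    twin of the 2**N-networks-in-parallel construction."""
    return any(formula(*values)
               for values in product([False, True], repeat=n_vars))

# (p or q) and (not p) and (not q) -- jointly inconsistent
print(consistent(lambda p, q: (p or q) and not p and not q, 2))  # False
# (p or q) and (not p) -- consistent (take q = True)
print(consistent(lambda p, q: (p or q) and not p, 2))            # True
```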
I look forward to hearing the results of experiments with your proposed
circuits for detecting consistency.
> But if we are simply discussing logical
> (syntactic) coherence, as in the consistency of a finite formal system,
> then there are no external semantics, so this isn't a problem.
Having a semantics doesn't help. I can formulate N propositions about
things in the world (houses, people, cities, fields, rivers, etc.) and
combine them in various different ways to make statements whose
inconsistency is not obvious, and requires combinatorial checking.
The confusion of logic with syntax is a deep mistake fuelled by the
development of syntactic mechanisms for addressing logical problems. The
mistake is widely encouraged by teachers who introduce logic solely via
syntactic rules (e.g. natural deduction).
It would likewise be a mistake to claim that all the mathematics used by
physicists is simply concerned with syntax because much of the power of
mathematics arises out of new forms of syntax and associated algorithms.
But that's a topic for another day.
> There exist good strategies for designing computer
> programs that don't loop.
But only in special cases. E.g. where loops are known at compile time to
be bounded they can be "unwound" at compile time into non-looping
conditional constructs. In other cases it is possible to prove by
induction on the forms of inputs that an algorithm always terminates.
But not in all cases.
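The compile-time-bounded case can be shown in a few lines of Python (a toy illustration of unwinding, not a real compiler transformation):

```python
# A loop whose bound is fixed in the source can be "unwound" into
# straight-line code whose termination is trivially guaranteed.
def sum_first_four(xs):
    total = 0
    for i in range(4):        # bound known before the program runs
        total += xs[i]
    return total

def sum_first_four_unrolled(xs):
    # the same computation with the loop compiled away
    return xs[0] + xs[1] + xs[2] + xs[3]

print(sum_first_four([1, 2, 3, 4, 5]))           # 10
print(sum_first_four_unrolled([1, 2, 3, 4, 5]))  # 10
```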
> It might be hard to prove the consistency of
> arbitrary computer programs, but it might not be so difficult to do
> so if there are strong programming structures imposed on things (this
> was part of the thrust of structured programming).
As long as the main questions of complexity theory remain open, we
should keep an open mind. Certainly the task of identifying important
*classes* of sub-problems with reduced complexity is worth while. What
it has to do with how brains work remains to be seen. Reality is full of
special cases.
If you are saying that there exist classes of problems for which brains
have developed fast complexity-defeating strategies, then I agree. Human
vision is a remarkable example.
This is not relevant to my arguments against an *unqualified* general
claim about a consistency constraint implemented in our brains.
> Again, the big
> problem is semantic consistency, not syntactic consistency.
As indicated above: that distinction is not relevant.
Consistency is an inherently semantic notion (as someone else has
already remarked). Some semantic notions can be compiled into or mapped
onto syntactic ones. That could also be true of any special class of
complexity-reducing semantic relations. (I.e. what AI researchers have
often called "heuristics".)
> > Some people may think that parallelism in neural nets can defeat the
> > combinatorics, but although I have not thought it through, I suspect
> > there is probably a theorem about limitations of perceptron-like neural
> > nets on this sort of task, analogous to the Minsky/Papert results
> > regarding parity and connectedness: such essentially global properties
> > require serial mechanisms (or enough perceptron layers to simulate
> > serial processing).
> I'm a little surprised that these kinds of arguments are still being
> trotted out, even today.
Mathematical truth has no history.
> Single-layer perceptrons are impoverished devices, even as neural
> networks go, so it was (and is still) incorrect to
> indiscriminately apply those
> arguments to all neural networks.
I suspect you didn't read what I wrote even though you quoted it. Look
at the parenthetical bit at the end. Neither my comment, nor, as far as
I remember, the Minsky & Papert book, was restricted to SINGLE layer
perceptrons.
> Nobody today thinks the brain is
> a big single layer perceptron. I'm not sure anyone ever did, literally.
I cannot understand what you are getting at since I was not talking
about single layer perceptrons.
> The second general problem with these arguments
I suspect you are thinking of arguments you've heard from other people,
and misidentified with mine. (Likewise, people often mis-quote Minsky
and Papert.)
> ...is that,
> biologically- or psychologically-speaking,
> how important is the computation of parity?
Any counter-example, whether biologically or psychological important or
not, can be used to refute an unqualified generalisation.
Had Baars claimed that inconsistency is ruled out only in *specific*,
biologically relevant, contexts the argument would have been different.
> it's obvious that we don't have the ability to pre-attentively
> discriminate between 55 and 56 objects ... It's a very
> artificial and contrived argument.
I think you are rehearsing points analogous to my second type of
argument for view B, namely: empirically there are counter examples.
View A: Brains can prevent any sort of inconsistent contents of
consciousness.
Sloman argument 1: Mechanisms for doing this in general don't exist
and even when they do there are intractable sub-classes.
Sloman argument 2: But empirically we can't do it.
I don't see the relevance of your comment, given that I had already
acknowledged the empirical limitations.
> ...I think what we want is
> some kind of broadcast distribution of information that is then
> operated on by far flung sets of neural assemblies that emit their
> own patterns upon activation, with these patterns reinforcing or
> mutually inhibiting each other, with their interactions activating
> still other assemblies.
I would not dispute that *sometimes* space (parallelism) can be traded
for time. I was merely objecting to a general claim apparently made
without any consideration of what sorts of mechanisms might have the
properties required, as if it were a purely empirical issue to be
settled by looking for evidence of what we can and cannot experience.
Like you, I suggested that a subset of cases can be handled by
multi-stable networks using mutual excitation and inhibition
implementing winner-takes-all strategies. (The idea is at least 20 years
old, probably a lot older.)
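A minimal winner-takes-all sketch (the parameters are arbitrary illustrations, not claims about neural values): units inhibit one another in proportion to their rivals' summed activity until only the strongest survives.

```python
def winner_takes_all(activity, inhibition=0.3, steps=50):
    """Each unit is inhibited in proportion to the summed activity of
    its rivals; iterating drives all but the strongest unit to zero."""
    a = list(activity)
    for _ in range(steps):
        total = sum(a)
        a = [max(0.0, x - inhibition * (total - x)) for x in a]
    return a

result = winner_takes_all([0.9, 0.7, 0.3])
print([x > 0.0 for x in result])   # [True, False, False]
```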
What's needed now is an analysis of various types of mechanisms required
for different classes of biologically relevant tasks, and architectures
in which they can be combined effectively.
> .... like those line drawings of scenes in children's books ...
> ...It's not that we go looking at
> each object one by one and deciding whether it fits -- the mismatch
> pops out.
Sometimes. Not always.
> > Vision has to work fast, at least if you are a bird, squirrel, chimp,
> > tennis player, car driver, etc. .....
> It could be that sensory systems do a rough analysis early on and
> elaborate on it as more information comes in.
This is the theory of perception I've always supported: with different
levels of analysis proceeding in parallel with mutual excitation and
inhibition between levels implementing a combination of top-down and
bottom-up processing, though not only using numerical values.
But in itself this design does not deal with the sorts of inference
chains required to detect inconsistency in Penrose/Escher figures,
especially those where multi-step high level inferences are needed.
> ...If you take a random
> sequence of clicks 5-10 seconds long ... and repeat the sequence
> ....and finally one
> gets the pattern. This can take 30-60 seconds
And longer if the initial sequence of clicks is longer. So?
Aaron Sloman, ( http://www.cs.bham.ac.uk/~axs/ )
School of Computer Science, The University of Birmingham, B15 2TT, UK
EMAIL [log in to unmask]
Phone: +44-121-414-4775 (Sec 3711) Fax: +44-121-414-4281