(Still disorganised -- to be improved later).
(DRAFT: Liable to change)
A partial index of discussion notes is in
Also available here:
http://www.scifuture.org/metamorphogenesis-how-a-planet-can-produce-minds-mathematics-and-music-aaron-sloman/ 27 Nov 2015
Now also available here
It includes a summary of the Chemoton Theory of Tibor Ganti.
His earlier book, which I have also not read yet,
seems to be very relevant too:
The Origins of Evolutionary Innovations: A Theory of Transformative Change in Living Systems
OUP Oxford (2011) (Oxford Biology)
An interview with Wagner in 2012 is online at https://www.youtube.com/watch?v=wyQgCMZdv6E
For more on construction kits see this (draft) paper:
Please send me additional items for this list.
NOTE: an online PDF version of Hayek's The Sensory Order is available: Hayek (1952).
NOTE: Jack Birner has recently written a draft paper that includes more on
Hayek, Popper, and theory of mind, available on Academia:
How Artificial is Intelligence in AI? Arguments for a Non-Discriminatory Turing test. (2014)
Added 26 Aug 2014
The idea of a system with generative power was previously well understood in mathematics and computer science: e.g. a Turing machine has generative power, and a recursive or iterative computer program can give a Turing machine or conventional computer infinite competence in Chomsky's sense, though with performance limitations, in exactly the same way as he claimed human minds have infinite competence but finite performance -- mainly because of physical size limits. It was also known much earlier that a finite rule or set of axioms can have infinitely many consequences, a point discussed by Kant in his Critique of Pure Reason (1781).
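The competence/performance distinction can be illustrated with a minimal sketch (my own toy example, not from any of the works cited here): a single finite rule generates unboundedly many distinct consequences, while any actual run is cut off by a resource limit standing in for physical constraints.

```python
def generate(limit):
    """Repeatedly apply one finite rule, yielding a new consequence each time.

    The rule "append 'ab'" has unbounded generative power (infinite
    competence): for any n it yields n distinct strings. The `limit`
    parameter is an arbitrary stand-in for finite performance --
    physical bounds such as memory, time, or brain size.
    """
    s = ""
    for _ in range(limit):
        s += "ab"      # the single finite rule
        yield s        # each application produces a new, longer consequence

# A bounded run exercises only a finite fragment of the competence:
print(list(generate(3)))
```

Raising the limit never exhausts the rule's consequences; only the chosen bound, not the rule itself, is finite.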
One of the claims of the M-M project is that natural selection is also a mechanism with infinite competence and finite performance limitations. In part that is obvious insofar as natural selection can produce human brains. But long before that happened the mechanisms driving increases in physical complexity and increases in information processing powers had the same sort of "infinite competence", which could more modestly be described as "potentially unbounded competence".
F. H. C. Crick, 1954/2015
The structure of the hereditary material, in
Nobel Prizewinners Who Changed Our World
Scientific American, Topix Media Lab, New York USA 1954/2015 pp. 6--15
More generally, since reading Dawkins' The Selfish Gene soon after it was published I've learnt much from his writings, though I don't think he is very good at debating with theists!
Like many who put forward theories about the evolution of mind from matter,
this work ignores the need to explain how evolution could produce animals
with the kinds of mathematical capabilities that led to the discoveries (and
proofs) reported in Euclid's Elements, discoveries that must have been made
originally before there were mathematics teachers. The need to explain such
capabilities has also been ignored by most researchers in AI, Robotics, and
Neuroscience, as far as I know. See this incomplete survey:
This book, like much of what Dennett has written, is mostly consistent with my own emphasis on the need to understand "the space of possible minds" if we wish to understand human minds. Simply trying to study human minds while ignoring all others is as misguided as trying to do chemistry by studying one complex molecule (e.g. haemoglobin) and ignoring all others.
Dennett and I have also written partly similar things about how to think about discussions of "free will" in the light of mechanisms produced by biological evolution in different sorts of species: compare Dennett (1984) and Aaron Sloman (1992).
In particular, much of what Merlin Donald has written about evolution of consciousness is relevant to this project, though it is not clear that he appreciates the importance of virtual machinery, as outlined in Sloman (2010), Sloman (2013, revised), and other documents on this web site.
However, some of the "laws of form", which as far as I know they did not discuss, are concerned with forms of information processing and how possibilities are enabled and constrained by (a) the physical mechanisms in which the information processing machinery (even virtual machinery) has to be implemented and (b) the environments with which organisms need to interact in order to develop, learn, live their lives and reproduce -- some of which include other information processors: friends, foes, food, playmates, and things to observe or be observed by.
Kauffman's 1995 book is very approachable:
At home in the universe: The search for laws of complexity
John McCarthy and Patrick J. Hayes, 1969,
"Some philosophical problems from the standpoint of AI",
Machine Intelligence 4,
Eds. B. Meltzer and D. Michie,
Edinburgh University Press,
Also used (with my permission) as the basis for Chapter 2 of
These ideas are closely related to those of Daniel Dennett in
Elbow Room: the varieties of free will worth wanting
Our main difference is that I don't regard what Dennett calls "the intentional stance" as a requirement for a science of mind, since reference to mental states and processes is not merely a sort of useful explanatory fiction: those states, processes, and qualia exist, and their existence can be explained in terms of actual entities, states, processes and causal interactions in the operation of types of virtual machinery produced by biological evolution rather than human engineering. There is no presumption that the operations of such a virtual machine are always, or even usually, rational, as required by the "intentional stance" (if I have understood Dennett correctly).
However, in discussions, Dennett sometimes also seems to hold that view. (There is more on Virtual Machine Functionalism in Sloman (2013, revised)).
Questions from the audience were also recorded. Near the end of the video (at approximately 1 hour 26 minutes from the start) I had a chance to suggest that what he was trying to say about human consciousness and its role in mathematical discovery might be expressed (perhaps more clearly) in terms of the kinds of meta-cognitive functions required in animals, children, and future robots, as well as mathematicians. The common process is first gaining expertise in some domain (or micro-domain!) of experience and then using meta-cognitive mechanisms that inspect the knowledge acquired so far and discover the possibility of reorganising the information gained into a deeper, more powerful, generative form. The best known example of this sort of transition is the transition in human language development to use of a generative syntax. (At one point I mistakenly referred to a "generative theorem" when I meant "generative theory".)
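The transition from stored instances to a generative form can be sketched concretely (a toy invention of my own, offered only as an illustration of the point, not as anyone's actual model of language development): a fixed list of memorised sentences covers finitely many cases, whereas a few finite grammar rules, one of them recursive, cover unboundedly many.

```python
import random

# A finite store of instances: expertise before reorganisation.
memorised = ["the dog sees the cat", "the cat sees the dog"]

# The same knowledge reorganised into a generative form: finite rules,
# unbounded coverage. The second NP rule is recursive.
grammar = {
    "S":  [["NP", "V", "NP"]],
    "NP": [["the", "N"], ["the", "N", "that", "V", "NP"]],
    "N":  [["dog"], ["cat"]],
    "V":  [["sees"], ["chases"]],
}

def expand(symbol, rng, depth=0):
    """Recursively expand a symbol into a list of words.

    Beyond an arbitrary depth the non-recursive rule is forced, a crude
    stand-in for finite performance limits on an unbounded competence.
    """
    if symbol not in grammar:
        return [symbol]                      # terminal word
    rules = grammar[symbol]
    rule = rules[0] if depth > 3 else rng.choice(rules)
    return [w for part in rule for w in expand(part, rng, depth + 1)]

print(" ".join(expand("S", random.Random(0))))
```

Varying the random seed produces sentences never present in the memorised list, which is the practical mark of the reorganisation: the generative form yields novelty the stored instances cannot.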
I suggested that something similar must have happened when early humans made the
discoveries, without the aid of mathematics teachers, that provided the basis of
Euclidean geometry (later systematised through social processes). I have
proposed that there are many examples, that have mostly gone unnoticed, of young
children discovering what I call "Toddler theorems", some of them probably also
discovered by other animals, as discussed in
This is also related to the ideas about "Representational Re-description" in the work of Annette Karmiloff-Smith, presented in her 1992 book.
Penrose seemed to agree with the suggestion, and to accept that it might also explain why some mathematical competences are biologically valuable, which he had previously said he was doubtful about. I don't know whether he realised he was agreeing to a proposal that instead of thinking of consciousness as part of the explanation of human mathematics, we can switch to thinking of the biological requirement for mathematical thinking as part of the explanation of important kinds of human (and animal) consciousness.
This is also connected with the need to extend J. J. Gibson's theory of perception of affordances, discussed in http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#gibson