How to deal with worries about entropy.
Moreover: What happened to Droguli?
(That's explained below.)
This paper is work in progress.
A slightly messy, automatically generated PDF of this document is here:
A partial index of discussion notes is in
Comments, criticisms and suggestions are welcome
(by email to A.Sloman [at] cs.bham.ac.uk)
For more on the "Fundamental Construction Kit" (FCK) and evolution of derived "Intermediate Construction Kits" (ICKs) see the document on "Construction kits for biological evolution", and others listed below.
Until fairly recently, human-designed construction kits did not include parts and tools specifically for building information-processing machines that make use of information, as opposed to machines that merely construct, store or transmit information bearers. There have been information-using machines, e.g. music boxes and "player pianos" that use stored information (e.g. punched cards, punched paper rolls or strips, prongs on rotating disks) to produce sounds, and mechanical devices that can perform complex sequences of actions under the control of an information store, e.g. the punched cards used in Jacquard looms. These information users were built by humans, using the mechanical engineering technology available at the time.
For very much longer, biological evolution has been producing increasingly complex machinery that uses increasingly complex forms of information, using increasingly complex evolved construction kits that are essential for life and evolution. This must have started with just the fundamental construction kit provided by the physical universe, followed by increasingly complex derived construction kits, including kits for building the information-based control mechanisms needed by all organisms that can reproduce and can initiate, maintain, or resist changes in themselves or their environment.
What sorts of information-processing mechanisms (or proto-mechanisms) were included in the (derived) construction-kit (or kits) available when this planet formed, and what were they capable of? Knowing the powers of the fundamental construction kit, and the powers of relatively quickly reachable derived construction kits, provides deeper answers to questions about origins of life than knowing whether comets brought the first amino-acids to earth. If the kit has the right generative powers, then different histories might have produced the smallest assemblies capable of supporting life.
In particular, the original construction kit (or kits?) that allowed evolution to get started, and to produce increasingly complex and sophisticated organisms, must directly or indirectly enable various kinds of information to be assembled and used that could only arise in relatively late products of evolution. I.e. the potential existed long before it was realised. Long before mathematicians existed, the (still only partly understood) fundamental construction kit provided by physics (and chemistry) made it possible in principle for mathematicians to come into existence -- among a vast array of life forms of many types and scales, including many evolved biological mechanisms with complex mathematical features required for reproduction, development, learning, feeding, competing, mating, and caring for young -- though not all organisms do all these things. How is all that possible?
It would be misleading to answer that the original physical universe already provided the possibility of evolution of brains with the mathematical powers of, e.g., Archimedes. Misleading because there was initially no physical mechanism that could collect all the required atoms and molecules and organise them into all the required structures. It should be clear that there was an earlier state at which there existed possibilities for many chains of construction of increasingly complex physical mechanisms, and that mechanisms capable of constructing human mathematical brains could exist only at very late stages in some of those chains.
In other words, whatever the universe was like when it first existed, the only way the possibility of creating a human-like mathematician could have been realised was by combining results of a very large number of chains of intermediate designs for increasingly complex living, behaving systems, including vast numbers of non-human organisms.
The main goal of the Meta-Morphogenesis project is to identify the various chains of evolved designs and to study the possible forms of development of various competences, especially mathematical competences that cannot yet be explained by known types of computational model.
Long before mathematicians were produced, biological control mechanisms were produced whose effectiveness depended on mathematical features discovered and used by evolution, including, for example, negative feedback control mechanisms for maintaining states or trajectories, long before humans had invented feedback control devices like the Watt centrifugal governor, or self-orienting fan-tail windmills.
At first this might seem to contradict the second law of thermodynamics, according to which the order in the universe, or in any energetically isolated part of the universe, cannot increase, whereas biological evolution produces increasingly ordered biological mechanisms, including plants, animals and other forms of life, over time.
Natural selection alone does not explain how increasingly complex forms of life can come into existence: we also need mechanisms that are (a) capable of generating the increasingly complex options between which natural selection chooses and (b) capable of generating environments within which the choices among a fixed set of options will vary, e.g. whether option A or option B does better depends on whether they are being tested in a hot or a cold climate.
The theory of fundamental and derived construction kits referenced below provides an outline explanation of how this is possible -- though many details are still unknown. The fundamental construction kit constituted by the physical universe provides not only possible mechanisms for increasingly complex life forms, using increasingly sophisticated information processing mechanisms but also mechanisms that can produce increasingly demanding environmental challenges to be met by products of natural selection, including challenges produced by other products of evolution.
But even that is not an explanation: how can steadily increasing complexity of structure and function emerge despite the second law of thermodynamics? This is a problem with which Terrence Deacon struggles painfully in Deacon(2011). He seems to be unaware that there is a relatively simple answer that depends on what can be called "ratchet" mechanisms. Ratchet mechanisms can allow matter, energy and complexity of organisation to be accumulated in increasingly complex structures, in environments with an external source of energy or a supply of chemically stored energy -- like a planet that is heated by solar radiation and/or has a great deal of stored energy (e.g. volcanic, gravitational and chemical energy) as a result of the planet's pre-history.
An example could be a horizontal tray on which marbles are moved by shaking the tray horizontally, randomly. If the tray contains hemispherical hollows with diameter a little greater than the diameter of the marbles, and each such hollow is in a shallow slightly sunken area, then random horizontal shaking of the tray can soon get every marble into a hole, where it will remain despite the shaking, thanks to gravity. Details of the geometry and the gentleness/violence of the shaking will affect whether there's only one marble per hole or more than one, and also how often the shaking will cause sunken marbles to emerge from the holes. Of course, violent vertical and horizontal shaking will dislodge some or all of the marbles from their holes, as will turning the tray upside down! But no special mechanism with the sort of complexity Deacon seems to assume is necessary, is required to produce a balls-in-holes organisation out of random horizontal shaking of this apparatus.
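The marble-and-holes ratchet can be sketched in a few lines of simulation. This is a toy model with arbitrary numbers, not a physical simulation: marbles jitter randomly on a (wrap-around) one-dimensional tray, and a marble that wanders over a hole falls in and stays there, because the gentle shaking never supplies enough upward energy to lift it out.

```python
import random

random.seed(1)

TRAY = 100.0                  # tray length (arbitrary units, wraps around)
HOLES = [10, 30, 50, 70, 90]  # hole centres
HOLE_RADIUS = 1.0             # a marble this close to a hole falls in
KICK = 2.0                    # max displacement per shake: too weak to escape

def shake(marbles, trapped, steps=20000):
    """Random horizontal shaking: free marbles jitter about; a marble
    that wanders over a hole falls in and stays there (gravity plays
    the role of the ratchet's pawl)."""
    for _ in range(steps):
        for i, x in enumerate(marbles):
            if trapped[i]:
                continue      # gentle horizontal shaking cannot lift it out
            x = (x + random.uniform(-KICK, KICK)) % TRAY
            marbles[i] = x
            if any(abs(x - h) < HOLE_RADIUS for h in HOLES):
                trapped[i] = True
    return sum(trapped)

marbles = [random.uniform(0, TRAY) for _ in range(5)]
trapped = [False] * 5
print(shake(marbles, trapped), "of 5 marbles trapped")
```

Despite the randomness of the input, the number of trapped marbles can only increase: the asymmetry of the holes converts random motion into a stable, ordered arrangement.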
Thanks to gravity, the energy directed upward that would be required to get the balls out of the holes is not available in this apparatus. I.e. it has temporal asymmetry built into it, like a great many products of human engineering and even more products of biological evolution at molecular scales, based on quantum mechanisms of the sorts discussed in Schrödinger(1944). Instead of marbles in holes he discussed oxygen atoms that can be attached to different carbon atoms in a chain, forming different isomers. In the case of molecules, the presence of catalysts, rather than shaking, can alter chemical bonds.
The use of such a ratchet mechanism to produce consumption of "food" leading to growth and reproduction, without violation of the second law, was demonstrated by Lionel Penrose using simple hinged and sprung mechanisms which he called "droguli" in a lecture I attended in Oxford around 1960, as explained below.
I suspect that if Deacon had understood these points about ratchet-like mechanisms, the length of his book could have been reduced substantially, and much of his hard to remember novel terminology would have been redundant. The word "ratchet" is not in his index.
Note added: 4 Nov 2018
After I had written the above, Google informed me that Richard Feynman had explained some of the basic ideas in his "Ratchet and Pawl" lecture (1964?), made available by Michael Gottlieb in the Caltech Feynman Lectures on Physics at http://www.feynmanlectures.caltech.edu/I_46.html
Note that I am not claiming that an isolated system (with no energy input) can
illustrate these points, although that idea has been considered and
analysed/refuted by physicists, including Feynman. See also
'The Feynman-Smoluchowski ratchet', by Greg Harmer and Derek Abbott
(Comments and suggestions by email are welcome, including links to relevant online resources.)
Many creationists have attempted to use the second law, or entropy-based arguments, to refute the theory of natural selection. An example is linked below.
This document does not defend any theistic form of creationism, though it is about beginnings. More precisely: it addresses the question: What must the world be like for life as we know it (including human minds) to be able to emerge in a physical universe without life?
I shall attempt to map some of the detailed features common to wide ranges of
living things, and also common to ways in which they develop and evolve, onto
requirements for the physical substrate. We can then ask what the physical
substrate needed to be like at the birth of this planet (or earlier) in order to
make possible evolution of known forms of life, and intelligence.
Part of the answer is spelled out in a discussion of construction kits,
including construction kits for building construction kits, here:
That paper does not deal with questions about entropy and the second law of thermodynamics. This is a first draft attempt, complementing that paper.
This leads to a set of questions about requirements for life that differ from the questions commonly posed, e.g. about temperature ranges, availability of carbon, water, oxygen, etc. Those are questions about conditions for sustaining life, not for producing ever-changing life forms.
The processes of biological evolution produce increasingly organised forms of physical matter with increasingly organised behaviours and more and more sophisticated forms of information processing, including abilities to discover deep mathematical theorems.
My question (asked by many others in the past) is "What must the physical
universe be like in order to make such evolutionary processes possible, and what
must a physical planet be like initially to enable such processes to occur?" But
I'll mainly be concerned with requirements for information processing -- in the
original sense of "information", used, for example, by Jane Austen in her
novels, not Shannon's notion of "information":
The answer proposed here will refer to quantum mechanics, but not to most of the features of quantum mechanics often referred to in connection with minds, consciousness, and life. In particular, most of the discussion does not depend on non-locality, quantum indeterminacy, wave-particle duality, parallel-universes, limits to measurement, statistics, or anything remotely like Schrödinger's cat. (Though I am interested in abilities of ordinary cats, squirrels and many other animals.) Near the end, I'll relent and hint at a possible role for superposition in some kinds of mental processing, but only tentatively and vaguely. This topic is developed further in
Quantum Mechanical Construction Kits?
(Possible roles in evolution of minds and mathematical abilities.)
A striking feature of the photoelectric effect is that electrons forming part of an atom can be dislodged by electromagnetic radiation (e.g. light shining on them) but seem to need a special sort of key to dislodge them rather than just a minimum amount of energy. The key is high enough frequency (or, equivalently, short enough wavelength) of the radiation. Below a certain frequency, increasing the intensity of the radiation (the amount of energy being transferred) will not cause the electron to be dislodged. However, increasing the frequency of the radiation (reducing its wavelength) to the "key" frequency, can allow even very small amounts of energy to dislodge the electrons, albeit fewer of them. If the radiation frequency is above the crucial level (for the material in question) then increasing the intensity (transferring more energy) will cause more electrons to be dislodged and the electrical current produced will increase.
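The threshold behaviour described above can be illustrated numerically. This is an idealised sketch: the work function used is roughly that of caesium (about 2.1 eV), and it is assumed that every above-threshold photon frees exactly one electron, which real quantum efficiencies fall well short of.

```python
# Planck's constant and an illustrative work function (caesium, ~2.1 eV)
H = 6.626e-34              # J.s
EV = 1.602e-19             # J per electron-volt
WORK_FUNCTION = 2.1 * EV   # minimum energy needed to free one electron

def electrons_dislodged(frequency_hz, photons_per_second):
    """Photons act one at a time: each either carries enough energy
    (h*f >= work function) to free an electron, or it does nothing.
    Intensity (photons/s) scales the current only above threshold."""
    if H * frequency_hz < WORK_FUNCTION:
        return 0                   # below the 'key' frequency: no current
    return photons_per_second      # above it: one electron per photon (ideal)

red = 4.3e14   # Hz, below caesium's threshold (~5.1e14 Hz)
uv  = 1.0e15   # Hz, well above it

print(electrons_dislodged(red, 1e20))   # intense red light
print(electrons_dislodged(uv, 1e3))     # very faint ultraviolet
```

However intense the red light (here 10^20 photons per second), no electrons are freed; a feeble ultraviolet beam frees some immediately. The frequency, not the total energy delivered, is the key.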
Chemical locks and keys
That idea of a special key being required to unlock things is not unique to use of radiation to dislodge electrons. Chemical bonds can be very stable, yet unlocked by a chemical key (a catalyst). Likewise a catalyst can cause molecules to join up in specific configurations that are resistant to disruption by minor impacts. For more on catalysis see:
In chemistry the mixture of continuous change (folding, stretching, coming together, coming apart) and discrete change (formation and unlocking of chemical bonds, e.g. in catalytic processes) means that complex collections of atoms buffeted by external forces can have multi-stable sub-states, and these can form at many scales, in enormously varied structures, with enormously varied potential for interaction with other molecules.
Mixtures of randomness and stability
In a collection of items moving around continuously with randomly changing accelerations (causing changes of speed and direction), any relatively large stable, impermeable, structures can interfere with those motions and constrain them so that they lose some of their randomness.
An elementary example: compare (a) and (b), where (b) has such additional structures:
(a) A flat horizontal board on which marbles move around, buffeted by random forces, with nothing to obstruct or constrain their motions.
(b) A situation like (a) where the board also has fixed bumps that can alter the direction of motion of marbles that hit them, dents into which marbles can come to rest for a while, causing others that hit them to be deflected, and grooves into which marbles can fall, after which they will be constrained to move only along the length of the groove (in either direction) depending on the depth, shape and orientation of the groove. If such a groove-trapped marble gets enough energy (e.g. from something hitting it, or hitting the board) then it may jump out of the groove and return to its previous unconstrained pattern of motion. (Compare pinball/bagatelle machines.)
Now think of a more general mechanism extended across a 3-D space in which marbles are moving around, along with other things floating in the space that can temporarily constrain the motions of the marbles, in more complex ways than the grooves, dents, etc. in (b). E.g. marbles may bounce off them, stick to them, or lock into structures that allow constrained motion, e.g. 3-D tubes playing the role of the grooves in the 2-D scenario.
Again the presence of structures that can reflect the marbles, or temporarily capture them, with or without allowing some movement in fixed relations to the capturing body will reduce the randomness and independence of the motions occurring. The importance of quantum chemistry, as I understand it, is that the combination of more or less random energy flows, and presence of catalysts, can cause formation of relatively stable complex structures of varying shapes and sizes, whose presence will then change the effects of random bombardment or radiation -- including sometimes producing new larger structures that will add new constraints and opportunities.
Constraining the constrainers
Suppose that when the constraining entities move around in 3-D space some of them are also capable of locking together when they meet, thereby reducing their independence of motion for a while (until something unlocks). That process may produce new larger, more complex constraining entities which may be further enlarged later on or disassembled depending on which sorts of collisions occur.
If the physical mechanisms allow temporarily stable structures to form on multiple scales (as happens in chemistry) there will be opportunities for increasingly complex stable structures to emerge without any designer intending them or guiding their formation, as long as gusts and streams and spurts of energy keep perturbing the system by bringing components into configurations where there is the possibility of forming a new enduring combination, and also providing the extra energy to get objects over the "hump" to a new more stable configuration (e.g. where something is latched onto something else), possibly controlled by presence or absence of catalysts (keys to allow locking and unlocking).
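The role of the energy "hump", and of a catalyst in lowering it, can be sketched with a toy probabilistic model. All the numbers here are arbitrary: energy bursts are simply drawn from an exponential distribution, a hump is crossed when a single burst exceeds it, and the bonded state is given a deeper well than the unbonded state, so undoing a bond takes far longer than making one.

```python
import random
random.seed(2)

def bursts_until_over(barrier):
    """Count random energy bursts (exponentially distributed, mean 1)
    arriving until one is big enough to get over the given hump."""
    n = 1
    while random.expovariate(1.0) < barrier:
        n += 1
    return n

BOND_HUMP = 5.0       # hump between 'separate' and 'bonded'
WITH_CATALYST = 1.0   # a catalyst (chemical key) lowers the hump
UNDO_HUMP = 8.0       # the bonded state sits in a deeper well: harder to undo

TRIALS = 200
def avg(barrier):
    return sum(bursts_until_over(barrier) for _ in range(TRIALS)) / TRIALS

print("bursts to bond, no catalyst:   ~%.0f" % avg(BOND_HUMP))
print("bursts to bond, with catalyst: ~%.0f" % avg(WITH_CATALYST))
print("bursts to undo the bond:       ~%.0f" % avg(UNDO_HUMP))
```

The expected waiting time grows exponentially with the height of the hump, so even a modest asymmetry between the "make" and "undo" humps means that, under continuous random buffeting, bonded structures accumulate much faster than they fall apart.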
The result of such an interaction may be a new structure formed from two previously separate structures A and B, where the new structure has abilities to attach to things that neither A nor B, alone could attach to. (Like a hair clip that has two arms connected by a spring at one end, where neither arm on its own could clip onto hair.)
How the new (temporary) stable structures affect or constrain the motions of other things will depend on the shapes and other properties, e.g. rigidity, hardness, elasticity, density, etc., of all the items involved. Ratchet mechanisms, in particular, can make a huge difference, producing increasing order in physical mechanisms that would violate the 2nd law of thermodynamics but for the dependence on external energy sources. This fact was demonstrated by the Penrose droguli.
Reproducing droguli (Lionel and Roger Penrose)
While writing this document, I remembered that many years ago Lionel and Roger Penrose (father and son, biologist and mathematical physicist) developed a demonstration of a very simple variant of the sorts of process discussed above. They used mechanical devices made of wood and metal, including, hinges, latches and springs, shaken around on a tray.
The objects were capable of locking together on impact, and certain combinations had the ability to attach successive new smaller pieces ("food") to themselves and grow until, above a certain size, they split into two pieces.
After that, the process would repeat with each of the two new pieces latching onto smaller pieces as a result of continued shaking of the tray, growing, then splitting. The process continued until there were no more of the smaller pieces ("food") to allow existing products to grow. This was offered as a very abstract model of asexual reproduction. The devices were called "droguli" (singular "drogulus").
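A crude simulation of this kind of latching, growing and splitting can be written in a few lines. This is a toy model with arbitrary parameters, not a reconstruction of Penrose's actual devices: each "shake" brings one food piece into contact with a randomly chosen chain, latching is irreversible (a ratchet), and a chain reaching a fixed size splits into two.

```python
import random
random.seed(0)

SPLIT_SIZE = 4        # a chain this long immediately splits in two

def shake_tray(food, chains, steps=10000):
    """Very abstract droguli: each shake brings one 'food' piece into
    contact with a randomly chosen chain, which latches onto it
    (latching does not come undone under shaking).  A chain reaching
    SPLIT_SIZE splits into two smaller chains, each of which goes on
    feeding and growing."""
    for _ in range(steps):
        if food == 0:
            break                     # no food left: reproduction stops
        i = random.randrange(len(chains))
        chains[i] += 1
        food -= 1
        if chains[i] >= SPLIT_SIZE:
            chains[i] = SPLIT_SIZE // 2
            chains.append(SPLIT_SIZE - SPLIT_SIZE // 2)
    return food, chains

food_left, chains = shake_tray(food=30, chains=[2])
print(len(chains), "droguli;", food_left, "food pieces left")
```

Starting from a single "organism" and a finite food supply, the population grows until the food is exhausted, with total material conserved throughout: order is built up from random shaking, at the cost of energy supplied from outside.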
I heard Lionel Penrose give a lecture on droguli around 1960 when I was a student in Oxford. I am amazed that it is now very hard to find any reference to droguli. This is the best I've managed to find:
It seems that in a radio broadcast some years earlier, Ayer, the philosopher, had talked about a drogulus that was invisible, intangible, and had no effects, in order to pour scorn on certain philosophical (theological?) theories. So perhaps he was pleased to be able to transfer the label from something utterly useless to something of great interest.
Note added 21 Jul 2015
Some time after writing the above I tried another google search and found a reference to this very short note:
The mechanism described in that paper is simpler than the mechanism I recall being presented by Lionel Penrose in his lecture on droguli, which must have been a few years later than 1957 (the year I arrived in Oxford as a mathematics student, about two years before I switched to philosophy). The Nature note does not include either "drogulus" or "droguli".
If some of the temporarily stable structures are also able to store energy that they can release under conditions that are partly controlled by parts of those structures then some complex persistent structures may begin to manipulate other complex structures -- assembling and disassembling them.
In many such cases the energy required to undo the new structure may be greater than the energy required to create it, because of asymmetries in the mechanism. This is the basis of the common ratchet mechanism: the Penrose reproducing droguli essentially made use of ratchets, and an external (random) source of energy provided by someone shaking the tray.
In a subsequent state something new could come between the protruding ends of the two arms.
If at the same time yet another object causes the strut to break or fold, it will cease to hold the two arms apart, and they will come together and press on the new object, and the force applied to that object could trigger some other reaction, creating yet another new object, possibly more complex than the preceding objects.
This very abstract scenario in which structured objects affected by random forces interact so as to create new, more complex, relatively stable structured objects may appear to violate the 2nd law of thermodynamics. But the violation is not real because we are not describing a "closed" system.
Speaking loosely, we can say that external sources of energy produce random changes, but the already formed structures constrain and direct the consequences of those changes in non-random ways. In particular, new random bursts of energy may suffice to cause previously built structures to overcome an obstacle to bonding and forming a new more complex relatively stable structure.
Moreover if some of the stable chemical structures store energy that can be released only in certain ways (e.g. the electrical energy of a laptop battery can only be released under certain conditions, e.g. when a circuit is closed), then chemical keys (catalysts) may in some circumstances trigger complex constrained non-random processes, like the processes of formation of an embryo from a fertilized egg, or the growth of a plant from a seed, triggered by temperature, moisture and light.
Propagation of constraints across scales
Of course, what sorts of new large objects can be produced will depend on the shapes and other physical properties of the original small components, and the presence of non-uniform external forces that provide new (e.g. kinetic) energy that can be stored (e.g. as potential, elastic or chemical energy).
The types of shapes that can be assembled by applying forces to Lego bricks and the types that can be assembled from Meccano components, including nuts, bolts, girders with holes, angle-girders, plates, flanges, rods, cranks, wheels, string, clips, etc., are very different, though both types are of unbounded complexity, in principle.
The types that can be formed by assembling molecules will be even more varied and will have far more complex potential behaviours fuelled by chemical energy and controlled by molecular structures and their constrained relative movements.
Extending Tibor Ganti's analysis of requirements for life
The ideas presented here build on the work of Tibor Ganti (2003) on minimal conditions for life to form in something like a chemical soup. See also Ganti's Chemoton theory: http://en.wikipedia.org/wiki/Chemoton
In contrast, I am trying to identify the minimal physical properties of a universe (or a planet!) in which life forms as diverse and complex as those found on Earth (so far) can evolve by natural selection (or even by some other means, including future types of laboratory synthesis). For this we need to explain the possibility of an enormous variety of biological phenomena, on many different scales, from microbes to elephants and blue whales, with many different information-processing capabilities, in addition to those required for storage of information in molecular structures that can control subsequent chemical processes, such as assembly of new structures from old.
There's no need to limit the requirement to evolution on our planet. There may be other parts of the universe in which very different, and even far more complex forms of life have evolved, or could evolve, using the same basic construction kit. (Compare the diversity of human languages implicitly supported by the human genome.)
It follows that these conditions require not only the ability to support Ganti's chemical configuration, but also the generative (combinatorial) power to support arbitrarily(?) large and complex life forms, with arbitrarily complex controlled, useful, behaviours.
Those two conditions are importantly different: specifying a structure for a complex animal body, specifies a possible variety of processes in which parts change their relations to each other and to things in the environment, but leaves open which of those processes will occur, i.e. what sorts of movements it will make in various situations. That requires additional information and typically that information is not static, but can change with learning. So in addition to the basic building blocks of organisms being able to support an enormous variety of physical structures they also need to be capable of representing (or encoding) an even larger variety of types of process in which those physical structures interact with other things.
Furthermore, since the repertoire of processes for many organisms will not be fixed, an additional repertoire of forms of representation for strategies or patterns of alteration by learning or instantiation of generic patterns via inheritance mechanisms will be needed. In some cases these will make use of virtual machines, as argued in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html
All of this implies that the mechanisms supplied by the physical universe need to be able to support layers upon layers of complexity of structure, some of it physical structure but probably a lot more of it "invisibly" representing or encoding (in different ways) information about structures, processes, and both physical and abstract machinery. The genius of pioneers such as Frege, Church, Post, Turing and others in the recent past is that they were able to specify abstract machines that were physically implementable, and which had relatively simple structures but were capable of generating enormously varied structures and processes, including everything we now find on the internet and much besides.
But they all depended on the assembly of the machines and the lowest level control functions to be determined by human designers (and later on machines programmed by human designers, or machines programmed by machines programmed .... by human designers).
In contrast the physical universe at the time of the formation of this planet had the capability not only to combine components to provide all the physical structures and all the information structures now found on this planet, but also to do it entirely unaided, and without any plan or guidance from would-be users, until evolution had produced animals that could take decisions about breeding and genetic engineering.
One of the things we have learnt from AI research (and other kinds of research) is that it is possible to specify a system whose combinatorial generative powers have the ability to produce solutions to complex problems, but which lacks the ability to find the solutions without using combinatorial search processes that are too time consuming (or space consuming) to fit into the physical universe. So another requirement was for the physical universe, at the time the earth formed, to have the ability to evolve mechanisms that would constrain such searches, e.g. using symmetry and other properties of search spaces, so as to allow a huge variety of wonderful solutions to a huge variety of horrendously complex design problems to be found, in a mere four and a half billion years on a medium sized planet.
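The scale of the problem can be made vivid with some rough arithmetic. All the figures below are deliberately over-generous to blind search: a design space of only 100 steps with 10 options per step, and a planet somehow performing 10^30 trials per second for its entire history.

```python
# Blind combinatorial search vs. the resources of a planet:
# assembling a design from 100 steps, 10 options per step.
OPTIONS_PER_STEP = 10
STEPS = 100
search_space = OPTIONS_PER_STEP ** STEPS     # 10**100 candidate designs

TRIALS_PER_SECOND = 1e30                     # wildly generous trial rate
SECONDS_AVAILABLE = 4.5e9 * 3.15e7           # ~4.5 billion years

reachable = TRIALS_PER_SECOND * SECONDS_AVAILABLE
print("candidates:          10**%d" % STEPS)
print("trials possible:     %.1e" % reachable)
print("fraction explorable: %.1e" % (reachable / search_space))
```

Even on these absurdly favourable assumptions, the fraction of the space that blind search could visit is vanishingly small (around 10^-53), which is why mechanisms that prune and structure the search, rather than enumerate it, are indispensable.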
I suspect nobody knows yet how the basic laws and materials of physics do that. The meta-morphogenesis project may eventually provide answers.
It could turn out that any system that can meet Ganti's conditions can also meet additional conditions for supporting arbitrarily complex life forms, but that's not obvious: there could be a universe, or even a location in this universe, where the types of materials available meet Ganti's conditions, but can support only fairly simple life forms -- e.g. because they do not enable the construction and maintenance of multi-cellular organisms with abilities to act on large objects in the environment.
I am not aware of any intrinsic physical limit to possible lengths of polymers, which provide one example of a form of chemical combination that seems to support ever increasing size. There are probably many others, including structures that are more heterogeneous than polymers. It seems intuitively that the variety of ways in which chemical structures can combine to form larger structures has no intrinsic upper bound.
Moreover, as the sizes of molecules increase so do their diversity of composition, physical properties, and chemical properties. This suggests that chemistry also provides a basis for arbitrarily rich and complex information stores.
So there may be no limit to the sizes and complexity of behaviours of possible organisms, apart from limits that come from problems of size, such as gravitational collapse, and problems of communication across large distances, required for coordination of an organism's behaviour, or lack of access to an external supply of sufficient nutrients, or even problems of waste disposal for a very large organism.
Comparison with Turing machinery
There is also no theoretical limit to the size of the symbol structure that can grow on the tape of a Turing machine. However, the larger the structure, the more time will be required for interactions between parts added at different stages, because of the linear nature of the tape.
In computers using the von Neumann architecture, the use of bit patterns to store addresses of locations in a large memory can make access and updates very fast, but the "word length" (number of bits in an address) will limit the size of memory that each machine can handle, though combining different memory structures (e.g. core memory and hard drives or tapes) can allow limits to be overcome without redesigning the core machine, at the cost of slowing things down.
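The limit imposed by word length is easy to make concrete (a byte-addressed memory is assumed here):

```python
def addressable_bytes(address_bits):
    """Maximum directly addressable memory (byte-addressed) for a
    given address width in bits: each distinct bit pattern names
    exactly one location."""
    return 2 ** address_bits

for bits in (16, 32, 64):
    print("%2d-bit addresses: %d bytes" % (bits, addressable_bytes(bits)))
```

A 16-bit address reaches only 65,536 locations, a 32-bit address about 4 gigabytes; each jump requires redesigning the machine's word format, whereas the 3-D chemical stores discussed next have no analogous built-in ceiling.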
In contrast, the use of 3-D chemical structures in 3-D space to store information seems to allow arbitrarily large information stores (until something like gravitational collapse becomes a problem). Moreover if the molecules used can be bent, twisted and folded, then it is possible for items added at different stages to be brought close enough to each other to interact, possibly using "bridging" molecules to link items.
There will still be constraints, however: a newly added substructure cannot be brought close to two pre-existing structures that are rigidly held very far apart for some reason. Nevertheless being embedded in a 3-D geometric space rather than a linear space of concatenated bit strings allows a rich variety of chemically constructed communication channels that can be used in parallel, as illustrated by neural networks.
I suspect that we don't yet understand more than a tiny subset of the varieties of chemical information processing used in life on earth. For now, all I am trying to do is emphasise the variety of types and structures of information stores and information-processing systems made possible in principle by chemistry (which in turn depends on quantum physics). Natural selection on earth may have made use of more aspects of chemical information processing than we have so far discovered.
Possible further developments
This discussion is merely a very rough, abstract, partial answer to the question: how is it possible for random processes to produce large, complex objects with complex behaviours, starting from very much smaller elements affected by randomly varying forces? An important additional step is required before this has any chance of becoming an explanation of the origins of complex living organisms, namely the identification of physical structures that can encode acquired information that can later be used, e.g. information about where things are, information about how to do something or prevent something, information about how to create a new instance of something, and so on.
Available evidence suggests that chemistry provides the required initial basis, with sub-atomic components capable of being rearranged in different sorts of molecules with different physical and chemical properties, including different combinatorial properties.
Later on, forms of information storage and mechanisms for manipulating and using the stored information come out of evolutionary processes based on that chemistry.
Eventually this leads to the evolution of chemistry-based mechanisms able to construct neural systems. Over time (after further evolution) the resulting brains get larger and more complex, supporting more varied types of information-processing. Much later on, brains become able to support virtual machines in which virtual symbolic structures such as sentences, equations, maps, abstract diagrams, and many information structures that nobody understands yet, come to be used. (Some requirements for virtual machinery are mentioned briefly below.)
Later still, machines produced by machines with brains instead use transistors, wires, and other devices to build more and more complex information-processing machines, including physical machines that can support more and more complex varieties of virtual machinery.
For more on virtual machinery including the causal powers of interacting virtual machines, see
At present, very little is known about how brains support virtual machinery, as they clearly do while you think about what you read, plan a move in chess, try to remember what you had for lunch yesterday, wonder why things look a little blurred when you partially close your eyes, etc.
This is a long term project. For now, I merely want to say that, provided the system starts from components of the right sort, which can be assembled in an enormously rich variety of ways at scales from the sub-microscopic to Giant Redwood trees, some of which can store energy acquired from the environment and later release it, and some of which can encode information that can later be used (through some decoding, or interpreting, mechanism), then, in principle, random rearrangements of those components, triggered by external or internal forces, could from time to time produce entities with properties of biological organisms. So perhaps 4 billion years on an earth-sized planet is plenty of time for processes of agglomeration, constrained by previously built structures, to produce ecosystems as diverse as those on earth.
This is totally different from the false claim that a tornado could assemble a Boeing 747 from a junk-yard containing enough parts.
Some more detailed requirements for the process to get started are in Tibor Ganti's book on how chemistry can support life Ganti (2003).
Newtonian mechanics would not suffice
A universe made of Newtonian point-masses would not have the required properties. I don't know whether Newton, or his contemporaries, understood these inadequacies of the so-called Newtonian world view.
Newton was very interested in chemistry/alchemy, asked some deep questions and formulated some interesting hypotheses, as reported in http://www.chemistryexplained.com/Ne-Nu/Newton-Isaac.html
In particular, he wrote:
So Newton's thinking was not a long way from the ideas presented here!
A universe made of chemical components as diverse in their shapes and behaviours as the chemistry on this planet can in principle support the assembly of larger and larger structures with more and more complex and varied behaviours. The work of Ganti (2003) on how chemistry can support life, at least in its simplest forms, is very relevant here. The chemical properties of atoms make them very different from the components of meccano sets, tinker toys and other construction kits.
With the appropriate components, blind external forces could assemble very complex and interesting structures without violating any physical laws.
If some of those structures also acquire the ability to include information about how to produce copies of themselves, designs can compete with one another, and natural selection processes can emerge wherever there is competition for resources or space and relative success in the competition depends on reproducible design features.
Competition is not essential for evolution
But competition is not necessary for evolution to occur. Even in times and places of plenty, where competition is not needed, peaceful co-existence can lead to new alliances allowing new achievements, e.g. better defences against extremes of temperature, or storms. New forms of symbiosis may be the result in some cases. In that sense natural selection can be opportunistic without being competitive.
Natural selection uses abstract dents and grooves in abstract design spaces that constrain what happens and can be replicated. But it cannot work without a "low-level" medium of the right sort, and chemical matter is the only known candidate, for reasons spelled out by Ganti (and apparently nearly guessed by Newton).
The importance of quantum mechanisms
Quantum mechanics plays a crucial role. Many years ago, the physicist Philip Morrison gave a series of lectures on BBC TV. One of the points he made was that the molecular stability required for genetic information to be preserved in complex molecules for long periods of time in the life of an individual, and across generations, despite a great deal of thermal buffeting, would be impossible without the discreteness of quantum mechanics. Nothing in Newtonian mechanics could explain that. Similar points had previously been made by Schrödinger in his (1944), discussed in more detail in the companion paper on construction kits. I have produced a draft annotated collection of extracts from that book here:
All I am doing is making the same point about many biological structures, not just the genetic material. This includes structures encoding many different forms of information required by individuals, at various stages in their development and during performance of complex individual or collaborative or competitive actions. Antibodies are another example.
Curiously, many writers attempt to make use of quantum indeterminacy in explaining how minds can exist in a physical universe. In contrast, I am emphasising the role of quantum determinacy, the stability of certain structures that could not be stable in a Newtonian universe, as the basis for enduring information stores needed for increasingly complex forms of life.
I think there is also something important about the fact that chemistry based information structures can have both their geometrical relationships and their physical shapes changed in ways that affect later uses of the information. It may be the case, for example, that searching for ways of moving already assembled but flexible structures encoding information into new configurations that meet new requirements is possible in physically realisable time-scales, whereas the corresponding search in a space of bit patterns or symbolic structures would require more than the expected lifetime of the whole universe.
These processes seem to go beyond what Turing machines and current electronic computer architectures can support. But that claim needs to be made mathematically precise, in order to be interesting.
Requirements for rapidly changing virtual machine structures
There are additional requirements if biological mechanisms are to be able to support rapid reorganisation of virtual machine structures. If you watch water flowing rapidly over rocks from above, or a flock of birds taking off or landing, or if you listen to a performance of a five part fugue, your mental contents will be changing in ways that require rapidly reorganised information structures that could not be achieved by rapidly rearranging the required number of physical objects, e.g. parts of brains. (There are many reasons, including the energy required, the path planning required to avoid collisions, etc.)
In the last half century, we have learnt many techniques for achieving rapid reorganisation of complex structures in virtual machinery implemented on computers.
The techniques include changing "addresses" of information structures (replacing the address of structure A with the address of structure B, instead of moving the whole structure from one place to another, or creating and destroying physical links). There are also "inheritance" mechanisms whereby, if a reference to one complex structure is replaced by a reference to another complex structure, some of the references to parts of the original are automatically (implicitly, but accessibly) replaced by references to parts of the new one (e.g. the left front wheel).
E.g. suppose I am told that instead of my car, your car is now in a garage whose dimensions I know allow only one car, facing in one of two directions. Then I don't need to be told also that the left front wheel of your car, but not the left front wheel of my car is in the garage, and that your left front wheel is either where my car's left front wheel was or where the right back wheel was, relative to the structure of the garage, not the car.
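The "replace a reference and let the parts follow" idea can be sketched in a few lines. This is my own toy illustration of the mechanism, not code from the original; the names are invented:

```python
# Two structured objects, each with sub-structures of its own.
my_car = {"owner": "me", "left_front_wheel": "my LF wheel"}
your_car = {"owner": "you", "left_front_wheel": "your LF wheel"}

# The garage stores a *reference* to a car, not a copy of it.
garage = {"contains": my_car}

def wheel_in_garage(garage):
    # A derived fact: it follows whatever reference the garage
    # currently holds, so no sub-structure needs to be copied.
    return garage["contains"]["left_front_wheel"]

print(wheel_in_garage(garage))  # my LF wheel

# Swapping one reference implicitly updates every derived fact
# about the parts, without moving or rebuilding anything.
garage["contains"] = your_car
print(wheel_in_garage(garage))  # your LF wheel
```

Only one pointer changes, yet all questions about sub-parts of "the car in the garage" now receive answers about the new car, which is the inheritance effect described above.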
The uses of virtual information processing machinery have made profound differences in the last half century to what can be done using computers, as summarised in the discussion of "virtual machine functionalism".
Examples of such requirements for human visual mechanisms can be found in this presentation:
Biological virtual machinery and quantum mechanisms???
A remaining open question is how biological virtual machines might be implemented so as to be rapidly re-linked when required. I have no idea whether mechanisms of quantum superposition could be relevant.
Large collections of partial information structures ready to be linked rapidly into fully instantiated (and changing) completed information structures may be of much more profound importance than dead and alive cats. But that hypothesis needs to be supported by a theory explaining how relevant superposed alternatives can come to be created by learning processes.
In simple cases, multi-stable dynamical systems with "winner takes all" mechanisms could do that. But we need multi-stable generative systems, capable, for example, of constructing a novel sentence, or a novel interpretation of a complex picture never seen before (like the "hidden dalmatian" picture), or switching between two interpretations, as in the Necker cube or duck-rabbit images below.
In the case of the cube it would be possible to explain the ambiguity and flipping process by constructing an algebraic representation containing unknown numerical values, where there are alternative sets of numbers that produce coherent interpretations. But in the case of the duck-rabbit, entirely different sorts of semantic interpretations for complex parts are required (mouth, ears, bill, eye), and -- even more interestingly -- one animal "faces left" while the other "faces right": a very abstract and subtle kind of semantics, whose internal representation (encoding) is far from obvious.
Similar problems arise for explanations of what the brain might be doing when switching between interpretations of the well known Old-woman/Young-woman picture. That sort of thing cannot be done simply by switching interpretations of separate fragments of the image: it requires parts of the image to be grouped in different ways for different interpretations, and information about different 3-D parts in quite different 3-D relationships to be assembled, as well as assignments of different biological roles to different parts. (Compare the Johansson moving and stationary light videos.) Anyone who wishes to invoke quantum indeterminacy of any sort has many questions to answer, if the invocation is not just hand-waving.
In particular, the types of alternative interpretations of visual contents discussed here don't seem to have anything to do with alternative solutions to sets of equations relating numerical variables. So a pair of alternative states can't be specified by setting up some mathematical description that is common to the two states except for changes in the numbers used.
Some examples of experienced ambiguity require switching between two structures that are more like two parse trees than like two sets of solutions to simultaneous equations.
An old example is the pair of interpretations of "He saw the man with the telescope". In one interpretation "with the telescope" specifies which man, if only one man in view had a telescope. In the other interpretation "with the telescope" explains how it was possible to see a man (and perhaps recognise him, or see what he was doing) by mentioning the use of a telescope to see him. Likewise "Flying aeroplanes can be dangerous" is ambiguous between the dangers of aeroplanes in flight, and the dangers of doing the flying.
Further examples of challenges and possible types of solution can be found in
This is all still much too vague. Whether and how the idea can be made more concrete, with more explanatory power, is a long term question. Answers may emerge from other strands in the Meta-Morphogenesis project.
Suggestions and criticisms welcome!
Types of construction kit (Added 8 Nov 2014)
Extended 15 Nov 2014
I have met several people who, like me, believe that playing with meccano construction kits for several years had a profound educational impact. There are many other types of constructional toys that children can play with, including plasticine, sand, mud, paper-scissors-glue-clips, lego blocks, tinkertoys, fischertechnik, Geomag, and no doubt many more.
An important feature of many, though not all, construction kits is "generative power": whatever has been assembled always permits additional parts to be added (though in some cases it may be necessary first to partially disassemble before extending). In some cases, e.g. plasticine, sand and mud, there is continuous (or almost continuous) variety in what can be added or modified at any stage, because existing portions can be continuously deformed in a variety of ways -- prodding, squeezing, stretching, bending, twisting, etc. -- and because the size, location, and orientation of a newly added part allow a continuum of possibilities.
For other sorts of kit the available extensions or modifications may all be discrete (e.g. lego bricks), or a mixture of continuous and discrete, e.g. meccano. In any case, features of the "minimal" components, i.e. the components that come pre-manufactured, mathematically constrain the kinds of structures that can be assembled, and if they are not all rigid structures, the types of relative motion of parts (types of deformation) will also be mathematically constrained by the degrees of freedom available after a new part has been added, or two previously assembled structures are combined.
Some construction kits include motors, which may be driven by springs, elastic materials, flywheels, batteries, mains electricity, heat (toy steam engines), wind, and possibly others. In such cases, not only does the construction-kit mathematically constrain the kinds of structures that can be assembled, it also constrains the kinds of processes that can be produced by assembled constructions with motors included, charged if necessary, and turned on.
Even more variety can be produced if the kit provides sensors and information processing units (e.g. programmable computers) that can read sensor information and transmit control information to the motors, or to groups of controllers controlling motors. Understanding the combined consequences of the various sorts of components, and in particular understanding what sorts of behaviours the constructions can produce in different environments, may require the resources of mechanical and civil engineering, chemical engineering, electrical engineering, and control engineering, the last of which subsumes a great deal of computer science and artificial intelligence, at least in principle, although the teaching of control engineering may focus on a narrow subset, in particular the subset for which standard sorts of mathematics (calculus, differential equations, linear algebra, probability theory, etc.) are useful.
The development of computers and programming languages made available new sorts of construction kits whose components included virtual machinery (Sloman 2004/1978). (I suspect biological evolution did something similar long before human engineering capabilities evolved, but the details are very complex and hard to investigate. So researchers tend to focus on brains instead.)
This document is concerned with the question: what sort of construction kit of physical and chemical parts was available at the time the earth was formed, and what was it about that kit that allowed the processes of natural selection to produce the variety of life forms on this planet, from sub-cellular entities to elephants, giant redwood trees, and giant fungi, which not only display an extraordinarily rich variety of physical forms, but an even richer variety of physical processes characteristic of the species, controlled by information processing mechanisms that we have only just begun to understand?
Many researchers assume that all such processes can be replicated, or simulated on a typical digital computer, possibly using a network of communicating computers, an assumption that often goes with assumptions about the forms of information processing used by biological mechanisms. This should be regarded as an open question, until we have deep theories that explain how living things work, and, in particular, how they process information.
Many scientists who have thought about such issues have come up with theories that assume that all the states and processes in such systems can be represented by collections of numbers and relations between the numbers, including algebraic expressions and differential equations. A very influential example is the excellent little book by Braitenberg:
Valentino Braitenberg, 1984, Vehicles: Experiments in Synthetic Psychology, The MIT Press, Cambridge, MA.
Unfortunately many of the researchers who attempt such general theories assume that all the states, processes, sensor contents, and motor signals that occur in such systems can be adequately represented using collections of numerical values of physical states, relations or processes. That would exclude information structures formed using a grammar, parse trees, and networks whose nodes are structures rather than scalar (numerical) values or sets of scalar values -- among other things that have been investigated by AI researchers, computer scientists and software engineers in the last half century. It would probably also exclude important types of chemical control mechanisms, and many sorts of virtual machinery.
The sort of construction kit considered here needs to be able to explain how some of the forms of information processing on the planet involve thinking about and proving theorems about infinite structures, e.g. the natural number series, the set of possible Turing Machines. This suggests that the construction kit itself needs to have infinite generative power, a feature Chomsky (and others) famously attributed to the grammars of human languages.
It is likely that physical limitations (e.g. gravity, speed at which information can be propagated) will limit the physical complexity of the integrated information processing systems our chemical construction kit can support.
This document is not about explaining particular happenings, but explaining possibilities, i.e. answering questions of the form: "How is X possible?" or "How are things of type X possible?". This is a form of explanation that is not widely recognised in philosophy of science. I discussed such explanations in Chapter 2 of my 1978 book:
A creationist view
An example of the "creationist" scientific thinking to which this is a partial response can be found in http://www.icr.org/article/does-entropy-contradict-evolution/
Henry Morris, 1985. Does Entropy Contradict Evolution?. Acts & Facts. 14 (3).
(Not endorsed here, except as an illustration of the arguments used.)
Of course, my discussion of possibly adequate physical conditions to serve as a starting point for evolution as we have known it, could be seen as inviting creationists to abandon arguments that natural selection needs divine guidance, and instead adopt a quite different sort of creationist argument: namely, that the provision of initial conditions capable of supporting natural selection, such as the existence of chemical elements, needs divine guidance. I don't expect coherent answers to come from any traditional theology for reasons
School of Computer Science
The University of Birmingham