29 Feb 2016: Installed
4 Apr 2016: Added Schrödinger's Note to chapter 6
9 May 2019: Emphasised the role of quantum physics in explaining structural stability not explained by Newton (revised/expanded 9 Sep 2020)
This is a collection of extracts from Schrödinger's highly influential 1944 book What is life? with some comments added by me, mainly trying (a) to clarify what he was saying about the biological importance of features of (bio) chemistry that cannot be explained in pre-quantum (e.g. Newtonian) physics and (b) to draw out some implications for biology and brains that Schrödinger did not mention, including brain mechanisms required for mathematical discoveries involving impossibility and necessity, as opposed to discoveries of statistical regularities.
Without the features in (a), reliable biological reproduction of known forms of life would be impossible. So his ideas are important for anyone trying to understand how life as we know it is possible in this physical universe.
Regarding (b), a less obvious suggestion added in my notes below and developed elsewhere (in the theory of the "Meta-configured Genome") is that organisms without certain (not yet precisely identified) quantum-based features of brain chemistry would lack important aspects of cognitive (information-processing) functions implemented in brain mechanisms, that are required not only for reliable replication of information in the genome (as discussed by Schrödinger) but also for spatial reasoning and mathematical discovery. For reasons first noticed by Immanuel Kant in 1781, statistics-based mechanisms (including recently proposed neural nets) cannot explain deep ancient mathematical discoveries in geometry and topology because those discoveries are concerned with spatial necessity and impossibility. Neither of those concepts can be represented, or examples identified, in organs that merely use statistical data to derive probabilities.
As Kant understood, necessity and impossibility are totally different from very high and very low probabilities. Yet it is clear that ancient, and not so ancient, mathematicians did make such discoveries, many (but not all) reported in Euclid's Elements. A semi-random collection of examples is here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html. Closely related: a collection of examples of "Toddler theorems": http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html
As far as I know, neither Schrödinger nor any of his admirers ever explicitly made claims of the second sort, concerning spatial/mathematical reasoning about necessity and impossibility, although Alan Turing thought deeply about this, without identifying any mechanisms that are adequate for the task. Turing machines can deal with many special cases, e.g. those that use only propositional or predicate logic for all their reasoning, but they don't model or purport to model all the forms of spatial reasoning that led to ancient mathematical discoveries.
Spatial/diagrammatic reasoning, as used by ancient mathematicians as well as many scientists, engineers, clothes designers, architects and others, can be used to discover examples of impossibility or necessity, neither of which can be discovered using statistics-based neural nets. See http://www.cs.bham.ac.uk/research/projects/cogaff/misc/turing-intuition.html Could these mechanisms use chemistry-based reasoning systems in brains?
NOTE ADDED 4 Sep 2018; revised 10 Sep 2020
Schrödinger's ideas, expressed in this little book in 1944, remain deeply influential and inspirational. E.g. see the discussion in Ogryzko(2008). As far as I know Schrödinger discussed only the problems of biological reproduction, i.e. reliable transfer of information across multiple generations. He does not seem to have thought about the different, but closely related, problems of genome-controlled individual development, including production of new components of many different sorts (e.g. bone, muscle, skin, nerves, brain-cells, blood-vessels, digestive mechanisms, anti-bodies, and many more in a developing organism) under the (partial) control of genetic material. For more on the importance of epigenesis (especially cognitive development within an organism, as opposed to initiation of a new organism) see the Meta-Configured Genome project (work done with Jackie Chappell, School of Biosciences, University of Birmingham).
A less obvious (and as far as I can tell previously unnoticed) implication of Schrödinger's book is that much of what is written by philosophers of science is either false or incomplete, because it does not take account of the implications of Schrödinger's ideas for chemistry and biology, and the consequential implications for philosophy of physics and, more generally, philosophy of science. The examples and arguments presented by Schrödinger suggest that regarding the universe as composed of a fixed collection of particles, whose behaviours and interactions can be represented by a fixed set of equations (like a large cloud of particles moving in accordance with Newtonian or relativistic laws of motion), fails to take into account the consequences of formation and dissolution of strong chemical bonds. Such bonds link two or more particles to form a larger whole, whose existence and behaviours depend both on relationships between the component particles and on what the new compound entities are and how they interact with other complex entities, e.g. catalysts. So quantum mechanisms provide the possibility of materials and structures used in biological structures with properties (including rigidity, elasticity, fluidity, electrical conductivity) that support otherwise unavailable biological functions -- including new specialised reasoning mechanisms.
Insofar as some of these products of chemistry-based biological evolution produce new kinds of matter, new kinds of process, new kinds of physical properties, and in some cases new kinds of information-processing mechanisms (for perception, learning, reasoning, and control), they exhibit detailed kinds of metaphysical creativity leading to new life forms and functions.
Note added 14 Sep 2020
I think this is related to the concept of "Niche Construction" in biology, summarised in https://en.wikipedia.org/wiki/Niche_construction. This point is expanded below. Note that a niche is not just a physical environment: a portion of the physical world may provide one sort of niche for one organism and a different sort of niche for another sort (e.g. one eaten by the first sort!). In each case the niche is an abstraction implemented or instantiated in the physical world. Compare Sloman(1995).
One of the core questions of biology is not just how particular forms of life are possible, but how the mechanisms of physics and chemistry support the continual production of new forms of life, while also allowing the faithful reproduction of old forms.
It is often claimed that the Darwin/Wallace theory of evolution by natural selection answers that question, but it doesn't answer this question: What provides the increasingly complex and varied collections of options for natural selection to choose from?
My (partial) answer is: An increasingly complex and varied collection of construction kits, able to create and maintain highly stable, but changeable, structures. These construction kits are themselves products of evolution by natural selection -- using pre-existing construction kits. (Compare recursive structure-building functions in computing.)
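The recursive construction-kit idea can be sketched in code. This Python toy (all part names invented; not a model of chemistry or biology) shows how assemblies produced by one kit can serve as primitive parts for a later, derived kit, mirroring the recursive structure-building functions mentioned above:

```python
# Toy illustration of "construction kits built from construction kits":
# each new kit treats assemblies produced by earlier kits as parts.

def make_kit(parts):
    """A kit is a function that combines available parts into a named
    assembly, which later kits can in turn treat as a part."""
    def build(name, chosen):
        assert all(p in parts for p in chosen), "part not in this kit"
        return (name, tuple(chosen))
    return build

# Fundamental kit: only primitive parts are available.
fck = make_kit({"rod", "plate", "bolt"})
wheel = fck("wheel", ["plate", "bolt"])
axle = fck("axle", ["rod", "bolt"])

# Derived kit: its parts include assemblies built by the fundamental kit.
dck = make_kit({"rod", "plate", "bolt", wheel, axle})
gear_train = dck("gear_train", [wheel, wheel, axle])

print(gear_train)
# A third-level kit could now treat gear_train itself as a part, and so on.
```

The point of the sketch is only structural: each level of kit enlarges the space of buildable wholes without altering the primitives at the bottom.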
All this is possible because the fundamental physics of our universe has important features that make all those higher-level realms of possibility reachable from the fundamental level. These features tend to be ignored by philosophers of science, many of whom focus on physics and produce accounts of its nature that are inadequate to explain how the physical universe can produce (or, in the jargon of philosophers, "ground") an extraordinary variety of forms of life and their products -- products that, in recent centuries, have been changing at an ever-increasing rate as a result of the activities of humans, including scientists, engineers and the users of their products.
Newton's laws, and similar mathematical laws of behaviour applicable to particles, may explain the formation of clouds of particles, including very large clouds formed by gravitational attraction, but not the formation of meshed gear wheels constraining each other's movements, let alone plant fibres, bones, blood vessels, digestive systems, brains, mathematical-discoveries or angry crowds.
Of course, the new physical objects still (approximately) obey Newton's laws: a structure composed of meshed gear wheels will have a mass and can be accelerated in a particular direction by a force applied in that direction. But Newton's laws will not explain why the atoms in the structure maintain their relationships so as to form a rigid body, as required for the functioning of gear wheels.
This point obviously applies to toys created from components in a human-designed construction kit, such as Meccano, or Tinkertoy, and to many more complex products of human engineering. As primitive components are linked together to form new more complex components that resist some shape changes, it becomes possible to use those components as building blocks for yet more complex components whose structural constraints cannot be derived from the properties of sub-atomic particles using Newton's laws, because they depend on atoms being bonded to form rigid components of the larger structure. Schrödinger's point is that such bonds, and the new "emergent" behaviours they make possible, could not be explained before the development of quantum physics.
In living matter the story becomes far more complicated than in human-designed toy construction kits.
It also applies, less obviously, to biological structures created by evolution and developmental processes from previously constructed less complex parts. Such structures include microorganisms, plants, animals, and also many non-living structures that they, especially humans, have created.
At this level of abstraction, termite cathedrals and human cathedrals both depend on the power of chemical processes to form increasingly complex structures with increasingly complex mutually constraining relationships. But termites (presumably) cannot think about what they are doing, why they are doing it, and why it succeeds.
Describing those relationships and the processes they make possible may require richer kinds of mathematics to be added to the mathematics for describing the behaviours of the fundamental building blocks when they are not in "bonded" relationships. The behaviours of particles in a cloud of gas are very different from those in an internal combustion engine, or a windmill. Theories that are adequate to explain the properties of gases may be inadequate to explain the properties of the other machines, including rigidity of pistons and cylinder walls.
Schrödinger does not explicitly say all this, for his aim is to explain how recently understood features of quantum physics explain the formation and functions of otherwise inexplicable enduring complex molecules that are at the core of biological mechanisms of reproduction that had recently been discovered, based on the work of Mendel and others.
My claim is that his discussion has additional far-reaching implications, whether he intended them or not. I shall not be surprised if it turns out that he also saw the more general implications pointed out in my comments, and discussed them in publications I have not encountered. Some of the issues regarding chemistry and the nature of chemical bonds are discussed in the impressive survey by Sason Shaik (2007), a tribute to the achievements of Gilbert Newton Lewis, who started the "electronic structure revolution in chemistry" (though I have not yet taken in all the details!). [linked here 10 Sep 2020]
From this viewpoint, fundamental physics, including the mechanisms underlying chemistry, provides an amazingly powerful construction kit that is able not merely to produce the phenomena studied in the physical sciences, including physical materials, and physical objects on many scales made of smaller physical objects (including atoms, molecules, rocks, planets, planetary systems, galaxies, ... and beyond), but also all the forms of life and products of forms of life, from sub-microscopic organisms to organisms that are millions of times larger and more complex, whose products include increasingly complex physical machines (including the infrastructure of large cities in the last few centuries and, more recently, the infrastructure of the internet) and products of those machines, e.g. the ever-growing variety of functions and services provided by the internet and linked new devices.
Philosophies of physics that ignore the problem of explaining how all that variety can be fully implemented in a physical universe, and merely discuss how more complex physical phenomena can be explained in terms of the behaviour and interactions of the simplest physical entities, are grossly unsatisfactory.
Although Schrödinger did not explicitly attempt to explain all that, his book is a major, and seminal, contribution to the task of explaining how physics can "ground" so many non-physical phenomena. At the lowest level this includes explaining facts about biological reproduction, including reliable replication of different structures and behaviours in different inheritance lineages. I don't know whether he realised how much more his ideas potentially explained.
Structure produces constraints: constraints can produce new powers
(Revised: 19 Sep 2020)
This is a point that, as far as I can tell, Schrödinger did not notice. I have not found it mentioned by any other researchers, but if I am wrong please let me know, and I'll add suitable links here. The point probably goes unnoticed because it is highly counter-intuitive: in a sufficiently complex physical system that develops over time (or a class of physical systems produced by biological evolution, increasing in complexity), not everything that is reachable from the lowest level, and therefore possible at that level, remains possible after other possibilities have been realised. Choices of mechanisms with certain new powers can lead to systems that exclude other powers.
In particular, chemical bonds made possible by quantum mechanisms, can add constraints to a physical system, where those constraints support novel physical possibilities that cannot exist when those constraints are not satisfied.
Some of the new possibilities are not describable in the language that suffices for the original space of possibilities.
This can be compared with the difference between a Meccano set without nuts and bolts and a Meccano set with nuts and bolts. Nuts and bolts and other connecting devices that constrain relative motion of components of a structure make it possible to produce new structures (e.g. new toy cranes, towers, or bridges) with behavioural capabilities that did not previously exist, and that are not fully describable in the language that suffices for describing the original set of components and their properties.
Readers who have never encountered Meccano may find these two videos helpful:
Learn to Build with Meccano - Basic How-To
And a much more complex example
Meccano/Erector | Super Construction 25-in-1 Motorized Building Set
The second video is quite long and detailed, but the point is perhaps made most clearly in the last few steps, e.g. step 33 (starting at 31:30) to the end.
(The videos can be watched speeded up.)
(I grew up with Meccano in the 1940s with no electric motors, only wind-up motors.)
Of course, Meccano models do not have self-construction abilities, as products of biological evolution do (although those products depend on supporting environments). But they illustrate the same kind of metaphysical creativity, though in a much simpler form than biological evolution.
That process of "metaphysical extension" can be repeated at multiple levels: a fact that accounts for the complexity and diversity of products of biological evolution.
Some transitions are possible only in the context of other transitions. All this is not well captured by the mathematics favoured by physicists and many philosophers of physics who talk about collections of variables and their values. Compare the use of vectors of numerical values often used in physics (and assumed by many philosophers of physics?) with the use of formal grammars to describe possible sentences in particular natural languages or programming languages. The possible grammatical extensions of a particular linguistic structure depend on the contents of that structure. E.g. not all English sentences can meaningfully be extended by appending "but it did not succeed" (including this one). The formulae representing molecular structures and processes are partly analogous to formulae representing grammatical structures and transformations, though the powers of molecules are very different from the powers of verbal phrases and clauses.
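The contrast between numerical vectors and grammar-like representations can be made concrete with a small Python sketch (the sentence tags below are invented stand-ins for genuine grammatical/semantic structure): a numeric operation applies uniformly to any flat state vector, whereas whether a grammar-like extension applies depends on what the structure contains:

```python
# Contrast between "flat vector" states and grammar-like structured states.

# A flat state is just a tuple of numbers; any numeric operation
# applies to any such state, regardless of what it represents.
flat_state = (2, 7, -3)
shifted = tuple(x + 1 for x in flat_state)   # always applicable

def can_append_but_clause(sentence):
    """The "attempt" tag is a made-up stand-in for whatever structural
    feature licenses appending "but it did not succeed"."""
    kind, _text = sentence
    return kind == "attempt"

s1 = ("attempt", "He tried to open the lock")
s2 = ("statement", "Seven is a prime number")

for s in (s1, s2):
    if can_append_but_clause(s):
        print(s[1] + ", but it did not succeed.")
    else:
        print(s[1] + ". [extension not applicable]")
```

The operation on the structured states is conditional on their internal form, which is exactly what a uniform operator on a flat vector of values cannot express.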
Perhaps future formalisms for specifying molecular structures, especially in biology, will resemble grammars and programming languages, including conditionals, loops and templates for matching. (Is this already happening in molecular biology?)
Plants and animals appear to have had a common ancestor, but that does not mean that highly evolved plants, such as giant redwood trees, could evolve into mammals, such as chimpanzees. This illustrates a general metaphysical point about forms of life: not all evolved pairs with a common origin are connected by a possible evolutionary trajectory from one to the other, because not all evolutionary transitions are reversible.
Reversibility seems to be a feature of formalisms using vectors of numerical values, allowing reversible trajectories of change between two state vectors representing two states of the universe (or portions of the universe). In contrast, formalisms (e.g. grammars and programming languages) commonly used to represent multi-layered structures and processes of varying complexity in linguistics, computer science, and modern software engineering often represent changes that are not reversible. A system that develops by learning is not necessarily able to unlearn, so as to return to an earlier state. "Flat" vectors of numerical values are a poor form of representation for many structures and processes. But much philosophy of physics seems to ignore their limitations for representing complex machinery, including chemical structures and processes, and processes in various life forms.
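A minimal Python illustration of such irreversibility (a deliberately simplistic "learner", not a model of any real system): the update keeps only the largest value seen per feature, so distinct histories collapse into the same state and no inverse function can recover the past:

```python
# Toy illustration of irreversible development: a "learner" whose state
# update discards information, so no function can recover earlier states.

def learn(state, observation):
    # Ratchet-like, lossy update: keep the largest value seen per feature.
    return tuple(max(s, o) for s, o in zip(state, observation))

s0 = (0, 0, 0)
s1 = learn(s0, (3, 1, 0))   # -> (3, 1, 0)
s2 = learn(s1, (2, 5, 0))   # -> (3, 5, 0)

# A different history reaches the same final state, so learning has no inverse:
other = learn(learn(s0, (3, 5, 0)), (1, 1, 0))
print(s2 == other)          # True
```

Because two different histories end in identical states, no "unlearn" function could be well-defined, whatever extra machinery one added.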
I do not claim that Schrödinger shared the opinions expressed here, but re-reading the book recently made me think that he was moving in that direction. If the changing structural constraints made possible by formation of chemical bonds make possible mechanisms supporting reliable reproduction over many generations, perhaps they also make possible a host of mechanisms providing novel functions and capabilities during development of individual complex organisms -- with increasingly complex physical behaviours requiring increasingly complex control mechanisms. Examples include the intricate control of individual fingers at the ends of jointed arms, as seen in the feeding behaviours of many primates.
The construction and training of control mechanisms for such manipulators is another type of biological function that ultimately needs to be explained by mechanisms that evolve and develop in parallel with the controlled components. The majority view now seems to be that trainable neural nets suffice, but it is clear that they lack some of the required capabilities: intelligent controllers often need to recognize impossibility, but statistics-based neural nets cannot even represent, let alone recognize, impossibility or necessity, e.g. in geometric or topological relationships, such as those discussed in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/impossible.html It follows that they cannot make, or understand, mathematical discoveries -- as opposed to merely learning to give correct answers to mathematical questions -- and they cannot intelligently use mathematical discoveries in producing or selecting designs for novel machines, although they may happen to stumble across something that works without understanding why it works. (Compare early uses of magnets in direction finding.)
Although statistics-based neural nets lack the required powers, it remains an open question whether such cognitive capabilities can be explained by sub-neural chemical control mechanisms, including relatively recently evolved descendants of ancient chemical assembly mechanisms used for building body parts and nervous systems, including brains.
I presume Schrödinger knew nothing about the roles of formal grammars in linguistic theory (mostly developed later, e.g. by Chomsky in 1956) and the variety of types of formalism that are now found useful in computer system engineering, especially software formalisms. But his emphasis on the importance of stable, persisting structures, and the possible transitions between such structures, anticipates some of those later developments, unlike an emphasis on "flat" state vectors containing only numerical values, assumed in many of the writings in philosophy of physics that I have encountered.
Fundamental and evolved construction kits
Schrödinger's book could be described as an attempt to characterise some key features of the Fundamental Construction Kit (FCK) required to support the many types of Derived Construction Kit (DCK) used in the forms of biological evolution that have occurred on this planet.
Different DCKs are clearly used in the development of organisms of different types, e.g. microbes, fungi, plants, insects, vertebrates, etc. They all make use of strings of molecules to encode information required for production of new individuals (not necessarily identical with the parent), but they differ enormously in the many types of physical structure produced during individual development, and in the many types of control required both during development of an organism and later during the behaviours of the fully formed organism (e.g. climbing, flying, swimming, running, and many varieties of feeding and mating behaviours).
The many physical mechanisms used and the many forms of control involved in using those mechanisms depend on prior use of construction kits to produce those mechanisms, as discussed in http://www.cs.bham.ac.uk/research/projects/cogaff/misc/construction-kits.html
I think that in this book Schrödinger implicitly endorsed the view that flat uniform state-vectors, with mathematical operators linking them, are an inadequate form of representation for the physics of a universe containing life as we know it with so many different physical forms and behaviours, and so many different processes involved in producing new individual organisms. (Some of the ideas may have been anticipated by Gilbert Newton Lewis, mentioned above.)
This is part of the Turing-inspired Meta-Morphogenesis project, originally triggered by an invitation to contribute to the Elsevier Turing memorial volume Alan Turing - His Work and Impact, Eds. S. B. Cooper and J. van Leeuwen, Elsevier, Amsterdam, 2013, ISBN 9780123869807, pp. 97-102.
A partial index of discussion notes is in
Late in 2015, while working on a paper on evolution's use of construction kits of many kinds Sloman(2016) I re-read Erwin Schrödinger's little book, What is life?, CUP, Cambridge, 1944, for the first time for many years, and was surprised to find that he had presented so many of the crucial ideas required to explain the possibility of biological evolution and the possibility of construction kits required for development and reproduction.
There are several problems. First, the second law of thermodynamics implies that isolated systems tend to become increasingly disordered, whereas the opposite is true of individual living things and of life in general, which becomes increasingly complex and ordered as a result of biological evolution. Part of the explanation is very familiar: since neither the earth nor any individual organism on the earth is an isolated physical system, external sources of energy, including solar energy, heat energy from the earth's core, and chemical energy, can counter the tendency to increased entropy for some entities on the planet.
But that does not explain the origins or persistence of the increasingly complex and detailed structures that are required to preserve biological information during the development of an organism, during many interactions with other things in the environment (including some high-impact interactions), and during reproductive processes across many generations.
The basic problem is how the genetic information is preserved both within an individual during the complexities of development and across individuals over generations.
A secondary question is how so much new complexity evolves over time -- which Darwin and Wallace suggested could be explained by a mixture of variation (possibly random variation) of heritable features and natural selection.
We'll first focus on the answer Schrödinger [ES] gives to the basic problem, i.e. answering the question: How is it possible for detailed specifications, encoded in complex molecules, to survive across generations, despite constant thermal buffeting and potentially disruptive influences during development and reproduction, despite the second law of thermodynamics, and despite the fact that the fundamental mechanisms of quantum physics are statistical?
ES provides an answer by pointing out that although quantum theory implies that physical processes essentially involve statistical patterns of change, and are therefore not deterministic, it also implies that there can be structures in stable states because, although they are capable of switching to new states, they are very unlikely to do so unless affected by a sufficiently high-energy impulse. That allows quantum mechanics to support not only indeterminism, but also long-term determinism.
Moreover, if a physical structure is in stable state S1 it may be capable of having another stable state S2, which may be at a higher, lower, or equal energy level compared with the original state.
The states are stable because the transition from one to the other, or from one of them to some other state requires a minimum energy packet to "get over a hump", because all the intermediate states require provision of more energy to the system than the energy in the initial and final states.
Many human-designed mechanisms use this feature, for example, a box with a liftable hinged lid that is stable when shut and also when opened and folded back, like some dustbin lids. In that case gravity provides the force that has to be overcome to move the lid from one stable state to another.
Another familiar example is the commonly used lever and spring mechanism that forces a wall-mounted electric light switch to be in one or other of two stable positions, using the "toggle" design https://en.wikipedia.org/wiki/Light_switch#Toggle.
Many engineering designs depend on multi-stability, including the combination of springs and levers that allow a heavy hinged item, such as a car boot (trunk) lid to be held safely (i.e. stably) in more than one position, i.e. when shut or when fully open. As it moves up or down springs are stretched or released, and when stretched they hold potential energy. When the lid is fully open gravity alone does not suffice to pull it down past the closest maximum energy peak.
In some cases, instead of two or more fixed stable states, a mechanism can allow collections of states, each of which allows free motion within that state (e.g. a ball rolling in a groove), whereas the transition from one state to another (e.g. moving to another groove) requires a significant amount of additional energy.
A simple illustration is a flat tray with grooves and circular hollows in it, and a marble that can move horizontally on the tray, kept on the tray by the earth's gravitational pull.
If the marble falls into one of the hollows or grooves, a small impulse can cause it to move around freely in the hollow or groove, but a much larger impulse is required to allow the marble to jump to one of the other depressed parts of the tray, e.g. making it jump out of the groove, so that it can easily move along a horizontal surface to other hollows or grooves. How long a marble will resist being jolted out of a groove or hollow will depend on how deep the groove or hollow is, and how powerful the jolts are.
More on Ratchets
(Added 9 May 2019)
Ratchet mechanisms used in clocks and watches also illustrate this. There are many variations on the basic idea, with video demonstrations available online, showing how the natural (Newtonian) frequency of a pendulum or spring-loaded oscillator controls the timing of discrete "bits" of rotation of a mechanism driving the clock-hands. Search for "clock+ratchet+video". One of many examples is
Around 1959/60 I saw this kind of multi-stable mechanism used in a lecture in Oxford by Lionel Penrose on life and reproduction. He showed how devices made of bits of wood, springs and hinges, shaken around on a tray, demonstrated some of the features of feeding, growth, and asexual reproduction. He called them "droguli". For more information and ancient videos see this information kindly provided by Rodney Brooks:
More on the importance of ratchet mechanisms for life:
Peter Hoffmann (video lecture), Life's Ratchet: How Molecular Machines Extract Order from Chaos, November 19, 2012,
See also: https://www.microsoft.com/en-us/research/video/lifes-ratchet-how-molecular-machines-extract-order-from-chaos/
Peter M. Hoffmann, 2016, How molecular motors extract order from chaos (a key issues review), Reports on Progress in Physics, Volume 79, Number 3, 10 February 2016, IOP Publishing Ltd.
Note on Deacon
While trying to read Terrence Deacon's 2011 book Incomplete Nature: How Mind Emerged from Matter, I constantly had the impression that he had not understood these points about ratchets and multi-stability, or had perhaps re-invented them in extraordinarily obscure terminology.
Two molecules with the same types of atoms connected differently
Each may be stable in the absence of a disruptive external influence
E.g. the two isomers of propyl alcohol differ only in whether the oxygen atom (the blue "O" in the figure) is bound to the central carbon atom or an end carbon atom. Each state is stable because all their neighbouring states have more energy, so the change to a neighbouring state cannot occur without an external source of energy. If a sufficiently energetic impulse is received it can push the molecule over the energy "hump" and into another stable state. This example is used in section 39 of the book, as the basis of several deep observations relevant to biological evolution.
In Chapter 7, ES discusses additional questions about the increasing complexity and variety of products of evolution and how that can be reconciled with what we know about the physical universe.
My comments from here on will be indented and italicised, as in this section, whereas quotations from the book are not indented and not italicised.
Many detailed technical sections of the text, and all mathematical sections, are omitted, in order to make this easy for a non-expert to read. I have also occasionally inserted paragraph breaks to help the reader.
After drawing attention to some biological phenomena and a background of physical laws, ES summarises puzzling biological phenomena that he wishes to show can be understood in the framework of Quantum mechanics but not previous physical theories (e.g. Newtonian mechanics augmented by statistical mechanics).
By the time the book was published (1944) there was already evidence that biological genetic information was stored and transmitted in extended chemical structures, and it was assumed that parts of those structures could specify particular inherited features. ES emphasises the fact that in some cases of biological inheritance a particular unusual feature, which may be a product of a small portion of the genetic material, can persist across several generations. He takes the "Habsburg lip" as an example. The reliable transfer of a special feature across several reproductive episodes, each involving the development of a whole human from a fertilized egg, cries out for explanation, as would preservation and replication of a triangular shape drawn in sand across Saharan sand dunes.
I think it is fair to say that the latter is impossible. ES tries to show what's special about genetic material that makes reproduction and preservation of detailed structure possible across even more complex disruptive processes than sand-storms. But he also tries to bring out why that is such a remarkable achievement and why it would have been impossible to explain on the basis of pre-quantum physics. For example, life as we know it would not have been possible in a universe composed of Newtonian point masses with mutual gravitational attraction. (I think Newton noticed this limitation of "Newtonian" mechanics, but I am not a Newton-scholar.)
Relevance to brain function
Although this was not discussed by Schrödinger, the huge chemical complexity within each synapse in a brain suggests that neural models of cognition that refer only to changing weights of synaptic connections between neurones and ignore sub-neural chemistry are probably ignoring the most important explanatory mechanisms in brains. A few neural researchers have been making this point, e.g. Grant (2010) and Trettenbrein (2016).
In another document I have tried to show the importance for science of discoveries and explanations of possibilities, as opposed to discoveries and explanations of laws.
In my comments on the implications of this book, I am taking a similar risk. [A.S.]
In later discussions following on from the above remark, ES emphasises the ability of quantum physics to explain the possibility of both....
(a) highly stable enduring structures, required for genetic information to persist (largely) unchanged both throughout the life of each individual from formation of an egg to production of a fully formed adult, and beyond, and also across several generations (although cross generation persistence can in some cases (e.g. sexual reproduction) involve merging of information from different lineages),
(b) discrete, enduring structural changes, such as an oxygen atom swapping places with a hydrogen atom, thereby producing a new molecule, with new physical/chemical properties.
Both changeability and resistance to change are required for continued (Darwinian) evolution of life as we know it, and also for growth of complex highly differentiated parts of individual organisms from a single fertilized cell. Growth of a complex individual organism with multiple parts performing different functions requires highly predictable, controlled changes. However, mechanisms capable of producing rapid partly random changes are required for immune responses (which apparently first evolved about 500 million years ago as a defence against sub-cellular invaders). Evolution seems to have found partial solutions to the problem of balancing these two requirements.
ES uses the 'Habsburg Lip' as an example:

Fixing our attention on the portraits of a member of the family in the sixteenth century and of his descendant, living in the nineteenth, we may safely assume that the material gene structure, responsible for the abnormal feature, has been carried on from generation to generation through the centuries, faithfully reproduced at every one of the not very numerous cell divisions that lie between. Moreover, the number of atoms involved in the responsible gene structure is likely to be of the same order of magnitude as in the cases tested by X-rays. The gene has been kept at a temperature around 98°F during all that time. How are we to understand that it has remained unperturbed by the disordering tendency of the heat motion for centuries?
These extracts from the book indicate why such phenomena are problematic for current theories of physics, including thermodynamics, and chemistry. In all the statistical flux of matter in motion at the temperatures of human bodies, how could something as minute as a molecular fragment specifying some biological feature survive unchanged, even across many generations, despite all the copying required for reproduction and development? An outline answer follows:

33. Explicable by quantum theory
Added 9 May 2019:....
Sometimes when I report Schrödinger's views about the essential role of quantum mechanisms in making possible long term stability of chemical structures to physicists, and point out that there is nothing in Newtonian mechanics that can explain such chemical stability, the objection is made that Newtonian mechanics explained the stability of planetary orbits around the sun or the enduring shape of a galaxy, including resistance to slight perturbations. E.g. the gravitational effects of the moon on the earth don't eject the earth from its solar orbit.
But that response misses the point that chemical structures can have far more complex enduring relationships and combinations of changing and unchanging relationships. For example, Newtonian mechanics, in which the only thing a force can do is produce an acceleration (or deceleration) in the direction of the force, cannot bind particles together to form long thin stable structures, plates, meshed cogwheels, levers, ratchets, etc. Portions of a rope could not be wound around an axle if the atoms of the rope included no chemical bonds, only particles attracting and repelling one another in the direction of a line between them. Complex stable molecules like haemoglobin or chlorophyll could not exist, and life as we know it on Earth would be impossible.
Perhaps surprisingly, even rigid levers, discussed in elementary introductions to Newtonian mechanics, would be impossible.
When I summarise the above point made by Schrödinger regarding the persistence of relatively stable, interacting, structures, it is sometimes objected that planetary systems and galaxies are relatively stable structures whose enduring spatial containment depends only on gravitational attraction, which is part of Newtonian physics, with no need for chemical bonds or other quantum mechanisms to explain persistence of structure.
That reply ignores the fact that (relatively) stable clouds of physical matter held in place by gravitational forces are quite unlike a skeletal structure in which two rigid bones are linked by a flexible joint, where each bone is rigid, but their hinged joint permits non-rigid behaviour of the combination. Many products of human engineering exhibit a similar mixture of stability and flexibility: e.g. in rack and pinion gears (https://en.wikipedia.org/wiki/Rack_and_pinion) and worm gears (https://en.wikipedia.org/wiki/Worm_drive) the mixture of rigidity in the rotating/moving parts and the non-rigid relations between the two parts of the gear structure, allows motion in one direction and one speed to be converted to motion in another direction, at a different speed.
Other features can include magnification of an applied force. A mechanical loom typically applies multiple changing constraints to a process of weaving threads...
Perhaps the "punch-line" is this: if you grasp one end of a rigid rod there are many ways of pulling, pushing, rotating or twisting the rod that will cause the whole rod to move in ways that preserve its overall structure because of the molecular bonds between the atoms, whereas if you could somehow grasp a part of a galaxy and push, pull, or rotate it, there would be perturbations in the neighbourhood of the moved object, caused by gravitational forces, but nothing like the rigid motions of a grasped rod. Newtonian mechanics cannot explain the difference, because it includes nothing about chemical bonds.
Among the discrete set of states of a given selection of atoms there need not necessarily but there may be a lowest level, implying a close approach of the nuclei to each other. Atoms in such a state form a molecule. The point to stress here is, that the molecule will of necessity have a certain stability; the configuration cannot change, unless at least the energy difference, necessary to 'lift' it to the next higher level, is supplied from outside.
38. First amendment
Some of the mathematical details in the book are skipped here. It turns out that there can be two possible states of a molecule with the same or similar energy levels, between which transitions require passing through much higher energy levels -- as in the toggle switch and car boot lid examples above. So either state could be equally stable at a given temperature. In other words, just because two states of a molecule have the same energy it does not follow (in quantum physics) that it is easy to switch the molecule between those two states.
In offering these considerations as a theory of the stability of the molecule it has been tacitly assumed that the quantum jump which we called the 'lift' leads, if not to a complete disintegration, at least to an essentially different configuration of the same atoms -- an isomeric molecule, as the chemist would say, that is, a molecule composed of the same atoms in a different arrangement (in the application to biology it is going to represent a different 'allele' in the same 'locus' and the quantum jump will represent a mutation).
To allow of this interpretation two points must be amended in our story, which I purposely simplified to make it at all intelligible.
From the way I told it, it might be imagined that only in its very lowest state does our group of atoms form what we call a molecule and that already the next higher state is 'something else'. That is not so. Actually the lowest level is followed by a crowded series of levels which do not involve any appreciable change in the configuration as a whole, but only correspond to those small vibrations among the atoms which we have mentioned in §37.
So the first amendment is not very serious: we have to disregard the 'vibrational fine-structure' of the level scheme. The term 'next higher level' has to be understood as meaning the next level that corresponds to a relevant change of configuration.
39. Second amendment
The second amendment is far more difficult to explain, because it is concerned with certain vital, but rather complicated, features of the scheme of relevantly different levels.
free passage between two of them may be obstructed, quite apart
from the required energy supply; in fact, it may be obstructed even
from the higher to the lower state.
It is known to the chemist that the same group of atoms can unite in more than one way to form a molecule. Such molecules are called isomeric ('consisting of the same parts').
Isomerism is not an exception, it is the rule. The larger the molecule, the more isomeric alternatives are offered.
Isomerism is illustrated in the figure Isomers above,
copied from the book. The two molecules have the same constituents, but because
the oxygen atom has different locations in the two molecules the molecules have
very different physical and chemical properties. And neither state can easily be
transformed into the other because the transition between the two states
requires the molecule to pass through intermediate configurations which have
significantly more energy than either of them. ES writes:
The remarkable fact is that both molecules are perfectly stable, both behave as
though they were 'lowest states'. There are no spontaneous transitions from
either state towards the other.
The reason is that the two configurations are not neighbouring configurations.
The transition from one to the other can only take place over intermediate
configurations which have a greater energy than either of them. To put it
crudely, the oxygen has to be extracted from one position and has to be inserted
into the other. There does not seem to be a way of doing that without passing
through configurations of considerably higher energy.
Now we can give our 'second amendment', which is that transitions of this 'isomeric' kind are the only ones in which we shall be interested in our biological application. It was these we had in mind when explaining 'stability' in §§35-37.
Note: this was published several years before the discovery of the "Double Helix" structure of DNA by Watson, Crick and their collaborators. Both, especially Watson, have publicly admitted to having been influenced by this book in their thinking about DNA.

41. The uniqueness of the picture
Quantum mechanics is the first theoretical aspect which accounts
from first principles for all kinds of aggregates of atoms actually
encountered in Nature. The Heitler-London bondage is a unique,
singular feature of the theory, not invented for the purpose of
explaining the chemical bond. It comes in quite by itself, in a
highly interesting and puzzling manner, being forced upon us by
entirely different considerations.
Consequently, we may safely assert that there is no alternative to the molecular explanation of the hereditary substance. The physical aspect leaves no other possibility to account for its permanence.
Compare the points made above about stability and quantum mechanisms.
42. Some traditional misconceptions
This section discusses possible questions and confusions about similarities and differences between solids (crystalline and amorphous), liquids, gases, and which sorts of material can resist change of structure over long periods of time.
43. Different states of matter
Now I would not go so far as to say that all these statements and distinctions are quite wrong. For practical purposes they are sometimes useful. But in the true aspect of the structure of matter the limits must be drawn in an entirely different way. The fundamental distinction is between the two lines of the following scheme of 'equations':
molecule = solid = crystal.
gas = liquid = amorphous.

We must explain these statements briefly. The so-called amorphous solids are either not really amorphous or not really solid. In 'amorphous' charcoal fibre the rudimentary structure of the graphite crystal has been disclosed by X-rays. So charcoal is a solid, but also crystalline. Where we find no crystalline structure we have to regard the thing as a liquid with very high 'viscosity' (internal friction). Such a substance discloses by the absence of a well-defined melting temperature and of a latent heat of melting that it is not a true solid.
.... further details omitted here ....
44. The distinction that really matters
The distinction that is really important in the structure of small matter is whether atoms are bound together by those Heitler-London forces or whether they are not. In a solid and in a molecule they all are. In a gas of single atoms (as e.g. mercury vapour) they are not. In a gas composed of molecules, only the atoms within every molecule are linked in this way.
45. The aperiodic solid
A small molecule might be called 'the germ of a solid'. Starting from such a small solid germ, there seem to be two different ways of building up larger and larger associations. One is the comparatively dull way of repeating the same structure in three directions again and again. That is the way followed in a growing crystal.
The other way is that of building up a more and more extended aggregate without the dull device of repetition. That is the case of the more and more complicated organic molecule in which every atom, and every group of atoms, plays an individual role, not entirely equivalent to that of many others (as is the case in a periodic structure). We might quite properly call that an aperiodic crystal or solid and express our hypothesis by saying: We believe a gene -- or perhaps the whole chromosome fibre -- to be an aperiodic solid.
46. The variety of contents compressed in the miniature code
In the next section ES shows that he understood the requirement for diversity in specifications expressed in the genetic code and points out that this requirement can be met by his proposed molecular encoding mechanism, whose diversity of possible encodings increases exponentially with the length of the code. This was published a few years before Shannon (1948), but seems to have anticipated some of the ideas about requirements for transmission and storage of information vehicles.

It has often been asked how this tiny speck of material, nucleus of the fertilized egg, could contain an elaborate code-script involving all the future development of the organism.

Indeed, the number of atoms in such a structure need not be very large to produce an almost unlimited number of possible arrangements. For illustration, think of the Morse code. The two different signs of dot and dash in well-ordered groups of not more than four allow thirty different specifications. Now, if you allowed yourself the use of a third sign, in addition to dot and dash, and used groups of not more than ten, you could form 88,572 different 'letters'; with five signs and groups up to 25, the number is 372,529,029,846,191,405. ....

What we wish to illustrate is simply that with the molecular picture of the gene it is no longer inconceivable that the miniature code should precisely correspond with a highly complicated and specified plan of development and should somehow contain the means to put it into operation.

The diversity is a consequence of the aperiodicity mentioned in 45 above, which maximises the amount of genetic information that can be encoded in a structure of a given length composed of a sequence of items. Consider a linear structure made up of repetitions of a fixed structure, e.g. 8 repetitions of a four-item sequence such as ABCD:

ABCDABCDABCDABCDABCDABCDABCDABCD

The only way to vary such a structure is to rearrange the order of the four items in the repeated group, which determines everything else. So the number of possible strings of 32 atoms made of 8 repeated groups of 4 atoms is easily seen to be 4*3*2*1 = 24 (i.e. 4!). Moreover the number of possibilities is unchanged even if there are 800, or 8000, repeated groups of 4.

On the other hand, if the fixed repetition requirement is removed and the four items can occur in any order, and not necessarily each with the same frequency, then 32 independent four-way choices are available in assembling each sequence, providing a much larger total number of possible sequences: 4**32 = 18,446,744,073,709,551,616.

This implies that each genetic sequence of atoms, i.e. each genome(?), is a selection from an astronomically large set of possibilities. Of course, the number will be reduced if some sequences are excluded, as, for example, the sequence "TTT" is (somehow) excluded from English words. But the set of English sentences that can be expressed in N words grows rapidly as N grows, even if it is less than full exponential growth, because not all sequences of sentence-components are sentences. (Something similar will be true of genome components.)
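Schrödinger's arithmetic in the Morse-code passage, and the periodic-versus-aperiodic contrast in the comment above, are easy to check mechanically. A small Python sketch (the function name is mine):

```python
from math import factorial

def variations_up_to(signs, max_len):
    """Number of distinct ordered groups of 1..max_len symbols drawn,
    with repetition, from an alphabet of `signs` symbols."""
    return sum(signs ** k for k in range(1, max_len + 1))

# Schrodinger's three Morse-code counts:
assert variations_up_to(2, 4) == 30
assert variations_up_to(3, 10) == 88_572
assert variations_up_to(5, 25) == 372_529_029_846_191_405

# Periodic case: a chain of 8 repetitions of a fixed 4-item group can
# only be varied by permuting the 4 items, however long the chain is:
assert factorial(4) == 24

# Aperiodic case: 32 independent 4-way choices:
assert 4 ** 32 == 18_446_744_073_709_551_616
```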
It was only later that the work of Crick, Franklin and Watson showed that evolution used not an alphabet of individual atoms, but a small "alphabet" of molecules, strung together in aperiodic sequences to specify genomes. It seems that Schrödinger understood the importance of aperiodicity some time before Shannon's work explained it.
Note that he does not say that the portions of code correspond with structural details or processes in the finished product, which would be a naive interpretation of what genetic code does. Corresponding to a "plan of development" could be almost as restrictive if every detail of the development process is specified in the plan. However if the plan includes conditional items (something like conditionals in a programming language) and loops then the detailed relationships between what is in the plan/code and the final product may be very complex and indirect and definitely not an isomorphism.
It seems that Schrödinger already knew by 1944 that biological reproduction did not constitute an "algorithmic" process that would always produce the same result (physically and behaviourally identical developing organisms) from the same genome. (Was that already common knowledge among biologists/biochemists?) See Sloman & Chappell (in progress).
47. Comparison with facts: degree of stability; discontinuity of mutation
Thus the threshold values the chemist encounters are of necessity precisely of the order of magnitude required to account for practically any degree of permanence the biologist may encounter; for we recall from §36 that thresholds varying within a range of about 1:2 will account for lifetimes ranging from a fraction of a second to tens of thousands of years.
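The quoted claim can be illustrated numerically. In §36 Schrödinger takes the expected lifetime of a configuration to be t = τ·e^(W/kT), with τ a molecular vibration period of very roughly 10^-13 to 10^-14 seconds. The sketch below assumes τ = 10^-14 s purely for illustration, and shows that a 1:2 range in the threshold W does indeed span lifetimes from a fraction of a second to tens of thousands of years:

```python
import math

def lifetime_seconds(w_over_kt, tau=1e-14):
    """Expected waiting time t = tau * exp(W/kT) for a thermal
    fluctuation to supply the threshold energy W.
    tau ~ 1e-14 s is an assumed molecular vibration period."""
    return tau * math.exp(w_over_kt)

# Doubling the threshold (W/kT from 30 to 60) takes the lifetime
# from a fraction of a second to tens of thousands of years:
t_low = lifetime_seconds(30)
t_high = lifetime_seconds(60)
years = t_high / (3600 * 24 * 365)
assert t_low < 1.0
assert years > 10_000
```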
These considerations make it conceivable that an isomeric change of configuration in some part of our molecule, produced by a chance fluctuation of the vibrational energy, can actually be a sufficiently rare event to be interpreted as a spontaneous mutation. Thus we account, by the very principles of quantum mechanics, for the most amazing fact about mutations, the fact by which they first attracted de Vries's attention, namely, that they are 'jumping' variations, no intermediate forms occurring.
48. Stability of naturally selected genes
Granted that we have to account for the rare natural mutations by chance fluctuations of the heat motion, we must not be very much astonished that Nature has succeeded in making such a subtle choice of threshold values as is necessary to make mutation rare. For we have, earlier in these lectures, arrived at the conclusion that frequent mutations are detrimental to evolution. Individuals which, by mutation, acquire a gene configuration of insufficient stability, will have little chance of seeing their 'ultra-radical', rapidly mutating descendancy survive long. The species will be freed of them and will thus collect stable genes by natural selection.
49. The sometimes lower stability of mutants
But, of course, as regards the mutants which occur in our breeding
experiments and which we select, qua mutants, for studying their
offspring, there is no reason to expect that they should all show
that very high stability. For they have not yet been 'tried out'
-- or, if they have, they have been 'rejected' in the wild breeds
-- possibly for too high mutability. At any rate, we are not at all
astonished to learn that actually some of these mutants do show a
much higher mutability than the normal 'wild' genes.
In this and the next section ES points out that whereas it is important for the majority of the genetic material to be highly stable, there must be some instability for mutations to occur. Moreover molecular instability can increase if temperature is increased. But if mutant genes are already unstable, a temperature increase should have a smaller effect on them than on more stable non-mutant genes. I've omitted most of the details.
50. Temperature influences unstable genes less than stable ones
The time of expectation is diminished by raising the temperature, the mutability is increased. Now that can be tested and has been tested with the fly Drosophila in the range of temperature which the insects will stand. The result was, at first sight, surprising. The low mutability of wild genes was distinctly increased, but the comparatively high mutability occurring with some of the already mutated genes was not, or at any rate was much less, increased. That is just what we expect on comparing our two formulae.
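The differential effect ES describes follows directly from an Arrhenius-type rate law: if mutability varies as e^(-W/kT), then raising the temperature multiplies the rate by a factor that grows with the threshold W, so the stable 'wild' genes gain proportionally more mutability than the already unstable mutants. A rough Python illustration (the temperatures and threshold values are made-up, chosen only to show the shape of the effect):

```python
import math

def rate_increase(w_over_kt1, t1=298.0, t2=308.0):
    """Factor by which an Arrhenius-type rate exp(-W/kT) grows when
    the temperature rises from t1 to t2 (kelvin). w_over_kt1 is the
    energy threshold measured in units of kT at t1 (illustrative)."""
    return math.exp(w_over_kt1 * (1.0 - t1 / t2))

stable = rate_increase(60)   # high threshold: stable 'wild' gene
mutant = rate_increase(30)   # lower threshold: already-mutable gene
# The same warming boosts the stable gene's mutability far more:
assert stable > mutant > 1.0
```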
51. How x-rays produce mutation
The predicted effects of Xrays are different from predictions for temperature increases. The effects of Xrays on the molecules they affect are more "explosive" (via production of ionised particles) and might be expected to affect normal and mutant genes in similar ways. Predicted effects are observed, helping to support the theory being presented. Details are omitted here.
52. Their efficiency does not depend on spontaneous mutability
53. Reversible mutations
Some mutations are reversible. One might expect the energy for the original mutation and for the reverse mutation to be the same. But not if the mutated molecule and the original molecule have different energy levels, with a high energy barrier separating them. In that case the mutation from the higher energy molecule to the lower energy molecule might occur more frequently than the reverse mutation, since the reverse change requires a "bigger kick" to get over the hump. (My paraphrase.) Observed differences in rates of mutation in opposite directions are consistent with this theory.
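The asymmetry described in the paraphrase above can be sketched with an Arrhenius-style rate e^(-barrier/kT): if the mutant configuration sits delta higher in energy than the original, the barrier back is lower by delta, so back-mutations are e^(delta/kT) times more frequent. A toy Python check (all the numbers are illustrative, not from the book):

```python
import math

def rate(barrier_over_kt):
    """Relative Arrhenius rate for hopping over an energy barrier."""
    return math.exp(-barrier_over_kt)

w = 40.0      # barrier seen from the lower-energy ('wild') state, in kT
delta = 5.0   # energy difference between the two stable states, in kT

forward = rate(w)            # wild  -> mutant (needs the bigger kick)
backward = rate(w - delta)   # mutant -> wild  (smaller barrier back)

# Back-mutation is exp(delta/kT) times more frequent:
assert backward > forward
assert math.isclose(backward / forward, math.exp(delta))
```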
Very well then, but how does it do this? How are we going to turn 'conceivability' into true understanding? Delbrück's molecular model, in its complete generality, seems to contain no hint as to how the hereditary substance works. Indeed, I do not expect that any detailed information on this question is likely to come from physics in the near future. The advance is proceeding and will, I am sure, continue to do so, from biochemistry under the guidance of physiology and genetics.
No detailed information about the functioning of the genetical mechanism can emerge from a description of its structure so general as has been given above. That is obvious. But, strangely enough, there is just one general conclusion to be obtained from it, and that, I confess, was my only motive for writing this book. From Delbrück's general picture of the hereditary substance it emerges that living matter, while not eluding the 'laws of physics' as established up to date, is likely to involve 'other laws of physics' hitherto unknown, which, however, once they have been revealed, will form just as integral a part of this science as the former.
55. Order based on order
This is a rather subtle line of thought, open to misconception in more than one respect. All the remaining pages are concerned with making it clear. A preliminary insight, rough but not altogether erroneous, may be found in the following considerations:
It has been explained in chapter 1 that the laws of physics, as we know them, are statistical laws. They have a lot to do with the natural tendency of things to go over into disorder.
But, to reconcile the high durability of the hereditary substance with its minute size, we had to evade the tendency to disorder by 'inventing the molecule', in fact, an unusually large molecule which has to be a masterpiece of highly differentiated order, safeguarded by the conjuring rod of quantum theory.
The laws of chance are not invalidated by this 'invention', but their outcome is modified. The physicist is familiar with the fact that the classical laws of physics are modified by quantum theory, especially at low temperature.
There are many instances of this. Life seems to be one of them, a particularly striking one. Life seems to be orderly and lawful behaviour of matter, not based exclusively on its tendency to go over from order to disorder, but based partly on existing order that is kept up.
To the physicist -- but only to him -- I could hope to make my view clearer by saying: The living organism seems to be a macroscopic system which in part of its behaviour approaches to that purely mechanical (as contrasted with thermodynamical) conduct to which all systems tend, as the temperature approaches absolute zero and the molecular disorder is removed.
The non-physicist finds it hard to believe that really the ordinary laws of physics, which he regards as the prototype of inviolable precision, should be based on the statistical tendency of matter to go over into disorder. I have given examples in Chapter 1. The general principle involved is the famous Second Law of Thermodynamics (entropy principle) and its equally famous statistical foundation.
In §§56-60 I will try to sketch the bearing of the entropy principle on the large-scale behaviour of a living organism -- forgetting at the moment all that is known about chromosomes, inheritance, and so on.
56. Living matter evades the decay to equilibrium
What is the characteristic feature of life? When is a piece of matter said to be alive? When it goes on 'doing something', moving, exchanging material with its environment, and so forth, and that for a much longer period than we would expect an inanimate piece of matter to 'keep going' under similar circumstances. When a system that is not alive is isolated or placed in a uniform environment, all motion usually comes to a standstill very soon as a result of various kinds of friction; differences of electric or chemical potential are equalized, substances which tend to form a chemical compound do so, temperature becomes uniform by heat conduction.
After that the whole system fades away into a dead, inert lump of matter. A
permanent state is reached, in which no observable events occur. The physicist
calls this the state of thermodynamical equilibrium, or of 'maximum entropy'.
Practically, a state of this kind is usually reached very rapidly.
These ultimate slow approaches to equilibrium could never be mistaken for life, and we may disregard them here. I have referred to them in order to clear myself of a charge of inaccuracy.
57. It feeds on 'negative entropy'
It is by avoiding the rapid decay into the inert state of 'equilibrium' that an organism appears so enigmatic; so much so, that from the earliest times of human thought some special non-physical or supernatural force (vis viva, entelechy) was claimed to be operative in the organism, and in some quarters is still claimed. How does the living organism avoid decay? The obvious answer is: By eating, drinking, breathing and (in the case of plants) assimilating. The technical term is metabolism.
For a while in the past our curiosity was silenced by being told that we feed upon energy.
Needless to say, taken literally, this is just as absurd. For an adult organism the energy content is as stationary as the material content.
What then is that precious something contained in our food which keeps us from death? That is easily answered. Every process, event, happening -- call it what you will; in a word, everything that is going on in Nature means an increase of the entropy of the part of the world where it is going on. Thus a living organism continually increases its entropy -- or, as you may say, produces positive entropy -- and thus tends to approach the dangerous state of maximum entropy, which is death. It can only keep aloof from it, i.e. alive, by continually drawing from its environment negative entropy -- which is something very positive as we shall immediately see. What an organism feeds upon is negative entropy. Or, to put it less paradoxically, the essential thing in metabolism is that the organism succeeds in freeing itself from all the entropy it cannot help producing while alive.
58. What is entropy?
Let me first emphasize that it is not a hazy concept or idea, but a measurable physical quantity just like the length of a rod, the temperature at any point of a body, the heat of fusion of a given crystal or the specific heat of any given substance.
59. The statistical meaning of entropy
I have mentioned this technical definition simply in order to remove entropy from the atmosphere of hazy mystery that frequently veils it. Much more important for us here is the bearing on the statistical concept of order and disorder, a connection that was revealed by the investigations of Boltzmann and Gibbs in statistical physics.
An isolated system or a system in a uniform environment (which for the present consideration we do best to include as a part of the system we contemplate) increases its entropy and more or less rapidly approaches the inert state of maximum entropy. We now recognize this fundamental law of physics to be just the natural tendency of things to approach the chaotic state (the same tendency that the books of a library or the piles of papers and manuscripts on a writing desk display) unless we obviate it. (The analogue of irregular heat motion, in this case, is our handling those objects now and again without troubling to put them back in their proper places.)
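Boltzmann's and Gibbs's statistical notion of entropy can be illustrated with a toy calculation (a sketch added here for illustration, not from the book; the probability distributions are invented): the more evenly probability is spread over the possible microstates, the higher the entropy, with the uniform ("maximally disordered") distribution at the top.

```python
import math

def gibbs_entropy(probs, k=1.0):
    """Gibbs entropy S = -k * sum(p * ln p) of a discrete distribution."""
    return -k * sum(p * math.log(p) for p in probs if p > 0)

# A sharply "ordered" state, with one microstate nearly certain, has low entropy...
ordered = [0.97, 0.01, 0.01, 0.01]
# ...while the uniform distribution over the same four states has the maximum.
uniform = [0.25, 0.25, 0.25, 0.25]

print(gibbs_entropy(ordered))   # small
print(gibbs_entropy(uniform))   # ln 4, about 1.386, the maximum for 4 states
```

The constant k is Boltzmann's constant in physical units; it is set to 1 here so the numbers are dimensionless.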
60. Organization maintained by extracting 'order' from the environment
How would we express in terms of the statistical theory the marvellous faculty of a living organism, by which it delays the decay into thermodynamical equilibrium (death)? We said before: 'It feeds upon negative entropy', attracting, as it were, a stream of negative entropy upon itself, to compensate the entropy increase it produces by living and thus to maintain itself on a stationary and fairly low entropy level.
Thus the device by which an organism maintains itself stationary at a fairly high level of orderliness ( = fairly low level of entropy) really consists in continually sucking orderliness from its environment.
This conclusion is less paradoxical than it appears at first sight. Rather could it be blamed for triviality. Indeed, in the case of higher animals we know the kind of orderliness they feed upon well enough, viz. the extremely well-ordered state of matter in more or less complicated organic compounds, which serve them as foodstuffs. After utilizing it they return it in a very much degraded form -- not entirely degraded, however, for plants can still make use of it. (These, of course, have their most powerful supply of 'negative entropy' in the sunlight.)
Sections 61--69 omitted
But F. Simon has very pertinently pointed out to me that my simple thermodynamical considerations cannot account for our having to feed on matter "in the extremely well ordered state of more or less complicated organic compounds" rather than on charcoal or diamond pulp. He is right. But to the lay reader I must explain, that a piece of un-burnt coal or diamond, together with the amount of oxygen needed for its combustion, is also in an extremely well ordered state, as the physicist understands it. Witness to this: if you allow the reaction, the burning of the coal, to take place, a great amount of heat is produced. By giving it off to the surroundings, the system disposes of the very considerable entropy increase entailed by the reaction, and reaches a state in which it has, in point of fact, roughly the same entropy as before.
Yet we could not feed on the carbon dioxide that results from the reaction. And so Simon is quite right in pointing out to me, as he did, that actually the energy content of our food does matter; so my mocking at the menu cards that indicate it was out of place. Energy is needed to replace not only the mechanical energy of our bodily exertions, but also the heat we continually give off to the environment. And that we give off heat is not accidental, but essential. For this is precisely the manner in which we dispose of the surplus entropy we continually produce in our physical life process.
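Schrödinger's point that giving off heat is the way we dispose of surplus entropy can be put in one line of thermodynamic bookkeeping: heat leaving at power P, at temperature T, carries entropy away at rate P/T. A minimal sketch (added here; the figures are assumed round numbers, not physiological data):

```python
def entropy_export_rate(power_watts, temp_kelvin):
    """Entropy carried away by a heat flow: dS/dt = P / T, in J/(K*s)."""
    return power_watts / temp_kelvin

# Assumed figures: roughly 100 W of resting metabolic heat,
# given off at a body temperature of about 310 K.
rate = entropy_export_rate(100.0, 310.0)
print(round(rate, 4))  # 0.3226 J/(K*s) shed to the surroundings
```

The same formula also shows why a higher body temperature alone does not help here: for a fixed heat output P, a larger T in the denominator means *less* entropy exported per joule, which is part of why Schrödinger hedges the argument that follows.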
This seems to suggest that the higher temperature of the warm-blooded animal includes the advantage of enabling it to get rid of its entropy at a quicker rate, so that it can afford a more intense life process. I am not sure how much truth there is in this argument (for which I am responsible, not Simon). One may hold against it, that on the other hand many warm-blooders are protected against the rapid loss of heat by coats of fur or feathers. So the parallelism between body temperature and "intensity of life", which I believe to exist, may have to be accounted for more directly by van 't Hoff's law, mentioned at the end of Sect. 50[*]: the higher temperature itself speeds up the chemical reactions involved in living. (That it actually does, has been confirmed experimentally in species which take the temperature of their surroundings.)
[*] Not yet included in this online document.
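Van 't Hoff's observation that warming speeds up chemical reactions is nowadays usually written in Arrhenius form, k = A * exp(-Ea / (R * T)). A small sketch (added here; the activation energy is an assumed, merely typical biochemical value, not taken from the text) showing why a 10 K rise roughly doubles reaction rates:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(A, Ea, T):
    """Arrhenius rate constant k = A * exp(-Ea / (R * T))."""
    return A * math.exp(-Ea / (R * T))

# Assumed activation energy of 50 kJ/mol, a typical order of magnitude
# for biochemical reactions; A = 1 since only the ratio matters below.
Ea = 50_000.0
k_290 = arrhenius_rate(1.0, Ea, 290.0)
k_300 = arrhenius_rate(1.0, Ea, 300.0)
print(k_300 / k_290)  # roughly 2: the rate about doubles per 10 K rise
```

This "doubling per 10 degrees" rule of thumb is exactly the kind of temperature sensitivity Schrödinger appeals to in accounting for the parallelism between body temperature and intensity of life.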
What I wish to make clear in this last chapter is, in short, that from all we have learnt about the structure of living matter, we must be prepared to find it working in a manner that cannot be reduced to the ordinary laws of physics. And that not on the ground that there is any "new force" or what not, directing the behaviour of the single atoms within a living organism, but because the construction is different from anything we have yet tested in the physical laboratory.
To put it crudely, an engineer, familiar with heat engines only, will, after inspecting the construction of an electric motor, be prepared to find it working along principles which he does not yet understand. He finds the copper familiar to him in kettles used here in the form of long, long wires wound in coils; the iron familiar to him in levers and bars and steam cylinders here filling the interior of those coils of copper wire. He will be convinced that it is the same copper and the same iron, subject to the same laws of Nature, and he is right in that. The difference in construction is enough to prepare him for an entirely different way of functioning. He will not suspect that an electric motor is driven by a ghost because it is set spinning by the turn of a switch, without boiler and steam. .......
NOTE: ES wrote this before the development, later in the 20th century, of computers running complex interacting virtual machines, whose construction could rightly be said to be different from anything physicists and engineers had previously built or tested in physical laboratories, and whose properties and behaviours cannot be described adequately in the language of the physical sciences, a point that is elaborated in a separate document: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html. I suspect that if ES had been able somehow to spend a week or a month talking to sophisticated AI researchers and software engineers in the 21st century, about the variety of types of virtual machinery that can run and interact on a physical platform (or collection of connected physical platforms), he might well have said: "Yes, that's the sort of thing I was struggling to identify in 1944". On the other hand, there remain deep gaps between the spatial competences of current robots and the spatial competences of many intelligent animals, including crows, squirrels, octopuses, elephants, and pre-verbal human toddlers, e.g. as demonstrated in:
Virtual Machine Functionalism (VMF) -- The only form of functionalism worth taking seriously in Philosophy of Mind and theories of Consciousness.
A. Sloman and R.L. Chrisley, (2003), Virtual machines and consciousness, Journal of Consciousness Studies, 10, 4-5, pp. 113--172,
Moreover, despite the general belief that computers are very good at mathematics, it is not the case that AI systems show any ability to make the types of mathematical discovery in geometry and topology made by ancient mathematicians, including discoveries that go beyond Euclidean geometry, such as the discovery of the neusis construction, which makes it easy to trisect an arbitrary angle, despite its impossibility using only Euclid's constructions. See
To be expanded, showing how something with the apparent regularity and precision of clockwork mechanisms can continue operating for long periods of time in accordance with principles of Quantum mechanics but not in accordance with the kinds of reliable regularities found in statistical mechanics arising out of numerosity of individuals.

Reminder: This is an incomplete document.
So, despite QM being famous for its "uncertainty principle" and for replacing the determinism of Newtonian mechanics with pervasive non-determinism, it is only QM, not Newtonian mechanics, that can explain the kind of persistence and replication of structure that is required for the existence of living things in all their many forms, including the ability to absorb, store, and use "negative entropy" either extracted from solar radiation (using photosynthesis) or by consuming and digesting parts of other organisms that have acquired such stores, or, more importantly, have acquired re-usable molecular structures that can be assembled in new ways to provide new useful structures. That's obvious when we deliberately re-use skin, bones, flesh, fur, or wool of dead animals to create something neither we nor those animals could have created alone from edible matter. What's much less obvious is internal re-use of molecular (sub-)structures extracted during digestion of dead plants and animals, or in the case of parasites, extracted from living hosts. Evolution discovered the usefulness of lazy theft long before we did.
Schrödinger's little book provides a profound example of the importance for science of theories that attempt to answer the (Kantian) question "How is X possible?" Sloman (2014).
Note added 6 Mar 2016
It seems that recent work by Jeremy England, referenced below, can be seen as extending the ideas in What is life? by using Quantum theory to explain how it is possible for some important precursors of life to come into existence on a lifeless planet. (Thanks to Aviv Keren for drawing my attention to this.) Some of the structures that might spontaneously form could be building blocks not only for some of the earliest forms of life (as described by Ganti (1971/2003)) but possibly also for some of the "construction-kits" and forms of scaffolding required for biological evolution. See