This draft has been superseded by the longer paper with the same title for the KR96 Conference, available as

DRAFT (MAY 1996)
COMMENTS WELCOME

Aaron Sloman
School of Computer Science
The University of Birmingham
Birmingham, B15 2TT, England
A.Sloman@cs.bham.ac.uk Phone: +44-121-414-4775


Keywords: possibilities, percepts, beliefs, representation, causation, virtual machines

Modal logic is concerned with operators that can be applied to sentences to produce new sentences, e.g. ``It is possible that P'', ``It is necessarily the case that P''. However, in ordinary language, the adjectives ``possible'' and ``impossible'' can be applied directly to objects, events or processes, and not only to propositions. This might be construed as an abbreviation for an assertion in which a modal operator is applied to a complete proposition (de-dicto modality). This position paper explores the alternative hypothesis that there is a more basic notion of possibility than possible truth or falsity of a proposition, namely a property of objects, events or processes (de-re modality). I am making both an ontological claim about the nature of reality, and an epistemological claim about our information processing capabilities.

I conjecture that de-re modality plays an important role in our perception, thinking and communication, and will have to figure in the internal processes of intelligent robots, and that it is used by the brains of animals that cannot construct and manipulate complete propositions.

Such concepts might have significant practical application in the study of natural and artificial visual systems, which have traditionally been thought of as producing information about structure, motion, and spatial relations of objects in the environment. Gibson claimed that the primary function of biological perceptual systems is to provide information about positive and negative affordances, such as support, obstruction, graspability, etc. I don't think he meant that animal visual systems produce propositions and apply modal operators to them. Is there a different form of representation which captures, in a more powerful fashion, information that enables the animal to interact fruitfully with the environment?

The paper explores these ideas, links them to notions of ``causation'' and ``machine'', suggests that they are applicable to virtual or abstract machines as well as physical machines, and points out some implications regarding the nature of mind and consciousness.

For some closely related thoughts about indeterminism in classical physics see http://www.cs.bham.ac.uk/~axs/misc/indeterminism

Introduction: physical properties linking possibilities

What does it mean to say that a piece of wire has electrical resistance R ohms?

The notion of resistance can be understood without knowing about deep explanations of electrical phenomena. It suffices to know that some objects are able to conduct electricity, that flow of an electric current can be produced by applying a voltage across the conductor, and that both voltage and current can be measured. More precisely, talking about resistance of a conductor presupposes that there is a range of possible voltages across the conductor, a range of possible currents in the conductor, and that the conductor has a property that limits the possibilities so that the ratio between voltage and current is constant.

The piece of wire can be seen as a transducer from one range of possibilities (possible voltages) to another range of possibilities (possible currents) with the ability to constrain the relationship between inputs (voltages) and outputs (currents). That ability is its electrical resistance. Thus construed, it is a fairly abstract property and such abstract properties typically are implemented in other properties. Some time after learning about resistance, physicists learnt more about the underlying mechanisms and properties in terms of which resistance (and the ability to conduct electricity) is implemented.
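The transducer view can be given a toy computational reading. The following Python sketch is entirely my own illustration, not anything from the original paper; the class and its names are invented:

```python
class Resistor:
    """A possibility transducer: maps each possible voltage (an input
    possibility) to the unique possible current (an output possibility)
    permitted by Ohm's law, V = I * R."""

    def __init__(self, resistance_ohms):
        self.r = resistance_ohms

    def current(self, voltage):
        """Select the output possibility linked to this input possibility."""
        return voltage / self.r

wire = Resistor(resistance_ohms=10.0)
print(wire.current(5.0))   # 0.5 amps: the only current the wire permits at 5 V
```

The point of the sketch is that the resistance appears nowhere as a categorical state of the wire; it exists only as the constraint relating the two ranges of possibilities.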

I don't want to get into philosophical discussions about whether the resistance IS those other things or not. Certainly physicists in the past knew about resistance as characterised here without knowing about electrons, quantum mechanics, etc. So there is at least an epistemological distinction, even if there is no ontological difference. There are probably still many engineers who know nothing about the underlying physics, and manage with concepts of properties defined solely in terms of ranges of possibilities and constraints linking those ranges of possibilities, like the constraint expressed in the equation V = IR. That sort of knowledge is often more directly relevant to engineering problems than knowledge about the underlying physical implementation.

Other physical properties

The relationship between resistance and sets of possibilities is just one among many examples of properties linking sets of possibilities. Our conducting piece of wire also has a modulus of elasticity, a physical property associated with two ranges of possibilities, namely possible tensile forces that can be applied to the ends of the wire and possible changes in length of the wire. As with resistance, the wire links particular possibilities in the first range with particular possibilities in the second range, and within a sub-range the ratio between the measures of the input possibility and the output possibility is fixed. Beyond that sub-range inelastic deformation occurs.
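The elastic case can be sketched in the same style as the resistance example. This Python fragment is my own illustration; the stiffness and elastic-limit numbers are arbitrary assumptions, chosen only to exhibit the fixed-ratio sub-range:

```python
def extension(force_newtons, stiffness=200.0, elastic_limit=50.0):
    """Map a possible tensile force to the possible extension it is
    linked with. Within the elastic sub-range the ratio is fixed
    (Hooke's law); beyond it the fixed-ratio link no longer holds."""
    if force_newtons <= elastic_limit:
        return force_newtons / stiffness   # metres, linear regime
    raise ValueError("beyond the elastic sub-range: inelastic deformation")

print(extension(20.0))   # 0.1 m: the one extension linked with a 20 N pull
```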

In some conductors changes in temperature produce changes in resistance. In other words we have a range of possibilities linked to another range of possibilities, each of whose instances in turn links two ranges of possibilities in a particular way. So some physical properties are second-order possibility transducers. The extent to which temperature changes elasticity is another second-order link.
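The second-order idea can be made vivid in code: a temperature does not select a current, it selects a whole first-order transducer. Again this is my own sketch; the coefficient values are invented, using the standard linear approximation R(T) = R0 * (1 + alpha * (T - T0)):

```python
def resistor_at(temperature_c, r0=10.0, alpha=0.004, t0=20.0):
    """Second-order possibility transducer: each possible temperature is
    linked to an entire voltage->current mapping (a first-order
    transducer), not to any single current."""
    r = r0 * (1 + alpha * (temperature_c - t0))
    def current(voltage):
        return voltage / r
    return current

cold = resistor_at(20.0)   # at the reference temperature, resistance is r0
print(cold(5.0))           # 0.5
```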

Combining linked sets of possibilities

Each piece of wire has a collection of properties, each linking sets of possibilities. But they are separate sets: possible currents in this piece of wire (W1) are different things from possible currents in that wire (W2). Normally, the properties of W1 do not constrain the possibilities associated with W2, only the possibilities associated with W1.

New links between sets of possibilities can be created by combining physical objects into larger structures. For instance if wires W1 and W2 are joined in series their currents are forced to be the same (at least in standard conditions). Similarly if their ends are joined in parallel, the voltages across them are constrained to be the same.
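The series and parallel cases can be expressed as operations that build a new possibility transducer out of two old ones. A minimal Python sketch of my own, using the standard combination formulas:

```python
def series(r1, r2):
    """Joining wires end to end forces their currents to coincide; the
    combined configuration behaves like one resistance r1 + r2."""
    return r1 + r2

def parallel(r1, r2):
    """Joining both pairs of ends forces the voltages to coincide; the
    combined resistance satisfies 1/R = 1/r1 + 1/r2."""
    return 1.0 / (1.0 / r1 + 1.0 / r2)

print(series(10.0, 40.0))    # 50.0
print(parallel(10.0, 40.0))  # 8.0
```

The design choice worth noticing is that each function returns an object of the same kind as its inputs, so combinations can themselves be combined, mirroring the way larger configurations are assembled from smaller ones.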

Many physical mechanisms, including both mechanical devices and electronic machines, digital and analog, consist of collections of physical objects each of which has properties that associate different ranges of possibilities and limit combinations of possibilities from those ranges. We have discovered how to make the ``output possibilities'' of one physical object the ``input possibilities'' for another, and to construct networks of such possibility transducers in such a way that the complete configuration does something useful, which might not happen naturally. That is the task of a designer, and we have many very sophisticated designers, assembling fragments of various kinds into larger structures, including cars, gas turbines, aeroplanes, houses, football teams, commercial organisations and software systems. Walls and doors of houses link and limit very complex networks of possibilities, including transmission of sounds, of heat, of people, of furniture, and also constrain motion of pictures, wallpaper, etc. They also link and limit sets of possible psychological states of occupants.

In an old fashioned clock all the possibilities involve clearly visible state changes, such as rotation of wheels, downward movements of weights, oscillation of pendulums and ratchets, etc. In an electronic clock with a digital display the collections of interlinked possibilities are of a very different kind, and the laws associated with the various possibility transducers are very different, involving discrete rather than continuous changes, for example. But the general principle is the same in both cases: a complex machine is created from simple components by linking them in such a way that the associated possibilities are also linked, and thus we get large scale constrained behaviour that is useful to us, and emergent capabilities such as telling the time and providing shelter.

Collapsing sets of possibilities in classical physics?

From this viewpoint even the engines that use no principles of quantum mechanics involve linked networks of possibilities. Any particular state of such an engine, or any particular extended piece of behaviour over a time interval will be merely one among many possibilities that are all inherent in the design. All those possible voltages, currents, rotations, velocities, forces, etc. are real possibilities in the sense that the configuration of the machine allows them to occur, while other combinations of possibilities occurring in particular spatio-temporal relations are ruled out by the configuration.

In short: given a particular physical configuration, some combinations of the possibilities associated with components are permitted and others ruled out, at least while the configuration is preserved. Breaking the machine in some way, for example, destroys the configuration and allows some new possibilities to come into existence, and perhaps removes others.

More and less remote possibilities

We can put this a little more precisely. In the normal undamaged configuration both sets of possibilities exist. However some sets are ``more remote'' than others: they cannot be realised without a change in the configuration, whereas the ``less remote'' possibilities can all be actualised without changing the configuration. A less remote possibility might include a lever changing its orientation. A more remote possibility might include a large increase in the distance between the ends of the lever, which can occur only if the lever breaks.

The notion of an undamaged configuration needs to be made more precise. In the case of a machine there will be a set of possible states that will be described as undamaged states of the machine and others that are described as damaged states, the difference being related to the intended use of the machine. For now I wish to disregard such purposive notions and consider only classes of states that are identifiable by their physical properties. Thus, for example, instead of talking about damage we can describe a configuration as preserved as long as the components continue to have certain properties and stand in certain relationships. Thus, for a clock, rotation of the hands and the cogwheels to which they are connected preserves a certain sort of configuration, whereas removal of a tooth from a cog does not.

Using the same physical components we might clamp two parts of the clock together and define a new configuration as one that included all the previous relationships and also the clamping relationship. The new configuration will directly support a smaller range of possibilities. Some of the possibilities that were directly supported by the previous configuration are remote relative to the new configuration. However the restriction may also enable new possibilities, such as the unattended clock remaining wound up for a long time.

A configuration then is defined not just by its physical components, but by a particular set of properties and relationships between those components (e.g. those in which all components of the clock retain their shape, their pivot points, the distances between pivot points, etc.).

I shall say that the configuration directly supports the collection of possibilities that require no change in the configuration (e.g. no change of shape of the rigid components) and indirectly supports larger classes of possibilities, which would require more or less drastic changes in the configuration. If we define a technical notion of ``damage'' to a configuration as removal of one of its defining relationships, then we can say that achieving one of the more remote possibilities requires damage to the configuration.
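The distinction between directly supported and more remote possibilities admits a toy computational reading. The following Python sketch is entirely my own illustration (the state space and constraints are arbitrary); it treats a configuration as a set of defining constraints, and ``damage'' as removal of one of them:

```python
def directly_supported(states, constraints):
    """The possibilities a configuration directly supports are the
    states satisfying all of its defining constraints."""
    return [s for s in states if all(c(s) for c in constraints)]

states = range(10)                               # toy state space
intact = [lambda s: s < 8, lambda s: s % 2 == 0]  # defining relationships
damaged = intact[:1]                              # one relationship removed

print(directly_supported(states, intact))    # [0, 2, 4, 6]
print(directly_supported(states, damaged))   # [0, 1, ..., 7]: remoter possibilities now supported
```

As in the text, damage enlarges one set of possibilities; removing a constraint can never shrink the set, though in a richer model (where constraints enable as well as restrict) it could.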

What sorts of configurations are worth considering as having ``defining'' properties is a large question I shall not consider. Living organisms and machines designed by intelligent agents would be obvious examples. Others might be galaxies, the solar system, stable complex molecules, tornados, clouds and other naturally occurring complex behaving structures with some degree of integration or coherence. For now I shall merely assume there are some interesting cases without attempting to delimit the whole class.

Possible world semantics

Readers familiar with possible world semantics for modal operators involving notions of degrees of ``accessibility'' between possible worlds will see obvious links with the notion of more or less remotely supported sets of possibilities. However, it should be clear that there is no simple ordering associated with a degree of remoteness of possibilities supported by a configuration.

For example the sets of possibilities that become accessible when one of the levers is broken, when some of the teeth are broken off a cog wheel, or when a belt is removed from a pulley, need not form a set ordered by inclusion. If we consider different combinations of kinds of damage or other change to the configuration, we get a very wide variety of sets of possibilities with at most a partial ordering in degree of remoteness from the set of possibilities directly supported by the original configuration. If there are different sequences of kinds of damage leading to the same state there need not even be a well defined partial ordering. In one sequence getting to state A requires more changes than getting to state B. In another sequence it could be the other way round.

Thus there need not be any well-defined metric that can be applied to notions of relative degree of remoteness of possibilities, and these ideas will not correspond exactly to a possible world semantics which requires degrees of accessibility to be totally or partially ordered. (Where there is no such requirement the two views of modality may turn out to be equivalent.)

There is a deeper problem about sticking to modal logic and current semantic theories, namely that we have no reason to presuppose that an adequate ontology for science or engineering would have the form of a model for a logical system, with a well defined set of objects, properties and relationships, so that all configurations can be completely described in the language of predicate calculus, and all changes can be described in terms of successions of logical formulae. For example the wings of a hovering humming bird or a gliding seagull are very complex flexible structures whose shape is constantly changing in such a way as to change the transduction relationships between various possible forces (gravity, wind pressure, air resistance, etc.) and possible changes of location, orientation and speed. There is no reason to believe there is any unique decomposition of the wings into a fixed set of objects whose properties and relationships can be described completely in predicate calculus (augmented if necessary with partial differential equations linking properties, etc.).

Similarly, when a child learns how to dress itself I have no reason to believe that its understanding of the processes of putting on a sweater or tying shoe laces can be expressed in terms of some articulation of reality into a form that is expressible in terms of names of individual objects, predicates, relations and functions. It is a completely open question whether some totally different form of representation is still waiting to be discovered that is required for such tasks (Sloman 1989). Maybe the neural nets in animal brains still have a great deal to teach us about powerful forms of representation.

Which collection of possibilities is actualised?

Consider again the directly reachable set of possible states and processes corresponding to an ``undamaged'' machine. How does physical reality make a selection from that set, in any concrete situation, i.e. at a given time and place? What determines which of the many possibilities will be realised in that region of space-time?

The answer to this involves the notion of the environment within which the configuration is embedded.

For a given region of space time the environment will include both the prior state of the machine (its state at the start of the time-interval) and also the states of many other things throughout the duration of the time interval and throughout some spatial region enclosing the machine.

Thus, which among the possible positions of the hands actually exist on the face of a clock at various moments between 8 and 9 am on Thursday morning can depend on the state of the clock at 8 am and the behaviour of other things during that time interval, e.g. whether the weights are fully descended at the start, whether anyone comes and winds up the weights, whether someone or something physically moves the hands of the clock, whether a very strong magnetic field is applied to the pendulum, whether the clock is transported to outer space, where gravitational fields are much weaker, whether a tilting force is applied to it, and so on.

It is common to refer to these ``external'' influences as ``boundary conditions''. Just as the physical relationships between components within the machine link sets of possibilities for those components, so do physical relationships between parts of the machine and parts of the environment produce links between sets of possibilities outside and inside the configuration. (The words ``inside'' and ``outside'' are here used metaphorically. Treacle poured into the mechanism of the clock will be outside the configuration, though spatially inside the clock.)

Thus boundary conditions of a configuration play a role in selecting among the set of possible events and processes supported by that configuration. However, the influence can be two-way: feedback loops can occur both within a configuration and also between portions of a configuration and portions of its environment.
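The two-way influence between a configuration and its environment can be sketched with a toy feedback loop. This Python fragment is my own illustration with invented numbers (a crude thermostat, not any mechanism discussed in the paper):

```python
def step(room_temp, setpoint=20.0):
    """One tick: the boundary condition (room temperature) selects the
    internal possibility (heater on or off), which in turn changes the
    environment that will select the next internal possibility."""
    heater_on = room_temp < setpoint           # environment -> configuration
    room_temp += 0.5 if heater_on else -0.3    # configuration -> environment
    return room_temp, heater_on

temp = 18.0
for _ in range(5):
    temp, on = step(temp)
print(round(temp, 1), on)   # the loop settles near the setpoint
```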

Non-deterministic linkages

The discussion so far has been concerned only with fully deterministic systems, but there is no difficulty in generalising it to accommodate systems which are partly non-deterministic. That is, there may be some components, or configurations, for which input possibilities do not uniquely determine output possibilities. For example, there could be a type of conductor whose internal processes cause frequent random fluctuations to the current flowing through it even while the voltage across it is fixed.

In some of those cases the non-deterministic relationships between linked sets of possibilities may be statistical, with rigidly constrained long run relative frequencies of alternative possible outcomes for a given input. Perhaps some are totally random, with no physical laws even determining the long run distributions of frequencies of different occurrences. Anyhow, these non-deterministic mechanisms will be ignored in what follows, though they may be crucial for the functioning of certain sorts of machines, and not only machines for selecting lottery numbers. For instance, some animal brains may require non-deterministic decision making to prevent predators learning their decision-making strategy.
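A non-deterministic possibility transducer of the statistical kind can be sketched as follows. This Python fragment is my own illustration; the Gaussian noise model and its parameters are assumptions, chosen only to show an input possibility selecting a distribution over output possibilities rather than a unique one:

```python
import random

def noisy_current(voltage, resistance=10.0, noise_std=0.01, rng=None):
    """A non-deterministic transducer: a fixed voltage no longer selects
    a unique current, only a statistical distribution of possible
    currents with rigidly constrained long-run relative frequencies."""
    rng = rng or random.Random()
    return voltage / resistance + rng.gauss(0.0, noise_std)

rng = random.Random(0)      # seeded, so the run is reproducible
print(noisy_current(5.0, rng=rng))   # close to 0.5, but not exactly 0.5
```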

Possibilities, causes and counterfactual conditionals

Physical properties such as electrical resistance, tensile strength, fragility, flexibility, rigidity, all involve relationships between ranges of possibilities, as described above. In some cases the range of possibilities forms a linear continuum (e.g. possible voltages, possible currents) and in those cases the property linking them imposes a constraint that may be expressible in a particularly simple form, such as an equation linking two or more numerical measures. Other sets of possibilities may have more complex structures.

For example, a fragile vase may be struck or crushed in many different ways, and the resulting decomposition into myriad fragments can happen in many different ways. Here the two ranges of possibilities do not form linear sets, and therefore the links between them cannot be expressed as a single simple equation. In the case of a digital circuit or a software system there may be ranges of possibilities, both for inputs and outputs, that are not even continuous, for example possible inputs and outputs for a compiler, or possible startup files for an operating system and possible subsequent behaviours.

Despite the diversity, in all the cases there is a relationship between the sets of possibilities which can at least loosely be characterised by saying that the properties of the machine or configuration ensure that IF certain possibilities occur THEN certain others occur. Moreover some of these possibilities may not actually occur: the fragile vase sits in the museum forever without being hit by a sledge hammer. We can still say that the IF-THEN relationship holds for that vase. In that case we are talking about counterfactual conditionals. However, I do not believe there is any difference in meaning between counterfactual conditionals with false antecedents and other conditionals with true antecedents. In both cases the conditionals assert that there is a relationship between elements of certain sets of possibilities.

This, I believe, is also what we mean when we say that one thing causes another, as opposed to merely occurring prior to it. However, this is a large topic on which huge amounts have been written, and I do not intend to address the problem in this paper, merely to point out the connection. If the previous point about expressions in predicate logic being insufficient to express all the types of possibilities that an intelligent agent needs to perceive and understand is correct, then the notion of a conditional has to be construed as something that does not merely link assertions in familiar logical formalisms. We may have to generalise it to cover new types of representations. (Some feed-forward neural nets can already be seen as an example of something different: namely a large collection of parallel probabilistic IF-THEN rules.)
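The parenthetical remark about feed-forward nets can be made concrete with a one-layer toy example. This Python sketch is my own, with invented weights; it reads each weight row as a soft, graded conditional applied in parallel with all the others:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """A bank of parallel graded IF-THEN rules: each row of weights is a
    soft conditional 'IF this input pattern THEN (to this degree) fire',
    and every rule is applied to the same input vector at once."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

out = layer([1.0, 0.0], weights=[[4.0, -4.0]], biases=[-2.0])
print(out)   # a single rule, moderately activated by this input
```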

Levels of virtual machines

I previously remarked that a property like electrical resistance may be implemented in lower level physical structures, whose properties and relationships would be defined in the language of contemporary physics, or perhaps physics of the future.

This is a simple example of a very general phenomenon. There are many other properties of complex systems that are best thought of as implemented in other properties. This is commonplace in computer science and software engineering, where we often talk about virtual machines. It is also arguable that the phenomena typically studied by biologists (including genes) are implemented in mechanisms involving physics and chemistry. Similarly many of the phenomena studied by chemists (e.g. in drug companies) are implemented in mechanisms of quantum mechanics which the chemists may not understand. Social, political, and economic phenomena are also implemented in very complex ways in the physical world, but with many levels of intermediate virtual machines.

I claim that the points made previously about physical objects having properties which are essentially causal linkages between ranges of possibilities apply to ALL sorts of machines, including social mechanisms and abstract or virtual machines running on computers. In other words, it is not only the ``bottom'' level physical constituents of the universe, whatever they may be, that have causal powers: all sorts of more or less concrete objects also do. In fact a great deal of our normal thinking and planning would be completely impossible but for this fact, for we think about causal connections (e.g. connections between poverty and crime) without having any idea about the detailed physical implementation. Sometimes without knowing the details we can still interact very effectively with relevant bits of the world. Admittedly social and economic engineering are very difficult, but we constantly interact with and bring about changes in people and their thought processes, desires, intentions and plans.

Similar causal interactions can happen between coexisting abstract states entirely within a brain or an office information system.

Causal powers of computational states

An important implication of all this concerns the intuitive feeling many people have that computational processes are somehow incapable of having the properties required for mental states, events and processes, such as desires, pains, pleasures, emotions, and experiences of colour. This feeling may arise because they think of computations as if they were simply static structures, like collections of symbols on a sheet of paper.

However, when symbolic information structures are implemented in a working system which is actually controlling some complex physical configuration, such as an airliner coming in to land, the limbs of a robot, or the machinery in a chemical plant, then it is a central fact about the system that the information processing states and events and the abstract data-structures, all have causal powers both to act on one another and also (through appropriate transducers) to change, and be changed by, gross physical structures in the environment.

A further development of this line of thought would show how to defend AI theories of mind against charges that no computational system could have the right causal powers to support mental states and processes. But that is a topic for another occasion.

Perception of possibilities

Gibson's theory of perception as the acquisition of affordances (Gibson 1986), which I've tried to develop in a computational framework (Sloman 1989), is interpretable within the framework described here.

One of the key ideas is that in general, for an organism merely to be given information about the structures of objects in the environment is far from sufficient for its needs. Gibson's idea, which I think is essentially correct, is that many organisms have evolved the ability to perceive sets of possibilities and constraints on possibilities, and these are not inferences made by central cognitive mechanisms on the basis of structural information delivered by the visual system, but rather the detection and representation of ``affordances'' happens deep within the perceptual mechanisms themselves. There is no space here to do more than mention this conjecture.

Grammars for things and for processes

Can we move towards a scientific theory of sets of possibilities, and of how they are related, with sufficient precision to provide a basis for engineering design? I suspect we are not yet ready to complete this task. Perhaps the notion of a formal grammar will turn out to be relevant. Grammars are specifications of sets of possibilities: usually sets of legal formulas within a formalism. However various attempts have been made to generalise this notion to accommodate, for example, grammars for images, grammars for 3-D structures, and grammars for behaviours (e.g. dances). I am not aware of any grammatical formalism that is able to cope with some of the kinds of continuous variability mentioned above (e.g. changes of configuration as a child dons a sweater). Nevertheless it may be that some future development of some existing grammar formalism will suffice.
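The sense in which a grammar is a finite specification of a (possibly infinite) set of possibilities can be shown with a toy behaviour grammar. This Python sketch is my own; the grammar and its symbols are invented for illustration, and a depth bound is used so the enumeration terminates:

```python
# A toy grammar for "behaviours": a Dance is one Step, or a Step
# followed by another Dance. The grammar is finite; the set of
# possibilities it specifies is unbounded.
GRAMMAR = {
    "Dance": [["Step"], ["Step", "Dance"]],
    "Step":  [["left"], ["right"], ["turn"]],
}

def generate(symbol, depth=3):
    """Enumerate the possibilities the grammar supports, up to a depth bound."""
    if symbol not in GRAMMAR:
        return [[symbol]]          # terminal: exactly one possibility
    if depth == 0:
        return []                  # bound reached: no possibilities here
    results = []
    for production in GRAMMAR[symbol]:
        seqs = [[]]
        for part in production:    # cross-combine possibilities of each part
            seqs = [s + tail for s in seqs for tail in generate(part, depth - 1)]
        results.extend(seqs)
    return results

print(len(generate("Dance")))   # 12 distinct dances within this depth bound
```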

The main point for now is that the components of the grammar do not need to be propositions or constituents of propositions. So a grammar provides a different view of ranges of possibilities from that provided by a modal logic. Roughly one is object-centred (de-re?) and the other fact-centred, or proposition-centred (de-dicto?). However there are probably deep connections between the two.

Concluding remarks

Many common sense and scientific notions of things are inherently modal (i.e. to do with possibilities and relationships between possibilities), including both explicitly dispositional concepts (e.g. ``brittleness'', ``risk'', ``irritability''), and also lots of others (e.g. ``strength'', ``electrical resistance'', ``volume'', ``shape''). These sets of possibilities are real, though of course a possibility can be real without being actualised, like my going for a swim yesterday.

I have offered a view of objects (both physical objects and more abstract objects like data-structures or procedures in a virtual machine) as having properties that are inherently connected with sets of possibilities, some of the possibilities being causal inputs to the object and some outputs, and I have suggested that many of the important properties of the objects are concerned with the relationships between possibilities in the two sets, i.e. causal links between possibilities. These properties are often implemented in lower level properties of different kinds. Moreover, by combining them in larger configurations we can use them to implement higher level ``emergent'' machines.

One consequence of this way of thinking is that you don't have to go to quantum mechanics to be faced with issues concerning collections of coexisting possibilities from which somehow reality makes selections. If quantum mechanics needs consciousness to make selections, why not classical mechanics? My own expectation is that eventually our understanding will go in a different direction. We'll see how mental phenomena are actually causally rich phenomena in complex virtual machines implemented ultimately in physical machines with very different properties, though still involving linkages between sets of possibilities.


Acknowledgements

I am grateful to Pat Hayes and Henry Stapp for stimulating email discussion on these topics, and also colleagues at Birmingham. My thinking is deeply influenced by the work of Gilbert Ryle. I think the ideas presented here overlap considerably with those in Bhaskar (1978). However I have found the latter very difficult to understand and have not yet read it all. Natasha Alechina provided useful comments on a draft of this paper and pointed out that Popper's ideas about propensities were close to those presented here. Chapter 2 of Sloman (1978) contains an early attempt of my own to address these issues.

References

Roy Bhaskar (1978) A Realist Theory of Science, The Harvester Press and The Humanities Press.

Gibson, J.J. (1986) The Ecological Approach to Visual Perception, Lawrence Erlbaum Associates (originally published 1979).

Gilbert Ryle (1949) The Concept of Mind, Hutchinson.

Aaron Sloman (1978) The Computer Revolution in Philosophy: Philosophy, Science and Models of Mind, Hassocks: Harvester Press.

Aaron Sloman (1989) `On designing a visual system: Towards a Gibsonian computational model of vision', Journal of Experimental and Theoretical AI, 1(4), 289-337. (Also available as Cognitive Science Research Paper 146, University of Sussex.)

