CoSy Discussion paper: COSY-DP-0602 (Revised 14 Feb 2006)
(Work in progress)
Other reports, papers and presentations
(Best viewed using whatever browser, whatever font size, and whatever window width you find congenial.)

A Conceptual Framework and Draft Tool
for Generating Requirements and Scenarios
for Cognitive Systems

Aaron Sloman
Working in the context of the CoSy project
with help from many others, especially the Birmingham CoSy Project Team

Introduction

At a workshop attempting to identify directions for research in intelligent robotics one of the industrial participants stated that over many years he had attended presentations by AI researchers about what they were going to do, and had formed the impression that the presentations hardly changed over time: what they were going to do never seemed to get done. The same workshop discussed the difficulty of developing a feasible and science-based roadmap, with well-defined milestones that could be used both to guide the research and to measure progress.

I encountered the same problem when discussing aims for the DARPA cognitive systems initiative in the USA with people involved in planning project proposals. One of the problems is that there is always a strong (and very understandable) temptation for people to identify long term aims that are related to the techniques and theories they already understand well. This can certainly drive short term progress in particular research areas, and to that extent is useful.

However the risk is that it can also lead to important lines of enquiry going unnoticed. More importantly, it can prolong fragmentary research in which partial systems are designed and implemented, without any consideration of whether they will actually work in the context of a more complete system.

Is there any way to help people think about things they have not noticed, and possibly don't know how to think about yet? For some time I have been trying to find a way to do this in terms of partially ordered sets of scenarios: starting from detailed descriptions of hypothetical future robots doing specific tasks that humans can already do, then working backwards through sets of simpler "enabling" scenarios, until we reach scenarios that are close to things we already know how to do. That enables us to select immediate tasks that have a well-defined role in a long term research programme. My attempts to sketch out this idea can be found in

The problem of choosing what to do arises in any ambitious project aiming towards major long term goals, and CoSy is no exception. Members of the project have many ideas about things that we could aim for in the practical work and since we cannot hope to achieve more than a tiny subset, I felt we needed a principled way of surveying the options and choosing among them, instead of simply selecting specific goals on the basis of individual preferences and competences of team members.

After some discussion at Birmingham we came up with a principled way of generating scenarios of varying ambition, and a first-draft, rather crude, web-based tool to help the process, though it is all still fairly underdeveloped.

The key idea arose from analysis of requirements for learning in a young child or child-like robot and is fairly simple, although the implications are complex and potentially deep.

The idea is that we can (to a first approximation) classify requirements for the robot along three dimensions. More precisely, two dimensions define a rectangular grid of types of competences applied to types of entities, and each box in the grid can have entries of different degrees and kinds of sophistication, some a long time into the future, others relatively easy to achieve soon. The third dimension is then a partial ordering of those entries based on difficulty, time required, and enablement.

Then scenarios of different degrees of difficulty can be generated by combining different subsets of competences in the grid and producing specifications of behaviours that would demonstrate those competences.
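To make the shape of this structure concrete, here is a minimal sketch (in Python, purely for illustration; the class names, field names and example entries are invented and are not part of any CoSy implementation) in which cells are indexed by a (competence, entity) pair, each cell holds requirement entries tagged with a difficulty/time-scale label and with links recording which other entries they enable, and a scenario is simply a chosen subset of entries drawn from one or more cells.

    # A minimal sketch (hypothetical names, for illustration only) of the grid:
    # two dimensions index the cells, and the third dimension is a labelling of
    # the entries inside each cell by difficulty/time scale and enablement links.

    from dataclasses import dataclass, field
    from typing import Dict, List, Set, Tuple

    @dataclass
    class Requirement:
        ident: str                     # e.g. "manipulate-cup-vst"
        description: str               # prose description of the competence applied to the entity
        level: str                     # e.g. "long_term", "short_term", "very_short_term"
        enables: Set[str] = field(default_factory=set)  # entries this one is a stepping stone towards

    # The grid: (competence, entity) -> entries of varying difficulty in that cell.
    Grid = Dict[Tuple[str, str], List[Requirement]]

    def add_entry(grid: Grid, competence: str, entity: str, req: Requirement) -> None:
        """Put a requirement entry into the cell for this competence/entity pair."""
        grid.setdefault((competence, entity), []).append(req)

    def scenario(grid: Grid, chosen: Set[str]) -> List[Requirement]:
        """A scenario is just a chosen subset of entries, possibly from several cells;
        choosing harder or easier entries yields scenarios of different ambition."""
        return [r for cell in grid.values() for r in cell if r.ident in chosen]

    # Toy example with two cells:
    grid: Grid = {}
    add_entry(grid, "manipulate", "cup",
              Requirement("manipulate-cup-vst", "push a cup across a table", "very_short_term"))
    add_entry(grid, "talk_about", "cup",
              Requirement("talk-about-cup-st", "answer questions about where the cup is", "short_term"))

    print([r.description for r in scenario(grid, {"manipulate-cup-vst", "talk-about-cup-st"})])

The real entries are of course prose descriptions (web pages in the current tool) rather than one-line strings, and some cells may legitimately remain empty; the sketch only shows how the two indexing dimensions and the third, labelling dimension fit together.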

For a related document describing collections of recombinable orthogonal competences that a young child has to learn, see DP-06-01 (HTML)

Columns and Rows in the Requirements Grid

The two main dimensions of the grid are represented by a collection of rows defined below (types of competences) and a collection of columns, where each column represents a class of entity types (though the columns could become more richly structured later, e.g. a tree-structured taxonomy instead of a fixed set of categories).
    Types of entity       E1,  E2,  E3, ....
    Types of competence
                    C1    [ ]  [ ]  [ ]
                    C2    [ ]  [ ]  [ ]
                    C3    [ ]  [ ]  [ ]
                    .
                    .
                    .
Thus, each cell in the grid represents the application of a type of competence to a type of entity. The competence may be physical (e.g. kicking, assembling, eating), mental (e.g. thinking about, wondering about, designing, being puzzled by), or social, involving communication with other information-processing systems. What follows is a first-draft, incomplete set of entity types for the columns and competence types for the rows.

Columns: Entity Categories
Examples of column categories could be: locations, regions of space, routes, inanimate physical objects, animate objects, abstract entities (e.g. numbers, theorems, plans, explanations), and the abstract contents of animate objects (e.g. thoughts, intentions, desires).

Rows: Competences that can be applied to entities
The other dimension of the grid would correspond to rows, where each row is roughly specified by something that a robot or animal can DO to entities of the sorts in the columns. So each row would refer to a kind of competence involving actions (physical or mental).

Examples of types of competences defining rows could be: perceiving, acting on, manipulating, talking about, creating, representing, attending to, and controlling entities, together with mental competences such as thinking about, wondering about, designing, or being puzzled by them.

Of course, it follows from all this that some of the cells in the grid may be empty, e.g. a cell in which a physical action, such as kicking, is applied to a non-physical object, such as a number, belief, or proof.

Related Issues
A set of closely related requirements issues that do not fit into the grid concerns the types of architectures required for combining sets of competences from different locations in the grid, the types of representations required for handling information, and the types of mechanisms required for manipulating information.

The Third Dimension: Time, Difficulty, Dependence
We have started inserting into the boxes in the grid descriptions of the applications of the different sorts of competences to the different sorts of entities, which we hoped to divide into three main categories according to difficulty/time scale.

In fact, because so much of what we are doing is taking us into new territory, it is proving very difficult to be specific about any of these things, but the process of starting to do it has been very interesting and informative. People who have never done anything like this before have to learn a new way of thinking, and most people seem to find it very hard, especially when there is a lot of pressure to get on with other things we are already committed to.

As far as I know, nobody in the history of AI or Cognitive Science has tried to produce such a systematic way of generating sets of requirements for intelligent systems. Instead people mostly focus on specific tasks and then start thinking about how to design mechanisms that can perform those tasks. I feel that without knowing what the full space of possible requirements is, and what sorts of requirements are being ignored, researchers risk going down blind alleys, or at best developing solutions to very specialised problems that cannot be combined with other things to solve broader problems.

The Web-based Requirements Grid

A first-draft 2-D grid (matrix) of the sort described above is available at this web site:
http://www.cs.bham.ac.uk/research/projects/cosy/matrix/dynamic-matrix.php

Future developments of the grid

At some time in the future, as the columns and rows become more finely differentiated, it is likely that the grid will need to be replaced by a very different topology, e.g. if some cells in a column need to be divided into sub-cells corresponding to different entities or different competences, whereas other cells in the same column need not be so finely sub-divided.

It may also be possible in future to spell out which sets of grid cells, at which depth, correspond to the competences of different animal species, or the competences of human children at different stages of development, or the requirements of different sorts of robots to be deployed in different tasks or environments.

Compare the paper on The Altricial-Precocial Spectrum for Robots, written with Jackie Chappell and presented at IJCAI'05.
Some interesting evolutionary trajectories may one day be shown to correspond to expanding or changing sets of grid cells.


NOTES

Note 1: What is a requirements specification?
By a requirements specification we understand a logical (though not necessarily temporal) precursor to any design task, namely a specification of the kinds of competences a system will have (e.g. what it can and cannot do in various kinds of situations) without prejudice as to how it does it, i.e. what the underlying mechanisms, forms of representation, architecture, etc. are.

So each grid cell, corresponding to a type of competence applied to a type of entity, can be seen as a potential contributor to a requirements specification for some sort of system. The actual specification will normally have to go to a lower level of abstraction, e.g. instead of merely specifying the ability to think about locations, it may need to specify what sorts of locations are thought about, in what way, and for what purpose.

A requirements specification is a logical precursor to a design task since only by reference to the requirements does it make sense to ask whether the design is good or bad, or whether one design is better than another.

(A similar comment can be made about evaluation of implementations, though there's the further complication that an implementation can exactly fit a design, even though both fail to meet the requirements.)

In this sense you cannot evaluate a requirements specification, because it is itself the basis for evaluating something else. But you can ask: is that a specification of something I want, or something that will make X, Y and Z happy, or something that will keep us safe and warm, or something novice users would be able to use? A scientist can also ask whether it specifies something like a chimpanzee, or something that biological evolution could produce, or something that could survive in a particular environment.

E.g. a great deal of psychology can be construed as searching for a requirements specification for a working human.

The requirements specification need not be a temporal precursor to the design work because in many cases, especially in complex projects, it is often impossible to know what exactly is desired until after much exploration of what can and cannot be done, and some experience of the results of doing it.

So, very often the requirements don't emerge fully until after the processes of designing and testing.

Talk about verification of a design is nonsense if the requirements have not yet been clarified.

Similar problems arise when the task is not engineering but science: e.g. trying to build a working model of a natural system, as part of the process of understanding how that natural system works.

You may start by thinking you know what the competences of the system are (e.g. the weather, or an ecosystem, or a social system, or a normal adult human being) and think that that specifies the requirements, so that all you have to do is work out a suitable design and check that it produces the right set of competences (as displayed in behaviour tests, or by logically proving that the design guarantees all and only those competences).

But the history of AI, psychology, linguistics, sociology, ethology, meteorology, etc. shows that we never do actually know the competences of any complex natural system. So the process of developing and testing an explanatory model often raises questions that provoke new investigations of the natural system, showing that the initial presumptions about what it can do are wrong, and leading to a change in the requirements specification. This is an important way of doing empirical science that is not well understood.

It implies that AI and Cognitive science researchers should not assume that their task is to build something that fits what psychologists say occurs.


NOTE 2: Requirements in CoSy
In the CoSy robotic project one of our problems is to specify long term, medium term and short term requirements.

The long term requirements are those that motivate the whole project, though we cannot hope to understand them all, or to build systems satisfying them, in four years and perhaps not even in forty years (e.g. understanding how to model the competences of a five-year-old human child in a working robot, or even just how to build a useful domestic robot that could help someone disabled or infirm).

Some shorter term requirements are those that might be met at the end of a funded project.

Very short term requirements are those that could be met at an intermediate stage in the project (e.g. after a year). They need to be chosen in such a way as to form a milestone towards the longer term, even though the longer term requirements are not yet fully understood and therefore some guesswork is involved.


NOTE 3: The Web-based Tool
To help with articulation of such requirements we have built a relatively simple web-based tool in which we present the 2-D grid of types of competence applied to types of entity.

The rows (apart from a 'general' row) correspond roughly to what sorts of things the working system can do (perceive, act on, talk about, create, represent, attend to, control, etc.) and the columns correspond roughly to what sorts of things it can do them to (e.g. locations, regions of space, routes, inanimate physical objects, animate objects, abstract entities like numbers, theorems, plans and explanations, and abstract contents of animate objects, e.g. thoughts, intentions, desires).

Each cell in the grid can then be given contents in the form of a set of web pages specifying (sometimes very tentatively, or very sketchily) requirements that are very long term, medium term, short term, very short term, etc.

The process of growing the requirements specification can use a mixture of backward-chaining

i.e. starting from very sophisticated expected requirements to be met in the long term, and working back through simplified, more easily achievable, subsets that could provide stepping stones

and forward-chaining

i.e. starting with systems with relatively simple competences then specifying more complex systems that could use or extend those competences.
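As a self-contained sketch of how these two strategies might operate over the grid's enablement links (the entry names and the mapping below are invented for illustration, not the project's actual representation), one can store, for each requirement entry, the simpler entries it directly depends on; backward chaining then collects all the stepping stones needed for an ambitious target, while forward chaining collects the more ambitious entries that a given simple competence could feed into.

    # A sketch of backward- and forward-chaining over an "enables"/"depends on"
    # relation between requirement entries (entry names invented for illustration).

    from typing import Dict, Set

    # For each entry, the simpler entries it directly depends on (its enablers).
    DEPENDS_ON: Dict[str, Set[str]] = {
        "fetch-object-on-request":  {"grasp-known-object", "understand-fetch-request"},
        "grasp-known-object":       {"locate-object-on-table"},
        "understand-fetch-request": {"parse-simple-command"},
        "locate-object-on-table":   set(),
        "parse-simple-command":     set(),
    }

    def backward_chain(target: str, depends_on: Dict[str, Set[str]]) -> Set[str]:
        """All stepping-stone entries that must be achievable before the target."""
        needed: Set[str] = set()
        stack = [target]
        while stack:
            for enabler in depends_on.get(stack.pop(), set()):
                if enabler not in needed:
                    needed.add(enabler)
                    stack.append(enabler)
        return needed

    def forward_chain(start: str, depends_on: Dict[str, Set[str]]) -> Set[str]:
        """All more ambitious entries that the starting competence (transitively) enables."""
        enabled: Set[str] = set()
        frontier = {start}
        while frontier:
            frontier = {e for e, deps in depends_on.items() if deps & frontier and e not in enabled}
            enabled |= frontier
        return enabled

    print(backward_chain("fetch-object-on-request", DEPENDS_ON))
    print(forward_chain("parse-simple-command", DEPENDS_ON))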

In December 2005, members of the local CoSy team started growing such a grid and putting three categories of competences into it, roughly corresponding to different time scales and degrees of difficulty:

  1. long term (Fido, the domestic robot of the future),
  2. short term (CoSy, what our project can hope to produce),
  3. very short term (Kitty, a 24-30 month target).

One of the features of such a grid is that particular research projects (or teams) can then select different subsets from the grid of requirements to define research goals.


NOTE 4: The future of the tool
It is already clear that as a tool, the combination of web pages and browser (using some PHP code very rapidly provided by Nick Hawes) is useful for the short term, but not adequate in the long term. The very first implementation soon raised a requirement to discover what had most recently been changed in the grid, so Nick used the fact that each grid cell corresponds to a sub-directory to produce some PHP code making it easy to generate a clickable 'latest-first' listing.
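For comparison only, here is a rough sketch in Python of the kind of 'latest-first' listing just described; it is not the actual PHP code, and the directory layout it assumes (a root directory with one sub-directory per grid cell, each containing requirement pages) is only inferred from the description above.

    # Rough sketch (not the actual PHP implementation): list all requirement pages
    # under the grid's root directory, most recently modified first.

    from pathlib import Path

    GRID_ROOT = Path("matrix")   # hypothetical root; one sub-directory per grid cell

    def latest_first(root: Path):
        """Return all files under the grid, most recently changed first."""
        if not root.is_dir():
            return []
        files = [p for p in root.rglob("*") if p.is_file()]
        return sorted(files, key=lambda p: p.stat().st_mtime, reverse=True)

    for path in latest_first(GRID_ROOT)[:20]:   # show the 20 most recent changes
        print(path)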

Other mechanisms that we don't yet have include authoring tools (we all use whichever text editors we prefer), searching tools, version control tools (we could use CVS or RCS or ...), consistency checkers, duplication checkers, etc.

It is not clear whether something based on a Wiki would provide sufficient flexibility.

Some tool-components (e.g. redundancy checker) would depend on the development of a suitable formalism for specifying the requirements (i.e. the competences of the end products).

There are lots of formalisms for expressing designs, but specifying requirements for something like an autonomous human-like robot is a more difficult problem, because it can involve describing arbitrary bits of the universe: the requirements include interactions between the system being designed and the environment in which it will act, and that environment can change in unpredictable ways.

We are, and always will be, far from having good ways to represent everything, including weather patterns, chemical plants, the comfort of airline passengers, social behaviours of humans, the needs of a blind or wheelchair-bound person, and present and future domestic furniture and appliances.

E.g. different sorts of requirements specifications will require quite different ontologies.

There's also the problem that the more formal the notation the less chance there is of checking requirements specifications against the preferences and intuitions of most potential beneficiaries.

For this reason George and Vaughan have proposed the use of 'lightweight formal methods' http://www.stsc.hill.af.mil/crosstalk/2003/01/George.html

There must be many other proposals about which we don't know. A brief amount of searching has not produced any other example of a generative action (or function)/object grid from which requirements at different levels of sophistication or difficulty, or for different subsets of a complete system, can be derived (or extracted), e.g. for generating intermediate research or development projects.

(There are tools based on a matrix where rows or columns distinguish different priority levels: Must have, should have, etc.)

The sort of tool discussed here could be used as a framework to coordinate a large collection of related research projects, working on different, possibly overlapping subsets of the grid, on different time scales.

My hunch is that if there is not already a more sophisticated version of our tools available somewhere, developing one could produce a useful contribution to software engineering (or engineering more generally?) and also to project management in some complex scientific research projects, like CoSy.

Early versions of the toolkit could also be used to develop more detailed longer term requirements for the toolkit in a bootstrapping process. (A system that is used to build itself will often have fewer bugs than one built using another system, not least because the developers will have to acquire a deeper understanding of the requirements).


Last updated: 13 Feb 2006