SCENARIO-TEMPLATE
http://www.cs.bham.ac.uk/research/projects/cosy/scenarios/scenario-template.txt
Version 1 (Corrected: Mon Mar 21 12:30:37 GMT 2005)
[Aaron Sloman and Jeremy Wyatt]

There is a table of contents below, after some introductory notes,
comments, and news items.

=======================================================================
NOTE (27 Jan 2006):
Some time after this template and the example were created, the
Birmingham CoSy group developed an informal tool that can help with
generating scenarios of different types and different degrees of
difficulty, by encouraging analysis of different competences that can
be combined in various ways in different scenarios. The tool takes the
form of a web site with a 2-D matrix, accessible here:
    http://www.cs.bham.ac.uk/research/projects/cosy/matrix

Some of the requirements for scenarios are discussed in the month 12
CoSy deliverables, including DR.2.1 on requirements for architectures
and for representations:
    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0506
    http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0507
=======================================================================

Background:
----------
This is an elaboration of some ideas about how to do scenario-driven
planning and evaluation of long term research projects, previously
presented in the context of the UK Computing Grand Challenge Project
('Architecture of Brain and Mind'):
    http://www.cs.bham.ac.uk/research/cogaff/gc/targets.html

Those ideas were further developed in the context of the CoSy project,
in which some of us aim to explore this methodology as part of a
'backward-chaining' research strategy: research planning starts with an
analysis of long term research goals and works backwards through
intermediate goals and their requirements. This involves specifying a
collection of intermediate scenarios, partially ordered according to
their dependency relations.

What is presented below is a first draft specification of a template
for such scenario descriptions. Some example applications of the
template to specific scenarios will be added.

Each scenario description has a collection of fields, listed below. Not
every scenario specification will use all the fields, and some may
require additional fields, e.g. referring to specific sorts of web
sites, documentation, or other information.

Later, some example scenario specifications using this template will be
provided, including some relating to the draft sketch of a playmate
scenario in this document:
    http://www.cs.bham.ac.uk/research/projects/cosy/PlayMate-start.html

Examples:
--------
An example of a PhD thesis using this sort of methodology, though not
exactly this template, can be found here:
    http://www.cs.bham.ac.uk/research/cogaff/0-INDEX03.html#03-06
    Anytime Deliberation for Computer Game Agents
    Nick Hawes (University of Birmingham PhD Thesis)

A PhD thesis nearing completion, based informally on this methodology,
where the scenarios are derived from the Developmental Psychology
literature on attachments in infants, has some chapters and papers on
Dean Petters' web site:
    http://www.cs.bham.ac.uk/~ddp

A PhD thesis proposal by Marek Kopicki, which also uses this
methodology informally, can be found on his web site (see Report3,
especially chapter 5):
    http://www.cs.bham.ac.uk/~msk

=======================================================================
Format of this document:
-----------------------
At present this document is a plain text file. At some later stage,
when we have a better understanding of what we need to do with these
documents, we can consider a more structured format for the template
and its instances. E.g. HTML would facilitate cross-indexing, allows
better formatted printing, and can easily be converted to LaTeX.
Alternatively it may be better to use XML or some other formal
notation. We may need tools to manage scenario specifications. As an
incomplete example, still under development, see the CoSy requirements
matrix web page:
    http://www.cs.bham.ac.uk/research/projects/cosy/matrix
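For concreteness, here is a minimal sketch of what an XML encoding of a
scenario specification might look like, and how it could be processed
mechanically (e.g. for cross-indexing). The element names and the
scenario code are invented for illustration only; no schema has been
agreed:

    # Sketch only: invented element names, not an agreed CoSy schema.
    import xml.etree.ElementTree as ET

    example = """
    <scenario code="M3-VIS+MAN-20050321">
      <label>PlayMate-shape-sorting</label>
      <summary>Robot sorts blocks by shape on a tabletop.</summary>
      <precursors>
        <scenario-ref code="2-MAN-20050110"/>
      </precursors>
    </scenario>
    """

    spec = ET.fromstring(example)
    print(spec.get("code"), "-", spec.findtext("label"))
    for ref in spec.iter("scenario-ref"):
        print("depends on:", ref.get("code"))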
Instructions (for now):
----------------------
To generate scenario specifications, make a copy of this file for each
scenario and delete this introductory text. When filling in portions of
the template below, delete the explanatory or illustrative text,
keeping only the headings.

The order of items is merely illustrative: other orders may be
preferred, and different orderings may suit different scenario
specifications.

TABLE OF CONTENTS OF A TYPICAL SCENARIO DESCRIPTION
(Provisional ordering)

-- Scenario label (mnemonic name):
-- Scenario code (arbitrary label):
-- Author(s):
-- Modified by:
-- Scenario summary:
-- Motivation:
-- Empirical/Scientific context:
-- References:
-- Precursor scenarios:
-- Follow-on scenarios:
-- Scenario Ontology:
-- RobotOntology:
-- -- Ontology development:
-- Prerequisites:
-- -- Reactive mechanisms:
-- -- Deliberative mechanisms:
-- -- Metamanagement/meta-semantic mechanisms:
-- -- Affective states/processes:
-- -- Motive generators
-- -- Reflective evaluation mechanisms
-- -- Linguistic capabilities:
-- -- Kinds of perceptual mechanisms:
-- -- Kinds of action mechanisms:
-- -- Alarm mechanisms
-- -- Kinds of communication channels
-- -- Kinds of learning mechanisms
-- -- Conflict resolution mechanisms
-- -- Other prerequisites for the scenario to work.
-- Kinds of self-understanding:
-- SCENARIO SCRIPTS:
-- NEGATIVE SCENARIOS:
-- KINDS OF INTEGRATION:
-- MODE OF EVALUATION:
-- PROGRESS SO FAR:
-- -- Design documents
-- -- Formal requirements specifications
-- -- Code completed
-- -- Tests and results of tests
-- -- People contributing
-- -- Publications
-- -- Acknowledgements
-- RELATED SITES:

-----------------------------------------------------------------------
-- Scenario label (mnemonic name):

-----------------------------------------------------------------------
-- Scenario code (arbitrary label):
    (Alternatively this might be a unique structured label indicating
     a: whether it is a meta-scenario (starting with 'M')
     b: a rough indication of degree of difficulty (digit in range 0-9)
     c: the kinds of competence involved, coded in some way
     d: creation date
     e: ????)
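To make the structured option concrete, here is a sketch of how such
codes might be generated and decoded. It covers items a-d only (item e
is left open above), and the competence codes 'VIS' and 'MAN' are
invented placeholders:

    # Sketch of one possible encoding of structured scenario codes,
    # implementing items a-d above; item e is still undecided.
    import re

    def make_code(meta, difficulty, competences, date):
        # e.g. make_code(True, 3, ['VIS', 'MAN'], '20050321')
        #      -> 'M3-VIS+MAN-20050321'
        prefix = 'M' if meta else ''
        return '%s%d-%s-%s' % (prefix, difficulty,
                               '+'.join(competences), date)

    def parse_code(code):
        m = re.match(r'^(M?)(\d)-([A-Z+]+)-(\d{8})$', code)
        if m is None:
            raise ValueError('not a structured scenario code: ' + code)
        return {'meta': m.group(1) == 'M',
                'difficulty': int(m.group(2)),
                'competences': m.group(3).split('+'),
                'date': m.group(4)}

    print(parse_code(make_code(True, 3, ['VIS', 'MAN'], '20050321')))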
-----------------------------------------------------------------------
-- Author(s):

-----------------------------------------------------------------------
-- Modified by:
    (Include change notes in reverse chronological order.)

-----------------------------------------------------------------------
-- Scenario summary:
    Brief description of the kinds of tasks that the scenario will
    involve, and how they will vary, e.g. in goals, in context, in
    difficulty. (This may include extensions of tasks in a precursor
    scenario.)

-----------------------------------------------------------------------
-- Motivation:
    Why the scenario is of interest, e.g.
      o facts about humans or animals that the implementation is meant
        to model or explain
      o important engineering goals for which this may be an
        intermediate step
      o hypotheses being tested

-----------------------------------------------------------------------
-- Empirical/Scientific context:
    Open empirical questions relating to the scenario (e.g. Can
    children aged X do this? How do they do it?). New empirical
    questions generated by work on the scenario.

-----------------------------------------------------------------------
-- References:
    E.g. to psychological or biological literature, or to related AI or
    robotics work which is being extended, replicated, or
    re-implemented in a new way, etc.

-----------------------------------------------------------------------
-- Precursor scenarios:
    List of scenarios that are likely to have to be completed in order
    to develop mechanisms relevant to this one. [Might include only
    immediate precursors, i.e. exclude all precursors of precursors,
    except for particularly important ones.]

-----------------------------------------------------------------------
-- Follow-on scenarios:
    Scenarios for which this is (or will be) a precursor.
    Is this the object-scenario for some meta-scenario(s)? If so, list
    the meta-scenario(s).
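Together, the precursor and follow-on fields define a dependency graph
over scenarios: the partial ordering mentioned in the Background above.
As a sketch (with invented scenario names), such an ordering could be
checked for cycles and linearised into a possible development order:

    # Sketch: scenarios partially ordered by precursor relations.
    precursors = {
        'playmate-grasp': [],
        'playmate-sort':  ['playmate-grasp'],
        'playmate-tidy':  ['playmate-sort', 'playmate-grasp'],
    }

    def development_order(precursors):
        # Simple depth-first topological sort; raises on cycles.
        order, done, visiting = [], set(), set()
        def visit(s):
            if s in done:
                return
            if s in visiting:
                raise ValueError('cyclic precursor relation at ' + s)
            visiting.add(s)
            for p in precursors.get(s, []):
                visit(p)
            visiting.discard(s)
            done.add(s)
            order.append(s)
        for s in sorted(precursors):
            visit(s)
        return order

    print(development_order(precursors))
    # -> ['playmate-grasp', 'playmate-sort', 'playmate-tidy']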
-----------------------------------------------------------------------
-- Scenario Ontology:
    Brief list of the types of objects, properties, relations, and
    processes involved in the scenario, including for example whether
    the intentions or knowledge of another agent play a role. (This is
    independent of any formalism used to express the ontology in the
    robot. It could be a summary in English, perhaps with diagrams,
    tables, etc., or a link to another file giving a diagram (e.g. a
    taxonomy).)

    If the scenario uses a changing ontology (e.g. more and more
    complex types of objects, properties, relations, scenarios, and
    tasks are introduced over time), then indicate the nature of the
    progression and its limits, from the point of view of the designer
    (what the robot learns is covered below).

-----------------------------------------------------------------------
-- RobotOntology:
    Which subset of the above ontology will be known to the robot.
    Specify what should change if the scenario includes conceptual
    development involving changes to the ontology.

-- -- Ontology development:
    If the robot or other system develops its ontology, indicate the
    sort of development that will occur, and under what conditions,
    including specifying what it cannot do and why.

-----------------------------------------------------------------------
-- Prerequisites:
    Conjectured summary of the architectural features, mechanisms,
    forms of representation, and algorithms (or more abstract
    characterisations of capabilities) required for the scenario to
    work.

    Note that the material in this section, which specifies hypotheses
    about design requirements, should be clearly separated from
    material specifying required competences through examples in
    scenarios. The difference is crucial for evaluation: designs and
    implementations need to be evaluated in relation to the
    requirements, and if the requirements are not clear, evaluation
    becomes muddied.

-- -- Reactive mechanisms:
    Including fixed or adaptive mechanisms, alarm mechanisms,
    motivational mechanisms, etc. Whether neural nets, rule systems,
    dynamical systems, etc. are used, or some combination.

-- -- Deliberative mechanisms:
    Including kinds of knowledge bases, working memories, forms of
    representation, types of algorithms, etc. Motivational mechanisms,
    conflict resolution, etc.

-- -- Metamanagement/meta-semantic mechanisms:
    Kinds of self-observation. Meta-semantic ontology used for self
    categorisation/evaluation/control, etc. Forms of planning,
    deliberating, decision making (e.g. control of attention, conflict
    resolution). Types of learning, debugging, repair capabilities.
    Applications of meta-semantic capabilities to represent mental
    states of others (e.g. other humans or robots).

-- -- Affective states/processes:
    Motivation, preferences, rewards, punishment, etc. (could be
    distributed over the different layers).

-- -- Motive generators
    Do these change over time?

-- -- Reflective evaluation mechanisms
    Evaluation of what self-monitoring observes (was it necessary for
    me to search that long to find a plan?).

-- -- Linguistic capabilities:
    Speech or text input (including NL messages from other machines on
    the network). Speech or text output. Kinds of linguistic competence
    (we shall need a good taxonomy of stages in linguistic development
    so that we can refer to relevant stages here).

-- -- Kinds of perceptual mechanisms:
    Including types of sensors and processing capabilities: e.g. visual
    or auditory perception involving N levels of abstraction, whether
    this includes high level recognition of e.g. actions, intentions,
    states of mind, reading capability, gesture understanding,
    perception of affordances and causal interactions (seeing X cause Y
    to fall, or break, etc.).

    Note that in many cases it will be necessary for the robot to be
    able to see processes and causal interactions. It may be necessary
    to specify types of processes at different levels of detail (e.g.
    how not only relations between objects but also relations between
    various parts of different objects change over time, e.g. the
    corner of one and the edges of another, or the robot's fingertips
    and parts of the surface of a grasped object). This aspect of
    perception links up with the robot's ontology, mentioned above.
    See:
        http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0505
    (Presentation on some requirements for vision in a domestic
    human-like robot.)

-- -- Kinds of action mechanisms:
    Types of effectors, including various kinds of action-control
    subsystem, kinds of feedback, etc.

-- -- Alarm mechanisms
    (Able to detect situations requiring rapid global reorganisation,
    freezing, withdrawal, disaster prevention, etc.)

-- -- Kinds of communication channels
    Kinds of data-flow and control flow between modules, including
    whether push, or pull, or use of shared 'blackboards', etc. (A
    minimal blackboard sketch is given at the end of this section.)

-- -- Kinds of learning mechanisms
    Developing new concepts (ontology extension), new forms of
    representation, new entries in an existing ontology, new laws, new
    plans that are worth storing for reuse, new compiled skills for
    rapid, fluent action, new planning or reasoning capabilities, new
    ways of generating motives, new ways of evaluating performance, new
    ways of dealing with conflicts. Learning new links between
    different modules in the architecture, e.g. learning to touch type
    or sight-read music requires new links between vision and action
    (via various intermediate modules).

-- -- Conflict resolution mechanisms

-- -- Other prerequisites for the scenario to work.
    Including kinds of environment, kinds of training humans may need
    for interacting with the robot, etc.

    [NOTE: the initial conjectures about required architectures and
    mechanisms will often be wrong. Attempts at implementation will be
    a profound source of new knowledge about the nature of the
    problems. So one of the ways in which we'll be extending knowledge
    is by revising the prerequisites and adding more and more detail.

    An example draft, partial, architectural specification can be found
    for Kitty, the proposed month 30 version of the CoSy PlayMate
    robot, here:
    http://www.cs.bham.ac.uk/research/projects/cosy/matrix/architectures/kitty/
    ]
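As an illustration of the communication-channel options above, here is
a deliberately minimal 'blackboard' sketch: modules exchange data
through a shared store rather than by direct calls, with both push
(subscription) and pull (polling) access. The module and key names are
invented for illustration:

    # Minimal blackboard sketch; names are illustrative only.
    class Blackboard:
        def __init__(self):
            self.entries = {}
            self.subscribers = {}

        def subscribe(self, key, callback):
            # 'Push' style: callback fires whenever key is (re)posted.
            self.subscribers.setdefault(key, []).append(callback)

        def post(self, key, value):
            self.entries[key] = value
            for callback in self.subscribers.get(key, []):
                callback(value)

        def read(self, key):
            # 'Pull' style: a module polls the board when it chooses.
            return self.entries.get(key)

    bb = Blackboard()
    bb.subscribe('percept.object', lambda v: print('planner saw:', v))
    bb.post('percept.object', {'type': 'cup', 'pos': (0.4, 0.2)})
    print(bb.read('percept.object'))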
-----------------------------------------------------------------------
-- Kinds of self-understanding:
    Specify whether this is a meta-scenario, i.e. a scenario in which
    the robot acquires, uses, or discusses its own mental processes. If
    it is, list the relevant object-scenario(s), i.e. those where the
    behaviours involve no self-understanding. Summarise the benefits
    (and costs) of self-understanding in this scenario.
    (REF: J. McCarthy on 'The well designed child' and on why robots
    will need self-consciousness.)

-----------------------------------------------------------------------
-- SCENARIO SCRIPTS:
    Provide one or more 'film-script' descriptions of sample behaviours
    demonstrating the competence being explained. Annotate each one
    with an explanation of which features of the mechanisms, etc. are
    required specifically for it. Give an indication of the variability
    expected. Include possible errors of performance.

    Kinds of learning or development:
        If the scenario includes learning, illustrate the changes that
        occur during the performance, including learning based on
        repetitive training and one-shot learning.
        Ontological development:
            New ontological levels learnt:
            Refinements of existing levels (e.g. new subdivisions,
            correction of mistaken categorisations, learning new
            relations, etc.):
        Learnt types of goals:
        Learnt associations:
        Learnt explanations, models, theories:
        New formalisms learnt:
        Modifications/extensions of old formalisms:

    [NOTE: sometimes there will be a choice between presenting several
    scripts bringing out different competences, and presenting one
    script that allows branching options.]

-----------------------------------------------------------------------
-- NEGATIVE SCENARIOS:
    Indicate some variations of the behaviours that would NOT be
    possible using the proposed mechanisms. Some of these may later
    become follow-on scenarios.

-----------------------------------------------------------------------
-- KINDS OF INTEGRATION:
    E.g.
        Different levels of processing integrated
        Different kinds of task achieved in the same system
        Different forms of representation
        Kinds of learning and development
        Fruitful asynchronous interactions between sub-systems, e.g.
            language/vision, vision/planning, vision/hearing,
            self-understanding/learning

    Note that there are implications regarding the forms of
    representation that will enable sub-systems to communicate
    effectively. What is good for internal processing in a stand-alone
    module may not be good for internal processing if that module has
    to be integrated with others, with 'anytime' interactions possible.

-----------------------------------------------------------------------
-- MODE OF EVALUATION:
    (Kind of advance of knowledge demonstrated.) E.g.
        demonstration of novel competence (extending the state of the
            art)
        conformity with empirical evidence of what humans or animals
            can do
        conformity with well established theories or explanations
        revisions of previous theories or explanations
        generation of new specific questions for empirical research,
            e.g. psychology (e.g. Can competence X occur without Y?)
        inspiring new scientific work in other disciplines, e.g.
            biology, neuroscience, psychology, linguistics, social
            science
        where appropriate, passing 'usability' tests with:
            actual users (e.g. disabled people)
            surrogate users (e.g. volunteers simulating actual users)

    [This will need to be spelled out in detail for each case.
    Evaluation details may vary considerably, including testing on
    standard collections of data or images, performing in robot
    competitions, etc.]

-----------------------------------------------------------------------
-- PROGRESS SO FAR:
    This can include information about:
-- -- Design documents
-- -- Formal requirements specifications
-- -- Code completed
-- -- Tests and results of tests
-- -- People contributing
-- -- Publications
-- -- Acknowledgements
    etc.

-----------------------------------------------------------------------
-- RELATED SITES:
    This could include links to documentation on the scenario, sample
    code, results of tests, videos, publications, related research,
    etc.

-----------------------------------------------------------------------