School of Computer Science
The University of Birmingham

SimAgent Demonstration Movies

Introduction

This directory provides mpeg movies showing what can be done with the SimAgent toolkit running in the Poplog/Pop11 environment using the RCLIB 2-D graphical interface tools.

The movies are viewable with mplayer/gmplayer on linux/unix systems and with QuickTime on Windows and Mac systems.

Some of the movies were produced using techniques and scripts suggested by Mike Lees at Nottingham University, where he is using SimAgent on a project directed by Brian Logan.

Later movies were produced using the excellent Xvidcap tool.

The first two movies were provided by Mike Lees. The third and fourth show a simple simulated reactive sheepdog demonstration based on an MSc project done by Tom Carter, building on an earlier MSc project by Peter Waudby, and partly extended by Aaron Sloman.

The fifth example, the hybrid deliberative/reactive sheepdog, was an introductory mini-project done by Marek Kopicki in November-December 2003, during his MSc course, building on the earlier sheepdog programs.

The sixth example shows a rather old 'Feelings' demo, which is one of the tutorial examples in the SimAgent toolkit.

Further examples come from Dean Petters' MSc project in 2001 (sheepdog herding several sheep, one of which has to 'learn' to get away from the sheepdog -- not shown here) and some work in progress on his PhD modelling 'attachment' processes in infants.

All of the facilities demonstrated in the movies rely on SimAgent augmented with the RCLIB graphical toolkit, implemented as an extension to Pop11 in Poplog. Some features of RCLIB are described here: http://www.cs.bham.ac.uk/research/poplog/figs/rclib/


NOTE: When SimAgent is running on your machine you can move things in a demo with the mouse. These movies were made from such runs, and you cannot interact with a movie in the same way.

Anyone interested in obtaining and running the toolkit (including some of the demos 'frozen' here) should see The Free Poplog Web site.


More examples will be added later. And there's always the Pop11 Eliza as a last resort.

Mpeg movies available

  1. Boids (flocking) demonstration (about 1.4MB) -- Mike Lees (June 2003)
  2. Tileworld demonstration (about 1.6MB) -- Mike Lees (June 2003)
  3. Sheepdog demonstration, 340 frames (about 0.9MB) (June 2003)
  4. Sheepdog demonstration, 640 frames (about 1.5MB), suitable for faster machines (June 2003)
  5. Hybrid Deliberative/Reactive Sheepdog by Marek Kopicki (12 January 2004, New version 8 May 2004)
  6. Two toy 'emotional' agents moving around (20 Jan 2004)
  7. The Simworld package by Matthias Scheutz, USA
  8. Dean Petters' concurrent-herding sheepdog
  9. A project modelling some aspects of 'attachment' in infants.
  10. A 'Toy' program (Minder-0) by Ian Wright demonstrating the 'Nursemaid scenario' (about 1994)
  11. Ian Wright's Minder-1 program.
  12. A 'Toy' conversational robot modelled loosely on T. Winograd's SHRDLU
  13. Not a movie, but code and documentation for the 'popracer' project
    This was done in 2005 by a group of 2nd year AI students, using Pop-11 and its tools.
    See the report by Mark Rowan in the Networks magazine, and the project report (PDF)


NOTE: In some of the movies the mouse movements produced by interacting with the running program move picture objects without the mouse pointer being visible. The movies made with the xvidcap tool do show the mouse pointer as well as the objects moved. The main point is that while the demos are running, objects can be moved with the mouse and their new locations will immediately be sensed, changing behaviours. E.g. in the sheepdog demos, sheep that have been put in the pen can be moved out, and the sheepdog will then notice and try to get them back. The 'hybrid sheepdog' movies include trees and sheep moved with the mouse, as do some of the other movies.

5. Hybrid Deliberative/Reactive Sheepdog by Marek Kopicki

These files demonstrate a 'hybrid' deliberative/reactive version of the sheepdog demonstrated above. The new sheepdog is able to find its way round far more complex barriers than the purely reactive one can. It does this by sometimes pausing to 'think' about where to go: i.e. it makes a plan. It alternates between making plans and simply reacting to what it senses while following a plan. One of its reactions is to switch to local plan-repair mode. The lines shown on the display indicate the sheepdog's current plan, and while it is moving it also displays its 'line of sight' to a later plan location. This is used for smoothing the plan and for detecting new short-cuts.

So the dog combines interleaved local re-planning and reactive plan execution, occasionally having to switch to global re-planning because the current plan has met an obstacle.
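
To make that interleaving concrete, here is a minimal runnable sketch. It is illustrative Python, not the Pop-11/SimAgent demo code: every name in it is invented here, and it shows only deliberative planning plus reactive global re-planning on a grid (the local plan repair and smoothing described above are omitted).

    # Toy hybrid loop: plan deliberatively, execute reactively,
    # re-plan globally when the current plan turns out to be blocked.
    from collections import deque

    FREE, WALL = ".", "#"

    def bfs_plan(grid, start, goal):
        """Deliberative step: breadth-first search for a path on a grid."""
        prev, frontier = {start: None}, deque([start])
        while frontier:
            cell = frontier.popleft()
            if cell == goal:
                path = [goal]
                while prev[path[-1]] is not None:
                    path.append(prev[path[-1]])
                return path[::-1]
            x, y = cell
            for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if grid.get(nb) == FREE and nb not in prev:
                    prev[nb] = cell
                    frontier.append(nb)
        return None

    def run(width=8, height=5):
        grid = {(x, y): FREE for x in range(width) for y in range(height)}
        pos, goal = (0, 2), (7, 2)
        plan, step, dropped = bfs_plan(grid, pos, goal), 0, False
        while pos != goal:
            nxt = plan[plan.index(pos) + 1]
            if step == 2 and not dropped:        # simulate the user dropping
                grid[nxt], dropped = WALL, True  # a 'tree' onto the route
            if grid[nxt] == WALL:                # reactive detection of the
                plan = bfs_plan(grid, pos, goal) # obstacle: re-plan globally
                print("re-planned:", plan)
                continue
            pos, step = nxt, step + 1
            print("moved to", pos)

    run()

When run, the toy prints each move and the single global re-plan triggered by the obstacle dropped onto its route.
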
There are three movies. The first two (about 3.8MB and 5.5MB) were produced using the wonderful Xvidcap tool. They are viewable using gmplayer on unix/linux; they also work in QuickTime on Windows and on Macs. The third movie was produced earlier using a different technique and does not show movements of the mouse.

(1) http://www.cs.bham.ac.uk/research/poplog/figs/no-points-hybrid.mpg (About 3.8MB)
This shows the sheepdog fetching the sheep one at a time and steering them to the pen. It repeatedly plans a route, displayed on the screen, then follows the route, reactively adjusting its plan as new obstacles and opportunities are discovered, resulting from objects (trees, sheep, dog) being moved with the mouse. Sometimes a new obstacle turns up that requires global re-planning. You can see the mouse moving things in the display. (The mouse pointer is a black arrow.)

The dog may have to create a new plan when something changes, or modify its existing plan, or modify how it executes the plan, e.g. taking a short cut that either did not exist or was not noticed when the plan was created, because the plan-creation process does not aim for an optimal plan, just a reasonable one. Searching for an optimal plan would take very much longer. Notice that in some sense the dog is conscious of where the sheep are, where the trees are, whether it has been moved, and whether a short cut is available. Its range of visibility extends over the whole terrain (as if it could see through trees), but that does not mean it notices every relationship between things it sees.

(2) http://www.cs.bham.ac.uk/research/poplog/figs/hybrid-sheepdog2.mpg (About 5.5MB)
Another demonstration of the same program, this time showing the 'probabilistic waypoints' used by the sheepdog to make its plans: their use hugely reduces the space of possible plans and makes it possible to find a plan very quickly by searching the graph of non-obstructed connections between the waypoints. However the plans thus found are often non-optimal, and the sheepdog reactively discovers opportunities to improve them by smoothing them as it acts on them.
The plan construction is very fast, using the technique of 'probabilistic waypoints' [*]. Instead of searching in the huge space of complete paths in the terrain, the dog randomly generates a collection of 'waypoints' that are close to but not too close to other objects, and searches for paths from the current location to the goal location going only through the waypoints, using non-obstructed straight lines joining the waypoints.
This dramatically reduces the search space for the planner, compared with considering all possible routes in the terrain. As a result, on a 1GHz PC running Linux Poplog most of the plans shown are found almost instantaneously -- though the planner is not guaranteed to find the shortest plan and may miss some narrow gaps.
The movie shows the waypoints generated just before each plan is formed, in addition to showing the selected plan. The plan may be somewhat jagged because the randomly generated waypoints are not necessarily well placed. These detours are smoothed out as the plan is followed. The movie shows how, while executing a plan, the sheepdog can reactively detect an unexpected obstacle or problem produced by the sheep moving in an unexpected way. It can also detect a new opportunity for a short cut, e.g. because a tree has been moved out of the way.
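
The following is a small self-contained sketch of the technique, in Python rather than Pop-11, with all names invented for illustration: it samples random waypoints clear of circular obstacles, links mutually visible waypoints by straight lines, runs Dijkstra's algorithm over that graph, and then smooths the result with line-of-sight tests, roughly as described above. (The real planner also keeps waypoints 'close but not too close' to objects; this sketch merely rejects samples inside obstacles.)

    import heapq, math, random

    def segment_clear(p, q, obstacles):
        """True if the straight segment p-q misses every (x, y, r) circle."""
        (px, py), (qx, qy) = p, q
        dx, dy = qx - px, qy - py
        for cx, cy, r in obstacles:
            # Parameter of the point on p-q closest to the circle centre.
            t = ((cx - px) * dx + (cy - py) * dy) / (dx * dx + dy * dy or 1.0)
            t = max(0.0, min(1.0, t))
            if math.hypot(px + t * dx - cx, py + t * dy - cy) <= r:
                return False
        return True

    def plan(start, goal, obstacles, n_points=80, size=10.0):
        pts = [start, goal]             # sample waypoints outside obstacles
        while len(pts) < n_points:
            p = (random.uniform(0, size), random.uniform(0, size))
            if all(math.hypot(p[0] - cx, p[1] - cy) > r
                   for cx, cy, r in obstacles):
                pts.append(p)
        # Dijkstra over straight, unobstructed links between waypoints.
        dist, prev, heap = {start: 0.0}, {}, [(0.0, start)]
        while heap:
            d, p = heapq.heappop(heap)
            if p == goal:               # reconstruct the selected route
                path = [goal]
                while path[-1] != start:
                    path.append(prev[path[-1]])
                return path[::-1]
            if d > dist.get(p, float("inf")):
                continue                # stale queue entry
            for q in pts:
                if q != p and segment_clear(p, q, obstacles):
                    nd = d + math.hypot(q[0] - p[0], q[1] - p[1])
                    if nd < dist.get(q, float("inf")):
                        dist[q], prev[q] = nd, p
                        heapq.heappush(heap, (nd, q))
        return None                     # goal unreachable via these samples

    def smooth(path, obstacles):
        """Drop intermediate waypoints when a later one is directly visible."""
        out, i = [path[0]], 0
        while i < len(path) - 1:
            j = len(path) - 1
            while j > i + 1 and not segment_clear(path[i], path[j], obstacles):
                j -= 1
            out.append(path[j])
            i = j
        return out

    random.seed(1)                      # repeatable demo run
    trees = [(4.0, 4.0, 1.5), (6.0, 7.0, 1.0)]
    route = plan((1.0, 1.0), (9.0, 9.0), trees)
    print(smooth(route, trees) if route else "no route found")

Because only the sampled waypoints are considered, the search is fast but may miss narrow gaps and is not guaranteed to return the shortest route, exactly as noted above.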

(3) http://www.cs.bham.ac.uk/research/poplog/figs/hybrid-sheepdog.mpg (About 8MB)

In this movie some of the objects were moved using the mouse, sometimes making things worse for the sheepdog, sometimes allowing new short-cuts. However, the mouse is not shown.

As before, some of the newly detected problems and opportunities require only local modifications to the current plan, whereas in some cases an unexpected obstacle (in one case another sheep!) causes global re-planning.

NOTES on the hybrid sheepdog
The enhanced version of the sheepdog was developed, using SimAgent, by Marek Kopicki, as his first mini-project on the MSc in Advanced Computer Science, in the School of Computer Science in Birmingham. The code for the hybrid sheepdog, used to produce the video, is now included as a demonstration library in the SimAgent package.

The program file is browsable separately here. The toolkit and the hybrid sheepdog should work in any version of Poplog running on a linux/unix machine with the X window system, and also on a Windows PC with VMware.
A report on the mini-project is available here.

References

[*] Matthias S. Benkmann (2001), Motion Planning Using Random Networks.

6. Two toy 'emotional' agents moving around

http://www.cs.bham.ac.uk/research/poplog/figs/simagent/emotic.mpg
This 6.5MB file shows two "toy" emotional agents partly moving of their own accord (each keeps trying to get to the 'target' of its own colour while avoiding obstacles), and partly being moved with the mouse (not shown in the movie).

They each show, in faces at the bottom of the display, a set of 'emotional' states, teasingly labelled 'glum', 'surprised', 'neutral', and 'happy'.

Of course, this is just a shallow toy developed for tutorial purposes rather than anything that could be said to have human-like emotions -- except insofar as some (but not all) human emotions involve reacting to things that achieve or hinder goals or desires. However the notion of 'surprise' in this demonstration is entirely fake, since these simple creatures have no expectations and therefore cannot be surprised. But the change in facial expression is fun.

Unfortunately on a PC or other machine with a clock speed faster than about 1GHz the movie may run too fast for all the details to be seen, unless you have a player that can be slowed down. However if you fetch and install the package on a machine on which Poplog can run, e.g. a PC with Linux or a Sun workstation, then you can run the program at varying speeds and play with it -- frustrating or helping the little movers.

These agents differ from many so-called emotion simulations in that these creatures have some self-knowledge: they not only show their states (in facial expressions and in movements) but also describe them (which humans cannot always do!). This uses, in a simple way, features of the SimAgent toolkit that allow 'self-monitoring' activities to run in parallel with others.
Some of the events were caused by objects being moved with the mouse while the demonstration was recorded. Unfortunately you will not be able to interact in the same way with the movie of the program running.
This demo uses a slightly modified version of the SimAgent 'Feelings' tutorial file which is included in the SimAgent toolkit:
http://www.cs.bham.ac.uk/research/poplog/newkit/sim/teach/sim_feelings
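
A rough idea of the self-monitoring mechanism can be given by a toy sketch. This is illustrative Python, not the Pop-11 sim_feelings code, and all names in it are invented: a scheduler runs each agent's behaviour rules and its self-monitoring rules within every time-slice, so monitoring proceeds 'in parallel' with acting.

    class ToyAgent:
        """A toy mover with a 'mood' and a self-monitoring ruleset."""
        def __init__(self, target=3):
            self.position, self.target, self.mood = 0, target, "neutral"

        def behave(self):                # ruleset 1: act in the world
            if self.position < self.target:
                self.position += 1
            self.mood = "happy" if self.position == self.target else "glum"

        def monitor(self):               # ruleset 2: watch the agent itself
            # Self-description, analogous to the faces and messages
            # shown in the movie.
            print(f"I am at {self.position} and I feel {self.mood}")

    agent = ToyAgent()
    for tick in range(4):                # the scheduler gives each ruleset a
        agent.behave()                   # turn in every time-slice, so the
        agent.monitor()                  # monitoring runs 'in parallel'
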

Like the sheepdog, this simple demonstration is potentially indefinitely extendable to include more complex and sophisticated kinds of processes involving learning, reasoning, communicating, etc. The whole package, including Poplog, is freely available for anyone who wishes to use it for teaching, research or anything else.

There are many more serious papers and presentations concerning emotions, other kinds of affect, and architectures on the Cognition and Affect web site.


7. The Simworld package by Matthias Scheutz at Indiana University, USA

Matthias Scheutz has used the SimAgent toolkit for teaching and research at the University of Notre Dame (USA); he is now at Indiana University. His Simworld package is available as a gzipped tar file.

A short movie is available.


8. Dean Petters' concurrent-herding sheepdog

Two movies (made with xvidcap) show the sheepdog herding several sheep simultaneously, using a program produced by Dean Petters in 2001, during his MSc year. The mouse is used to move the sheep and the dog, sometimes helping the dog, sometimes making its job harder.


9. A project modelling some aspects of 'attachment' in infants.

Two movies (added 24th April 2004, made with xvidcap) show a still incomplete program being developed by Dean Petters as part of his PhD work on modelling 'attachment' processes in infancy. The movies show fragments of a scenario with a carer and a baby, both moving around in an environment containing food and other things. The mother needs food from time to time, and the baby needs to explore and to get attention from the mother. Sometimes the mother oscillates between going for food and attending to the baby. Different 'personalities' can be given to the mother to show how that affects the development of the baby. More information about the project is on Dean's web site and in a paper presented at the AAAI 2004 Spring Symposium in March 2004.

There are two movies, both of which show several aspects of the baby's internal state going up and down. The column labelled 'a_sec' corresponds to insecurity, produced by the mother being too far away or not looking at the baby. This is still work in progress.
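
As a rough illustration of the kind of quantity being plotted, here is a toy update rule in Python. All names and numbers are invented; only the qualitative behaviour (rising when the carer is far away or not attending, decaying otherwise) comes from the description above.

    def update_a_sec(a_sec, carer_distance, carer_attending,
                     too_far=5.0, rise=0.10, decay=0.05):
        """Raise insecurity when the carer is far away or inattentive;
        let it decay towards zero otherwise."""
        if carer_distance > too_far or not carer_attending:
            return min(1.0, a_sec + rise)
        return max(0.0, a_sec - decay)

    level = 0.0
    for distance in (2.0, 6.0, 7.0, 3.0):  # the carer wanders off and returns
        level = update_a_sec(level, distance, carer_attending=True)
        print(f"distance {distance}: a_sec = {level:.2f}")
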

NOTE ADDED Jan 2007:
The PhD was completed in 2006 and can be downloaded from: http://www.cs.bham.ac.uk/research/projects/cogaff/06.html#605


10. A 'Toy' program (Minder-0) by Ian Wright demonstrating the 'Nursemaid scenario' (about 1994)

The nursemaid scenario http://www.cs.bham.ac.uk/~axs/misc/nursemaid-scenario.html was originally proposed around 1986 by A. Sloman as a framework for investigating architectural issues in complex multi-functional intelligent agents with multiple, changing sources of motivation, embedded in rich, dynamic and only partly known environments. Since about 1991, when he came to Birmingham, the scenario, modified in various ways at various times, has inspired much of the work in the Cognition and Affect project, starting with the PhD theses of Luc Beaudoin and later Ian Wright.

In 1994, while working towards his PhD, Ian Wright implemented some of the ideas developed by Luc Beaudoin in a version of the nursemaid scenario referred to as 'Minder Version 0'. The program was shown in a BBC2 television interview with A. Sloman, broadcast in February 1997. Since then the program has had its 'cosmetics' improved by A. Sloman, and the new version is used for the movie demos below.

The movies show the 'minder', indicated by a capital 'M', looking after variable numbers of 'babies' labelled 'a', 'b', 'c', etc., whose charge state is indicated by an '*' when fully charged and by a decreasing single-digit number as the charge decreases. The babies move around at random, using up energy and reducing their charge level. There is a recharge point (represented as two curved arrows) to which M can carry babies when their charge gets dangerously low. They die if their charge reaches 0. At the top and bottom edges of the nursery are ditches, and if babies get too close they risk falling in and dying.

M notices when a baby's charge level is low, when a baby is near a ditch, and when a baby is dead, acquiring new motives in each case. Noticing that a baby is dead generates a motive to carry it to the 'disposal' location, marked with a skull and crossbones! A further motive can be triggered in M whenever there are three or more individuals in a room: that makes the room 'too crowded' and M acquires the motive to carry a baby to another room. M's motives are prioritised (e.g. by how low a baby's charge level is, or how close it is to a ditch, with disposal of dead babies having the lowest priority, followed by reducing crowding).

The nursery has several rooms, and M's knowledge of the contents of the rooms is constantly updated by a camera that scans the rooms in turn. It is therefore possible for M to act on out-of-date information about what is happening in a room, because the camera has not recently refreshed M's view of that room.

Thus as babies move around and their energy levels change, M's motives keep changing, and sometimes M's work on a motive has to be abandoned in favour of a new, higher-priority motive. M has very little intelligence, and completely lacks deliberative capabilities. Thus all its plans are stored reactive plans, activated by motive generators, and reactively executed or over-ridden. M also has very little knowledge about itself, and often behaves in stupid ways, e.g. attending to a distant baby before a nearby one. There is no learning. When there are only 5 or 6 babies M may be lucky and keep them alive for a long time, though sometimes even that situation proves too much. The more babies there are, the more harassed M becomes, and because it has no way of filtering new motives, they all grab its attention and can cause it to behave in a far from optimal fashion. Later work by Ian Wright developed the suggestion, in Luc Beaudoin's thesis, of using a dynamically varying threshold for an attention filter. (A movie of his later program will be added to this site eventually.)
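
The flavour of this purely reactive motive management can be suggested by a toy sketch. This is illustrative Python, not Ian Wright's Pop-11 code: the priority numbers and all names are invented, and the (possibly stale) percepts are handed in directly rather than arriving via a scanning camera.

    def generate_motives(babies, room_counts):
        """Motive generators: each condition noticed creates a motive.
        (In the real program the percepts may be stale, because the
        camera refreshes M's view of one room at a time.)"""
        motives = []
        for b in babies:
            if b["charge"] == 0:
                motives.append((1, "dispose of", b["name"]))    # lowest
            elif b["charge"] <= 3:
                motives.append((10 - b["charge"], "recharge", b["name"]))
            if b["near_ditch"]:
                motives.append((9, "move from ditch", b["name"]))
        for room, n in room_counts.items():
            if n >= 3:
                motives.append((2, "reduce crowding in", room)) # next lowest
        return motives

    def choose(motives):
        """Reactive selection: the highest-priority motive always wins,
        interrupting whatever M was doing before."""
        return max(motives, key=lambda m: m[0]) if motives else None

    babies = [
        {"name": "a", "charge": 2, "near_ditch": False},
        {"name": "b", "charge": 7, "near_ditch": True},
        {"name": "c", "charge": 0, "near_ditch": False},
    ]
    print(choose(generate_motives(babies, {"room1": 3})))
    # -> (9, 'move from ditch', 'b'): the ditch outranks a charge of 2

Because every new motive competes directly for control, adding more babies floods the selection step, which is exactly the unfiltered-attention problem described above.
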

Here are several runs of the Minder program, the core of which is unchanged since 1994, though the graphics have been altered. (The original showed both the 'real' world and the world as perceived by the minder, and used only the editor buffer for textual output, whereas this version shows only the real world and uses graphical 'posters' in RCLIB to display changes in motivation.)

This program does not use SimAgent, as it is implemented directly in Pop-11. However the experiences and problems arising out of this work helped to define requirements for both the RCLIB package and the SimAgent toolkit, which were developed later. Ian Wright's implementation of Minder-1 for his thesis used the toolkit.

http://www.cs.bham.ac.uk/research/poplog/figs/simagent/minder1.mpg (About 4MB)
http://www.cs.bham.ac.uk/research/poplog/figs/simagent/minder2.mpg (About 1.6MB)
http://www.cs.bham.ac.uk/research/poplog/figs/simagent/minder3.mpg (About 4.1MB)
http://www.cs.bham.ac.uk/research/poplog/figs/simagent/minder3.mpg (About 4.7MB)


11. Ian Wright's Minder-1 program.

(Movies will be added)


12. A 'Toy' conversational robot modelled loosely on T. Winograd's SHRDLU

(These demos may run too fast because of the sampling rate of the movies. However they do work in some players.)
http://www.cs.bham.ac.uk/research/poplog/figs/simagent/gblocks.mpeg (About 5MB)
http://www.cs.bham.ac.uk/research/poplog/figs/simagent/gblocks1.mpeg (About 3.6MB)
These two demonstrations use only the RCLIB subset of the SimAgent Toolkit, and also show how XVed can be used as part of an interface for textual interaction.
The two demos show a program inspired by the SHRDLU program of Terry Winograd's MIT PhD thesis (1971). This demo is very much simpler than Winograd's program, as it uses a very simple grammar and parser, but it does illustrate how a system can combine syntactic, semantic and 'world' knowledge in understanding potentially ambiguous sentences. It also shows what some of the libraries that come with Poplog (and therefore with SimAgent) can do, e.g. a simple grammar library. The program corresponds to this tutorial, originally developed for teaching AI at Sussex University: http://www.cs.bham.ac.uk/research/poplog/teach/msblocks
All the code required to run this demo is included in Poplog and RCLIB.
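
A toy sketch can indicate how world knowledge resolves a phrase that the grammar alone leaves ambiguous. This is illustrative Python, not the Pop-11 msblocks code; the tiny 'world' and all names are invented for the example.

    WORLD = {
        "b1": {"type": "block", "colour": "red",   "on": "table"},
        "b2": {"type": "block", "colour": "green", "on": "box"},
        "x1": {"type": "box",   "colour": "white", "on": "table"},
    }

    STOP = {"the", "a", "on"}           # words carrying no description here

    def referents(phrase):
        """Objects whose type, colour or support match every content word."""
        words = [w for w in phrase.split() if w not in STOP]
        return [name for name, p in WORLD.items()
                if all(w in (p["type"], p["colour"], p["on"]) for w in words)]

    def resolve(phrase):
        cands = referents(phrase)
        if len(cands) == 1:
            return cands[0]
        raise ValueError(f"{phrase!r} is ambiguous or empty: {cands}")

    print(referents("the block"))           # ['b1', 'b2']: syntax alone is
                                            # not enough to pick a referent
    print(resolve("the red block"))         # 'b1': colour disambiguates
    print(resolve("the block on the box"))  # 'b2': world knowledge decides
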


Last updated: 28 May 2009

Maintained by Aaron Sloman
