Dr Nick Hawes
Student Projects

Project presentation: slides and video.

In general I am interested in supervising Artificial Intelligence projects which develop a single- or multi-agent system to solve a particular problem or display certain functionality. Ideally such agents will be required to both sense and act in a complex environment such as the real world, a simulation, a video game or an online community. I am particularly interested in approaches that involve planning, scheduling, reasoning, agent architectures, or language. I am also interested in supervising projects which develop or extend tools to support the development of such systems.

Examples of such projects are listed below. If you do not see something you're interested in, do not hesitate to suggest a project of your own instead.

Intelligent Robotics

Within the Intelligent Robotics Lab we have a collection of robots and hardware which you could work with on a project. For a typical robotic project you could focus on mobility and may develop approaches to localisation and mapping. Alternatively, you may wish to build upon existing solutions to these problems to create a mobile intelligent system which can perform a particular task such as people following, object search and retrieval, navigation through crowds, or assistance at an event. Projects with real robots are inherently challenging but ultimately rewarding.

This year I am specifically interested in the following projects:

  • Episodic memory for a service robot. We are starting to see the advent of robots and intelligent systems which will be able to sense a large amount of what goes on in our day-to-day lives. If these systems could learn our typical routines, they could usefully spot variations from them, e.g. when someone falls ill. Imagine a Roomba which could notice when its owner had collapsed in the middle of the room. In this project I'd like someone to develop a system which can learn a user's daily routine and then notice variations from it. This could be done in simulation, or on our Pioneer 1 robot. The design could be based on a model of episodic memory.
  • Household robotics. We have recently purchased an iRobot Roomba development kit. This allows you to program a vacuum-cleaner-like robot to create applications that could be deployed in the home. I am interested in supervising projects that develop a Roomba application that uses an additional technology such as a Kinect, computer vision, localisation, or natural language, or presents a novel interface to the Roomba (perhaps via an iOS or Android device).
  • A robotic open day host. The School of Computer Science hosts many open days where crowds gather in the Atrium. It would be very useful if a robot could move through these crowds performing tasks such as handing out promotional literature or serving drinks. In this project I'd like one or more students to develop a system able to perform such a task using our B21r or Nao robots.
  • Mobile robot controllers. Debugging complex software systems requires a developer to be able to visualise many aspects of a system's internal state. When such a software system is embedded on a mobile robot this problem becomes very difficult because the developer must be able to debug the robot as it acts in the world. This often makes it impossible to use a desktop PC for this purpose, whilst laptops can be difficult to use while also monitoring the robot. Smart phones and tablets appear much better suited to this task, but lack appropriate software support. Therefore this project will aim to support mobile debugging by developing an app for displaying various parts of a mobile robot's internal state on an Android or iOS device. The app should support the display of log messages at various levels of granularity (similar to Log4j etc.) and sensor and motor readings (similar to playerv from the Player-Stage project). Given enough time, a 3D visualisation of the robot's knowledge would also be highly desirable (similar to Peekabot, or rviz from ROS).
  • Outdoor Robotics. In conjunction with Jeremy Baxter and Rustam Stolkin I am trying to create an intelligent mobile robot system capable of performing search and patrol tasks outdoors (e.g. around campus). This robot will be based on Rustam's tracked robot (which is comparable to a Talon military robot). To help us progress this project we are interested in students doing projects to: create a 3D simulation of campus suitable for use in testing robot software; implement a 3D SLAM system on the Dora robot (Dora's SLAM system is currently only 2D); and implement an odometry-based control interface for the tracked robot.
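As a starting point for the episodic memory project above, routine learning can be prototyped very simply before any robot hardware is involved. The sketch below is a minimal, hypothetical illustration (the class name, observation format, and 5% threshold are my own assumptions, not part of the project brief): it counts how often each activity is observed in each hour-of-day slot, then flags observations that are rare for that slot as deviations from the routine.

```python
from collections import Counter, defaultdict

class RoutineModel:
    """Learns how often each activity is seen in each hour-of-day slot,
    then flags observations that are rare for that slot."""
    def __init__(self, threshold=0.05):
        self.threshold = threshold          # minimum expected relative frequency
        self.counts = defaultdict(Counter)  # hour -> Counter of activities

    def observe(self, hour, activity):
        self.counts[hour][activity] += 1

    def is_anomalous(self, hour, activity):
        slot = self.counts[hour]
        total = sum(slot.values())
        if total == 0:
            return True  # never seen anything at this hour
        return slot[activity] / total < self.threshold

# Train on a week of simulated observations: the user is usually in the
# kitchen at 8am and on the sofa at 8pm.
model = RoutineModel()
for day in range(7):
    model.observe(8, "kitchen")
    model.observe(20, "sofa")

print(model.is_anomalous(8, "kitchen"))         # False: matches the routine
print(model.is_anomalous(8, "lying_on_floor"))  # True: a deviation to report
```

A real system would replace the hand-labelled activities with percepts from the robot's sensors, but the same learn-then-compare structure applies.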

The CoSy Architecture Schema Toolkit

I am the lead developer on a software toolkit called the CoSy Architecture Schema Toolkit (CAST). This is a toolkit for developing intelligent systems using a component-based architecture. I would be very interested in supervising students with strong programming backgrounds who are interested in adding functionality to CAST. I'd also like help developing visualisation methods such as 3D modelling or timelines showing information exchange; tools for extracting protocol models from source code or running systems; and synchronisation methods for keeping track of time across multiple machines.

There is a chance that a project related to CAST could be carried out in conjunction with the CoR-Lab at Bielefeld University. We hold some money to support joint work on architecture tools, and are willing to fund visits for our students to Bielefeld if appropriate.

This year I am specifically interested in the following projects:

  • ROS integration. ROS is now the most widely-used robot middleware. CAST would benefit from integration with ROS in various ways. This project would explore ways of building generic interfaces between the two systems, or reimplementing parts of CAST on top of ROS.
  • Modelling communication protocols. This project would build frameworks which are capable of taking a running CAST system, or its source code, and extracting the repeated patterns of interaction that occur between its components. The extracted models could then be used for debugging, visualisation or validating the behaviour of the system at runtime.
  • Visualising complex systems. CAST systems are often large and complex. Currently the only available visualisations of these systems are print-outs on the command line. In this project I'd like someone to explore 3D (or other software or hardware) visualisations of systems that can be manipulated by a user, and also used to determine system state at a glance.
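To give a flavour of the protocol-modelling project, one simple way to find repeated interaction patterns in a trace is n-gram mining over the message log. The sketch below is a toy illustration, not CAST code: the event format (sender, receiver) and the component names are invented for the example.

```python
from collections import Counter

def extract_patterns(events, n=2, min_count=2):
    """Find interaction n-grams (consecutive component-to-component messages)
    that repeat at least min_count times in an event trace."""
    grams = Counter(tuple(events[i:i + n]) for i in range(len(events) - n + 1))
    return {gram: count for gram, count in grams.items() if count >= min_count}

# A toy trace: each event is (sender, receiver) for one message.
trace = [("vision", "binder"), ("binder", "planner"),
         ("vision", "binder"), ("binder", "planner"),
         ("planner", "motors")]

for pattern, count in extract_patterns(trace).items():
    print(count, pattern)  # the vision->binder->planner exchange occurs twice
```

Extracted patterns like these could then feed the debugging, visualisation, or runtime-validation uses described above, with the n-gram step replaced by a richer model (e.g. automata learning) in a full project.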

Virtual Worlds

Virtual worlds such as Second Life, and video games such as Unreal Tournament and Civilization offer a different route for developing intelligent agents. Projects in these domains require less focus on sensing and control and allow you to develop higher-level intelligent behaviour. I am interested in supervising the development of an avatar or NPC for any such world, but am particularly interested in interactive characters for Second Life, where an AI-controlled character is generally assumed to be human unless it gives itself away.

This year I am specifically interested in the following projects:

  • Starcraft AI. There is now a published API for creating AI players for Starcraft, and a set of competition benchmarks for AI players. Along with Jeremy Baxter I am interested in supervising projects that develop Starcraft AI players and compare them to these benchmarks.
  • Situated dialogue for Second Life. NPCs in virtual worlds need to talk to humans, but NLP approaches in existing systems are very limited. In this project I'd like a student to embed natural language engineering techniques into an AI-controlled character which can talk about its surroundings, and/or information which is accessible over the web. The implementation of this could be based on anything from a simple Eliza-like interface to an approach based on Antje Meyer's models of human language processing.
  • NPC interface to Second Life. For part of a PhD project, we are currently developing an interface for developing NPCs in Second Life. The interface will bridge between libOpenMetaverse and our own CAST middleware. In this project I'd like a student to take control of this strand of the research and develop a complete interface which other developers can use to easily create Second Life NPCs using CAST.
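The "simple Eliza-like interface" mentioned above amounts to ordered pattern-response rules with slot filling. The sketch below shows the core idea in a few lines; the rules and responses are invented placeholders, and a project would extend them with rules grounded in the character's surroundings.

```python
import re

# Ordered (pattern, response template) rules; the first match wins.
RULES = [
    (r"\bi need (.+)", "Why do you need {0}?"),
    (r"\bi am (.+)", "How long have you been {0}?"),
    (r"\b(hello|hi)\b", "Hello! What brings you here today?"),
    (r".*", "Tell me more."),
]

def respond(utterance):
    """Return a canned response by matching the first applicable rule,
    echoing captured text back into the response template."""
    text = utterance.lower().strip()
    for pattern, template in RULES:
        match = re.search(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I am lost in this sim"))  # How long have you been lost in this sim?
```

Even this crude echo technique is often enough to sustain short exchanges in a world where, as noted above, characters are assumed human unless they give themselves away.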

Using AI to Understand Natural Intelligence

The Intelligent Robotics Lab has close ties with researchers working on animal cognition within the School of Biosciences, and also researchers interested in human cognition in Psychology. To help cement this, I am interested in supervising projects that work on applying AI and robotics techniques in the pursuit of understanding natural intelligence.

This year I am specifically interested in the following projects:

  • A Crow Simulator. Crows have been shown to display surprisingly intelligent problem-solving and tool-use abilities. To help us explore models of crow cognition we would like a student to build a 3D physics-based simulation of a crow that we could use to test models of behaviour. The simulation could use existing 3D and physics packages and would not need to be incredibly high fidelity. Instead it would be useful to have a programming API which could support natural control of the crow via biologically plausible means.
  • Modelling Physical Problem Solving. Researchers are currently studying how both parakeets and orangutans solve various types of physical puzzles (such as boxes containing food with only a small hole which can be used to access it). We would be interested in developing models of the different ways these puzzles could possibly be solved and comparing them to how the animals actually solve them.
  • Understanding Theory-of-Mind. It is currently an open question how much humans use or need an explicit theory of the minds of others. Some research argues that a collection of simple tricks can produce the same effects with minimal representation and processing. In this project the student would create a simple world and use it to model the effects of having a theory of mind on agent behaviour.

Agent Architectures

I'd also be interested in supervising projects to reimplement one or more existing agent architectures within CAST as a research exercise. To do this you would need to pick a particular target problem (e.g. predator-prey interactions, foraging, simulating some aspect of human behaviour, etc.) then implement the solution using CAST as the foundation for a particular higher-level architecture structure. For example, students could choose to implement the following for a particular task domain:

  • Brooks' Subsumption architecture.
  • The 3T architecture.
  • The PRS architecture.
  • Sloman's H-CogAff architecture.
  • Wright/Beaudoin's Nursemaid architecture.
  • Bryson's POSH system.
  • Sun's CLARION architecture.
