For post-tutorial information
please see the 'AFTER TUTORIAL' web site.


A two-day tutorial at IJCAI 2005

Saturday 30th July and Sunday 31st July, 2005
Organised by the EC-Funded CoSy Project

We gratefully acknowledge sponsorship by
BT, IBM, SSAISB and InferMed

Aaron Sloman, The University of Birmingham, UK
Bernt Schiele, Darmstadt University of Technology, Germany

The tutorial was held in the David Hume Tower Faculty Room North, George Square, EH8 9JX

TUTORIAL ANNOUNCEMENT (Background and objectives).


There will be two slots of approximately 90 minutes each morning and each afternoon. Each tutorial presenter is asked to leave time within their slot for questions and discussion. The last slot will be used for a panel discussion with audience participation. Participants should also be able to make good use of the refreshment breaks for discussion.

Day 1: Saturday 30th July

Day 1 Morning:

Tutorial Registration and welcome

SESSION 1 09:00--10:30: Tom Mitchell

Fredkin Professor of AI and Learning
Founding Director since 1997, Center for Automated Learning and Discovery, School of Computer Science, Carnegie Mellon University
Author of Machine Learning

Tom is a Fellow and former President of AAAI, and has won prestigious awards, including the IJCAI Computers and Thought Award in 1983 and the NSF Presidential Young Investigator Award in 1984. He is the 2004 Chair-Elect of the American Association for the Advancement of Science, Section on Information, Computing, and Communication. He is on several editorial boards and national and professional committees. He was a co-founder of the Machine Learning journal.
Tom's research interests range over computer science, machine learning, artificial intelligence, and cognitive science. He focuses on basic and applied problems in machine learning, including new algorithms for time series analysis of human brain image data, and statistical algorithms for natural language processing.

Talk 1: Animal Learning, Machine Learning

For the past 30 years, research on machine learning and research on animal learning have proceeded pretty much independently. We now know enough about both fields to begin looking for points of contact between the two. This tutorial will briefly cover some of the key facts we understand about animal learning and machine learning, and will then explore several machine learning models that appear to be strikingly good starting points for modeling certain types of animal learning. For example, we'll look at machine learning methods such as co-training, reinforcement learning, and explanation-based learning, and align them with specific phenomena in human and animal learning that are well-modeled by these methods.
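One classic point of contact of the kind the abstract describes is the Rescorla-Wagner model of classical conditioning, whose update rule is the delta rule familiar from machine learning. As a toy illustration (my sketch, not material from the talk itself):

```python
# Minimal sketch of the Rescorla-Wagner model of classical conditioning.
# Its update is the delta rule used throughout machine learning: step the
# prediction toward the target in proportion to the prediction error.

def rescorla_wagner(trials, alpha=0.1, lam=1.0):
    """Return associative strength V after each conditioning trial.

    trials: list of booleans, True if the US (reward) was presented.
    alpha:  learning rate (stimulus salience); lam: asymptote of learning.
    """
    v = 0.0
    history = []
    for us_present in trials:
        target = lam if us_present else 0.0
        v += alpha * (target - v)   # delta rule: reduce prediction error
        history.append(v)
    return history

# 20 acquisition trials followed by 20 extinction trials: V rises toward 1
# along a negatively accelerated curve, then decays back toward 0.
curve = rescorla_wagner([True] * 20 + [False] * 20)
```

The same update, generalised over time, is the core of temporal-difference learning, which is one reason reinforcement learning aligns so well with conditioning phenomena.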

SESSION 2 11:00--12:30: Alex Kacelnik

Head of the Behavioural Ecology Research Group at Oxford University's Department of Zoology.
Alex trained as a zoologist in Argentina, later worked in zoology and psychology departments in Oxford, Groningen and Cambridge, until returning to Oxford in 1990 to set up the present Behavioural Ecology Group. The common thread of his research is the attempt to marry theories and experimental work. Most of the work includes modelling, but little is purely or even predominantly theoretical. Ecology, psychology and evolutionary theory are all involved both in the selection of research questions and in the nature of the explanations submitted. He has a strong interest in philosophical and social implications of ethology. His group investigates animal -- and human -- decision-making using the tools of experimental psychology and evolutionary biology. Alex's research recently made headlines when one of the group's New Caledonian crows, Betty, became the first animal -- other than a human -- to be observed taking a new material and fashioning a tool out of it -- she bent a straight wire into a hook to retrieve food from a cylinder. See the video.

Talk 2: Animal Learning, Representation and Choice.

Most models of animal decision making assume that subjects store knowledge about problem structure and quantitative parameters, and then take action by retrieving this information and ranking alternatives according to some maximising criterion. This often works well, but it also raises issues that are relevant in the robotics field. One example is Weber-Fechner's Law (the use of logarithmic scaling to store quantitative information): constant retrieval error on a log scale leads to a bias towards or against variable sources of reward, depending on whether desirable outcomes are small or large. Agents facing variable inputs then fail to maximise expected benefit because of the pattern of internal representation. Another example is context-dependent valuation: in addition to storing information about the physical parameters of sources of reward, animals seem to store the improvement afforded with respect to the context of encounter, leading to paradoxical choices when encounters with sources of reward correlate with specific contexts. I shall describe experimental and modelling work on animal decision making with special reference to avian models. The problems posed by tool making and tool use will also be discussed and illustrated with films.
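The log-scaling effect described above can be seen in a toy calculation (my illustration, not the speaker's model): if memories are stored on a log scale and averaged there, retrieval returns the geometric mean of the outcomes, which never exceeds the arithmetic mean, so variable sources are systematically mis-valued.

```python
import math

def log_scale_value(outcomes):
    """Average stored memories on a log scale, then map back.

    This is the geometric mean, which is <= the arithmetic mean whenever
    the outcomes vary."""
    return math.exp(sum(math.log(x) for x in outcomes) / len(outcomes))

# A variable source paying 2 or 8 units (arithmetic mean 5) vs a fixed 5.
variable = log_scale_value([2.0, 8.0])   # geometric mean: 4.0
fixed = log_scale_value([5.0, 5.0])      # 5.0

# When large outcomes are desirable (reward amounts), the variable source
# is undervalued (4 < 5): a bias against variability. When small outcomes
# are desirable (delays to reward), the same compression makes a variable
# delay look shorter than its mean: a bias towards variability.
```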

Day 1: Afternoon

SESSION 3 14:00--15:30: Paul Cohen

Professor and Director
Experimental Knowledge Systems Laboratory
Department of Computer Science, University of Massachusetts, Amherst, MA.
Author of Empirical Methods for Artificial Intelligence and of a course on Philosophy of Mind for Robots.
Paul is a Fellow of the Engineering and Physical Sciences Research Council, a Fellow of AAAI, a former member of the AAAI Council, and has held a number of visiting professorships. He is Editor-in-Chief of Evaluation of Intelligent Systems, and has been on several editorial boards and other academic and professional committees. He has academic training in Psychology and AI.

His recent research projects include a wargaming environment, the Robot Baby project, in which a robot learns representations and their meanings sufficient for natural language and planning; and the Packrats project, in which rats are trained to carry video cameras for search-and-rescue operations. Since 2001, Paul's group has been adapting algorithms for finding patterns in temporal data to intelligence analysis problems.

Talk 3: Architectures for Cognitive Information Processing

Human memory is the most sophisticated information retrieval device in the known universe. Where else can you get semantic matching between cues and responses? Much of what we call intelligence arises from the interplay between perception and memory. Together, these mechanisms ensure that unimportant details are suppressed, attention is focused, salient knowledge is always 'ready at hand,' subtle departures from expectations are detected, meta-knowledge about the content of memory is created, and learning is efficient and effective. (I tend to think reasoning is overrated because much of what we call reasoning is done effortlessly by recall, recognition, analogy, association, and other memory mechanisms.) What are the alternatives to explicitly engineering a human-like memory? If the system is to learn most of what it knows, we run immediately into the problem of learning new representations. The tutorial will present a mixture of ideas from philosophy, cognitive psychology and AI/ML relevant to this problem and the related problem of semantic locality -- ensuring that semantically-related knowledge is ready at hand.

SESSION 4 16:00--17:30:

Because Professor Kaelbling is now not able to attend the tutorial, this session will be split into two shorter sessions, 4a and 4b as follows.

SESSION 4a (16:00--16:45) Jackie Chappell

Jackie was previously a researcher at the Behavioural Ecology Group, Oxford University and in September 2004 became a Lecturer in Animal Behaviour, School of Biosciences, University of Birmingham, UK.

After completing her DPhil at the University of Oxford, she spent several years studying various aspects of animal cognition. Most recently, her work has focussed on the cognition of tool manufacturing behaviour in New Caledonian crows. These birds manufacture and use at least three distinct types of tool: hook tools made out of twigs, stepped and tapered tools made from Pandanus leaves, and straight sticks. This behaviour is unique among free-living non-humans because of the use of hooks, the degree of standardisation of the tools, and the use of different tool types. One interesting question is whether tool manufacture is rare because of the scarcity of selection pressure on species to use tools, or whether tool use and manufacture require advanced cognitive capabilities which most species do not possess.

Since moving to the University of Birmingham, her interests have broadened to encompass investigating the cognitive architecture involved in the perception of affordances and causality, and the way in which this develops ontogenetically and phylogenetically. For example, how do animals integrate information about affordances and relationships discovered during exploration with their pre-existing knowledge?

She is co-author of a paper on The Altricial-Precocial Spectrum for Robots to be presented at IJCAI-05.

TALK 4a: How do animals gather useful information about their environment and act on it?

Animals are much more successful than current robots in their ability to gather information from the environment, attribute causes to effects, detect affordances and sometimes generate individually novel behaviour. What kinds of mechanisms make this possible? I will discuss different learning mechanisms in animals, and their strengths and weaknesses. I will also discuss some surprising findings about the richness of the information that animals have about their environment, including the temporal patterning of events and the 'appropriateness' of a stimulus. Exploration and play seem to be very important for some kinds of behaviour, particularly flexible responses to novel problems, and I will briefly explore possible mechanisms which might enable these kinds of behaviour.

[1] R. C. Barnet, R. P. Cole, and R. R. Miller. Temporal integration in second-order conditioning and sensory preconditioning. Animal Learning and Behavior, 25:221--233, 1997.
[2] J. Chappell and A. Kacelnik. New Caledonian crows manufacture tools with a suitable diameter for a novel task. Animal Cognition, 7:121--127, 2004.
[3] S. E. Cummins-Sebree and D. M. Fragaszy. Choosing and using tools: Capuchins (Cebus apella) use a different metric than tamarins (Sanguinus oedipus). Journal of Comparative Psychology, 119(2):210--219, 2005.
[4] M. Domjan and N. E. Wilson. Specificity of cue to consequence in aversion learning in the rat. Psychonomic Science, 26:143--145, 1972.
[5] M. Hayashi and T. Matsuzawa. Cognitive development in object manipulation in infant chimpanzees. Animal Cognition, 6:225--233, 2003.
[6] A. A. S. Weir, J. Chappell, and A. Kacelnik. Shaping of hooks in New Caledonian crows. Science, 297:981, 2002.

TALK 4b (16:45--17:30)
Ales Leonardis will talk on Problems of representation and learning in machine vision.

Ales is Professor of Computer and Information Science in the Visual Cognitive Systems Laboratory, Faculty of Computer and Information Science, University of Ljubljana, Slovenia, where he received his PhD in 1993, and where he leads one of the research groups in the EC CoSy project which inspired this tutorial. He is also Adjunct Professor of Computer Science at the Faculty of Computer Science, Graz University of Technology. From 1988 to 1991 he was a visiting researcher in the General Robotics and Active Sensory Perception Laboratory at the University of Pennsylvania. He is author or co-author of more than 130 papers published in journals and conferences. In 2002 he co-authored a paper on 'Multiple Eigenspaces' which won the twenty-ninth annual Pattern Recognition Society award for originality and presentation. He has co-edited two books on aspects of computer vision and graphics, and co-authored the book Segmentation and Recovery of Superquadrics (Kluwer, 2000).
In 2004 he was chosen as Ambassador of Science for the Republic of Slovenia.

His research interests include robust and adaptive methods for computer vision, object and scene recognition, learning, and 3-D object modeling. He has been actively involved in a number of bilateral and multilateral projects (EU FP6 projects CoSy, MOBVIS, VISIONTRAIN), and is a program co-chair of ECCV 2006.

Day 2: Sunday 31st July

Day 2 Morning:

SESSION 5 09:00--10:30: David Forsyth

Professor, Computer Science Division,
University of California, Berkeley.
David holds a BSc and an MSc in Electrical Engineering from the University of the Witwatersrand, Johannesburg, and an MA and D.Phil from Oxford University. He is currently a full professor at U.C. Berkeley and at the University of Illinois at Urbana-Champaign. He has published over 90 papers on computer vision, computer graphics and machine learning. He served as program co-chair for IEEE Computer Vision and Pattern Recognition in 2000, general chair for IEEE CVPR 2006, and is a regular member of the program committee of all major international conferences on computer vision. He has received best paper awards at the International Conference on Computer Vision and at the European Conference on Computer Vision.
His recent textbook, Computer Vision: A Modern Approach (joint with J. Ponce and published by Prentice Hall) is now widely adopted as a course text (adoptions include MIT, U. Wisconsin-Madison, UIUC, Georgia Tech and U.C. Berkeley).

Talk 5: Words and pictures

There is growing interest in the computer vision community in the relationship between vision and language. The details of this relationship are opaque, but a greater understanding might help us reason about what it means to be an object, about how people learn the names of things, and about how objects should be described.

We will discuss a variety of methods for establishing correspondence between structures in pictures and annotations of those pictures. First, we will discuss clustering methods that try to build probabilistic models that can explain both the images and their annotations. While there is no direct correspondence reasoning in these methods, there is a substantial indirect component of correspondence. Such methods make it possible to build systems for organizing and annotating pictures automatically. Second, we will discuss methods for direct reasoning about correspondence. These methods are largely by analogy with methods from the machine translation community. We will show how to attach various forms of annotation to pictures automatically. Third, we will show methods for working with indirect annotations --- where, for example, the statistics of the annotation are known but not the exact annotation. Such methods, which again derive from the machine translation community, allow us to make some kinds of handwritten documents searchable with the absolute minimum of supervised data. We will spend some time discussing applications of this kind of correspondence reasoning, including building browsing interfaces for museum collections, building object recognition systems, building image search systems, building systems for interpreting human activities in video, and building systems for interpreting handwriting.
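To convey the flavour of such correspondence models, here is a deliberately minimal sketch (my own toy example, not one of the systems covered in the talk): even bare co-occurrence counts between image-region clusters ("blobs") and caption words support a crude form of automatic annotation, despite the correspondence being only indirect at training time.

```python
from collections import Counter, defaultdict

def train(annotated_images):
    """annotated_images: list of (blob_ids, words) pairs for whole images.

    Correspondence is indirect: we only know which words appear somewhere
    in the same image as which blobs, never which word names which blob."""
    counts = defaultdict(Counter)
    for blobs, words in annotated_images:
        for b in blobs:
            for w in words:
                counts[b][w] += 1
    return counts

def annotate(counts, blobs, n=1):
    """Predict the n most co-occurrent words for each blob in a new image."""
    return {b: [w for w, _ in counts[b].most_common(n)] for b in blobs}

# Three hypothetical training images, each a set of blobs plus a caption.
data = [([1, 2], ['tiger', 'grass']),
        ([1, 3], ['tiger', 'water']),
        ([2, 3], ['grass', 'water'])]
model = train(data)
# Pooling across images disambiguates: blob 1 pairs most often with 'tiger'.
```

Translation-style models refine exactly this idea, replacing raw counts with alignment probabilities estimated by EM.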

SESSION 6 11:00--12:30: Richard Dearden

Until recently, Richard was Principal Investigator for Probabilistic Fault Detection for Hybrid Discrete/Continuous Systems
at NASA Ames Research Center, USA.
In January 2005 he was appointed Senior Lecturer in the School of Computer Science, University of Birmingham, UK.

Richard is interested in a number of different fields of artificial intelligence, but the unifying theme is reasoning under uncertainty. This is an area of AI in which significant recent progress has been made, and it will prove to be an extremely important tool as AI techniques are applied in more and more real-world situations. His research can be broadly categorized into work on planning, scheduling, diagnosis, and machine learning.

From 2000 to 2004, he worked in the Model-Based Diagnosis and Recovery group at NASA Ames Research Center, encountering a number of interesting diagnosis problems for which new techniques needed to be developed. The area that interests him most is diagnosis of hybrid systems. The problem is to track the state of a system described in terms of both continuous and discrete variables. In principle, diagnosis on these models is simply Bayesian belief updating as observations arrive. However, this is rarely computationally feasible in practice, so other techniques are needed, e.g. techniques based on sample-based approximations such as particle filters.
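The particle-filter idea can be sketched in a few lines (my toy illustration, with invented dynamics, not code from the speaker's work): each particle carries a discrete mode together with a continuous state, and the predict-weight-resample loop approximates the Bayesian belief update that is intractable to compute exactly.

```python
import math
import random

def step_particles(particles, obs, obs_noise=0.5, fault_prob=0.01):
    """One predict-weight-resample step over (mode, x) particles."""
    moved, weights = [], []
    for mode, x in particles:
        # Discrete transition: a working component may spontaneously fail.
        if mode == 'ok' and random.random() < fault_prob:
            mode = 'faulty'
        # Continuous dynamics depend on the discrete mode.
        drift = 1.0 if mode == 'ok' else 0.0   # 'faulty': actuator stuck
        x = x + drift + random.gauss(0.0, 0.1)
        # Weight each particle by a Gaussian observation likelihood.
        weights.append(math.exp(-((obs - x) ** 2) / (2 * obs_noise ** 2)))
        moved.append((mode, x))
    # Resample in proportion to weight, focusing particles on likely states.
    return random.choices(moved, weights=weights, k=len(moved))

# Observations consistent with nominal drift keep belief on the 'ok' mode;
# faulty-mode particles fall behind the observations and are resampled away.
random.seed(0)
particles = [('ok', 0.0)] * 200
for t in range(1, 11):
    particles = step_particles(particles, float(t))
ok_fraction = sum(1 for m, _ in particles if m == 'ok') / len(particles)
```

The fraction of particles in each mode is the filter's posterior over the diagnosis; a run of observations that stops drifting would shift that mass to 'faulty'.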

Talk 6: Planning and Learning in Hybrid Discrete-Continuous Models

Many real-world problems require richer representations than are typically studied in planning and learning. For example, state estimation in complex systems such as vehicles or spacecraft often requires a representation that captures the rich continuous behaviour of these kinds of systems. Similarly, planning for such systems may require a representation of continuous resource usage, particularly when planning under uncertainty. In this talk I will discuss some commonly used representations of these systems as hybrid systems, examine some approaches to planning and state estimation in them, and finally discuss some first steps toward learning a hybrid model, or at least parameters of such a model, directly from data.

Day 2: Afternoon

SESSION 7 14:00--15:30: Mark Steedman

Professor of Cognitive Science
School of Informatics
University of Edinburgh.
Also adjunct professor in Computer and Information Science
University of Pennsylvania, Philadelphia.
Mark's PhD is in Artificial Intelligence from the University of Edinburgh. He was a Sloan Fellow at the University of Texas at Austin in 1980/81, and a Visiting Professor at Penn in 1986/87. He is a Fellow of the American Association for Artificial Intelligence, the British Academy, and the Royal Society of Edinburgh.

He works in Computational Linguistics, Artificial Intelligence, and Cognitive Science, on aspects of speech, language, and gesture. He is also interested in computational musical analysis and combinatory logic, and has worked on the generation of meaningful intonation for speech by artificial agents, animated conversation, the communicative use of gesture, tense and aspect, and Combinatory Categorial Grammar (CCG). He is the author of The Syntactic Process.
Much of Mark's current NLP research is addressed to probabilistic parsing and to issues in spoken discourse and dialogue, especially the semantics of intonation. He is currently working with colleagues in computer animation using these theories to guide the graphical animation of speaking virtual or simulated autonomous human agents. Some of his research concerns the analysis of music by humans and machines.

Talk 7: Plans and the Computational Structure of Language

For both neuro-anatomical and theoretical reasons, it has been argued for many years that language and planned action are related. I will discuss this relation and suggest a formalization related to AI planning formalisms, drawing on linear and combinatory logic. This formalism gives a direct logical representation for the Gibsonian notion of "affordance" in its relation to action representation. This relation is so direct that it raises an obvious question: since higher animals make certain kinds of plans, and planning seems to require a symbolic representation closely akin to language, why don't those animals possess a language faculty in the human sense of the term? I will show that the recursive concept of the mental state of others that underlies propositional attitudes provides almost all that is needed to generalize planning to fully lexicalized natural language grammar. The conclusion will be that the evolutionary development of language from planning may have been a relatively simple and inevitable process. A much harder question is how symbolic planning evolved from neurally embedded sensory-motor systems in the first place.

SESSION 8 16:00--17:30: Final session: Discussion between speakers and audience, led by the tutorial organisers.

The tutorial will end with a panel discussion on what the major unsolved scientific and engineering problems are and how we can make progress towards solving them.
Unfortunately Alex Kacelnik cannot stay for the second day owing to family commitments. Fortunately, his former colleague Jackie Chappell, now at the University of Birmingham, will be at the tutorial on both days and has agreed to join the panel on day two.


Last updated: 6 Aug 2005
Maintained by Aaron Sloman
School of Computer Science, The University of Birmingham, UK