ARTIFICIAL INTELLIGENCE AND NATURAL COMPUTATION (OLD) SEMINARS NOTE: Seminars in this series prior to Spring 2004 are listed on a separate archive page. Visit http://events.cs.bham.ac.uk/seminar-archive/ainc for more information. -------------------------------- Date and time: Monday 26th January 2004 at 16:00 Location: UG40, School of Computer Science Title: Modularity, specialized learning, and an innate bias for reason Speaker: Joanna Bryson (http://www.cs.bath.ac.uk/~jjb/index.html) Institution: University of Bath (http://www.cs.bath.ac.uk) Abstract: In both psychology and artificial intelligence, many prominent researchers believe that intelligence is at least partially modular, but there has been relatively little work unifying the psychological and AI perspectives on modularity. This talk begins with a model of transitive inference reasoning in non-human primates developed at Edinburgh by Mitch Harris and Brendan McGonigle and ends with a new theory of the neuroscience of learning. On the way, it ponders important questions such as whether production-rule systems really are any kind of model of natural intelligence. -------------------------------- Date and time: Monday 2nd February 2004 at 16:00 Location: UG40, School of Computer Science Title: Understanding Complex Systems Speaker: Peter Andras (http://www.staff.ncl.ac.uk/peter.andras/) Institution: University of Newcastle (http://www.staff.ncl.ac.uk) Abstract: Complex systems of many components with many interactions and intricate behaviours abound. A new approach is presented here to describe and analyse such systems. This approach has its roots in abstract communication systems theory (Luhmann, 1996). A brief conceptual introduction is presented, followed by a discussion of selected complex systems (the cell and the genome) in the context of the proposed approach. Reference: Luhmann, N., Social Systems, Stanford University Press, 1996.
-------------------------------- Date and time: Monday 9th February 2004 at 16:00 Location: UG40, School of Computer Science Title: Emergence of Uptake Signals in Bacterial DNA Speaker: Dominique Chu (http://www.fi.uib.no/~gross/) Institution: University of Birmingham (http://www.fi.uib.no/) Abstract: The DNA of some naturally competent species of bacteria contains a high number of evenly distributed copies of a relatively short (about 10bp long) sequence. This highly overrepresented sequence is believed to be an uptake signal sequence which helps bacteria to selectively take up DNA from (dead) members of their own species. I will present a model designed to demonstrate the emergence of similar uptake signal sequences in a population of simulated evolving agents. -------------------------------- Date and time: Monday 16th February 2004 at 16:00 Location: UG40, School of Computer Science Title: Missing from the reading list: the Voynich manuscript Speaker: Gabriel Landini (http://web.bham.ac.uk/G.Landini/home.htm) Institution: University of Birmingham (http://www.bham.ac.uk) Host: Peter Tino Abstract: The Voynich manuscript is a mediaeval to early modern (possibly scientific) book written in an unconventional script and what appears to be an unknown language or code. There have been various attempts to find a solution -- some by prominent cryptologists -- but the book is still unread, leaving several possibilities open: a hoax, an artificial language, a code/cipher or even a lossily encoded, non-retrievable text. A brief introduction to the history of the manuscript will be presented, together with some statistical properties of the text which are comparable to those found in natural languages.
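One family of statistical properties often used in such comparisons is the rank-frequency distribution of word tokens, which in natural-language corpora falls roughly as a power law (Zipf's law). The abstract does not specify which statistics were used; the following is a generic, minimal sketch of how a Zipf-style profile might be computed for any tokenised text:

```python
from collections import Counter
import math

def zipf_profile(text):
    """Rank-frequency profile of whitespace-delimited tokens.

    Natural-language corpora typically show log(frequency) falling
    roughly linearly with log(rank); strong deviations from this
    pattern are one signal that a text may not be natural language.
    """
    counts = Counter(text.lower().split())
    freqs = sorted(counts.values(), reverse=True)
    return [(rank, f, math.log(rank), math.log(f))
            for rank, f in enumerate(freqs, start=1)]

profile = zipf_profile("the cat sat on the mat and the dog sat too")
# profile[0] is the most frequent token's (rank, freq, log rank, log freq)
```

On a real corpus one would fit a line to the (log rank, log frequency) pairs and inspect the slope and residuals, rather than eyeballing individual entries.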
-------------------------------- Date and time: Tuesday 24th February 2004 at 15:00 Location: UG40, School of Computer Science Title: Learning of Event-Recording Automata Speaker: Dr Martin Leucker (http://user.it.uu.se/~leucker/) Institution: Department of Computer Systems, Uppsala University, Sweden (http://www.docs.uu.se/docs/newindex.shtml) Host: Marta Kwiatkowska Abstract: We first describe and then extend Angluin's algorithm for on-line learning of regular languages to the setting of timed systems. We consider systems that can be described by a class of deterministic event-recording automata. Our algorithm learns a description by asking a sequence of membership queries (does the system accept a given timed word?) and equivalence queries (is a hypothesized description equivalent to the correct one?). In the constructed description, states are identified by sequences of symbols; timing constraints on transitions are learned by adapting algorithms for learning hypercubes. The number of membership queries is polynomial in the size of the minimal zone graph and in the biggest constant of the automaton to be learned. -------------------------------- Date and time: Monday 1st March 2004 at 16:00 Location: UG40, School of Computer Science Title: Hyper-heuristics: An Emerging Search Technology Speaker: Graham Kendall (http://www.cs.nott.ac.uk/~gxk) Institution: University of Nottingham (http://www.cs.nott.ac.uk) Host: Peter Tino Abstract: This talk introduces and overviews an emerging methodology in search and optimisation. One of the key aims of these new approaches, which have been termed hyper-heuristics, is to raise the level of generality at which optimisation systems can operate. An objective is that hyper-heuristics will lead to more general systems that are able to handle a wide range of problem domains, in contrast to current meta-heuristic technology, which tends to be customised to a particular problem or a narrow class of problems.
Hyper-heuristics are broadly concerned with intelligently choosing the right heuristic or algorithm in a given situation. Of course, a hyper-heuristic can be (and often is) a (meta-)heuristic, and it can operate on (meta-)heuristics. In a certain sense, a hyper-heuristic works at a higher level than the typical application of meta-heuristics to optimisation problems, i.e., a hyper-heuristic could be thought of as a (meta-)heuristic which operates on lower-level (meta-)heuristics. In this talk we will introduce the idea and give a brief history of this emerging area. -------------------------------- Date and time: Monday 8th March 2004 at 16:00 Location: UG40, School of Computer Science Title: Notes on Learning to Compute and Computing to Learn Speaker: Khurshid Ahmad (http://www.computing.surrey.ac.uk/staff/KAhmad.htm) Institution: University of Surrey (http://www.computing.surrey.ac.uk) Abstract: This talk will look at the simulation of cognitive abilities/deficits in language, attention, and numeracy that require ensembles/networks. New developments in the neurosciences, including fMRI scanning, have indicated the possible existence of autonomous areas in the animal brain that are able to interact with each other through another network. Results of the simulations at Surrey have been encouraging, and I would like to share them, together with some observations in empirical neurophilosophy. -------------------------------- Date and time: Monday 15th March 2004 at 16:00 Location: UG40, School of Computer Science Title: A Hybrid Decision Tree/Genetic Algorithm method for Data Mining Speaker: Alex Freitas (http://www.cs.kent.ac.uk/people/staff/aaf/) Institution: University of Kent (http://www.cs.kent.ac.uk) Host: Jin Li (J.Li@cs.bham.ac.uk) Abstract: This seminar addresses the well-known classification task of data mining, where the objective is to predict the class to which an example belongs.
Discovered knowledge is expressed in the form of high-level, easy-to-interpret classification rules. In order to discover classification rules, we propose a hybrid decision tree/genetic algorithm method. The central idea of this hybrid method involves the concept of small disjuncts in data mining, as follows. In essence, a set of classification rules can be regarded as a logical disjunction of rules, so that each rule can be regarded as a disjunct. A small disjunct is a rule covering a small number of examples. Due to their nature, small disjuncts are error prone. However, although each small disjunct covers just a few examples, the set of all small disjuncts can cover a large number of examples, so it is important to develop new approaches to cope with the problem of small disjuncts. In our hybrid approach, we have developed two genetic algorithms (GAs) specifically designed for discovering rules covering examples belonging to small disjuncts, whereas a conventional decision tree algorithm is used to produce rules covering examples belonging to large disjuncts. We present results evaluating the performance of the hybrid method on 22 real-world data sets. -------------------------------- Date and time: Monday 22nd March 2004 at 16:00 Location: UG40, School of Computer Science Title: Can AI be a Surrogate Guardian and Teacher for Someone with Physical and Cognitive/Language Difficulties? Speaker: Clive Thursfield (http://www.wmrc.nhs.uk/act/contact_staff.htm#) Institution: R&D, Access to Communication and Technology, NHS (http://www.wmrc.nhs.uk/act/contact_staff.htm) Abstract: At some time in most people's lives we ask questions such as "what will my place be in the world?", "what can I achieve?", "what will make me happy?" or even "what is the purpose of life?". Much of this revolves around concepts such as "do I feel valuable and/or valued?".
These questions are hard enough for anyone to answer, but there are special challenges for people who have physical and/or cognitive/language difficulties as a result of brain injury or other neurological problems, particularly when there is a communication difficulty. The use of technology has made a difference to this situation, but the level of this technology is generally quite elementary compared with what technology is being used for in the world at large. In a young child's development we take it for granted that it will be facilitated by close hands-on guidance (and discipline) extending over many years. Unfortunately, society is not well adapted to providing this level of "fly on the wall" steering at other times and in different situations, but this is exactly what is needed in attempting to realise the potential of someone with physical and/or cognitive/language difficulties. This seminar will outline the current technologies and techniques in use and ask questions about how artificial intelligence might be relevant in training and modelling, leading to improved outcomes from the use of electronic assistive technology and, ultimately, an elevated quality of life. -------------------------------- Date and time: Monday 19th April 2004 at 16:00 Location: UG40, School of Computer Science Title: When Aggression Pays off: Insights from Simulations with Artificial Agents in an Evolutionary Survival Task Speaker: Matthias Scheutz (http://www.nd.edu/~mscheutz/) Institution: University of Notre Dame (http://www.nd.edu) Abstract: Aggression is widespread in nature and seems to serve, among other things, an important role in the interspecies competition for resources. In this talk, we argue that displaying aggression as a means to signal action tendencies (in particular, the probability of continuing an encounter) is beneficial for social groups, and show that discriminating between "own" and "other" is more beneficial than treating "other" the same as "own".
In particular, we demonstrate that aggression plays a crucial role in strategies applied to "other". To test the theoretical prediction, we define seven basic agent types which give rise to 42 different discriminating agents, i.e., agents with different strategies for "own" and "other". In extensive simulation studies we show that discriminating agents, which assume an aggressive attitude towards others, while playing a strategy that distributes resources fairly among "own", are ultimately the most successful ones. We discuss the implications of these results for natural and artificial agents and conclude with a brief outlook on further studies. -------------------------------- Date and time: Monday 26th April 2004 at 16:00 Location: UG40, School of Computer Science Title: The many faces of ROC analysis in machine learning and data mining Speaker: Peter Flach (http://www.cs.bris.ac.uk/~flach) Institution: University of Bristol (http://www.cs.bris.ac.uk) Host: Peter Tino Abstract: Receiver Operating Characteristics (ROC) analysis has been introduced relatively recently in machine learning. The key idea is to distinguish performance on the positive and negative class, which allows us to select an optimal classifier even if the class or misclassification cost distribution varies from training to application context. However, ROC analysis has a much wider applicability than model selection. In this talk I will present some recent work on applying ROC analysis in decision tree and naive Bayes model building. In addition, I will outline a general framework for understanding machine learning metrics through the use of ROC isometric plots. 
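The core ROC idea described above can be made concrete with a minimal computation of ROC points and the area under the curve (AUC); this is a generic sketch of the standard construction, not code from the talk:

```python
def roc_points(scores, labels):
    """Compute ROC curve points (FPR, TPR) by sweeping the decision
    threshold over classifier scores.

    scores: real-valued outputs, higher = more positive.
    labels: 1 for the positive class, 0 for the negative class.
    """
    pos = sum(labels)
    neg = len(labels) - pos
    # Sort examples by decreasing score; each prefix corresponds
    # to one choice of threshold.
    ranked = sorted(zip(scores, labels), reverse=True)
    points, tp, fp = [(0.0, 0.0)], 0, 0
    for _, label in ranked:
        if label == 1:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Area under the ROC curve via the trapezoid rule."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

pts = roc_points([0.9, 0.8, 0.4, 0.3], [1, 1, 0, 0])
print(auc(pts))  # perfect ranking gives AUC = 1.0
```

Because the curve is built from the ranking alone, it is unchanged if class priors or misclassification costs vary between training and deployment, which is exactly why ROC analysis supports classifier selection under varying class and cost distributions.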
SLIDES AVAILABLE: http://macflach.cs.bris.ac.uk/~flach/downloads/Birmingham.pdf -------------------------------- Date and time: Monday 10th May 2004 at 16:00 Location: UG40, School of Computer Science Title: Principles of Computer Chess Speaker: Colin Frayn (http://www.cs.bham.ac.uk/~cmf/index.php) Institution: University of Birmingham (CERCIA) (http://www.cercia.ac.uk/) Abstract: Computer chess is poised to enter the final phase of its gradual march towards superiority. The best programs can now hold their own against any human being, and in most cases thrash them mercilessly. They are already substantially stronger in rapid games. Soon, computers will forge ahead and even Grandmasters won't stand a chance against them. Or will they? Are computers fundamentally limited by their lack of human intuition? Will electronic brains ever possess that certain extra element of human comprehension which has so far eluded even the most persistent research? In order to answer this question, we must investigate how computers play chess. What are the state-of-the-art algorithms that allow accurate searching of rapidly-branching game trees? I shall cover the anatomy of a chess engine from the fundamentals of board representation right up to the tree search and evaluation algorithms. I will also review the current research being carried out worldwide by groups such as the ChessBrain project and others. -------------------------------- Date and time: Monday 17th May 2004 at 16:00 Location: UG40, School of Computer Science Title: Detecting Languageness: Is anybody out there? Speaker: John Elliott (http://www.lmu.ac.uk/ies/comp/staff/jelliott/jre.htm) Institution: School of Computing, Leeds Metropolitan University (http://www.lmu.ac.uk/ies/comp/) Host: Peter Tino Abstract: What is Languageness? Is there something unique to natural language that distinguishes itself in the 'signal universe'? 
Can unsupervised computational analysis of a signal's surface structure detect the lingua ex machina signatures of the cognitive, orthotactic and ontological constraints within which a natural language operates? The goal is therefore to separate language from non-language without dialogue, and to learn something about the structure of language in passing. The language may not be human (animals, aliens, computers...), the perceptual space can be unknown, and we cannot assume human language structure, but we must begin somewhere. We need to approach the language signal from a naive viewpoint, in effect increasing our ignorance and assuming as little as possible. Using an approach which draws on the areas of computational linguistics, corpus linguistics, information theory, computer visualisation, psychology, neuroscience and statistics, I am endeavouring to isolate computational linguistic universals by analysing a representative sample set of the human chorus. The methods developed are designed to work without any specific in-built prior knowledge of an individual system, for the filtration of inter-galactic objets trouvés and the identification of language structure at its varying levels of abstraction: from the base physical level to the parts of speech, where behavioural syntax meets semantics. -------------------------------- Date and time: Monday 24th May 2004 at 16:00 Location: UG40, School of Computer Science Title: On the need for constituency structure in the implementation of natural language understanding systems Speaker: Hermann Moisl (http://www.staff.ncl.ac.uk/hermann.moisl/) Institution: University of Newcastle (http://www.ncl.ac.uk/) Host: Peter Tino Abstract: A fundamental principle of generative linguistic theory is that natural language strings have a recursive constituency structure beyond the strictly temporal or spatial sequentiality of utterances or text.
Because generative linguistic theory has historically been highly influential in the field of natural language processing in general, most natural language understanding systems have been built on this fundamental principle. The present discussion argues that constituency structure is theoretically unnecessary for natural language understanding, and addresses the practical design and implementation problems that arise when it is dispensed with, using a purely sequential self-organizing architecture based on the integration of attractor sequences generated by sensory input systems. -------------------------------- Date and time: Monday 7th June 2004 at 16:00 Location: UG40, School of Computer Science Title: From wireless to sensor networks and beyond Speaker: P. R. Kumar (http://black.csl.uiuc.edu/~prkumar) Institution: University of Illinois, Urbana-Champaign (http://www.uiuc.edu/index.html) Host: Marta Kwiatkowska Abstract: We begin by addressing the question: How much information can wireless networks transport, and what is an appropriate architecture for information transfer? We provide an information theory which is designed to shed light on these issues. Next we consider three protocols for ad hoc networks: the COMPOW protocol for power control, the SEEDEX protocol for media access control, and the STARA protocol for routing and load balancing. Then we turn to sensor networks and address the issue of how to organize their harvesting. Finally, we turn to what could be the next phase of the information technology revolution: the convergence of control with communication and computing. We highlight the importance of architecture, and describe our efforts in developing an application testbed and an appropriate middleware.
-------------------------------- Date and time: Monday 21st June 2004 at 16:00 Location: UG40, School of Computer Science Title: Dynamic Language representation in neural networks Speaker: André Grüning (http://personal-homepages.mis.mpg.de/gruening/) Institution: Department of Psychology, University of Warwick (http://www2.warwick.ac.uk/fac/sci/psych/) Host: Peter Tino Abstract: Classical linguistic approaches towards language are ultimately based on the symbolic computation metaphor of cognition: brains work like symbolic (digital) computers. However, biological brains bear little resemblance to digital computers. Artificial neural networks (ANNs) are biologically more plausible models of cognitive computing. In mathematical terms, they constitute a dynamical system. We look at how ANNs represent some prototypical formal languages, namely a stack and a queue language, and suggest an explanation, in terms of dynamical systems, of why the queue language is more difficult to learn. -------------------------------- Date and time: Monday 6th September 2004 at 14:00 Location: UG40, School of Computer Science Title: Interleaved Visual Object Categorization and Segmentation in Real-World Scenes Speaker: Bernt Schiele (http://www.mis.informatik.tu-darmstadt.de/schiele/) Institution: TU Darmstadt (http://www.tu-darmstadt.de/index.en.html) Host: Aaron Sloman Abstract: We present a method for object categorization in real-world scenes. Following a common consensus in the field, we do not assume that a figure-ground segmentation is available prior to recognition. However, in contrast to most standard approaches for object class recognition, our approach effectively segments the object as a result of the categorization. This combination of recognition and segmentation into one process is made possible by our use of an Implicit Shape Model, which integrates both into a common probabilistic framework.
In addition to the recognition and segmentation result, it also generates a per-pixel confidence measure specifying the area that supports a hypothesis and how much it can be trusted. We use this confidence to derive a natural extension of the approach to handle multiple objects in a scene and resolve ambiguities between overlapping hypotheses with an MDL-based criterion. In addition, we present an extensive evaluation of our method on a standard dataset for car detection and compare its performance to existing methods from the literature. Our results show a significant improvement over previously published methods. Finally, we present results for articulated objects, which show that the proposed method can categorize and segment unfamiliar objects in different articulations and with widely varying texture patterns. Moreover, it can cope with significant partial occlusion and scale changes. -------------------------------- Date and time: Monday 6th September 2004 at 16:00 Location: UG40, School of Computer Science Title: Grounding Language in the World: A Framework for Signs and Actions Speaker: Deb Roy (http://web.media.mit.edu/~dkroy/) Institution: MIT Media Lab (http://www.media.mit.edu/) Host: Aaron Sloman Abstract: The meaning of words in everyday language depends on two very different kinds of relations. On one hand, words refer to (are about) the world. This relation rests on causal interactions between information and the physical world. On the other hand, agents use words to pursue goals by producing speech acts. A complete model of language must bridge these two kinds of meaning. These observations have motivated the implementation of a series of situated language processing systems in my lab. I will report on my ongoing attempt to develop a computational framework for language grounding that distills lessons learned from these implementations. 
Drawing from ideas in semiotics and constructivism, knowledge is represented in terms of signs which are causally connected to their referents, and actions which the agent can perform to verify, acquire, and use knowledge. This framework may be useful for guiding the development of larger scale situated language processing systems, and may shed light on related cognitive processes. -------------------------------- Date and time: Monday 13th September 2004 at 16:00 Location: UG40, School of Computer Science Title: Small Brains, Smart Minds: Vision, Navigation and 'Cognition' in honeybees Speaker: Mandyam V. Srinivasan (http://www.rsbs.anu.edu.au/profiles/m_srinivasan/) Institution: Centre for Visual Sciences, Research School of Biological Sciences, Australian National University, Australia (http://www.rsbs.anu.edu.au/ResearchGroups/VIS/index.asp) Host: Xin Yao Abstract: Anyone who has watched a fly make a flawless landing on the rim of a teacup, or marvelled at a honeybee speeding home after collecting nectar from a flower patch several kilometres away, would know that insects possess visual systems that are fast, reliable and accurate. Insects cope remarkably well with their world, despite possessing a brain that carries fewer than 0.01% as many neurons as ours does. This talk will explore the secrets of their success, by describing research aimed at understanding the mechanisms underlying visual perception, navigation, learning, memory and `cognition' in honeybees. The application of insect-based principles to the design of novel, autonomous land-based and aerial vehicles will also be described. 
-------------------------------- Date and time: Friday 17th September 2004 at 10:00 Location: UG40, School of Computer Science Title: A private view onto EC, its past, present, and future Speaker: Hans-Paul Schwefel (http://ls11-www.cs.uni-dortmund.de/people/schwefel/WelcomeE.html) Institution: University of Dortmund, Germany (http://www.uni-dortmund.de/UniDo/) Host: Xin Yao Abstract: Evolutionary algorithms are not just helpful and widely accepted tools for optimum seeking. Some of the founders of the discipline now called computational intelligence or natural computing aimed to model aspects of organic information processing in order to gain insight into real-life processes. From both points of view, I shall revisit my own way of designing EAs and of thinking about their usefulness both for amelioration and for understanding evolutionary traits like natural selection, introns within the genome, multi-cellularity, and sexual heredity. -------------------------------- Date and time: Friday 17th September 2004 at 11:30 Location: UG40, School of Computer Science Title: Loss Concealments for Low Bit-Rate Packet Voice in Voice over IP Speaker: Benjamin W. Wah (http://manip.crhc.uiuc.edu/index.html) Institution: University of Illinois at Urbana-Champaign, USA (http://manip.crhc.uiuc.edu/index.html) Host: Xin Yao Abstract: In recent years, voice over IP has become an attractive alternative to conventional public telephony. However, the Internet is a best-effort network, with no guarantee on its quality of service. A fundamental issue in real-time interactive voice transmission over an unreliable Internet protocol network is packet loss. This problem is especially serious in transmitting low-bit-rate coded speech, in which pervasive dependencies are introduced into the bit stream, leading to error propagation to subsequent frames when loss happens. In this talk, we present an end-to-end loss-concealment scheme that requires no special support from the underlying network.
In particular, we focus on developing a non-redundant, sender-receiver collaborative multiple-description coding (MDC) scheme. We propose a new coder-dependent, parameter-based MDC that generates multiple descriptions systematically based on correlations of coding parameters. The design requires no extra transmission bandwidth and adapts its number of descriptions to network loss conditions. Extensive tests on FS CELP, ITU G.723.1, and FS MELP for different loss scenarios demonstrate the high quality and reliability of our proposed scheme. -------------------------------- Date and time: Monday 4th October 2004 at 16:00 Location: UG40, School of Computer Science Title: Genetic Programming Applied to Morphological Image Processing Speaker: Marcos Quintana (http://www.cs.bham.ac.uk/research/vision/people/M.Quintana-Hernandez.php) Institution: School of Computer Science, The University of Birmingham (http://www.cs.bham.ac.uk/) Host: Peter Tino Abstract: Genetic Programming is a popular technique of Evolutionary Computation with a broad range of applications. Genetic Programming has shown some success in image analysis, but it has mainly been used for linear image processing and the search for filters, leaving the important non-linear side of image processing practically unexplored. This talk will present some explorations of Genetic Programming applied to non-linear image processing, particularly to the little-explored technique known as Mathematical Morphology. The results that I will show in the talk provide empirical evidence demonstrating that the evolution of high-quality MM algorithms using GP is possible and that this technique has a broad potential that should be explored further.
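Mathematical Morphology builds non-linear image operators from primitives such as erosion and dilation applied with a structuring element; a GP system of the kind described would evolve compositions of such primitives. As an illustration only (not the speaker's code), here is a minimal sketch of the two primitives on binary images, composed into a morphological opening:

```python
def erode(img, se):
    """Binary erosion: a pixel stays 1 only if the structuring
    element `se` (a list of (dy, dx) offsets) fits entirely inside
    the foreground at that position."""
    h, w = len(img), len(img[0])
    return [[int(all(0 <= y + dy < h and 0 <= x + dx < w
                     and img[y + dy][x + dx]
                     for dy, dx in se))
             for x in range(w)] for y in range(h)]

def dilate(img, se):
    """Binary dilation: a pixel becomes 1 if any offset of the
    structuring element hits a foreground pixel."""
    h, w = len(img), len(img[0])
    return [[int(any(0 <= y + dy < h and 0 <= x + dx < w
                     and img[y + dy][x + dx]
                     for dy, dx in se))
             for x in range(w)] for y in range(h)]

# A 3x3 cross-shaped structuring element.
CROSS = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]

img = [[0, 0, 0, 0, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 1, 1, 1, 0],
       [0, 0, 0, 0, 0]]
eroded = erode(img, CROSS)      # only the centre pixel survives
opened = dilate(eroded, CROSS)  # morphological opening: a small cross
```

A GP individual in such a system could be a tree whose internal nodes are operators like `erode` and `dilate` and whose leaves are structuring elements, with fitness measured against a target filtered image.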
-------------------------------- Date and time: Monday 11th October 2004 at 16:00 Location: UG40, School of Computer Science Title: A simple evolution strategy for bounded real-valued optimisation and a method for adapting surrogate models Speaker: Jon Rowe (http://www.cs.bham.ac.uk/~jer/) Institution: School of Computer Science, University of Birmingham (http://www.cs.bham.ac.uk/) Host: Peter Tino Abstract: The first half of the talk will present an evolution strategy for real-valued optimisation problems. The algorithm was developed in the context of a theoretical investigation into properties of local search using Gray codes, but seems to be a successful algorithm in its own right. The second half of the talk combines this algorithm with a method for maintaining and improving "surrogate" approximation models, by making successive linear approximations to the error. Surrogate models are typically used when the objective function is computationally expensive. The algorithms have been applied to a model-inversion problem from medical image understanding. -------------------------------- Date and time: Monday 18th October 2004 at 16:30 Location: UG04, Learning Centre <-- NOTE THE UNUSUAL TIME AND PLACE !!! Title: Bayesian methods in machine learning Speaker: Zoubin Ghahramani (http://www.gatsby.ucl.ac.uk/~zoubin/) Institution: Gatsby Computational Neuroscience Unit, University College London (http://www.gatsby.ucl.ac.uk/) Host: Peter Tino Abstract: Bayesian methods can be applied to a wide range of probabilistic models commonly used in machine learning and pattern recognition. The challenge is to discover approximate inference methods that can deal with complex models and large-scale data sets in reasonable time. In the past few years, Variational Bayesian (VB) approximations have emerged as an alternative to MCMC methods.
I will review VB methods and demonstrate applications to clustering, dimensionality reduction, time series modelling with hidden Markov and state-space models, independent components analysis (ICA) and learning the structure of probabilistic graphical models. Time permitting, I will discuss current and future directions in the machine learning community, including non-parametric Bayesian methods (e.g. Gaussian processes, Dirichlet processes, and extensions). -------------------------------- Date and time: Monday 25th October 2004 at 16:00 Location: UG40, School of Computer Science Title: An Experimental Intrusion Detection Prototype based on a Cognitive Architecture Speaker: Catriona Kennedy (http://www.cs.bham.ac.uk/~cmk/) Institution: School of Computer Science, The University of Birmingham (http://www.cs.bham.ac.uk/) Abstract: An important area of Artificial Intelligence is concerned with the integration of sensing, reasoning, self-monitoring and action into a single cognitive architecture capable of surviving in a hostile environment. So far, this integrated approach has mostly been applied to robotics and agents in visuo-spatial environments. We argue that it can also be applied to intrusion detection systems (IDS) in a network environment. Our experience of building a prototype in a physical network has confirmed that the capabilities of a multi-layered cognitive system are required for practical intrusion detection and response. Furthermore, the "complete cognition" approach leads to a high-level view of both the IDS and the protected network, including its software and hardware components. This makes it easier (for humans, and potentially the IDS itself) to identify and correct security weaknesses than is the case with typical IDS packages.
-------------------------------- Date and time: Monday 1st November 2004 at 16:00 Location: UG40, School of Computer Science Title: Explosive Detection Systems in Aviation Security: Automated Image Enhancement and Segmentation Optimisation Speaker: Sameer Singh (http://www.dcs.ex.ac.uk/people/ssingh/index.htm) Institution: School of Engineering and Computer Science, University of Exeter (http://www.secsm.ex.ac.uk/) Host: Peter Tino Abstract: This research seminar details the results of our work on the problem of improving the quality of luggage images at airports. Commercial aviation security software developed by Rapiscan and Heimann Systems for improving the quality of dual-energy x-ray images is not well suited to enhancing images of all varieties. Our research investigates optimising image enhancement tools on a per-image basis using a machine learning system. A neural-network-based mapping is performed that maps the properties of a given image to the choice of one or more ideally suited algorithms for that image. The research was conducted with the Home Office Scientific Development Branch and airports including Heathrow, Gatwick and Exeter. The results have shown that this scheme significantly outperforms using a single best scheme of image enhancement on a batch of images. A similar scheme is developed to optimise image segmentation (which serves as a preliminary step of image analysis before any shape analysis can be performed to give an estimate of threat in the luggage). The research talk will introduce the audience to novel concepts of adaptive parameter setting in image analysis, new measures of image viewability and better estimates of colour purity in image histograms.
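The per-image selection idea above (map measured image properties to the best-suited enhancement algorithm) can be sketched generically. Everything below is an illustrative assumption: the feature vectors, the placeholder algorithm names, and the nearest-neighbour rule standing in for the neural-network mapping are not the system described in the talk:

```python
import math

# Hypothetical training pairs: (feature vector, best algorithm label).
# Features might be, e.g., (mean intensity, contrast); the labels are
# placeholder names for enhancement routines.
TRAINING = [
    ((0.2, 0.1), "histogram_equalisation"),
    ((0.8, 0.1), "gamma_correction"),
    ((0.5, 0.6), "unsharp_masking"),
]

def select_enhancement(features):
    """Pick the enhancement whose training exemplar is nearest in
    feature space -- a simple stand-in for the learned mapping from
    image properties to enhancement algorithms."""
    return min(TRAINING, key=lambda t: math.dist(t[0], features))[1]

print(select_enhancement((0.25, 0.15)))  # -> histogram_equalisation
```

The point of the design is that the selector is trained once, then applied per image, so a batch of heterogeneous luggage images each receive the enhancement best matched to their measured properties rather than a single global choice.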
-------------------------------- Date and time: Monday 8th November 2004 at 16:00 Location: UG40, School of Computer Science Title: Pfeiffer - an Interactive Global Genetic Developmental L-System demonstration Speaker: William Langdon (http://www.cs.ucl.ac.uk/staff/W.Langdon/) Institution: University College London (http://www.cs.ucl.ac.uk/) Host: Xin Yao Abstract: Pfeiffer, http://www.cs.ucl.ac.uk/staff/W.Langdon/pfeiffer.html is an example of interactive evolution. The evolved seeds contain L-systems and parameters which develop via Lindenmayer grammars and LOGO style turtle graphics into new fractal patterns, in many cases similar to Koch's snowflake. The images are animated (computerised snowstorms) by web browsers anywhere on the planet. The system builds on earlier internet based distributed parallel genetic programming (GP) work [ftp://ftp.cs.bham.ac.uk/pub/tech-reports/1999/CSRP-99-07.ps.gz] at Birmingham. The system is written in Javascript and Perl. There are now web based and stand alone versions. It has supported open-ended collaborative evolution between approximately 600 users across the world. Future work might include artificial life (Alife), evolving autonomous agents and virtual creatures in higher dimensions from a free-format representation in the context of neutral networks, gene duplication and the evolution of higher order genetic operators. -------------------------------- Date and time: Monday 15th November 2004 at 16:00 Location: UG40, School of Computer Science Title: Probabilistic methods in BCI research Speaker: Peter Sykacek (http://www.sykacek.net) Institution: Department of Pathology, Cambridge University (http://www.path.cam.ac.uk) Host: Peter Tino Abstract: The idea of a Brain Computer Interface (BCI) is to enable people with severe neurological disabilities to operate computers by thought rather than by physical means. 
BCI research has been carried out for more than a decade on two types of BCI: non-adaptive versions rely on the user adapting to the interface, while machine-learning approaches, based on signal processing and mostly static classification, adapt to the user. A good average-case performance of such systems provides bit rates in the range of 5-25 bits per minute. A problem of current BCIs is their tendency to degrade in performance over time. Increasing the bit rate and the BCI's reliability requires considering the determining neuro-cognitive and technical factors. Learning effects and fatigue change the cortical dynamics during and between BCI sessions. The electrolyte used to establish conductivity between electrodes and the scalp changes impedance. This leads to temporal variations in signal amplitude and dynamics which clearly invalidate static translation algorithms. To accommodate this, we propose an adaptive approach, handled in a probabilistic setting. -------------------------------- Date and time: Monday 22nd November 2004 at 16:00 Location: Lecture Room 7, Arts Title: Searching for Extra-Terrestrial Intelligences: Refining the Enterprise Speaker: William H Edmondson (http://www.cs.bham.ac.uk/~whe/) Institution: School of Computer Science, University of Birmingham (http://www.cs.bham.ac.uk/index.html) Host: Peter Tino Abstract: The Search for Extra-Terrestrial Intelligences (SETI) being conducted by a few teams around the world, including one here at Birmingham, is concerned with the simple question: "Are we alone?". Some related questions include: "How would we Communicate with ETIs?" (CETI) and "How does work on AI and TI help us think about ETI, and vice versa?". Two further questions of note are i) Fermi's paradox: "Where IS everybody?"; ii) "How do you actually do the searching?". The approach we are taking at Birmingham is to specify and conduct a targeted search for targeted signals, using pulsars as beacons. 
This is a new way of answering the last question above, and the implications/requirements of targeting (cognitive and semiotic issues, as well as technical ones) will be discussed. Preliminary results will be reported from a search using the telescope at Arecibo. -------------------------------- Date and time: Monday 29th November 2004 at 16:00 Location: UG40, School of Computer Science Title: Time Series Data Mining Speaker: Tony Bagnall (http://www.cmp.uea.ac.uk/people/ajb/) Institution: School of Computing Sciences, University of East Anglia (http://www.cmp.uea.ac.uk/) Host: Peter Tino Abstract: Slides: http://www.cs.bham.ac.uk/~jxl/cercialink/web/slides/Birmingham_Presentation.ppt. Time series data mining (TSDM), the detection of patterns in longitudinal databases, has found application in a huge range of problem domains, such as web mining, meteorology, medical analysis (ECG/EEG/MRI), gene expression analysis, and economics. This talk will introduce the main currently popular research topics in the field by concentrating on how the DM algorithms employed are driven by the need to measure similarity between series and to represent data compactly. Broadly speaking, there are three types of similarity between series: similarity in time; similarity in shape; and similarity in change. After providing a background context, I will then describe two pieces of research to address some of the problems identified. The first theoretically and experimentally evaluates how clipping, the conversion of a real-valued series into a binary series, affects the accuracy of clustering in order to group together series with similar autoregressive structures (similarity in change). We find that for certain classes of problem clipping the data (and thus compressing it considerably) does not decrease the clustering accuracy and can actually improve it in the presence of outliers. 
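The clipping transform mentioned above is simple to state; as a minimal sketch (thresholding at the series median, with illustrative data and names, not the authors' code):

```python
def clip_series(series):
    """Clip a real-valued series to binary: 1 above the median, else 0."""
    s = sorted(series)
    n = len(s)
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return [1 if x > median else 0 for x in series]

# A short example series; the binary result retains the pattern of
# excursions above/below the typical level while discarding amplitudes.
series = [0.2, 1.5, -0.3, 2.1, 0.0, -1.2, 3.3, 0.4]
print(clip_series(series))  # [0, 1, 0, 1, 0, 0, 1, 1]
```

Each binary value needs only one bit, which is the source of the considerable compression noted in the abstract.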
The second research project introduces an alternative similarity measure based on the likelihood ratio statistic for use with the Fourier transform of time series for complex TSDM tasks (similarity in shape and change). This measure is advantageous because it has known statistical properties, and can be shown to better discriminate between series for problems with complex spectra. Finally, I will introduce an application area that has been one of the driving forces behind my interest in TSDM. An insurance company is currently experimenting with a pay-as-you-drive insurance scheme, where motoring telemetry data is collected constantly from a large sample of cars. This massive database can be analysed to find relationships based on the three types of similarity previously identified. -------------------------------- Date and time: Monday 6th December 2004 at 16:00 Location: UG40, School of Computer Science Title: Probabilistic Models for Automated ECG Interval Analysis Speaker: Nick Hughes (http://www.robots.ox.ac.uk/~nph) Institution: Robotics Research Group, University of Oxford (http://www.robots.ox.ac.uk/) Host: Peter Tino Abstract: The electrocardiogram (ECG) is an important non-invasive tool for assessing the condition of the heart. By examining the ECG signal in detail it is possible to derive a number of informative measurements from the characteristic ECG waveform. Perhaps the most important of these measurements is the "QT interval", which plays a crucial role in clinical drug trials. In particular, drug-induced prolongation of the QT interval (known as long QT syndrome) is now the most common cause of development delays, non-approvals and market withdrawals for new drugs (e.g. terfenadine/triludan). In this talk I will describe my work on developing probabilistic models for automatically segmenting ECG waveforms into their constituent waveform features. 
I will show how wavelet methods, and in particular the undecimated wavelet transform, can be used to provide a representation of the ECG which is more appropriate for subsequent modelling. I will then examine the use of hidden Markov models for segmenting the resulting wavelet coefficients, and show that the state durations implicit in a standard HMM are ill-suited to those of real ECG features. This motivates the use of advanced duration modelling techniques to enable more robust segmentations. Finally I will show how we can leverage the probabilistic generative nature of the HMM to provide a confidence measure based upon the log likelihood of the waveform under consideration. -------------------------------- Date and time: Monday 13th December 2004 at 16:00 Location: UG40, School of Computer Science Title: The Aspect Bernoulli Model: Multiple Causes of Presences and Absences - CANCELLED Speaker: Ella Bingham (http://www.cis.hut.fi/ella/index_en.shtml) Institution: Helsinki University of Technology (http://www.hut.fi/English/) Host: Ata Kaban Abstract: The talk is based on the work I have conducted during the past year with Ata Kaban, and especially during my visit to Birmingham. We present a probabilistic multiple cause model for the analysis of binary data. A distinctive feature of the model is its ability to automatically detect and distinguish between two types of zeros in the data: "true" and "missing". We demonstrate the model on paleoecological data where the observations consist of remains of mammal genera found at different sites of excavation. In this data, the two types of zeros arise naturally: a zero, indicating that remains of a genus were not observed at a particular site, arises either because the genus did not live in the site, or because it did but no remains could be found. The former is a "true" zero and the latter a "missing" zero. Also, results on other data sets are briefly shown. 
-------------------------------- Date and time: Monday 17th January 2005 at 16:00 Location: UG40, School of Computer Science Title: Optimal and Not Very Optimal Reinforcement Learning Speaker: Jeremy Wyatt (http://www.cs.bham.ac.uk/~jlw/) Institution: School of Computer Science, The University of Birmingham (http://www.cs.bham.ac.uk/) Host: Peter Tino Abstract: Optimal learning and exploration in RL is a long-standing interest of mine. In this talk I will give an overview of some of the ways optimal learning problems manifest themselves. I'll introduce the idea of Bayesian optimal reinforcement learning, and the idea of optimistic model selection. I'll show its application in some different representations. If I have enough time I'll also introduce exploration control in the context of hierarchy, and the problems that partial observability causes for learning control. -------------------------------- Date and time: Monday 24th January 2005 at 16:00 Location: UG40, School of Computer Science Title: Knowledge Grid and Semantic Grid in China Speaker: Hai Zhuge (http://www.knowledgegrid.net/~h.zhuge) Institution: China Knowledge Grid Research Group and Chinese Academy of Sciences (http://kg.ict.ac.cn) Host: Jun He Abstract: Having an intelligent environment in which we can live and work in cooperation is more important for our society than having personal intelligent machines or supercomputing power. The Intelligent Grid Environment is a scalable, live, sustainable and intelligent network environment where humans, software, machines and nature can harmoniously co-exist, work and evolve. It automatically collects useful information from nature and society according to requirements, transforms it into resources in the environment, and then after intelligent processing, affects nature and society through machines. 
According to the regulations and principles of the environment, people and resources can intelligently cooperate with each other to accomplish tasks, generate knowledge and solve problems by actively participating in versatile flow cycles in the environment through roles and machines. The Intelligent Grid Environment is the unity of the natural material world, virtual world and spiritual world. Various types of attraction in the environment drive the flows. The rules of flows guide the development and management of the environment. -------------------------------- Date and time: Monday 24th January 2005 at 16:00 Location: UG40, School of Computer Science Title: Exploring flow in Intelligent Grid Environment Speaker: Hai Zhuge (http://www.knowledgegrid.net/~h.zhuge/) Institution: China Knowledge Grid Research and Chinese Academy of Sciences (http://kg.ict.ac.cn/) Host: Xin Yao Abstract: Having an intelligent environment in which we can live and work in cooperation is more important for our society than having personal intelligent machines or supercomputing power. The Intelligent Grid Environment is a scalable, live, sustainable and intelligent network environment where humans, software, machines and nature can harmoniously co-exist, work and evolve. It automatically collects useful information from nature and society according to requirements, transforms it into resources in the environment, and then after intelligent processing, affects nature and society through machines. According to the regulations and principles of the environment, people and resources can intelligently cooperate with each other to accomplish tasks, generate knowledge and solve problems by actively participating in versatile flow cycles in the environment through roles and machines. The Intelligent Grid Environment is the unity of the natural material world, virtual world and spiritual world. Various types of attraction in the environment drive the flows. 
The rules of flows guide the development and management of the environment. -------------------------------- Date and time: Monday 7th February 2005 at 16:00 Location: UG40, School of Computer Science Title: On update of beliefs in game theoretic learning Speaker: Iead Rezek (http://www.robots.ox.ac.uk/~irezek/) Institution: Pattern Analysis & Machine Learning Research Group, University of Oxford (http://www.robots.ox.ac.uk/~parg/) Host: Peter Tino Abstract: Probabilistic inference methods are particularly well suited for learning models under uncertainty. Game theory models a dynamic environment in which players devise strategies of actions aimed at winning the game, or maximising the reward. Game theory and probabilistic inference seem to offer advantages for each other, as has already been noted. To achieve maximal integration of each field's methods, however, a deeper understanding of the theoretical connections is needed. -------------------------------- Date and time: Monday 14th February 2005 at 16:00 Location: UG40, School of Computer Science Title: Running scared, learning to care: two affective agent architectures Speaker: Ruth Aylett (http://www.macs.hw.ac.uk/~ruth/) Institution: School of Mathematical and Computer Sciences, Heriot-Watt University (http://www.macs.hw.ac.uk/cs/index.htm) Host: Mark Lee Abstract: Affective processing is a growing research area, as for example shown in the new EU network Humaine. However the role that affect should play in agent architectures is still an open question, where the low-level neurophysiologically-oriented community and the high-level appraisal theory community are tackling questions in rather different ways. We consider two agent architectures: one for panicking sheep and the other for empathic characters in education, and consider if the twain can meet. 
-------------------------------- Date and time: Monday 21st February 2005 at 16:00 Location: UG40, School of Computer Science Title: Managing diversity in regression ensembles Speaker: Gavin Brown (http://www.cs.man.ac.uk/~gbrown/) Institution: Dept of Computer Science, University of Manchester (http://www.cs.man.ac.uk/) Host: Peter Tino Abstract: I'll talk about the issues involved when training a group (ensemble) of learning machines on regression problems. I'll first cover the general problem, and introduce the existing theoretical motivations for ensembles. Then I'll introduce and present analysis of an ensemble learning technique that can explicitly control the 'complementarity', or 'diversity', of the ensemble members. This enables them to learn *cooperatively* to solve tasks more efficiently than single machines, both in terms of absolute errors and computational resources. -------------------------------- Date and time: Monday 28th February 2005 at 16:00 Location: UG40, School of Computer Science Title: Statistical Language Learning: Analysis of an 'Ideal' Language Learner - !!! CANCELLED !!! Speaker: Nick Chater (http://www2.warwick.ac.uk/fac/sci/psych/people/academic/nchater/) Institution: Department of Psychology, University of Warwick (http://www2.warwick.ac.uk/fac/sci/psych/) Host: Peter Tino Abstract: Theories of language acquisition have often had difficulty understanding how children can learn language from only positive input. Children observe what sentences can occur, but seem fairly insensitive to information about what sentences cannot occur (if, indeed, much such information is available). In particular, it has often been viewed as paradoxical that language learners can avoid overgeneral grammars, which allow ungrammatical sentences---because these grammars 'fit' all the observed input to the child. This has been one motivation for nativist views of language acquisition. 
I discuss some recent results, from joint work with Paul Vitanyi, that show that, in principle, language learning from positive evidence is possible, in a statistical sense. This suggests that empiricist views of language acquisition, and perhaps learning in other domains, may be more feasible than is frequently assumed. -------------------------------- Date and time: Monday 7th March 2005 at 16:00 Location: UG40, School of Computer Science Title: Biorthogonalization-based techniques for non-linear signal approximation Speaker: Miroslav Andrle (http://www.ncrg.aston.ac.uk/~andrlem/) Institution: Neural Computing Research Group, Aston University (http://www.maths.aston.ac.uk/) Host: Peter Tino Abstract: The problem of highly non-linear approximation concerns the representation of a given signal through the selection of functions, called atoms, which are taken from a redundant set, called a dictionary. This problem has been the subject of significant recent theoretical work with regard to some classes of dictionaries. Moreover, a number of heuristic techniques for selecting atoms have been devised in the context of signal processing over the last fifteen years. An advantage of using dictionaries for signal representation is the possibility of combining atoms of different natures to match different structures of a signal. Some techniques, based on adaptive forward and backward biorthogonalization, will be presented. They accomplish the following goals: i) fast updating and downdating of the model in the least squares problem, ii) fast implementation of methodologies for non-linear signal approximation. The techniques will be illustrated using cardinal B-spline dictionaries. 
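The general idea of selecting atoms from a redundant dictionary can be sketched with a toy greedy (matching-pursuit-style) loop; the two-dimensional dictionary and signal below are invented for illustration and do not reflect the biorthogonalization techniques of the talk:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def greedy_select(signal, dictionary, n_atoms):
    """Greedily pick the unit-norm atoms most correlated with the residual."""
    residual = list(signal)
    chosen = []
    for _ in range(n_atoms):
        # Select the atom with the largest |<residual, atom>|.
        idx = max(range(len(dictionary)),
                  key=lambda i: abs(dot(residual, dictionary[i])))
        coef = dot(residual, dictionary[idx])
        chosen.append((idx, coef))
        # Subtract the chosen component from the residual.
        residual = [r - coef * a for r, a in zip(residual, dictionary[idx])]
    return chosen, residual

# Toy dictionary: the standard basis plus one redundant unit-norm atom.
atoms = [[1, 0], [0, 1], [0.6, 0.8]]
picks, res = greedy_select([3, 4], atoms, 1)
print(picks)  # selects atom 2 ([0.6, 0.8]), coefficient ~ 5.0, residual ~ 0
```

Redundancy pays off here: one atom of the enlarged dictionary represents the signal exactly, whereas the orthonormal basis alone would need two.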
-------------------------------- Date and time: Monday 14th March 2005 at 16:00 Location: UG40, School of Computer Science Title: Fast Unsupervised Greedy Learning of Multiple Objects and Parts from Video Speaker: Chris Williams (http://www.dai.ed.ac.uk/homes/ckiw/) Institution: School of Informatics, University of Edinburgh (http://www.inf.ed.ac.uk/) Host: Peter Tino Abstract: We consider data which are images containing views of multiple objects. Our task is to learn about each of the objects present in the images. This task can be approached as a factorial learning problem, where each image must be explained by instantiating a model for each of the objects present with the correct instantiation parameters. A major problem with learning a factorial model is that as the number of objects increases, there is a combinatorial explosion of the number of configurations that need to be considered. We develop a method to extract object models sequentially from the data by making use of a robust statistical method, thus avoiding the combinatorial explosion. We also present two methods for further speeding up the algorithm on video data, (i) using approximate tracking of the multiple objects in the scene and (ii) by clustering the motions of invariant features. We present results showing successful extraction of objects and parts from a number of image sequences. 
Work with Michalis Titsias, Moray Allan -------------------------------- Date and time: Monday 21st March 2005 at 16:00 Location: UG40, School of Computer Science Title: Evolutionary Algorithms, Representations and Operators for Combinatorial Optimization Speaker: Christine Mumford (http://www.cs.cardiff.ac.uk/user/C.L.Mumford/) Institution: School of Computer Science, Cardiff University (http://users.cs.cf.ac.uk/) Host: Peter Tino Abstract: Successful implementations of evolutionary algorithms (EAs) depend on making “good choices” for the problem representation, the genetic operators, and for the evolutionary framework itself. How do we make these choices, and how do we assess the outcome? The vast search spaces involved with combinatorial optimization problems make them a popular choice with EA researchers. Yet EAs frequently struggle to match the solution quality and run times achieved using other methods. Why is this and what can be done about it? I will be airing these issues in my talk, and sharing my experience, but probably asking more questions than I will be answering. -------------------------------- Date and time: Monday 11th April 2005 at 16:00 Location: UG40, School of Computer Science Title: !!! CANCELLED !!! Speaker: Raymond Kwan (http://www.iri.leeds.ac.uk/people/researchers/kwan.htm) Institution: Informatics Network, University of Leeds (http://www.informatics.leeds.ac.uk/) -------------------------------- Date and time: Tuesday 12th April 2005 at 14:00 Location: UG06, Learning Centre (opposite Computer Science) Title: Steps towards Cognitive Vision Speaker: Tony Cohn (http://www.comp.leeds.ac.uk/cgi-bin/sis/ext/staff_pub.cgi/agc.html?cmd=displaystaff) Institution: University of Leeds, School of Computing (http://www.comp.leeds.ac.uk/) Host: Aaron Sloman Abstract: In this talk I will present some results from a recent EU project on Cognitive Vision. 
Our main goal was to build a system which could take perceptual inputs (both visual and auditory), turn them into symbols, reason about the behaviour being observed and then demonstrate the understanding by having the computer perform actions in the world. Moreover, our aim was for the system to learn all this autonomously, simply by observing the world. I will discuss a system which achieves this in a simple table top game world, watching two players, and then taking over the part of one of the players using a talking head. The behavioural descriptions are learned through inductive logic programming (Progol). We are also able to learn mathematical principles such as equivalence and transitivity of orderings. I will also discuss how we were able to improve classification by reasoning about spatio-temporal continuity. -------------------------------- Date and time: Tuesday 12th April 2005 at 15:20 Location: UG06, Learning Centre (opposite Computer Science) Title: "What can I do with this?" Investigating how animals learn what they can do with objects and the environment Speaker: Jackie Chappell (http://www.biosciences.bham.ac.uk/staff/staff.htm?ID=90) Institution: Biosciences, University of Birmingham (http://www.biosciences.bham.ac.uk/) Host: Aaron Sloman Abstract: Many organisms have to deal with objects in their environment in some way; nest building birds manipulate sticks and vegetation to form complex structures, arboreal animals use branches and other features as supports in locomotion, and foraging animals must move objects covering their prey. How do animals learn what they can do with objects (the affordances provided by objects), and what do they learn about them? How is this information acquired - by ontogenetic or phylogenetic means? Tool using species might be under particularly strong selection pressure to learn about objects, and I will discuss some experiments exploring New Caledonian crows' knowledge of the properties and functions of tools. 
These results suggest that New Caledonian crows do not simply learn which tool to use for which task by associative learning, but are selective, choosing appropriate tools for a task. I will also discuss proposed experiments to test similar cognitive abilities in non-tool users. Are these abilities adaptations for tool use, or are they shared to some extent by all sufficiently intelligent species? -------------------------------- Date and time: Monday 18th April 2005 at 16:00 Location: UG40, School of Computer Science Title: No Optimisation without Representation: Principled Design for Local Search Speaker: Andrew Tuson (http://www.soi.city.ac.uk/~andrewt/) Institution: School of Informatics, City University, London (http://www.soi.city.ac.uk/) Abstract: Though local search optimisers are clearly effective, their design is often rather ad hoc in practice. More specifically, though it is generally agreed that incorporating domain knowledge is needed to produce effective optimisers, how to achieve this is still an open question. This talk outlines a semi-formal knowledge-based approach for acquiring domain knowledge and representing it in local search optimisers. -------------------------------- Date and time: Monday 16th May 2005 at 16:00 Location: UG40, School of Computer Science Title: Earth System Models: machine learning and the next big research agenda? Speaker: Dan Cornford (http://www.ncrg.aston.ac.uk/~cornfosd/) Institution: NCRG, Aston University (http://www.ncrg.aston.ac.uk/) Host: Peter Tino Abstract: I will look at what I believe the higher echelons of UK science (NERC in particular) think is the next BIG question. I will introduce the topic of Earth System modelling with an emphasis on the role of probabilistic methods in addressing the key questions. I will discuss issues such as model error, parameterisation, data and its use, and briefly touch on computational issues that these raise. 
I will describe how the (machine learning inspired) research we are undertaking goes some way to providing a framework for solving some of these problems. The talk is designed to provide an overview of the field and is not excessively technical! -------------------------------- Date and time: Monday 6th June 2005 at 16:00 Location: UG40, School of Computer Science Title: Learning complex symbolic processes with artificial neural networks Speaker: Whitney Tabor (http://www.sp.uconn.edu/~ps300vc/tabor.html) Institution: University of Connecticut, Dept. of Psychology (http://web.uconn.edu/psychology/) Host: Peter Tino Abstract: Natural computation methods, like artificial neural networks (ANNs), are appealing because of their powerful learning algorithms, their robustness in the presence of noise, and their similarity to human minds. However, in complex symbolic tasks, like the processing of natural language syntax, the effectiveness of known natural computation methods still lags far behind that of symbolic methods---i.e., Turing-machines and the subclasses of Turing-machines. For example, prior work has succeeded in getting a recurrent ANN to learn a^nb^n, but failed on the palindrome language, WW^r. ANN researchers have generally approached this problem by seeking better learning algorithms. Here, I suggest that a two-step approach may be more fruitful: first, discover how complex symbol processing is most naturally implemented in ANNs---i.e., identify the probable end-states of learning; then, design learning algorithms that converge on these end-states. Following this approach, I will first describe a method of using fractal sets to encode context free languages in ANNs. Then, in pursuit of Step 2, I will describe a device, suggested by the analysis, that learns several complex languages related to the palindrome language. 
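The flavour of such fractal encodings can be conveyed by a toy recogniser for a^nb^n in which the "stack" is a single real number in [0, 1], contracted on each push and expanded on each pop. This is an illustrative sketch, not Tabor's actual construction, and the names are invented:

```python
def recognise_anbn(string):
    """Accept strings of the form a^n b^n (n >= 1) using a real-valued 'stack'."""
    z = 1.0           # the fractal 'stack': deeper nesting => smaller z
    seen_b = False
    for ch in string:
        if ch == "a" and not seen_b:
            z /= 2.0              # push: contract towards 0
        elif ch == "b" and z < 1.0:
            seen_b = True
            z *= 2.0              # pop: expand back towards 1
        else:
            return False          # b before a, or more b's than a's
    return seen_b and z == 1.0    # accept only if the stack is empty again

print(recognise_anbn("aaabbb"))  # True
print(recognise_anbn("aabbb"))   # False
```

The contraction maps generate a fractal (Cantor-like) set of reachable states, which is how a finite-dimensional continuous system can mimic an unbounded stack; halving and doubling are exact in floating point, so the toy version is reliable for moderate n.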
-------------------------------- Date and time: Monday 27th June 2005 at 15:00 Location: UG40, School of Computer Science Title: Interactive Concept-based Search technique for Multi-objective Problems using MOEA Speaker: Amiram Moshaiov (http://www.eng.tau.ac.il/~moshaiov/) Institution: Faculty of Engineering, Tel-Aviv University, Israel (http://www.eng.tau.ac.il/) Host: X. Yao Abstract: This work originated from our wish to support engineering design by developing a tool for interactive search of the design space. Its findings appear generic and could potentially be implemented in other application areas. An EC parallel search technique is described for a multi-objective search of conceptual solutions by way of their associated particular solutions. The conceptual solution space is represented by an AND/OR tree of sub-concepts. Each of the conceptual solutions has a one-to-many relationship with particular solutions. A structured genetic code is used where each individual in the population is a compound individual. It holds the code for the concept structure and for the associated particular solution. The survivability of solutions is influenced by both a model-based fitness and subjective human preferences. Two approaches are described including a progressive targeting technique and a Pareto-based one. In the latter, the suggested method produces an objective-subjective Pareto front, with a simultaneous evolution of the concepts. Academic, as well as mechatronic design and path planning examples are employed to demonstrate the proposed approach. 
This work is done with Gideon Avigad -------------------------------- Date and time: Thursday 21st July 2005 at 14:00 Location: UG40, School of Computer Science Title: The Altricial-Precocial Spectrum for Robots Speaker: Jackie Chappell (http://users.ox.ac.uk/~kgroup/jackie.html) Institution: School of Biosciences, University of Birmingham (http://www.biosciences.bham.ac.uk/) Abstract: *** Co-speaker is Aaron Sloman, School of Computer Science, University of Birmingham *** This is a trial (joint) presentation (including fun videos) of a paper to be presented at IJCAI05 in Edinburgh in August. http://www.cs.bham.ac.uk/research/cogaff/altricial-precocial.pdf Several high level methodological debates among AI researchers, linguists, psychologists and philosophers, appear to be endless, e.g. about the need for and nature of representations, about the role of symbolic processes, about embodiment, about situatedness, about whether symbol-grounding is needed, and about whether a robot needs any knowledge at birth or can start simply with a powerful learning mechanism. We argue that consideration of the variety of capabilities and development patterns on the altricial-precocial spectrum in biological organisms will help us to see these debates in a new light. To a first approximation: precocial species (e.g. deer, horses, chickens, fishes, reptiles, insects, spiders) have individuals whose behavioural competences are almost completely determined by their genes (apart from small amounts of adaptation and calibration), and are available from birth or hatching, whereas altricial species (e.g. hunting mammals, nest building birds, primates, humans) seem to be born helpless and incompetent, yet, in adult life they have deeper, richer, structurally more varied competences, and seem to be capable of kinds of rapid learning that are not explained by current popular models of learning. 
More accurately, within each species there are combinations of more or less 'precocial' (genetically determined) and 'altricial' (individually learnt) competences. We analyse some of the evolutionary pressures favouring one or other extreme, the costs (e.g. in terms of parental commitment), and the environmental constraints, and offer some high level first-draft proposals regarding mechanisms that might support altricial competences, using precocial *meta-level* abilities to explore, play, test, store chunked information, re-use, and re-combine. This biologically rare ability to bootstrap new virtual machines could be a biological precursor to language learning. Robot designers, and more generally, designers of complex computing systems, have not considered the altricial-precocial spectrum, even when they have thought about adaptive systems that are capable of some kind of self-modification. We suggest that understanding the costs and benefits of different combinations of altricial and precocial competences can lead to major advances in designs of robots and other complex systems. For a two page summary presented at a 'grand challenges' workshop see http://www.cs.bham.ac.uk/research/cogaff/summary-gc7.pdf -------------------------------- Date and time: Monday 10th October 2005 at 16:00 Location: UG40, School of Computer Science Title: Levels of Description in Dynamical Systems Speaker: Simon McGregor Institution: University of Sussex Host: John Woodward Abstract: The notion of "emergence" is sometimes used in a number of biologically-inspired AI paradigms such as ant colony optimisation or ANNs, but nobody agrees on exactly what it means. I discuss the reasons for having different levels of description of a single system and apply some concepts from information theory to try and formally clarify these ideas. 
(This talk is based on a paper in press for Artificial Life) -------------------------------- Date and time: Monday 17th October 2005 at 16:00 Location: UG40, School of Computer Science Title: On the Time Complexity of Evolutionary Algorithms Speaker: Carsten Witt (http://ls2-www.cs.uni-dortmund.de/~witt/) Institution: University of Dortmund (http://ls2-www.cs.uni-dortmund.de/) Host: Jun He Abstract: Evolutionary algorithms (EAs) are general, randomized search heuristics often applied to optimization problems for which no problem-specific algorithm is available. Despite surprisingly good results reported by practitioners, the theoretical foundation of the computational time complexity of EAs is still a relatively young research area. The talk will deal with two different aspects of EAs and related randomized local search algorithms. First, the time complexity of obtaining approximate solutions to an NP-hard optimization problem is discussed. Second, so-called populations of search points are investigated. It is proven that the time complexity of EAs can crucially depend on the size of the population used. -------------------------------- Date and time: Monday 24th October 2005 at 16:00 Location: UG40, School of Computer Science Title: Why is the Lucas-Penrose Argument Invalid? Speaker: Manfred Kerber (http://www.cs.bham.ac.uk/~mmk) Institution: School of Computer Science (http://www.cs.bham.ac.uk) Abstract: It is difficult to prove that something is not possible in principle. Likewise it is often difficult to refute such arguments. The Lucas-Penrose argument tries to establish that machines can never achieve human-like intelligence. 
It is built on the fact that any reasoning program which is powerful enough to deal with arithmetic is necessarily incomplete and cannot derive a sentence that can be paraphrased as "This sentence is not provable." Since humans would have the ability to see the truth of this sentence, humans and computers would have obviously different mental capacities. The traditional refutation of the argument typically involves attacking the assumptions of Goedel's theorem, in particular the consistency of human thought. The matter is confused by the prima facie paradoxical fact that Goedel proved the truth of the sentence that "This sentence is not provable." If we adopt Chaitin's adaptation of Goedel's proof, which involves the statement that "some mathematical facts are true for no reason! They are true by accident", and compare it to a much older incompleteness proof, namely the incompleteness of the rational numbers, the paradox vanishes and it becomes clear that the task of establishing arbitrary mathematical truths about numbers by finitary methods is as infeasible for machines as it is for human beings. -------------------------------- Date and time: Monday 31st October 2005 at 16:00 Location: UG40, School of Computer Science Title: From molecules to insect communities Speaker: Mike Holcombe (http://www.dcs.shef.ac.uk/~wmlh/) Institution: University of Sheffield (http://www.dcs.shef.ac.uk) Host: Chris Fernando Abstract: Agent-based approaches to modelling biological phenomena are becoming popular and proving successful in a number of areas. However, the underlying basis of these techniques is sometimes rather 'ad-hoc' and they are often only applied to specific systems. This paper describes a general approach which is based on the use of fully general computational models, using a formal model of an agent and a rigorous approach to building systems of communicating agents within virtual environments.
A collection of tools has been built which allows for efficient simulation of such systems and their visualisation. Included in this work is the implementation of the simulations on parallel clusters of computers to enable large numbers of agents to be simulated. Application areas where the method has been successfully applied include: 1) Signal transduction pathways, specifically the NFkappaB pathway. This model has been validated using single cell data from GFP-transfected cells. The model has enabled the prediction of the possible role of actin filaments in the sequestration and feedback control of IkappaB. 2) The epitheliome project involves building models of the development of both skin and urothelial tissue and the investigation of the role of calcium and juxtacrine signalling in the development and differentiation of tissue. Again, the models have been validated with 'in vitro' tissue cultures under a number of important laboratory conditions. 3) Populations of Pharaoh's ants have been simulated and closely compared with real populations within the laboratory. The role of pheromone signalling has been studied and the modelling has led to a new understanding of the use of pheromone trails in foraging behaviour. This has shown that the geometry of the trails contains vital information which is used in the navigation of the trails by the insects. -------------------------------- Date and time: Monday 7th November 2005 at 16:00 Location: UG40, School of Computer Science Title: The Emergence of Automated Reason in Victorian England Speaker: Seth Bullock (http://www.ecs.soton.ac.uk/people/sgb) Institution: University of Southampton (http://www.ecs.soton.ac.uk) Host: Chris Fernando Abstract: While it was clear to Victorian engineers that all manner of unskilled manual labour could be achieved by cleverly designed mechanical devices, the potential for the same kind of machinery to truly replicate *mental labour* was far more controversial.
Charles Babbage's contribution to this debate was typically robust. In demonstrating how computing machinery could take part in (and thereby partially automate) biological debate, he challenged the limits on what could be achieved with mere automata, and stimulated the next generation of "machine analysts" to conceive and design (bio)logical devices capable of moving beyond mere mechanical calculation in an attempt to achieve fully-fledged automated reason. In this talk, some of the historical research that has focussed on Babbage's early machine intelligence and its ramifications will be brought together and summarised. The implications of this activity for the wider question of machine intelligence and computational modelling will then be discussed, and the relationship between automation and intelligibility will be explored. Connections between the concerns of Babbage and his contemporaries and those of modern artificial intelligence (AI) will be noted. In particular, difficulties associated with emergence, early evidence of bio-inspiration, and methodological concerns with simulation modelling will all be addressed. -------------------------------- Date and time: Monday 14th November 2005 at 16:00 Location: UG40, School of Computer Science Title: Evaluating the Correctness of Information Speaker: Antoni Diller (http://www.cs.bham.ac.uk/~ard/) Institution: University of Birmingham (http://www.cs.bham.ac.uk) Abstract: Almost all the information that a person has has been obtained by believing what other people have written or said. Unfortunately, not every assertion we encounter is true. We evaluate the information we come across: accepting some of it and jettisoning the rest. The ultimate goal of my research is to automate this assessment of information as much as is possible. Assertion evaluation is hardly investigated in AI, although most of our knowledge comes from testimony. Perception is relatively unimportant as a source of knowledge and yet it is much studied.
I will explain why this is the case. Too much time and effort in AI is wasted in meaningless implementations because people fail to understand what they are trying to emulate. Thus, it is important to carefully analyse the human ability to sift truth from error before we try to automate such evaluation. This is what I will mainly talk about, but I will also briefly outline some of the requirements that an automated assertion evaluator would have to satisfy. -------------------------------- Date and time: Monday 21st November 2005 at 16:00 Location: UG40, School of Computer Science Title: Can We Count on Neural Networks? Speaker: Matthew Casey Institution: University of Surrey Host: Peter Tino Abstract: Has artificial intelligence ‘lost the way’? Do we focus more on improving algorithm performance by some small amount, say for classification, than on achieving the long-term aim of building ‘intelligent machines’ (whatever this might mean)? Recent initiatives, such as the Foresight Cognitive Systems Programme, have highlighted how we can still learn much from other disciplines to help achieve this long-term aim. In this talk I will highlight some of the ongoing inter-disciplinary work I have been involved with that is trying to learn from human behaviour. With research centred around the use of multiple neural networks, we have built simple models of cognitive abilities that lay a foundation for exploring multi-task/multi-sensory processing, whilst also allowing us to explore the underlying theory of these multi-net systems. Can we count on neural networks? Well, these neural network models ‘count’ at least as well as a four-year-old child.
-------------------------------- Date and time: Monday 28th November 2005 at 16:00 Location: UG40, School of Computer Science Title: The Self-Organising Map, Data Visualisation and Beyond Speaker: Hujun Yin (http://images.ee.umist.ac.uk/hujun/) Institution: University of Manchester (http://images.ee.umist.ac.uk/IENC/) Host: Peter Tino Abstract: The topology-preserving self-organising map (SOM) has become a useful tool for dimensionality reduction and for visualising high-dimensional data in various applications such as decision support, bioinformatics and data mining. However, the SOM does not directly perform scaling, which aims to reproduce proximities as (Euclidean) distances in a low-dimensional visual space. The recently proposed visualization-induced SOM (ViSOM) applies a regularisation term to the standard SOM algorithm and is able to preserve distance as well as topology directly on the map. The ViSOM can be used as a natural algorithm for finding the principal curve/surface, a nonparametric and principled nonlinear extension of PCA. Such a connection can provide insight into the visualisation power of the SOM. The second half of the talk will look at recent developments on kernelising the SOM and its relationships with the self-organising mixture network and the kernel method in general. These relationships highlight the suitability of the SOM for clustering and classification, and may also provide an explanation of the self-organisation dynamics of this unsupervised learning paradigm.
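For readers unfamiliar with the baseline the abstract builds on, the standard SOM update can be sketched as follows. This is only the basic algorithm on a rectangular grid, with illustrative grid sizes and decay schedules of my own choosing; it does not include the ViSOM's distance-preserving regularisation term discussed in the talk:

```python
import numpy as np

def train_som(data, grid_h=8, grid_w=8, epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a standard SOM; returns grid of weight vectors (grid_h, grid_w, dim)."""
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    weights = rng.random((grid_h, grid_w, dim))
    # Grid coordinates, used to compute the neighbourhood function on the map.
    coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w),
                                  indexing="ij"), axis=-1)
    n_steps = epochs * len(data)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(data):
            # Linearly decay learning rate and neighbourhood radius over time.
            frac = step / n_steps
            lr = lr0 * (1 - frac)
            sigma = sigma0 * (1 - frac) + 0.5
            # Best-matching unit: node whose weight vector is closest to x.
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood around the BMU, measured on the grid.
            grid_d2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
            h = np.exp(-grid_d2 / (2 * sigma ** 2))
            # Move the BMU and its neighbours towards the input.
            weights += lr * h[..., None] * (x - weights)
            step += 1
    return weights
```

The ViSOM modifies the neighbour update so that inter-node distances on the map also reflect distances in data space; the sketch above preserves topology only.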
-------------------------------- Date and time: Monday 5th December 2005 at 16:00 Location: UG40, School of Computer Science Title: Modelling Periodic Time Series with Neural Networks Speaker: Tugba Taskaya-Temizel (http://portal.surrey.ac.uk/portal/page?_pageid=798,504874&_dad=portal&_schema=PORTAL) Institution: University of Surrey (http://portal.surrey.ac.uk/portal/page?_pageid=798,1&_dad=portal&_schema=PORTAL) Abstract: Time series often exhibit periodic patterns that can be analysed by conventional statistical techniques. These techniques rely upon an appropriate choice of model parameters that are often difficult to determine. Whilst neural networks also require an appropriate parameter configuration, they offer a way in which non-linear patterns may be modelled. However, evidence from a limited number of experiments has been used to argue that periodic patterns cannot be modelled using such networks. Researchers have argued that combining models for forecasting gives better estimates than single time series models, particularly for seasonal and cyclic series. For example, a hybrid architecture comprising an autoregressive integrated moving average model (ARIMA) and a neural network is a well-known technique that has recently been shown to give better forecasts by taking advantage of each model's capabilities. However, this approach carries the danger of underestimating the relationship between the model's linear and non-linear components, particularly by assuming that the individual forecasting techniques are appropriate, say, for modelling the residuals. In this presentation, I will show that such combinations do not necessarily outperform individual forecasts. On the contrary, I will show that the combined forecast can underperform significantly compared to its constituent models.
I will also present a method to overcome the perceived limitations of neural networks by determining the configuration parameters of a time-delay neural network from the periodic data it is being used to model. The method is motivated by Occam's razor: a simpler solution should be preferred to a more complex one. The method uses a fast Fourier transform to calculate the number of tapped input delays, with results demonstrating improved performance compared with other linear and hybrid modelling techniques on twelve benchmark time series. -------------------------------- Date and time: Monday 16th January 2006 at 16:00 Location: UG40, School of Computer Science Title: Template replication: cancer development to magnetic toys Speaker: Jarle Breivik (http://folk.uio.no/jbreivik/) Institution: University of Oslo, Norway () Host: Chris Fernando Abstract: Template replication of nucleotide sequences forms the basis for life on earth. It represents the primary mechanism of information transfer in living systems and the molecular dynamics underlying Darwinian evolution and biological complexity. We are exploring template replication from two very different and mutually elucidating perspectives: Our primary focus concerns the evolutionary dynamics of cancer development (Breivik, Semin. Cancer Biol. 2005). Cancer represents an evolutionary process within the body of an organism, and we aim to model this process from the perspective of molecular evolution. In particular, we have developed a model that explains the paradoxical loss of DNA repair genes in mutagenic environments. This model has been tested and confirmed by independent research groups and has links to fundamental aspects of information processing (Breivik, PNAS 2001). Our ongoing research suggests that important aspects of cancer development may be explained by “selfish” DNA repair genes competing for replication resources within the genome.
Our second perspective concerns artificial template replication. We have patented the general concept of making template-replicating polymers from physical objects (US Patent 6,652,285) and have demonstrated proof of principle in a system of ferromagnetic building blocks interacting in a turbulent heat-bath (Breivik, Entropy 2001). We have also taken some initial steps toward simulating this system in a virtual environment. The primary ambition (lacking funding) has been to develop an interactive educational tool for demonstrating template replication and molecular evolution, but the concept may also have more far-reaching applications. Important aspects have been elaborated in the thesis of Chrisantha Fernando (Fernando, 2005), and our patent was recently commented on in a review on nanomedicine and nanorobotics (Freitas, 2005). Further developments of these ideas demand interdisciplinary collaboration, and we are seeking partners who share our interest in this exciting field. -------------------------------- Date and time: Monday 23rd January 2006 at 16:00 Location: UG40, School of Computer Science Title: Gradient-based Optimisation of Support Vector Machines Speaker: Christian Igel (http://www.neuroinformatik.rub.de/PEOPLE/igel) Institution: Institut f. Neuroinformatik, Ruhr-Uni. Bochum (http://www.neuroinformatik.rub.de) Host: Xin Yao Abstract: Recent studies on gradient-based optimisation of support vector machines (SVMs) are presented. First, efficient second order quadratic programming for large scale SVM learning is discussed. Then model selection for SVMs is considered. In particular, gradient-based maximisation of the kernel-target alignment is proposed to adapt kernel parameters. The method is applied to optimise sequence kernels for the detection of bacterial gene starts.
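Kernel-target alignment, which the abstract proposes to maximise for model selection, compares the kernel matrix K with the ideal label kernel yy^T: A = <K, yy^T>_F / (||K||_F ||yy^T||_F). A minimal sketch for adapting a Gaussian kernel width, using a finite-difference gradient for brevity (the actual method uses analytic gradients; the function names and parameter values here are illustrative):

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gaussian (RBF) kernel matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def alignment(K, y):
    """Kernel-target alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F)."""
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

def tune_gamma(X, y, gamma=1.0, lr=1.0, steps=100, eps=1e-4):
    """Gradient ascent on alignment w.r.t. the RBF width (finite differences)."""
    for _ in range(steps):
        g = (alignment(rbf_kernel(X, gamma + eps), y)
             - alignment(rbf_kernel(X, gamma - eps), y)) / (2 * eps)
        gamma = max(gamma + lr * g, 1e-6)  # keep the width positive
    return gamma
```

The appeal of the criterion is that it can be evaluated, and differentiated, without training an SVM at each candidate parameter setting.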
-------------------------------- Date and time: Monday 30th January 2006 at 16:00 Location: UG40, School of Computer Science Title: Evolutionary symbiosis in cultural transmission Speaker: Monica Tamariz (http://www.ling.ed.ac.uk/~monica/index.html) Institution: University of Edinburgh () Host: Thorsten Schnier Abstract: Models of cultural transmission have focussed on either the evolution of cultural forms or of ideas. I propose a framework to study cultural phenomena based on two selection systems: one for forms (such as art objects, the structure of languages or social behaviours) and one for meanings (such as concepts that can be transmitted through art, language or social interaction). The novelty of this approach is that the two selection systems are symbiotic: each constrains, and provides elements and mechanisms necessary for, the other. Additionally, the resulting symbiotic combination must increase human fitness, since natural selection has favoured genes that provide culture with the neural substrate it needs. In this talk I will examine the general structure of the proposed framework, and explore its implementation for art, language and social interaction, focusing on parameters that might inform a computer modelling approach. -------------------------------- Date and time: Monday 6th February 2006 at 16:00 Location: UG40, School of Computer Science Title: Talking with robots Speaker: Jeremy Wyatt (http://www.cs.bham.ac.uk/~jlw) Institution: School of Computer Science (http://www.cs.bham.ac.uk) Abstract: In the first year of the CoSy project we have concentrated on working with computational linguists to find ways of linking visual processing with language. This involves problems such as interpreting spatial references, understanding the role of an utterance in a dialogue, being able to learn about objects and their properties, and being able to trigger learning events in one sub-system (e.g. vision) when prompted by another sub-system.
Visual learning also benefits from the use of visual attention, and I will also talk about the visual attention system that we have employed, and how attention and language can be linked. If there is time I'll talk about the limitations of the approach we have taken, and mention some of the things we are currently working on. -------------------------------- Date and time: Monday 13th February 2006 at 16:00 Location: UG40, School of Computer Science Title: Unsupervised and semi-supervised classification - a multiobjective perspective Speaker: Julia Handl (http://dbk.ch.umist.ac.uk/handl/index.html) Institution: University of Manchester () Abstract: In this talk, three related tasks in data-driven classification are considered, namely clustering, feature subset selection, and semi-supervised learning. All three of these tasks can be formulated as optimization problems but, traditionally, have been framed using a single objective only. Instead, multiobjective formulations can overcome key fundamental problems in tackling each of these tasks. The advantages of these formulations are motivated in detail, and multiobjective evolutionary approaches based on these formulations are outlined. Experimental results indicate some practical performance benefits of the algorithms proposed. -------------------------------- Date and time: Monday 20th February 2006 at 16:00 Location: UG40, School of Computer Science Title: Rule Extraction from Recurrent Neural Networks Speaker: Henrik Jacobsson (http://www.ida.his.se/~henrikj/) Institution: University of Skövde (http://www.ida.his.se/) Abstract: Rule extraction (RE) from recurrent neural networks (RNNs) refers to finding a model of the underlying RNN, typically in the form of a finite state machine, that mimics the RNN to a satisfactory degree while having the advantage of being more transparent.
The Crystallizing Substochastic Sequential Machine Extractor, or CrySSMEx (pronounced somewhat like Christmas), is a new technique for extracting stochastic rules from RNNs and other similar dynamic systems. CrySSMEx extracts the machine from sequence data generated from the RNN in interaction with its domain. It is parameter-free and deterministic, and generates a sequence of increasingly deterministic extracted stochastic models until a fully deterministic machine is found. In my talk I will present a brief background to what underlies the problem of rule extraction, describe an outline of the CrySSMEx algorithm and suggest a new perspective on what the problem of RE from RNNs constitutes, scientifically speaking. I will also suggest a number of ambitious future goals for the field. -------------------------------- Date and time: Monday 27th February 2006 at 16:00 Location: UG40, School of Computer Science Title: MicroPsi - building a cognitive architecture around a motivational core Speaker: Joscha Bach (http://www.ikw.uni-osnabrueck.de/cogsci/en/m.Personal.php?sid=4365&id=4365&mode=show&type=o3_staff) Institution: Institut für Kognitionswissenschaft, Universität Osnabrück (http://www.ikw.uni-osnabrueck.de/cogsci) Host: Aaron Sloman Abstract: The Psi theory of German psychologist Dietrich Dörner provides a conceptual framework for motivated, emotional agents, with neurosymbolic representations that are grounded in interaction with the agents' environment. This marks a difference to classic models of cognition, such as Soar and ACT-R, which attempt to tackle cognition as an isolated faculty and focus on models of problem solving and memory. Psi is largely qualitative, which is not necessarily a drawback, since most interesting and pressing questions in cognitive science still start with "how" and "what", rather than with "how much". Still, to put the Psi framework to the test, it has to commit itself to implemented models.
MicroPsi is an attempt to transform the Psi theory into a set of computational models. It comprises a framework to develop and simulate agents using a spreading activation network formalism, and supplies simulation environments that can be distributed among networks of machines. MicroPsi also offers interfaces to physical sensors and actuators, so it can be used as a robot control architecture. -------------------------------- Date and time: Monday 6th March 2006 at 16:00 Location: UG40, School of Computer Science Title: On learning structured outputs Speaker: Craig Saunders (http://www.ecs.soton.ac.uk/~cjs/) Institution: University of Southampton (http://www.ecs.soton.ac.uk/) Host: Peter Tino Abstract: Recently there have been many approaches to learning with structured outputs; that is, where the label space has some structure (e.g. tree, graph, string) rather than just the standard +1/-1 of binary classification. On the surface structured classification looks impractical: for many structures, the number of possible labellings grows exponentially with the number of elements in the structure. Therefore, when optimising one might expect the optimisation variables or the number of constraints also to grow exponentially, rendering the problem computationally impractical. However, several methods have been proposed that can eliminate the exponential growth for a wide variety of problems. This talk will give an introduction to some of these methods and highlight some of the open questions that still remain in this area of research.
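One standard illustration of how the exponential growth can be eliminated (not necessarily one of the methods covered in the talk): for chain-structured label spaces, the best of the K^T possible labellings can be found in O(TK^2) time by Viterbi-style dynamic programming. A sketch with a brute-force check:

```python
import numpy as np
from itertools import product

def viterbi(unary, pairwise):
    """Best label sequence under summed unary + pairwise scores, in O(T*K^2)."""
    T, K = unary.shape
    score = unary[0].copy()          # best score ending in each label at time 0
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[p, c] = best score ending in p at t-1, then transitioning to c.
        cand = score[:, None] + pairwise + unary[t][None, :]
        back[t] = np.argmax(cand, axis=0)
        score = np.max(cand, axis=0)
    # Backtrack from the best final label.
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

def brute_force(unary, pairwise):
    """Enumerate all K^T labellings; exponential, for verification only."""
    T, K = unary.shape
    def total(seq):
        return (sum(unary[t][seq[t]] for t in range(T))
                + sum(pairwise[seq[t]][seq[t + 1]] for t in range(T - 1)))
    return max(product(range(K), repeat=T), key=total)
```

For T = 20 and K = 10, brute force would enumerate 10^20 labellings, while the dynamic program needs only 2,000 score updates.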
-------------------------------- Date and time: Monday 20th March 2006 at 16:00 Location: UG40, School of Computer Science Title: The ATT-Meta Metaphor-Understanding Approach and its Relevance to the E-Drama Project Speaker: John Barnden (http://www.cs.bham.ac.uk/~jab) Institution: School of Computer Science (http://www.cs.bham.ac.uk) Abstract: The figurative-language group in the School has for some time been developing a theoretical approach and implemented system, called ATT-Meta, for performing types of pragmatic reasoning needed for metaphor understanding. This talk will, first, summarize the approach and system, and sketch some recent developments on the conflict-resolution component of the defeasible-reasoning aspect. The talk will then go on to discuss the ways in which the approach helps with or is challenged by examples of metaphor that have arisen naturally in improvisations (role-plays) conducted by means of a virtual-drama system under development in the E-Drama project in the School. That project has as one main research focus the ways in which emotion and other affective qualities are conveyed through metaphor. -------------------------------- Date and time: Monday 24th April 2006 at 16:00 Location: UG40, School of Computer Science Title: An Ecological Approach to the Evolution of Organism Complexity Speaker: Mikel Maron Institution: University of Sussex Host: Chris Fernando Abstract: Study of the evolution of organism complexity has mostly bypassed the role of ecological relationships and interactions. This presentation details simulations based on the opposite conjecture -- that ecological interactions have a core role in the process of complexification. Webworld is a robust model of species evolution in food webs. This study extends Webworld for variability in organism complexity under evolution.
Statistical and network analysis indicates a clear tendency for complexification within the model, led by adaptations that initially disconnect species from trophic interactions. This suggests a process where short-term fitness is increased by less connection to the ecosystem, but long-term fitness is ensured by incorporation within the ecosystem. This work will be presented at the upcoming Workshop on the Evolution of Complexity at the ALifeX Conference [http://ecco.vub.ac.be/ECO/]. For more detail, the dissertation form of this work is posted at [http://brainoff.com/easy/dissertation.pdf] -------------------------------- Date and time: Monday 15th May 2006 at 16:00 Location: LG32, Learning Centre Title: Foundations of Natural Computation Speaker: Jonathan Rowe (http://www.cs.bham.ac.uk/~jer) Institution: School of Computer Science (http://www.cs.bham.ac.uk) Abstract: Natural computation is the study of the kinds of computations that occur in nature. Such systems typically comprise a population of elements that interact with each other according to some (usually stochastic) rules. The relationship between the behaviour of the individual elements and that of the population as a whole is a key issue. Such systems are usually studied to either throw light on specific biological systems, or to provide new ideas for algorithm design. I will provide examples of both, but also present the view that this kind of system is interesting to study in its own right. Such an abstract view emphasises the commonality of certain phenomena which recur in various guises in apparently quite different contexts. (N.B.
This is a joint Systems Biology/AI & Natural Computation seminar) -------------------------------- Date and time: Monday 22nd May 2006 at 16:00 Location: UG40, School of Computer Science Title: Understanding the Advantages of Modularity in Neural Systems Speaker: John Bullinaria (http://www.cs.bham.ac.uk/~jxb) Institution: School of Computer Science (http://www.cs.bham.ac.uk) Abstract: Modularity in the human brain remains a controversial issue. One promising approach for clarifying matters is to build computational models of neural systems and look for the advantages and disadvantages of incorporating modularity. This can be done efficiently by simulating the evolution of such systems and seeing whether natural selection will result in the emergence of modularity or fully distributed systems. In this talk I will begin by reviewing earlier work in this area, and then present my own latest results, looking particularly at: 1. The dependence on the neural network learning algorithm; 2. The dependence on the learning tasks; and 3. The effect of incorporating physical constraints. I will conclude that taking proper account of the known physical constraints of biological brains is crucial for obtaining reliable results. -------------------------------- Date and time: Monday 5th June 2006 at 16:00 Location: UG40, School of Computer Science Title: Coase Theorem, Complexity and Transaction Costs Speaker: Hamid Sabourian Institution: Birkbeck College, London Host: Xin Yao Abstract: This paper, by introducing complexity considerations, provides a dynamic foundation for the Coase Theorem and highlights the role of transaction costs in generating inefficient outcomes in bargaining/negotiation. We show that, when the players have a preference for less complex strategies, the Coase Theorem is valid in negotiation models with repeated surplus and endogenous disagreement payoffs if and only if there are no transaction costs. 
Specifically, complexity considerations allow us to select only efficient equilibria in these models without transaction costs, while, in sharp contrast, every equilibrium outcome induces perpetual disagreement and inefficiency with (arbitrarily small) transaction costs. We also show that the latter is true in the Rubinstein bargaining model with transaction costs. -------------------------------- Date and time: Monday 3rd July 2006 at 16:00 Location: UG40, School of Computer Science Title: A MULTI-SUB-SWARM PSO ALGORITHM FOR MULTIMODAL FUNCTION OPTIMIZATION Speaker: De-Shuang Huang (http://www.intelengine.cn/English/people/hds.htm) Institution: Chinese Academy of Sciences () Host: Xin Yao Abstract: This talk first gives a broad overview of the state of the art in swarm optimization and intelligence, including particle swarm optimization (PSO), niche techniques, and niche particle swarm optimization (NPSO). Then, I present a novel multi-sub-swarm Particle Swarm Optimization (PSO) algorithm. The proposed algorithm can effectively imitate a natural ecosystem, in which the different sub-populations can compete with each other. After competing, the winner will continue to explore the original district, while the loser will be obliged to explore another district. In particular, the hill valley function is used as a niche identification technique (NIT). At the same time, the proposed algorithm integrates a sequential technique with a parallel one. As a result, the advantage of the proposed method is that it has the running speed of the parallel technique, and also possesses the ability to share the search information effectively among the swarm like the sequential one. Finally, five benchmark multimodal functions of varying difficulty are used as test functions. The experimental results show that the proposed method has a stronger adaptive ability and a better performance for complicated multimodal functions compared with other methods.
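As background to the proposed algorithm, here is a minimal sketch of running several independent global-best PSO sub-swarms on a one-dimensional function and keeping the best result. The competition between sub-swarms and the hill-valley niche identification described in the abstract are not reproduced, and all parameter values are illustrative:

```python
import random

def pso_subswarms(f, bounds, n_swarms=3, swarm_size=10, iters=200, seed=1):
    """Several independent gbest-PSO sub-swarms minimising f; returns (x, f(x))."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.72, 1.49, 1.49        # common inertia/acceleration settings
    best_x, best_f = None, float("inf")
    for _ in range(n_swarms):
        xs = [rng.uniform(lo, hi) for _ in range(swarm_size)]
        vs = [0.0] * swarm_size
        pb = xs[:]                       # personal best positions
        pf = [f(x) for x in xs]          # personal best values
        gi = min(range(swarm_size), key=lambda i: pf[i])
        gb, gf = pb[gi], pf[gi]          # this sub-swarm's global best
        for _ in range(iters):
            for i in range(swarm_size):
                # Velocity update: inertia + cognitive + social components.
                vs[i] = (w * vs[i]
                         + c1 * rng.random() * (pb[i] - xs[i])
                         + c2 * rng.random() * (gb - xs[i]))
                xs[i] = min(max(xs[i] + vs[i], lo), hi)  # clamp to bounds
                fx = f(xs[i])
                if fx < pf[i]:
                    pb[i], pf[i] = xs[i], fx
                    if fx < gf:
                        gb, gf = xs[i], fx
        if gf < best_f:
            best_x, best_f = gb, gf
    return best_x, best_f
```

In the talk's algorithm the sub-swarms would additionally compete, with losers relocated to unexplored niches rather than running in isolation as here.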
-------------------------------- Date and time: Monday 17th July 2006 at 16:00 Location: UG40, School of Computer Science Title: Structured strategies for games on finite graphs Speaker: Sunil Simon Institution: Institute of Mathematical Sciences (India) Host: Uday Reddy Abstract: We consider infinite plays over finite game graphs where players have possibly overlapping objectives. When strategies are functions that map game positions or plays to moves, existence of bounded-memory best-response strategies can be established. However, when an opponent's strategy is known only by its properties, the notion of best response needs to be re-examined. We propose a structural definition of strategies built up from atomic decisions of the form "when condition x holds, play a", where a player's strategy may depend on properties of other players' strategies. These strategies can be represented by finite state automata. In this framework, we look at the algorithmic questions of checking whether a strategy achieves a certain objective and synthesising strategies to achieve certain conditions. We propose a simple logic to describe composite strategies and reason about how they ensure players' objectives. We show that checking such an assertion on a game graph is decidable. We also present an axiom system for the logic and prove that it is complete. -------------------------------- Date and time: Friday 21st July 2006 at 10:00 Location: UG40, School of Computer Science Title: How biological neurons learn Speaker: Luba Benuskova (http://www.ii.fmph.uniba.sk/~benus/) Institution: Auckland University of Technology, New Zealand (http://www.aut.ac.nz/research/research_institutes/kedri/) Host: Peter Tino Abstract: In spite of concentrated effort, it is still not known how biological neurons learn from examples. Is there only one learning rule in the brain or are there several rules, depending on the task?
Are learning rules used in artificial neural networks the same as those used in brain neural networks? In this study we simulate the changes in synaptic weights measured in one part of the brain, the hippocampus, which is involved in long-term memory formation. We propose a new learning rule that is a combination of spike-timing-dependent plasticity (STDP) and the moving LTD/LTP threshold from the Bienenstock, Cooper and Munro (BCM) theory of synaptic plasticity. Another novelty in our modelling approach is the proposed role of spontaneous spiking activity in heterosynaptic plasticity, in which increased stimulation of one input brings about a heterosynaptic change in the unstimulated inputs. The new rule is sufficiently simple to be used in large networks of spiking neurons. -------------------------------- Date and time: Monday 9th October 2006 at 16:00 Location: UG04 Learning Centre Title: Evolution of ontology extension Speaker: Aaron Sloman (http://www.cs.bham.ac.uk/~axs) Institution: School of Computer Science (http://www.cs.bham.ac.uk) Abstract: The key idea is that all information-processing systems have *direct* access only to limited sources of information. For some systems it suffices to detect and use patterns and associations found in those sources, including conditional probabilities. But sometimes it is far more economical to refer beyond the available data to entities that exist independently of the information processing system, and which have properties and relationships that are not definable in terms of patterns in sensed data. This is commonplace in science: genes, neutrinos, electromagnetic fields, and many other things are postulated because of their explanatory role in theories, not because they are directly sensed. Does something similar go on in learning processes in infants and hatchlings that discover how the environment works by playful exploration and experiment?
Is ontology extension beyond the sensor data also set up in the genome of species whose young don't have time to go through that process of discovery but must be highly competent at birth or hatching? If so, is there anything in common between the different ways ontologies get expanded in biological systems? This may relate to some other questions about what a genome is, and about varieties of epigenesis. -------------------------------- Date and time: Monday 16th October 2006 at 16:00 Location: UG04 Learning Centre Title: Artificial ecosystem selection in a simulated microbial microcosm Speaker: Hywel Williams (http://www.uea.ac.uk/env/people/williamsh/index.shtml) Institution: University of East Anglia (http://www.uea.ac.uk/env/index.shtml) Host: Chris Fernando Abstract: Recent work with microbial communities has demonstrated an adaptive response to artificial selection at the level of the ecosystem. However, the reasons for this response are not clear and there is some uncertainty concerning the level at which adaptation occurs: does the artificial selection scheme implicitly select for traits of a single species or are higher-level community traits the subject of selection? Here we present an individual-based evolutionary simulation model of microbial ecology in which artificial selection experiments similar to those reported by Swenson and Sloan-Wilson are performed, and where a similar response to artificial ecosystem selection is observed. We find that the response to artificial ecosystem selection is a robust phenomenon that occurs even when strong individual-level selection pressure acts independently on related traits. The size of the community response to selection depends inversely on the duration of the period between artificial selection events, which is explained by the occurrence of individual-level mutation and relaxation towards the non-selected ecological state during the inter-selection event period. 
The rate of relaxation depends on the rate of individual-level mutation during reproduction. The ecological function of a selected microbial ecology is found to be a complex function of community activity and abiotic factors, and we show that in many cases the community response to selection cannot be decomposed into the responses of individual species. Our findings also cast doubt on the possibility of developing generally applicable microbial communities for the degradation of environmental pollutants. -------------------------------- Date and time: Thursday 26th October 2006 at 16:00 Location: UG40, School of Computer Science Title: Small worlds, language and growing networks Speaker: Maria Markosova (http://www2.fiit.stuba.sk/~mark/) Institution: Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava, Slovakia (http://www.fmph.uniba.sk/mffuk/e/) Host: Peter Tino Abstract: Small world networks are graphs having some degree of relatively rigid local structure, and some degree of randomness. Several real networks can be modelled with the help of small worlds (e.g. www, word-web, internet...). We have studied small world properties of the word-web (words of a natural language are nodes, arcs represent positional relations within well-formed sequences). Real networks are usually endowed with some form of dynamics. The manner in which nodes are added determines the final global structure of the net. Several continuum models of growing networks will be presented and analysed. Special emphasis will be put on word-web structures.
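One concrete example of how the node-addition rule shapes the final global structure is degree-biased (preferential) attachment. The sketch below is a generic Barabasi-Albert-style model given purely for illustration, not one of the continuum models from the talk; all parameter values are arbitrary.

```python
import random

def grow_network(n, m=2, seed=0):
    """Grow a graph one node at a time by preferential attachment: each new
    node links to m existing nodes chosen with probability proportional to
    their current degree.  The attachment rule, not the final node count,
    shapes the degree distribution (a few early nodes become hubs)."""
    rng = random.Random(seed)
    edges = []
    repeated = []                 # a node appears once per unit of degree
    for new in range(m, n):
        chosen = set()
        while len(chosen) < m:    # pick m distinct, degree-biased targets
            pick = rng.choice(repeated) if repeated else rng.randrange(new)
            chosen.add(pick)
        for t in chosen:
            edges.append((new, t))
            repeated.extend([new, t])
    return edges

edges = grow_network(200)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
```

Replacing the degree-biased choice with a uniform one yields a very different (exponential rather than heavy-tailed) degree distribution, which is the point the abstract makes about node-addition dynamics.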
-------------------------------- Date and time: Monday 13th November 2006 at 16:00 Location: UG40, School of Computer Science Title: On The Effect of Populations in Evolutionary Multi-objective Optimisation Speaker: Per Kristian Lehre (http://www.idi.ntnu.no/~lehre/) Institution: Norwegian University of Science and Technology, Trondheim, Norway (http://www.idi.ntnu.no/) Host: Xin Yao Abstract: Multi-objective evolutionary algorithms (MOEAs) have become increasingly popular as multi-objective problem-solving techniques. An important open problem is to understand the role of populations in MOEAs. We present a simple biobjective problem which emphasises when populations are needed. Rigorous runtime analysis points out an exponential runtime gap between the population-based algorithm Simple Evolutionary Multi-objective Optimiser (SEMO) and several single individual-based algorithms on this problem. This means that among the algorithms considered, only the population-based MOEA is successful and all other algorithms fail. -------------------------------- Date and time: Monday 4th December 2006 at 16:00 Location: UG40, School of Computer Science Title: A cognitive systems roadmap Speaker: Bill Sharpe (http://www.eucognition.org/wiki/index.php?title=Research_Roadmap) Host: Aaron Sloman -------------------------------- Date and time: Monday 19th February 2007 at 16:00 Location: UG40, School of Computer Science Title: Fusing Natural Computational Paradigms for Cryptanalysis Speaker: John Clark Institution: University of York Host: Per Kristian Lehre Abstract: Recent years have seen the application of evolutionary and other nature-inspired search approaches to achieve human-competitive results in cryptography and cryptanalysis. We have also seen the emergence of quantum computation as a tremendously exciting computational paradigm with significant potential applications in these areas.
To date there seems to have been no synergistic application of these techniques in these fields. All applications are geared to the effective exploitation of one computational paradigm or another. Nature-inspired search and quantum computing can, however, be combined to achieve results neither is capable of individually. All that is needed is that classical search get 'close enough' for quantum search to take over and solve the residual problem. This observation has significant implications for the security of crypto-systems and our understanding of the power and usefulness of nature-inspired and quantum search. The talk discusses how trajectories based on principles of thermostatistical annealing and also on problem perturbation can be used to get most of the information we need, with quantum search covering the final part. -------------------------------- Date and time: Monday 5th March 2007 at 16:00 Location: UG40, School of Computer Science Title: What is human language and how might it have evolved? Speaker: Aaron Sloman (http://www.cs.bham.ac.uk/~axs/) Institution: School of Computer Science (http://www.cs.bham.ac.uk) Host: Nick Hawes Abstract: It is thought by many that the primary function of language is communication between individuals. If we use the word 'language' to refer to a system for encoding information of varying kinds and varying complexity, where the information encoded in a complex structure is derived in principled ways from (a) meanings of parts and (b) how the parts are related in the structure, i.e. using compositional semantics, then it is arguable that many intelligent animals and human children who have not yet learnt to talk must have one or more languages which they use internally for perception, planning, thinking, remembering, reasoning, formulating questions, formulating goals, etc. On that view, languages serving these internal, cognitive, functions evolved before languages used for communication, and also develop earlier in children.
In any case, could a system for communication evolve before there was something to communicate? This idea was proposed in a paper in 1979 ( http://www.cs.bham.ac.uk/research/projects/cogaff/81-95.html#43 ) and various other researchers have had similar ideas (e.g. Bruce Bridgeman). Recent work in the CoSy robotic project on the role of indexicals (e.g. 'this', 'here', 'now') and spatial prepositions (e.g. 'above', 'to the left of'), combined with some of Grice's ideas about linguistic communication, along with evidence about sign language and the spontaneous invention of a new sign language by Nicaraguan deaf children, have led me to some surprising hypotheses about the evolution of human communicative language: for example, the hypothesis that sign language probably came first (though not for the reasons proposed by Arbib and Corballis), and the hypothesis that very many sentences should be thought of as communicating not complete meanings but functions from non-linguistic contexts and purposes to meanings. This has many consequences, including reinterpreting supposedly vague words and phrases as instead expressing precise higher-order meanings. There is no suggestion that this requires an innate 'language of thought' into which all meanings are translated as proposed by Fodor, not least because that would not make substantive ontological development (e.g. learning about quantum theory) possible.
Some of these points are discussed here: * http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0605 * http://www.cs.bham.ac.uk/research/projects/cosy/papers/sloman-enf07.pdf -------------------------------- Date and time: Monday 12th March 2007 at 16:00 Location: UG40, School of Computer Science Title: Cartesian Genetic Programming Speaker: Julian Miller and James Walker (http://www.elec.york.ac.uk/staff/academic/jfm.html) Institution: University of York () Host: Xin Yao Abstract: Cartesian Genetic Programming (CGP) is a graph-based genetic programming technique that has a number of unusual features (e.g. non-coding regions). Recently the technique has been extended to include automatically defined functions (ADFs). These are sub-functions that are acquired, evolved and can be utilized in the main code. This has been achieved through the use of automatic module acquisition, evolution and re-use. The technique has been tested on a number of benchmark problems. The results show the new modular method evolves solutions more quickly than the original non-modular method, and the speed-up is more pronounced on larger problems. Also, the new modular method performs favourably when compared with other GP methods. By evolving computer programs that write a finite string of symbols it is possible to use CGP to solve standard optimisation problems (typically handled with evolutionary algorithms). Results are presented that indicate that this technique is also advantageous for these problems. Currently modules in CGP are constrained to contain only primitive functions (rather than allowing sub-modules). Ideas for extending CGP to allow modules to call modules are currently under investigation and will be discussed.
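A minimal sketch of the CGP representation may help: a genotype is a sequence of nodes, each a (function, input, input) triple whose inputs refer to earlier nodes or program inputs, plus an output gene. Nodes never reached from the output are the "non-coding regions" the abstract mentions. The function set and genotype below are illustrative assumptions, not from the talk.

```python
# Function genes index into a primitive function set
ADD, SUB, MUL = 0, 1, 2
FUNCS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def evaluate(genotype, out_gene, inputs):
    """Decode a CGP genotype for the given inputs, visiting active nodes only;
    inactive (non-coding) nodes are simply never reached."""
    cache = {}
    def value(idx):
        if idx < len(inputs):                  # low indices address the inputs
            return inputs[idx]
        if idx not in cache:
            f, a, b = genotype[idx - len(inputs)]
            cache[idx] = FUNCS[f](value(a), value(b))
        return cache[idx]
    return value(out_gene)

# Computes (x0 + x1) * x0; the SUB node (index 3) is non-coding here
geno = [(ADD, 0, 1), (SUB, 0, 1), (MUL, 2, 0)]
result = evaluate(geno, 4, [3.0, 2.0])         # (3 + 2) * 3
```

Because mutation can silently rewire which nodes are active, non-coding regions give CGP a form of neutral variation.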
-------------------------------- Date and time: Monday 19th March 2007 at 16:00 Location: UG40, School of Computer Science Title: On Artificial and Real Neural Networks: Black Boxes and Blobs Speaker: Larry Bull (http://www.csm.uwe.ac.uk/~lbull/) Institution: Faculty of Computing, Engineering & Mathematical Sciences, University of the West of England (http://www.uwe.ac.uk/cems/) Host: Xin Yao, Shan He Abstract: Artificial Intelligence has always sought all-powerful black boxes capable of solving arbitrary tasks with minimal user input/knowledge. Artificial neural networks of various types have perhaps proven the most effective method in many domains. The talk will begin with a description of the Neural Learning Classifier System, which exploits evolutionary computing to design composite artificial neural network ensembles under the reinforcement learning paradigm. A simple mobile robot navigation task will be shown as proof of concept. The talk will then move to recent work looking into harnessing the computational abilities of in vitro neuronal networks through machine learning. In particular, initial findings from an aggregate model using multi-electrode array technology will be presented. Early results suggest that such models exhibit different properties from traditional monolayer models. -------------------------------- Date and time: Monday 26th March 2007 at 16:00 Location: UG40, School of Computer Science Title: Nonlinear Semidefinite Programming - Why and How? Speaker: Michal Kocvara (http://web.mat.bham.ac.uk/kocvara/) Institution: School of Mathematics (http://www.mat.bham.ac.uk/) Host: Xin Yao Abstract: Several optimization problems based on linear and nonlinear semidefinite programming will be presented. SDP allows us to formulate and solve problems with difficult constraints that could hardly be solved before. We will show that sometimes it is advantageous to prefer a nonlinear formulation to a linear one.
All the presented formulations result in large-scale sparse (nonlinear) SDPs. In the second part of the talk we will show how these problems can be solved by our augmented Lagrangian code PENNON. Numerical examples will illustrate the talk. -------------------------------- Date and time: Friday 30th March 2007 at 12:00 Location: UG40, School of Computer Science Title: Feature Modeling for Visual Attention Deployment Speaker: Xiaopeng Hu Institution: Department of Physics, Royal Holloway University of London Host: Jeremy Wyatt Abstract: This talk presents a computational framework for modeling visual feature selection and spatial attention deployment. The key element of the method is the integration of feature space analysis into the modeling process of spatial attention selection. Issues about feature extraction, feature evaluation and saliency map computation are also discussed. -------------------------------- Date and time: Monday 2nd April 2007 at 16:00 Location: UG40, School of Computer Science Title: The Evolution of Knitting Stitch Patterns by Genetic Programming Speaker: Aniko Ekart (http://keg.cs.aston.ac.uk/stfDtls/stfDtls.php?id=22) Institution: The Knowledge Engineering Group, Computer Science, Aston University (http://keg.cs.aston.ac.uk/) Host: John Bullinaria Abstract: Knitwear designers usually start their work by browsing pattern books to find so-called stitch patterns to use in their design. They do not actually create new stitches, but only reuse, combine and sometimes slightly modify old stitches. A knitting stitch is represented as a comprehensible two-dimensional chart of well-defined symbols, which can be read as a list of instructions to be followed by the knitter. Without knowing anything about knitting, anyone presented with the alphabet and the constraints that have to be satisfied for a knittable stitch will be able to produce charts for stitch patterns.
However, producing nice or interesting looking stitch patterns requires mental visualisation of how the stitch will actually look, and possibly the knitting of a small sample as well. This research can be seen from two main perspectives: (1) applying genetic programming to a closed and constrained discrete domain and (2) modelling human creativity and understanding of aesthetics by genetic programming. In this talk, I shall present an abstract representation of knitting stitch patterns for genetic programming that allows for the evolution of new knitting stitch patterns. I shall discuss in detail evaluation, possibilities for fully automatic evolution and interactive evolution, where the human designer evaluates the evolving stitches. I shall also present stitches evolved by genetic programming. -------------------------------- Date and time: Monday 23rd April 2007 at 16:00 Location: UG40, School of Computer Science Title: Graphical Models in Statistical Genetics Speaker: Kuruvilla Abraham Institution: School of Biosciences Host: Ata Kaban Abstract: We will review the role of Graphical Models in Statistical Genetics and related issues of computational complexity. The application of Monte Carlo simulations will be discussed in some detail along with some related open problems. -------------------------------- Date and time: Monday 30th April 2007 at 16:00 Location: UG40, School of Computer Science Title: On the Semantics and Pragmatics of Metaphor Speaker: Rodrigo Agerri (http://www.cs.bham.ac.uk/~rxa/) Institution: School of Computer Science (http://www.cs.bham.ac.uk/) Abstract: The study of discourse and its interpretation is a central issue of current research on computational semantics. Computational semantics revolves around two fundamental issues: (i) How can we automate the process of associating semantic representations with expressions of natural language?
and, (ii) How can we use semantic representations of natural language expressions to automate the process of drawing inferences? It is generally accepted that much of everyday discourse shows evidence of metaphor. Consequently, the question of how metaphor should be interpreted is of major importance in the study of discourse. In this talk, a general view of metaphor understanding is assumed that involves some notion of events, properties, relations, etc. being transferred from a source domain into a target domain. In this view, a metaphorical utterance conveys information about the target domain. This talk presents ongoing work within the ATT-Meta project on a set of invariant mappings that, it is claimed, are required for the transfer of information such as causation, event rate and duration. It will also be shown what role these invariant mappings play in the interpretation of metaphor from the perspective of computational semantics. -------------------------------- Date and time: Monday 14th May 2007 at 17:00 Location: UG40, School of Computer Science Title: Predicting Protein Functions with Bio-inspired Algorithms Speaker: Alex Freitas (http://www.cs.kent.ac.uk/people/staff/aaf/) Institution: Computing Laboratory, University of Kent (http://www.cs.kent.ac.uk/) Host: Chris Bowers, Xin Yao Abstract: One of the greatest challenges faced by bioinformatics is to predict the function of a protein "in silico", based on data describing that protein. At present there is a large amount of data about proteins in biological databases, creating both a need and an opportunity for applying data mining methods to the prediction of protein function. The data mining task addressed here is classification, or supervised learning, where protein functions correspond to classes.
Since protein functions are often specified in the form of a hierarchy, this talk also addresses the problem of hierarchical classification, where the classes to be predicted are arranged in a hierarchy in the form of a tree. This talk will present an overview of several classification methods that our group has developed and then applied to protein function prediction. The methods to be presented include several types of bio-inspired algorithms such as swarm intelligence (particle swarm/ant colony optimization) and genetic programming combined with rule induction algorithms. -------------------------------- Date and time: Monday 4th June 2007 at 16:00 Location: UG40, School of Computer Science Title: Temporal Data Clustering via RPCL Ensemble Networks with Different Representations Speaker: Ke Chen (http://www.informatics.manchester.ac.uk/aboutus/academics/index.html?staff_id=KEC) Institution: Data and Decision Engineering Group, School of Informatics, University of Manchester (http://www.informatics.manchester.ac.uk/research/groups/dde/index.html?group_id=DDE) Host: Peter Tino Abstract: Temporal data clustering provides underpinning techniques for discovering the intrinsic structure and condensing/summarizing information conveyed in temporal data, which is demanded in various fields ranging from time series analysis to sequential data understanding. In this talk, I will describe an ensemble learning approach to temporal data clustering by combining rival-penalised competitive learning (RPCL) networks with different representations of temporal data. I will start by introducing the background and our motivation, which leads us to propose a novel approach to temporal data clustering. I will describe our model and report experimental results on benchmark data sets and a real world problem. Finally I will discuss some related issues and our ongoing work.
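For readers unfamiliar with RPCL, the core update each ensemble member performs can be sketched as follows. This is the generic rival-penalised rule (Xu, Krzyzak and Oja), not the speaker's ensemble model, and the learning rates and data are illustrative assumptions.

```python
import numpy as np

def rpcl_step(centres, x, lr_win=0.05, lr_rival=0.005):
    """One rival-penalised competitive learning update: the winning centre
    moves toward input x, while the runner-up (the rival) is pushed slightly
    away, so superfluous centres drift out of the data region and the
    effective number of clusters is selected automatically."""
    d = np.linalg.norm(centres - x, axis=1)
    win, rival = np.argsort(d)[:2]
    centres[win] += lr_win * (x - centres[win])
    centres[rival] -= lr_rival * (x - centres[rival])
    return centres

# Two well-separated clusters, deliberately over-provisioned with 3 centres
rng = np.random.default_rng(0)
data = np.vstack([rng.normal(-3, 0.3, (100, 2)), rng.normal(3, 0.3, (100, 2))])
centres = rng.normal(0.0, 1.0, (3, 2))
for _ in range(20):
    for x in rng.permutation(data):
        rpcl_step(centres, x)
```

The key design point is the asymmetry of the two learning rates: the rival's de-learning rate is an order of magnitude smaller than the winner's, so genuine cluster centres are barely disturbed while redundant ones are steadily expelled.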
-------------------------------- Date and time: Monday 11th June 2007 at 16:00 Location: UG40, School of Computer Science Title: Anaphora Resolution: To what extent does it help NLP applications? Speaker: Ruslan Mitkov (http://pers-www.wlv.ac.uk/~le1825/) Institution: University of Wolverhampton (http://www.wlv.ac.uk) Host: Alan Wallington Abstract: Research in anaphora resolution has focused almost exclusively on the intrinsic evaluation of the algorithm/system and not on the issue of extrinsic evaluation. In the context of anaphora resolution, extrinsic evaluation is concerned with the impact of an anaphora resolution module on a larger NLP system of which it is part. In this presentation I shall discuss whether the well-known anaphora resolution system MARS (Mitkov et al. 2002) can improve the performance of three NLP applications (text summarisation, term extraction and text categorisation) and, if it can, to what extent. -------------------------------- Date and time: Monday 18th June 2007 at 16:00 Location: UG05, Learning Centre Title: Swarm Robotics Speaker: Amanda Sharkey (http://www.dcs.shef.ac.uk/~amanda/) Institution: Neurocomputing and Robotics Group, University of Sheffield (http://www.dcs.sheffield.ac.uk/nrg/) Host: Peter Tino Abstract: Swarm Robotics is a recent approach to robotics that is generally supposed to be inspired by social insects, and flocking and herding phenomena. The guiding principles of Swarm Robotics appear to be drawn from Swarm Intelligence, but it is one of a number of terms that have been applied to collective robotics, and there is some confusion and lack of agreement about its defining features and characteristics. In this talk, I will describe some examples of swarm robotic studies, and discuss possible definitions of swarm robotics, and the constraints they engender. For instance, should robots in a swarm be restricted to reactive control, and indirect (stigmergic) communication?
Or could a collection of humanoid robots with cognitive abilities be considered to be a swarm? -------------------------------- Date and time: Monday 25th June 2007 at 16:30 Location: UG40, School of Computer Science Title: Take that, Turing: we're solving the halting problem! Speaker: Dan Ghica (http://www.cs.bham.ac.uk/~drg) Institution: School of Computer Science (http://www.cs.bham.ac.uk/) Host: Nick Hawes Abstract: In this very informal talk I will look at some recent developments in automated software verification and how they might be of interest to researchers in artificial intelligence. -------------------------------- Date and time: Monday 2nd July 2007 at 15:00 Location: UG40, School of Computer Science Title: “Fashion Sense Solves Crystal Structures”: Cultural Differential Evolution in Crystallography Speaker: Maryjane Tremayne (http://www.chem.bham.ac.uk/staff/tremayne.shtml) Institution: School of Chemistry (http://www.chem.bham.ac.uk) Host: Xin Yao Abstract: Knowledge of the crystal structure of organic materials such as pharmaceuticals, pigments and agrochemicals is essential if their properties are to be fully understood. However, the limited data available from powder methods means that more powerful computational methods are needed to obtain this structural information. The most successful approach has been direct space structure solution techniques in which a global optimisation technique is used to locate the best crystal structure solution. This talk will focus on our recent work on the development and application of the Cultural Differential Evolution (CDE) technique to this crystallographic problem, combining the traditional biological dictates of mating, mutation and natural selection in the Differential Evolution algorithm with an approach that models human social behaviour or cultural selection. 
Our results show an average 40% improvement in terms of the efficiency of a structure solution calculation, and demonstrate a simple real-world implementation of the CDE approach that has potential applications in many other optimisation problems in chemistry. -------------------------------- Date and time: Tuesday 3rd July 2007 at 12:00 Location: UG40, School of Computer Science Title: Development and Evaluation in Genetic Programming Speaker: Bob McKay Institution: Dept of Computer Science and Engineering, College of Engineering, Seoul National University Host: Per Kristian Lehre Abstract: Genetic Programming systems generate essentially unstructured solutions (unless structure is imposed on them as in ADFs). Meanwhile, natural genetic systems build complex, hierarchical, structured systems. We argue that two processes interact to produce selection for structured solutions, namely developmental processes, and selection during development. We describe a system based on these ideas, and provide preliminary evidence of the evolution of more structured solutions than occur with standard GP. -------------------------------- Date and time: Monday 16th July 2007 at 16:00 Location: UG40, School of Computer Science Title: Modeling and Analysis of Neurodynamics in Large Data Sets from Small Brains Speaker: Peter Passaro (http://www.sussex.ac.uk/informatics/profile185961.html) Institution: Informatics, University of Sussex (http://www.sussex.ac.uk/informatics) Host: Chrisantha Fernando Abstract: Despite the huge body of knowledge that exists about nervous systems in small organisms, even in the simplest systems, the neurodynamics of how they operate at the network level is still not completely understood. This is partially due to the lack of technology to record from large numbers of cells at once, and partly due to a lack of modeling and analysis tools to understand the data sets produced by large scale network recordings. 
I will give an overview of the tools and technology that currently exist for these types of experiments, and where I think the field needs to go to make progress in understanding the global dynamics of neuronal networks. Using examples from work in rodents, snails, and insects, I will highlight the challenges, and the potential payoff, of taking a systems level perspective of small nervous systems when performing analysis and building models. -------------------------------- Date and time: Thursday 13th September 2007 at 16:00 Location: UG40, School of Computer Science Title: Evolution, Learning, and Games Speaker: Simon M. Lucas (http://cswww.essex.ac.uk/staff/lucas/lucas.htm) Institution: University of Essex (http://cswww.essex.ac.uk) Host: Xin Yao Abstract: Two main ways to train agents given no prior expert knowledge are temporal difference learning and evolution (or co-evolution). We'll study ways in which these methods can train agents for games such as Othello, simulated car racing, and Ms Pac-Man. The results show that each method has important strengths and weaknesses, and understanding these leads to the development of new hybrid algorithms such as EvoTDL, where evolution is used to evolve a population of TD learners. Examples will also be given of where seemingly innocuous changes to the learning environment have profound effects on the performance of each algorithm. The main conclusion is that these are powerful methods capable of learning interesting agent behaviours, but there is still something of a black art in how best to apply them, and there is a great deal of scope for designing new learning algorithms. The talk will also include live demonstrations.
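As a reminder of the temporal difference half of such hybrids, the basic tabular TD(0) value update can be sketched as follows; the state names, rewards and learning rate are illustrative assumptions, not taken from the talk.

```python
def td0_update(V, s, s_next, reward, alpha=0.1, gamma=1.0):
    """One tabular TD(0) step: nudge V(s) toward the bootstrapped target
    reward + gamma * V(s_next).  In game play, s would be a board
    (after)state and the reward the eventual win/loss signal."""
    V.setdefault(s, 0.0)
    target = reward + gamma * V.get(s_next, 0.0)
    V[s] += alpha * (target - V[s])
    return V[s]

# Tiny two-state episode, repeated: s0 -> s1 -> end, with reward 1 at the end
V = {}
for _ in range(100):
    td0_update(V, "s0", "s1", 0.0)
    td0_update(V, "s1", "end", 1.0)
```

In an EvoTDL-style hybrid, an update rule like this runs within each individual's lifetime, while evolution operates on the population of learners (e.g. their initial weights or learning parameters).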
-------------------------------- Date and time: Monday 24th September 2007 at 16:00 Location: UG40, School of Computer Science Title: A Bayesian Approach to Information Retrieval Using Sets of Items Speaker: Katherine Heller (http://www.gatsby.ucl.ac.uk/~heller/) Institution: Gatsby Computational Neuroscience Unit , University College London (http://www.gatsby.ucl.ac.uk/) Host: Peter Tino Abstract: We consider the problem of retrieving items from a concept or cluster, given a query consisting of a few items from that cluster. We formulate this as a Bayesian inference problem and describe a very simple algorithm for solving it. Our algorithm uses a model-based concept of a cluster and ranks items using a score which evaluates the marginal probability that each item belongs to a cluster containing the query items. We focus on sparse binary data and show that our score can be evaluated exactly using a single sparse matrix multiplication, making it possible to apply our algorithm to very large datasets. We evaluate our algorithm on such tasks as retrieving movies from the Each Movie dataset and finding completions of author sets from the NIPS dataset. We have also extended our "Bayesian Sets" method to form the basis of a new system for content-based image retrieval, which we evaluate using a Corel image database of 32,000 images. Lastly I'll discuss how, in our most recent work, the Bayesian Sets framework has served as inspiration for a new approach to automated analogical reasoning. 
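The abstract's claim that the score reduces to "a single sparse matrix multiplication" can be illustrated with a small dense sketch of the Bayesian Sets score under a Beta-Bernoulli model. The prior constants below (proportional to feature means) are one common choice assumed for illustration, not necessarily those used in the talk.

```python
import numpy as np

def bayesian_sets_scores(X, query, eps=1e-6):
    """Rank all rows of a binary item-feature matrix X against a query set of
    row indices (Bayesian Sets, Ghahramani & Heller).  The log-score is
    linear in each item vector, so every item is scored at once with one
    matrix-vector product (sparse X makes this scale to large datasets)."""
    X = np.asarray(X, dtype=float)
    alpha = 2.0 * X.mean(axis=0) + eps        # Beta prior per feature (assumed)
    beta = 2.0 * (1.0 - X.mean(axis=0)) + eps
    Q = X[query]
    N = len(query)
    a_post = alpha + Q.sum(axis=0)            # posterior Beta parameters
    b_post = beta + N - Q.sum(axis=0)
    q = np.log(a_post) - np.log(alpha) - np.log(b_post) + np.log(beta)
    c = np.sum(np.log(alpha + beta) - np.log(alpha + beta + N)
               + np.log(b_post) - np.log(beta))
    return c + X @ q                          # log-scores; higher = better match

# Items 0-2 share features with the query {0, 1}; items 3-4 do not
X = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1]])
scores = bayesian_sets_scores(X, [0, 1])
```

Sorting `scores` in descending order ranks items by the marginal probability that they belong to the same cluster as the query items.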
-------------------------------- Date and time: Monday 1st October 2007 at 16:00 Location: UG40, School of Computer Science Title: Aspects of Repulsion in Text Speaker: Antoinette Renouf (http://www.lhds.uce.ac.uk/research/pages/antoinette-renouf) Institution: School of English, UCE Birmingham (http://www.lhds.uce.ac.uk/english/) Host: Alan Wallington, John Barnden Abstract: In this talk, I shall report on the setting up of methods for an innovative approach to analysing text, and present some initial findings. We have proposed that there is a hitherto unexplored textual feature, which we call ‘repulsion’, which operates on the construction of meaning in an opposing way to that of word collocation. To illustrate, we do not say ‘cheerfully happy’ even though we say ‘blissfully happy’. We are focussing on ‘lexical repulsion’, by which we mean the intuitively-observed tendency in conventional language use for certain pairs of words not to occur together, for no apparent reason other than convention, and I shall focus on the particular case of repulsion between sense-related word pairs. Our goal is to establish how repulsion as a whole operates and to what extent it is an objective and measurable ‘force’. It is anticipated that this approach will have implications for corpus linguistics, language teaching and NLP. -------------------------------- Date and time: Monday 22nd October 2007 at 16:00 Location: UG40, School of Computer Science Title: Ant communication - it takes two to tandem Speaker: Tom Richardson Institution: Biological Sciences, University of Bristol Host: Chrisantha Fernando Abstract: The study of active animal communication, and more specifically, active social learning, has historically been limited to vertebrates. However, ants can perform a limited form of 'teaching', a behaviour that was thought to be limited to humans, via tandem running.
Here I outline the mechanics of tandem running, and discuss how ostensibly complex behaviours can emerge from simple algorithmic rules. -------------------------------- Date and time: Monday 12th November 2007 at 16:00 Location: UG40, School of Computer Science Title: Hovering Information - Self-Organising Information that Finds its Own Storage Speaker: Giovanna Di Marzo Serugendo Host: Xin Yao Abstract: Hovering information is a concept characterising self-organising information responsible for finding its own storage on top of a highly dynamic set of mobile devices. The main requirement of a single piece of hovering information is to keep itself stored at some specified location, despite the unreliability of the device on which it is stored. Whenever the mobile device, on which the hovering information is currently stored, leaves the area around the specified storage location, the information has to hop - "hover" - to another device. This talk presents the hovering information model as well as preliminary results of simulations obtained using an "Attractor Point" algorithm that allows single pieces of hovering information to be attracted to their respective locations. -------------------------------- Date and time: Monday 26th November 2007 at 16:00 Location: UG40, School of Computer Science Title: Biological Computing Substrates Speaker: Klaus-Peter Zauner (http://www.ecs.soton.ac.uk/people/kpz) Institution: Electronics & Computer Science, University of Southampton (http://www.ecs.soton.ac.uk/) Abstract: A crucial difference sets apart present computing technology from information processing mechanisms utilised by organisms: The former is based on formalisms which are defined in disregard of the physical substrate used to implement them, while the latter directly exploit the physico-chemical properties of materials. 
There are many advantages to isolating operation from implementation, as is the case in current computers, but these come at the cost of low efficiency. In applications where size and energy consumption are tightly restricted, or where real-time response to ambiguous data is required, organisms cope well but existing technology is unsatisfactory. Taking heed of these clues from biology, the question arises of how the realm of computer science can be extended from formal to physical information paradigms. The aim is to arrive at a technology in which the course of computation is driven by the physics of the implementation substrate rather than arbitrarily enforced. The traditional tools and approaches of computer science are ill-suited to this task. In particular, it will be necessary to orchestrate autonomously acting components to collectively yield desired behaviour without the possibility of prescribing individual actions. Bio-electronic hybrid systems can serve as a starting point to explore approaches to computing with autonomous components. We take a two-pronged approach in which we recruit both molecules and complete cells as biological computing substrate. Molecules offer reproducible nonlinearity, self-assembly, and high integration density of complex input-output mappings. Cells, on the other hand, provide cheap and fast nano-engineering through self-reproduction, built-in quality assurance through testing at the point of assembly, self-reconfiguration, and self-repair. Molecules, however, require infrastructure, and cells are typically too complex for efficient computation. Our expectation, therefore, is that in the long term practical biological computing substrates will be situated at the supramolecular and subcellular level, i.e., at the interface between inanimate and animate matter. 
-------------------------------- Date and time: Monday 3rd December 2007 at 16:00 Location: UG40, School of Computer Science Title: Ideomotor Theory and Imitation as the basis of a Robot Tutoring System Speaker: Joe Saunders (http://homepages.feis.herts.ac.uk/~sj2ay/) Institution: Adaptive Systems Research Group, University of Hertfordshire (http://adapsys.feis.herts.ac.uk/) Host: Nick Hawes Abstract: One way in which a robotic system might be considered `intelligent' may be an ability to learn and then generalise its learnt abilities in situations not previously encountered. Thus the ability to learn may be a prerequisite for a robot to be adaptable when coping with changing environments, to be useful when dealing with changing user requirements and expectations, and to be in itself a mechanism which supports the idea of intelligence and intelligent behaviour. One of the aims of the robotics research presented here is to study adaptive mechanisms which support learning, with a focus on supervised learning via the interaction between a human teacher and a robot learner. This interaction is based upon learning via imitation from observation of others as well as self-imitation, where the learner learns by reproducing actions it has perceived while another manipulates its body in order to highlight the affordances and effectivities available for action. In this talk I will suggest that the psychological notion of `Ideomotor Theory' may be a good starting point from which to construct adaptable robots which support learning from imitation. In doing this I will explain the ideas behind ideomotor theory, ranging from the initial work of Lotze and James to more recent research by Prinz where the theory is extended to support imitation. 
I will compare and contrast Ideomotor Theory with other prevailing theories of imitation, such as Active Intermodal Matching and Associative Sequence Learning, and suggest that the ideomotor approach may have more to offer the roboticist in conceiving learning systems. Finally, I will demonstrate how these ideas can be put into practice with some examples of scaffolded teaching used to teach physical robots an object-following task. -------------------------------- Date and time: Monday 28th January 2008 at 16:00 Location: UG40, School of Computer Science Title: Unnoticed Seeing Speaker: Andy Clark (http://www.philosophy.ed.ac.uk/staff/clark.html) Institution: Philosophy, University of Edinburgh (http://www.philosophy.ed.ac.uk/index.html) Host: Aaron Sloman Abstract: There is now substantial evidence for preserved, and sometimes surprisingly rich, representations in some forms of 'change blindness'. But what does this mean for the elusive topic of conscious seeing? Do we visually experience, at the time of encounter, the elements that later surface in experiments revealing preserved representations, or were they merely registered in some non-conscious manner? Drawing on recent work by Fred Dretske, I shall argue that not only do we encode more than it sometimes appears, but that we experience more than we sometimes notice that we experience. Such a view can seem empirically puzzling. How can we demonstrate that experience outruns a subject's capacity to notice what they are experiencing? And if we can show this, or at any rate make it plausible that this is so, what does this tell us about the functional role of conscious perceptual experience? 
-------------------------------- Date and time: Monday 25th February 2008 at 16:00 Location: UG40, School of Computer Science Title: Metaphor and language technology Speaker: Rodrigo Agerri (http://www.cs.bham.ac.uk/~rxa/) Institution: School of Computer Science (http://www.cs.bham.ac.uk/) Abstract: Metaphorical language is common in most forms of everyday language, from ordinary conversation ("having ideas in the back of the mind") to newspaper articles ("the NASDAQ dropped off a cliff"). Metaphor is important in part because it is an economical and directly appealing way of talking about many sorts of subject matter in human life, such as time, money, relationships, emotions, politics, etc. Most importantly, metaphor can have major effects on what can be properly inferred from an utterance or passage. Metaphor has seen much study in disciplines such as Cognitive Linguistics, Applied Linguistics, Psychology and Philosophy, but much less so in Natural Language Processing. This trend is starting to change, as recent developments within the NLP community are already acknowledging the importance of processing figurative uses of language. However, to date there is no common evaluation framework or corpus for metaphor processing. Consequently, previous computational approaches to metaphor have mainly been stand-alone systems whose small-scale evaluations are not empirically comparable. I will present an overview of the various initiatives devised within our group to bring metaphor processing into the development of textual inference systems, emphasizing the need for objective evaluation. We believe that the ability to process metaphor may improve the performance of textual inference systems. Furthermore, it would provide a much-needed general semantic framework for common evaluations and computational testing of theories that aim to explain open-ended usages of metaphor in everyday text. 
-------------------------------- Date and time: Monday 3rd March 2008 at 16:00 Location: UG40, School of Computer Science Title: Gaussian Processes for Online Information Processing Speaker: Mike Osborne (http://www.robots.ox.ac.uk/~mosb/) Institution: Robotics Research Group, Department of Engineering Science, University of Oxford (http://www.robots.ox.ac.uk) Host: Peter Tino Abstract: We propose a powerful prediction algorithm built upon Gaussian processes (GPs). They are particularly useful for their flexibility, facilitating accurate prediction even in the absence of strong physical models. GPs further allow us to work within a completely Bayesian framework. As such, we show how the hyperparameters of our system can be marginalised by use of Bayesian Monte Carlo, a principled method of approximate integration. We employ the error bars of the GP's prediction as a means to select only the most informative observations to store. This allows us to introduce an iterative formulation of the GP to give a dynamic, on-line algorithm. We also show how our error bars can be used to perform active data selection, allowing the GP to decide where and when it should next take a measurement. We demonstrate how our methods can be applied to multi-sensor prediction problems where data may be missing, delayed and/or correlated. In particular, we present a real network of weather sensors as a testbed for our algorithm. -------------------------------- Date and time: Monday 10th March 2008 at 16:00 Location: UG40, School of Computer Science Title: Working in a Virtual World Speaker: David Burden (http://www.daden.co.uk, david.burden@daden.co.uk) Host: William Edmondson Abstract: For the last decade chatbot development has been primarily focussed on the web and the PC. However the last couple of years have seen the emergence of a new interaction model – the virtual world. 
Here the computer creates a complete 3D environment, and the user, represented by their own avatar, can move around the 3D space, meet and interact with avatars controlled by other users, and change and build new environments and new devices. Linden Lab's Second Life is probably the best current example of an open virtual world. The idea of creating artificial avatars within a virtual world has generated a lot of interest, since for the first time, within the context of the Turing Test, it places the human and the computer on an equal footing. Both are operating at a level of abstraction beyond their "normal" environment - and there are no initial visual clues to tell them apart. As such the computer is finally presented with a level playing field in which to take the Turing Test. This seminar will look at the potential that virtual worlds offer for the development and testing of chatbots and artificial characters, what new challenges such worlds present, and how a Turing Test in a virtual world may differ from the conventional Turing Test. --- David Burden started his career in army communications managing a range of mobile and wireless systems in a variety of challenging situations. After being "demobbed" in 1990, David joined Ascom, the Swiss telecoms company, and then Aseriti, the £70m turnover IT arm of Severn Trent plc. During this time David was involved in the operation and development of both conventional and web based mobile systems, before moving on to the commercial side of the business, ultimately becoming the company's Marketing Director. During the Dot Com boom David founded a wireless data company developing both WAP and Voice XML systems, as well as founding the Midlands chapter of the First Tuesday Networking organisation. David founded Daden, a Virtual Worlds and Information 2.0 Consultancy, in 2004. David has been involved in virtual worlds since the mid 1990s, having created early spaces using VRML and played in several early 3D communal worlds. 
David's first virtual home was at Retsmah Crossing in Alpha World and he spent much of the early noughties hoverboarding off the slopes of Nene's Peak - the giant volcano in There. David has been in Second Life since 1994, where his real-life and SL business Daden Limited helps businesses and organisations explore the social and commercial potential of virtual worlds. David also has a keen interest in artificial intelligence and Daden have an AI platform for use both in SL and on the web. David is a Chartered Engineer and lives in Birmingham - where he is active in City and Regional business, technology and innovation initiatives. -------------------------------- Date and time: Monday 17th March 2008 at 16:00 Location: UG40, School of Computer Science Title: Distributed Coordination for Robotic Agents Speaker: Alessandro Farinelli (http://www.ecs.soton.ac.uk/people/af2) Institution: School of Electronics and Computer Science, University of Southampton (http://www.ecs.soton.ac.uk/) Host: Mohan Sridharan Abstract: Robotic agents involved in real world applications (search and rescue, surveillance, environmental monitoring, etc.) have to face several issues such as constraints on communication and computation, unpredictable world changes, and communication failures. We consider the problem of performing distributed coordination of robotic agents operating in such environments. Specifically, we address the generic problem of maximising social welfare within a group of interacting robots. We show how approximated techniques can be extremely useful for robotic systems, presenting results we obtained in several scenarios including the RoboCup Rescue Simulator and a Multi-Robot System composed of AIBO platforms, using a token-based approach to task assignment. Finally, we focus on a novel distributed optimisation technique (max-sum) based on the sum-product algorithm. Max-sum allows the coordination problem to be addressed through local message passing. 
We empirically evaluate this approach on a canonical coordination problem (graph colouring), compare it against state-of-the-art approximate and complete algorithms, and validate its applicability in a real-world scenario by implementing it on low-power Chipcon CC2431 System-on-Chip devices. -------------------------------- Date and time: Monday 7th April 2008 at 16:00 Location: UG40, School of Computer Science Title: Message Passing Algorithms and Estimation of Distribution Algorithms Speaker: Jose A. Lozano (http://www.sc.ehu.es/ccwbayes/members/jalozano/home/index.html) Institution: Intelligent Systems Group, Department of Computer Science and Artificial Intelligence, University of the Basque Country (http://www.sc.ehu.es/ccwbayes/home/index.html) Host: Ramon Sagarna Abstract: In the talk, I will introduce message-passing algorithms (MPAs) and put them in connection with evolutionary computation, particularly with estimation of distribution algorithms (EDAs). MPAs have been used in many different fields. For instance, in artificial intelligence, they have been used to carry out inference in probabilistic graphical models, while in information theory, they have been employed in error correcting codes. Recently, these algorithms have been used in the context of optimization with great success. The talk will describe the basic elements and properties of MPAs over probabilistic graphical models from an optimization point of view. In addition, we will present some ideas for incorporating these algorithms into EDAs. 
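Both abstracts above turn on local message passing over a factor graph. As an illustrative toy, not the speakers' implementations, here is a minimal max-sum solver for the graph-colouring benchmark mentioned in the Farinelli abstract; the utility function, symmetry-breaking noise and normalisation are my own assumptions:

```python
import numpy as np

def max_sum_colouring(n_vars, edges, n_colours, iters=20):
    """Toy max-sum: variables are graph nodes; one factor per edge
    rewards 0 for a properly coloured edge and -1 for a clash."""
    def util(a, b):
        return 0.0 if a != b else -1.0

    rng = np.random.default_rng(0)
    # tiny random unary preferences break the symmetry between colours
    unary = rng.normal(scale=0.01, size=(n_vars, n_colours))

    msg_fv = {(e, v): np.zeros(n_colours) for e in edges for v in e}

    for _ in range(iters):
        # variable -> factor: unary plus all incoming factor
        # messages except the one from the recipient factor
        msg_vf = {}
        for e in edges:
            for v in e:
                total = unary[v].copy()
                for e2 in edges:
                    if v in e2 and e2 != e:
                        total += msg_fv[(e2, v)]
                msg_vf[(v, e)] = total - total.mean()  # normalise
        # factor -> variable: maximise utility plus the other
        # endpoint's message over that endpoint's colour
        for (u, w) in edges:
            e = (u, w)
            for v, other in ((u, w), (w, u)):
                out = np.empty(n_colours)
                for a in range(n_colours):
                    out[a] = max(util(a, b) + msg_vf[(other, e)][b]
                                 for b in range(n_colours))
                msg_fv[(e, v)] = out

    # decode: each variable takes the colour maximising its belief
    assign = []
    for v in range(n_vars):
        belief = unary[v].copy()
        for e in edges:
            if v in e:
                belief += msg_fv[(e, v)]
        assign.append(int(np.argmax(belief)))
    return assign
```

On tree-structured graphs max-sum is exact, so the beliefs recover an optimal (here, properly coloured) assignment; on graphs with cycles it is the kind of approximate technique both abstracts discuss, and each message only ever travels along one edge, which is what makes the scheme local and distributable to devices like the CC2431.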
-------------------------------- Date and time: Monday 21st April 2008 at 16:00 Location: UG40, School of Computer Science Title: Factorial Switching Linear Dynamical Systems for Physiological Condition Monitoring Speaker: Chris Williams (http://www.dai.ed.ac.uk/homes/ckiw/) Institution: Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh (http://anc.ed.ac.uk/) Host: Zeyn Saigol Abstract: Condition monitoring often involves the analysis of measurements taken from a system which "switches" between different modes of operation in some way. Given a sequence of observations, the task is to infer which possible condition (or "switch setting") of the system is most likely at each time frame. In this paper we describe the use of factorial switching linear dynamical models for such problems. A particular advantage of this construction is that it provides a framework in which domain knowledge about the system being analysed can easily be incorporated. We demonstrate the flexibility of this type of model by applying it to the problem of monitoring the condition of a premature baby receiving intensive care. The state of health of a baby cannot be observed directly, but different underlying factors are associated with particular patterns of measurements, e.g. in the heart rate, blood pressure and temperature. We use the model to infer the presence of two different types of factors: common, recognisable regimes (e.g. certain artifacts or common physiological phenomena), and novel patterns which are clinically significant but have unknown cause. Experimental results are given which show the developed methods to be effective on real intensive care unit monitoring data. -------------------------------- Date and time: Monday 28th April 2008 at 16:00 Location: UG40, School of Computer Science Title: The smooth relevance vector machine: controlling sparsity in Bayesian regression and classification. 
Speaker: Richard Everson (http://www.secamlocal.ex.ac.uk/people/staff/reverson/index.php/Main/HomePage) Institution: School of Engineering, Computer Science and Mathematics, The University of Exeter (http://www.secam.ex.ac.uk/) Host: Peter Tino Abstract: Enforcing sparsity constraints has been shown to be an effective and efficient way to obtain state-of-the-art results in regression and classification tasks. Unlike the support vector machine (SVM) the relevance vector machine (RVM) explicitly encodes the criterion of model sparsity as a prior over the model weights. However the lack of an explicit prior structure over the weight variances means that the degree of sparsity is to a large extent controlled by the choice of kernel (and kernel parameters). This can lead to severe overfitting or oversmoothing, possibly even both at the same time (e.g. for the multiscale Doppler data). We detail an efficient scheme to control sparsity in Bayesian regression by incorporating a flexible noise-dependent smoothness prior into the RVM. We present an empirical evaluation of the effects of choice of prior structure on a selection of popular data sets and elucidate the link between Bayesian wavelet shrinkage and RVM regression. Our model encompasses the original RVM as a special case, but our empirical results show that we can surpass RVM performance in terms of goodness of fit and achieved sparsity as well as computational performance in many cases. For classification we show how multi-objective optimisation can be used to effectively control sparsity and yield an approximate Pareto front of solutions in true-positive -- false positive -- complexity space. 
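For context on how the RVM's prior over weights produces sparsity, here is a minimal evidence-maximisation sketch in the style of Tipping's original RVM (the baseline the talk extends, not the smoothness-prior model it presents); the pruning threshold and all names are my own assumptions:

```python
import numpy as np

def rvm_regression(Phi, t, noise_var=0.01, iters=50, prune=1e6):
    """Sparse Bayesian linear regression, RVM-style: each weight
    gets its own precision alpha_i, re-estimated by evidence
    maximisation; weights whose precision diverges are pruned,
    which is where the sparsity comes from."""
    n, m = Phi.shape
    alpha = np.ones(m)           # per-weight prior precisions
    beta = 1.0 / noise_var       # observation noise precision
    for _ in range(iters):
        # posterior over weights given current hyperparameters
        A = np.diag(alpha)
        Sigma = np.linalg.inv(A + beta * Phi.T @ Phi)
        mu = beta * Sigma @ Phi.T @ t
        # MacKay-style update: gamma_i measures how well weight i
        # is determined by the data rather than by its prior
        gamma = 1.0 - alpha * np.diag(Sigma)
        alpha = gamma / (mu ** 2 + 1e-12)
        alpha = np.minimum(alpha, prune)   # cap divergent precisions
    mu[alpha >= prune] = 0.0               # pruned -> exactly sparse
    return mu
```

The abstract's point is that in this baseline nothing above constrains how the alphas vary across basis functions, so the achieved sparsity ends up governed by the kernel choice; the talk's smoothness prior adds exactly that missing structure.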
-------------------------------- Date and time: Monday 12th May 2008 at 16:00 Location: UG40, School of Computer Science Title: Applications of Natural Computing to the Protein Folding Problem Speaker: Roy Johnston (http://www.tc.bham.ac.uk/~roy/) Institution: School of Chemistry (http://www.chem.bham.ac.uk/) Host: Xin Yao Abstract: In this talk, I will present an overview of some of the recent work we have carried out in the study of the folding of model proteins, using algorithms inspired by nature - specifically: genetic algorithms; ant colony optimization; and immune algorithms. I will also describe the coupling of principal component analysis and disconnectivity tree diagrams to enable the visualisation of hyperdimensional energy landscapes for protein folding. References Graham A. Cox, Thomas V. Mortimer-Jones, Robert P. Taylor and Roy L. Johnston, "Development and Optimisation of a Novel Genetic Algorithm for Studying Model Protein Folding", Theor. Chem. Acc. 2004, 112, 163-178. Gareth J. Rylance, Roy L. Johnston, Yasuhiro Matsunaga, Chun-Biu Li, Akinori Baba and Tamiki Komatsuzaki, "Topographical Complexity of Multidimensional Energy Landscapes", Proc. Natl. Acad. Sci. USA 2006, 103, 18551-18555. -------------------------------- Date and time: Monday 19th May 2008 at 16:00 Location: UG40, School of Computer Science Title: Generating artificial genetic regulatory networks with evolutionary algorithms Speaker: Maria Schilstra (http://strc.herts.ac.uk/bio/maria/) Institution: Biological & Neural Computation Laboratory, Science & Technology Research Institute, University of Hertfordshire (http://homepages.feis.herts.ac.uk/~nngroup/bncg.html) Host: Dov Stekel Abstract: Genetic regulatory networks (GRNs) underlie the development of single fertilized egg cells into multi-cellular organisms in which different cells have different functions. 
We are attempting to harness some of the potential of these control networks by using evolutionary algorithms (EAs) to create artificial GRNs that have pre-defined behavioural characteristics. I will briefly outline current ideas about the structure, function, and dynamic behaviour of biological GRNs, and indicate how some of their features are modelled in our artificial developmental system. I will then discuss the results that we have obtained so far on artificial GRN systems that respond to a noisy incoming periodic signal in a way that is reminiscent of biological circadian clocks. If time permits, I will also present some of our latest work on the application of our GRN/EA system for autonomous generation of diffusion gradients for positional information in a growing multi-cellular aggregate. Selected references: JF Knabe, CL Nehaniv, MJ Schilstra (2008). "Genetic Regulatory Network Models of Biological Clocks: Evolutionary History Matters." Artificial Life 14: 135-148. MJ Schilstra, CL Nehaniv (2008). "Bio-Logic: Gene Expression and the Laws of Combinatorial Logic." Artificial Life 14: 121-133. JF Knabe, CL Nehaniv, MJ Schilstra, T Quick (2006). "Evolving Biological Clocks using Genetic Regulatory Networks." Proc. Artificial Life X, eds. LM Rocha et al., MIT Press: Boston, MA, pp. 15-21. -------------------------------- Date and time: Friday 13th June 2008 at 16:00 Location: UG40, School of Computer Science Title: What limits the biological evolution of cultural evolution? Modelling modularity in evolution and learning Speaker: Joanna Bryson (http://www.cs.bath.ac.uk/~jjb/) Institution: Department of Computer Science, University of Bath (http://www.cs.bath.ac.uk/department/) Host: Nick Hawes Abstract: Since Darwin first presented the theory of natural selection, scientific (and other) debate has focussed on whether this simple process can explain the level of diversity we witness in nature. 
Recently, EvoDevo has focussed research on the way modularity is used to develop complexity. The primary research questions here are both proximate: how modules differentiate themselves from homogeneous initial conditions, and ultimate: why this is a useful strategy. These questions can equally be applied to the interacting systems that provide intelligent behaviour: biological and cultural evolution, and individual learning and planning. My initial interest in cultural evolution arose from an intuitive dislike of one strand of research on language origins, which suggested that language was "extra-Darwinian" because it requires the evolution of altruism. In previous work (Cace and Bryson 2007) I have shown that altruistic communication is easily evolved. What then limits the extent to which species utilise culture for rapidly evolving intelligent behaviour? I believe there are a number of mechanisms: *) Speed vs Reliability trade-offs: These trade-offs are fairly well understood when applied to modular learning systems in individuals --- particularly short term and long term learning and memory (Bryson & Leong 2007). I believe similar concerns apply here. *) The Baldwin effect: The mechanisms which shape evolution (including the speed of change in the environment) determine what will in the long term be encoded genetically and what is left to individual learning. I believe this statement can be extended to include the third process of cultural evolution. *) Niche size and competition with other species: I believe the advantage of the cognitive strategy, given its costs and benefits, is more limited than we tend to realize. *) Representational issues: Just as biological complexity has been dependent on the evolution of genetic representations for modular and hierarchical instructions, so cultural complexity is limited by the capacity to transmit not only quantity but structure. 
I hypothesise about representational advantages humans have over other species for the evolution of languages (cf. Bryson 2007; 2008). In my talk I present these factors and my evidence for them to date, mostly in the form of agent-based models, but with some models of individual task learning. I welcome feedback and discussion as this is very much work in progress. References: Joanna J. Bryson ``Embodiment versus Memetics'', Mind & Society, 7(1):77-94, June 2008. Joanna J. Bryson, ``Representational Requirements for Evolving Cultural Evolution'', invited and reviewed target article (and responses) in interdisciplines' Web conference, Adaptation and Representation 28 May 2007. Joanna J. Bryson and Jonathan C. S. Leong ``Primate Errors in Transitive `Inference''' Animal Cognition, 10(1):1-15, January 2007. Ivana Cace and Joanna J. Bryson, ``Agent Based Modelling of Communication Costs: Why Information can be Free'', in Emergence and Evolution of Linguistic Communication C. Lyon, C. L Nehaniv and A. Cangelosi, eds., pp. 305-322, Springer 2007. -------------------------------- Date and time: Monday 16th June 2008 at 16:00 Location: LG33, Learning Centre Title: The Learning and Grounding of Language in Cognitive Agents and Robots Speaker: Angelo Cangelosi (http://www.tech.plym.ac.uk/soc/staff/angelo/) Institution: School of Computing, Communications & Electronics, University of Plymouth (http://www.plymouth.ac.uk/pages/view.asp?page=7491) Host: Ben Jones Abstract: Computational approaches to modeling adaptive behavior and cognition, such as artificial life and cognitive robotics, are advantageous when studying the evolutionary and developmental learning of language and communication (Cangelosi & Parisi 2002). In these models, the level of description of the communicating agents and their environment varies significantly. This constitutes a continuum from ungrounded, abstract agent models to grounded multi-agent and robotic approaches. 
Our approach focuses on the use of adaptive grounded agents where (i) symbols are directly grounded in the agents’ own sensorimotor and cognitive abilities and (ii) the communicative/linguistic behaviour evolves/develops through the interaction of agents in their physical and social environment. In such grounded adaptive agent models, the perceptual, motor, cognitive and linguistic capabilities of the agents are controlled by evolving neural networks. Various models and simulations on the evolution and emergence of linguistic communication will be discussed. For example, a simulation studies the evolutionary origins of proto-linguistic categories, such as nouns and verbs, and their grounding in sensorimotor abilities (Cangelosi 2001). The techniques of categorical perception and synthetic brain imaging are used to analyze the sensorimotor bases of linguistic structure. Analyses on the agents’ neural network controllers show that the neural processing of verbs is consistently localized in the regions of the networks that perform sensorimotor integration. Nouns, instead, are associated with sensory processing areas (Cangelosi & Parisi, 2004). The agent grounding approach is then extended to an epigenetic robotic model of action learning and grounding transfer (Cangelosi & Riga, 2006). These studies demonstrate that agents can use previously grounded labels of actions to generate new composite sensorimotor concepts. Finally, we introduce the new EU project “ITALK” on the integration and transfer of action and language knowledge in the humanoid robotic platform iCub. References and links to papers: Cangelosi A., Parisi D. (2002). Simulating the Evolution of Language. London: Springer. http://www.tech.plym.ac.uk/soc/staff/angelo/angelo_pubs.htm Cangelosi A. (2001). Evolution of communication and language using signals, symbols and words. IEEE Transactions on Evolutionary Computation. 
5(2), 93-101 http://www.tech.plym.ac.uk/soc/staff/angelo/papers/Cangelosi-IEEE-2001.pdf Cangelosi A. & Harnad S. (2000). The adaptive advantage of symbolic theft over sensorimotor toil: Grounding language in perceptual categories. Evolution of Communication, 4(1), 117-142 http://www.tech.plym.ac.uk/soc/staff/angelo/papers/cangelosi-harnad-evocom.pdf Cangelosi A., Parisi D. (2004). The processing of verbs and nouns in neural networks: Insights from synthetic brain imaging. Brain and Language, 89(2), 401-408 http://www.tech.plym.ac.uk/soc/staff/angelo/papers/cangelosi_parisi_BrainLanguage.pdf Cangelosi A, Riga T (2006). An embodied model for sensorimotor grounding and grounding transfer: Experiments with epigenetic robots, Cognitive Science, 30(4), 673-689 http://www.tech.plym.ac.uk/soc/staff/angelo/papers/cangelosi-riga-cognitivescience2006.pdf ITALK project: http://italkproject.org/ -------------------------------- Date and time: Friday 4th July 2008 at 16:00 Location: UG40, School of Computer Science Title: Bio-inspired Telecommunications Speaker: Muddassar Farooq (http://www.nexginrc.org/) Institution: National University of Computer and Emerging Sciences, Pakistan (http://www.nu.edu.pk) Host: Thorsten Schnier Abstract: The rapid advances in computing and transmission technologies are giving impetus to the large-scale deployment of interconnected systems for communication and transport of data, voice, video and resources. The global Internet, and cellular, satellite, and Wi-Fi networks, as well as power and logistic networks, just to mention a few remarkable examples, are at the same time ubiquitous and at the very heart of the functioning and success of modern societies. On the other hand, all these networks are increasingly heterogeneous, complex, and dynamic, such that they present a number of challenging issues concerning their analysis and design, management and control, robustness and security. 
Biological systems show a number of properties, such as self-organization, adaptivity, scalability, robustness, autonomy, locality of interactions, and distribution, which are highly desirable for dealing with the growing complexity of current and future networks. Therefore, in recent years a growing number of effective solutions for problems related to networked systems have been proposed by taking inspiration from the observation of natural systems and processes such as natural selection, insect societies, immune systems, cultural systems, collective behaviors of groups of animals/cells, etc. The aim of the tutorial is to present cutting-edge research on Bio/Nature-inspired approaches to network-related problems. The tutorial will also focus on protocol engineering, in order to introduce different frameworks that have been developed to realize such Bio/Nature-inspired protocols inside the network stack of the Linux kernel. The presentation will also introduce a comprehensive performance evaluation framework that is crucial for getting an insight into the behavior of a Bio/Nature-inspired routing algorithm over the wide operational landscape of a real network. The designers of routing protocols can use it to verify their Linux model by comparing important performance values obtained from the Linux model with those from the simulation model. The presentation will also introduce a novel testing framework for MANETs to compare and verify the results of MANET routing protocols in real-world MANETs. Last but not least, the tutorial will also introduce security threats that a designer has to be aware of when deploying such protocols in real-world fixed networks and MANETs. We believe that the tutorial will be instrumental in highlighting the potential of Bio/Nature-inspired protocols in real-world networks. Towards the end, a brief introduction to "Bio-inspired Security Solutions for Enterprise Security" will be given. 
The tutorial is intended for telecommunication managers, protocol developers, network engineers, network software developers and optimization researchers who want to work on non-linear real-time dynamic problems. -------------------------------- Date and time: Monday 14th July 2008 at 16:00 Location: UG40, School of Computer Science Title: A Syntactic Justification for Occam's Razor Speaker: John Woodward (http://www.cs.nott.ac.uk/~jrw/) Institution: Formerly at The University of Birmingham, currently at The University of Nottingham, soon to be at The University of Nottingham Ningbo, China. (http://www.cs.nott.ac.uk/) Abstract: Informally, Occam's razor states, "given two hypotheses which equally agree with the observed data, choose the simpler", and has become a central guiding heuristic in the empirical sciences. We criticize previous arguments for the validity of Occam's razor, which amount to circular arguments. The nature of hypothesis spaces is explored and we observe a correlation between the complexity of a concept yielded by a hypothesis and the frequency with which it is represented when the hypothesis space is uniformly sampled. We argue that there is not a single best hypothesis but a set of hypotheses which give rise to the same predictions (i.e. they are semantically equivalent), whereas Occam's razor suggests there is a single best hypothesis. We prefer one set of hypotheses over another set because it is the larger set (and therefore the more likely) and the larger set happens to contain the simplest consistent hypothesis. This gives the appearance that simpler hypotheses generalize better. Thus, the contribution of this paper is the justification of Occam's razor by a simple counting argument. Occam's razor is in contrast to the No Free Lunch theorems and Conservation of Generalisation results. These results state that it is impossible to generalize if all functions are assumed equally likely. These results assume a uniform distribution over functions. 
Rather than assuming a uniform distribution over semantic structures (i.e. functions), we assume a uniform distribution over syntactic structures (i.e. programs) to justify Occam's razor. -------------------------------- Date and time: Thursday 31st July 2008 at 16:00 Location: UG40, School of Computer Science Title: Task encoding and machine learning for dynamically dexterous robot behaviours Speaker: Subramanian Ramamoorthy (http://homepages.inf.ed.ac.uk/sramamoo/index.html) Institution: School of Informatics, The University of Edinburgh (http://www.inf.ed.ac.uk/) Host: Mohan Sridharan Abstract: Robust autonomy and dynamical dexterity are two important qualities we expect of modern robots (that we hope will one day go out and perform a variety of activities ranging from rescue and exploration to playing football). The combination of these two qualities has been hard to achieve. However, this is an important aspect of intelligent autonomous behaviour - in artificial and natural systems. I will begin with a discussion of what it means to be robust, autonomous and dynamically dexterous; why we care and where many existing methodologies fall short of achieving the desired objectives. Then, I will outline a specific approach to task encoding, planning and control - arguing in favour of a layered representation involving consistent symbolic abstractions of the underlying continuous dynamics and learning algorithms that leverage this structure. The conceptual ideas will be supported by concrete examples such as a control strategy for dynamical bipedal walking on irregular terrain. Finally, I will provide a brief overview of our current work within this research program. 
-------------------------------- Date and time: Monday 19th January 2009 at 16:00 Location: UG40, School of Computer Science Title: Framework Theories: What are they, how many kinds are there, why are they needed, where do they come from, how do they change, how are they used, how are they represented, and why are they better than 'Core Knowledge'? Speaker: Aaron Sloman (http://www.cs.bham.ac.uk/~axs) Institution: School of Computer Science (http://www.cs.bham.ac.uk) Host: Nick Hawes Abstract: I have recently been giving talks about the implications of the observation that young humans can discover things empirically that later turn out not to be empirical (e.g. that counting fingers from left to right gives the same result as counting right to left.) I postulated that this is an important, previously unnoticed product of biological evolution which provides the basis for the development of human mathematical competences, at least in its early phases. At present it is not clear what sorts of mechanisms could account for such phenomena. Questions that arose after talks about the ideas, here and elsewhere, e.g. using these slides: http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#toddler led me to the conclusion that there are more intermediate cases than I had previously noticed. Attempting to categorise those intermediate cases led to the notion of a 'framework theory' which determines what an individual is capable of thinking about, perceiving, asking questions about, etc. (closely related to the philosopher Immanuel Kant's ideas in 1780). He, and others, e.g. developmental psychologists like Elizabeth Spelke, proposed that humans are born with some innate knowledge that provides the framework for everything else that is learnt. 
My alternative suggestion is that what is innate, at least in humans, is a *provisional* framework theory, combined with mechanisms that test the theory, discover its limits, debug it and extend it, and that this process can continue throughout life, and also across generations within a culture, as shown by the discontinuities in the history of science. This implies that the claim about transformations of empirical knowledge has to be modified: some things that are learnt empirically are later discovered to be necessary, while others are 'semi-necessary', i.e. necessary only relative to a current theory, which may or may not be a framework theory. Some of the ideas come from the work of Imre Lakatos on the history of mathematics in his 'Proofs and refutations'. The ideas also need to be debugged and extended, so the talk will simply be a progress report on work that still has a long way to go. -------------------------------- Date and time: Tuesday 3rd February 2009 at 16:00 Location: UG40, School of Computer Science Title: DyKnow: A Stream-Based Knowledge Processing Middleware -- Bridging the Sense-Reasoning Gap in Autonomous UAV Applications Speaker: Fredrik Heintz (http://www.ida.liu.se/~frehe/) Institution: Linköping University (http://www.liu.se/) Host: Nick Hawes Abstract: To achieve complex missions an autonomous UAV operating in dynamic environments must create and maintain situational awareness. This requires a steady flow of information from sensors to high level reasoning components. However, while sensors tend to generate noisy and incomplete quantitative data, reasoning often requires crisp symbolic knowledge. The gap between sensing and reasoning is quite wide, and cannot in general be bridged in a single step. Instead, this task requires a more general approach to integrating and organizing multiple forms of information and knowledge processing on different levels of abstraction in a structured and principled manner. 
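The sense-reasoning gap can be made concrete with a toy refinement step that turns a noisy quantitative stream into a crisp symbolic one. The thresholds, labels and hysteresis scheme below are invented for illustration and are not DyKnow's actual mechanism:

```python
def symbolize(samples, threshold=1.0, hysteresis=0.2):
    """Turn a noisy quantitative stream (e.g. speed estimates) into a crisp
    symbolic stream, with hysteresis so that sensor noise near the
    threshold does not cause the label to chatter."""
    symbol, out = "stopped", []
    for v in samples:
        if symbol == "stopped" and v > threshold + hysteresis:
            symbol = "moving"
        elif symbol == "moving" and v < threshold - hysteresis:
            symbol = "stopped"
        out.append(symbol)
    return out

labels = symbolize([0.1, 0.9, 1.1, 1.5, 1.1, 0.9, 0.5])
```

A real middleware chains many such steps, at several levels of abstraction, before symbolic reasoning sees the data.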
We propose knowledge processing middleware as a systematic approach for organizing such processing. In this talk I will present how DyKnow, a stream-based knowledge processing middleware framework, can bridge the gap in a concrete UAV traffic monitoring application. In the presented example, sequences of color and thermal images are used to construct and maintain qualitative object structures modeling the parts of the environment necessary to recognize the traffic behavior of the tracked vehicles in real-time. The system has been implemented and tested both in simulation and on data collected during test flights. -------------------------------- Date and time: Monday 9th February 2009 at 16:00 Location: UG40, School of Computer Science Title: Modelling the Sensory Hierarchy: From Gaze Shifts to Emotions Speaker: Matthew Casey (http://www.cs.surrey.ac.uk/BIMA/People/M.Casey/index.html) Institution: Department of Computing, University of Surrey (http://www.cs.surrey.ac.uk/) Host: Peter Tino Abstract: We have learnt much about the human senses, but have we learnt enough to build machines that can sense in any way that is comparable to a human? The simple answer is no, yet a multi-disciplinary approach to this problem still appears to be a viable solution. Here, our understanding of natural systems can help us to apply biological strategies to artificial systems, while computational modelling is a useful tool for exploring natural systems. Low-level sensory processing, particularly in the midbrain and forebrain, is one area where both neuroscientists and computer scientists can mutually benefit. Take for example multisensory integration in the superior colliculus, which combines visual, auditory and somatosensory stimuli to shift gaze. 
This brain structure combines the senses through topographic maps to form a multisensory representation of our environment, which then results in our eyes shifting to look at a particular location irrespective of whether the stimulus was heard, seen or felt. As a modelling study, this structure is ideal due to its well-understood architecture, functional specialism and evolutionary stability. Such computational models can help us address key neuroscience questions, including what role the cortex plays in controlling multisensory integration. At the same time, they allow us to understand how to develop artificial systems capable of integrating and reacting to different sensory stimuli seamlessly. The superior colliculus is but one interesting structure in the sensory hierarchy. Moving from the midbrain to the forebrain, the amygdala, which is responsible for priming our bodies for action upon sensing something threatening, holds similar neuroscience challenges and potential applications, notably rapid sensory processing without the complexity of the visual cortex. Again, this structure is intimately linked to the midbrain and low-level sensory processing, while it is moderated by the cortex. Yet further challenges lie in exploring the interactions between these and other hierarchical sensory structures, both bottom-up and top-down. In this talk we will look at how we have used a Hebbian-based neural network to model parts of the midbrain, forebrain and cortex. We have used this topographic conditioning map (TCM) to model the integration of auditory and visual stimuli in the superior colliculus, visual fear conditioning in the amygdala, as well as visual perceptual processing in the thalamus, visual and inferotemporal cortices. Each model has been developed to represent the neural structures involved sufficiently to explore their behaviour, allowing us to test biological hypotheses. 
However, they are also now starting to show the potential for practical application, such as multisensory integration in artificial agents and a facial expression classifier. In the long term, there may even be the possibility of using the techniques in a wearable vision system to help blind people avoid obstacles. We may not have learnt enough to build machines that can truly sense, but a multi-disciplinary approach is providing insight. -------------------------------- Date and time: Monday 16th February 2009 at 16:00 Location: UG40, School of Computer Science Title: Learning the Kernel: Theory and Applications Speaker: Yiming Ying (http://www.cs.ucl.ac.uk/staff/Y.Ying/) Institution: Department of Engineering Mathematics, University of Bristol (http://www.enm.bris.ac.uk) Host: Ata Kaban Abstract: I will talk about the kernel learning problem in supervised learning. The principal motivations range from the classical model selection problem of tuning the hyper-parameter in SVMs to data integration problems that enhance biological inference in bioinformatics. For the general kernel learning problem, I will approach it from regularization theory and statistical learning theory. Previous kernel methods for data integration focus on maximizing the margin in SVMs, which embodies the essential idea of sparse l^1-regularization. In contrast, we propose a novel approach based on Kullback-Leibler (KL) divergence between the output kernel matrix and the input kernel matrix to integrate heterogeneous data features. The potential advantage of this approach is its easy adaptation, by choosing the output matrix, to different learning tasks such as multi-label classification, multi-task learning and structured outputs. We formulate it as a difference of convex (DC) problem which can be solved by a sequence of convex semi-infinite linear programs. 
The effectiveness of the proposed algorithm is evaluated on a benchmark dataset for protein fold recognition and a yeast protein function prediction problem. -------------------------------- Date and time: Monday 23rd February 2009 at 16:00 Location: UG40, School of Computer Science Title: Efficient Bayesian Online Learning using an Ensemble of Experts Speaker: Narayanan Edakunni (http://www.cs.bristol.ac.uk/~nara) Institution: University of Bristol (www.bris.ac.uk) Host: Jeremy Wyatt Abstract: There have been a number of efficient learning algorithms geared towards batch learning, but very few that can learn from a stream of data in real time. In this talk we will be looking at an online learning algorithm that learns through independent localised models distributed along the input space, each performing a local linear fit for function approximation. We will examine the Bayesian modeling of the local experts and the resultant Bayesian inference procedures for learning the parameters of the model, along with some applications where this method could be used for efficient learning. -------------------------------- Date and time: Monday 9th March 2009 at 16:00 Location: UG40, School of Computer Science Title: OntoReg: A pharmaceutical regulatory knowledge base to support the validation process of new chemical products Speaker: Maricela Bravo Institution: University of Oxford Host: John Bullinaria Abstract: Currently, achieving regulatory compliance for a new pharmaceutical product is a costly and time-consuming process. The validation process involves many experts who generate and submit the required documentation to the regulatory authorities. Therefore, research and technological improvements to support this process will benefit the chemical industry by reducing the time required for documentation generation and for reviewing the validation plan. 
In this talk, I will present OntoReg, a Domain Ontology designed to represent the pharmaceutical production process knowledge and the regulatory rules to support the validation of new pharmaceutical products. The main subject of this talk will be a knowledge engineering approach for the extraction, representation and verification of regulatory rules, as well as their application to the validation process. -------------------------------- Date and time: Monday 16th March 2009 at 16:00 Location: UG40, School of Computer Science Title: Deriving dynamical models from palaeoclimatic records: application to glacial millennial-scale climate variability Speaker: Frank Kwasniok (http://secamlocal.ex.ac.uk/people/staff/fk206/) Institution: School of Engineering, Computing and Mathematics, University of Exeter (http://www.secam.ex.ac.uk/) Host: Peter Tino Abstract: Simple conceptual nonlinear dynamical models are derived from ice-core data, thus integrating models and theories with palaeoclimatic records. The method is based on parameter estimation using the unscented Kalman filter, a nonlinear extension of the Kalman filter. Unlike the conventional linear Kalman filter and the widely used extended Kalman filter, the unscented Kalman filter keeps the full system dynamics rather than linearising it, leading to a superior treatment of nonlinearities. The unscented Kalman filter truncates the filter probability density to a Gaussian in each iteration by only propagating first and second moments but neglecting higher-order moments. The method is applicable to both deterministic and stochastic models. It offers a practical and computationally cheap alternative to more complete but also considerably more cumbersome approaches like particle filters or Markov chain Monte Carlo methods. 
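The sigma-point idea behind the unscented transform can be illustrated in one dimension: the Gaussian is represented by a few deterministically chosen points, which are pushed through the nonlinearity and re-averaged, instead of linearising the dynamics. A minimal sketch with conventional (purely illustrative) scaling parameters:

```python
import math

def unscented_transform_1d(mean, var, f, alpha=1.0, beta=0.0, kappa=2.0):
    """Propagate a 1-D Gaussian N(mean, var) through a nonlinearity f
    by transforming weighted sigma points rather than linearising f."""
    n = 1
    lam = alpha**2 * (n + kappa) - n
    spread = math.sqrt((n + lam) * var)
    sigma_points = [mean, mean + spread, mean - spread]
    wm = [lam / (n + lam), 1.0 / (2 * (n + lam)), 1.0 / (2 * (n + lam))]
    wc = [wm[0] + (1 - alpha**2 + beta), wm[1], wm[2]]
    ys = [f(s) for s in sigma_points]
    y_mean = sum(w * y for w, y in zip(wm, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(wc, ys))
    return y_mean, y_var

m, v = unscented_transform_1d(1.0, 0.5, lambda x: x * x)
```

For f(x) = x^2 and x ~ N(1, 0.5) the transform recovers the exact mean m^2 + P = 1.5, whereas linearisation around the mean would give f(1) = 1.0, illustrating the "superior treatment of nonlinearities" while still only propagating first and second moments.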
Two different conceptual models for glacial millennial-scale climate transitions (the so-called Dansgaard-Oeschger events) are considered and their parameters estimated from a North Greenland ice-core record. Firstly, we adopt the model of stochastically driven motion in a potential that allows for two distinctly different states. The shape of the potential and the noise strength are determined from the data. The data reveal that during glacial times the potential is asymmetric and almost degenerate. There is a deep well corresponding to a cold stadial state and a very shallow well corresponding to a warm interstadial state. Secondly, a damped stochastically forced nonlinear oscillator is considered. The restoring force is given by a bistable potential. The shape of the potential, the damping coefficient and the noise level are estimated from the data. -------------------------------- Date and time: Monday 23rd March 2009 at 16:00 Location: UG40, School of Computer Science Title: New Applications of PAC-Bayes Analysis Speaker: John Shawe-Taylor (http://www.cs.ucl.ac.uk/staff/J.Shawe-Taylor/) Institution: Centre for Computational Statistics and Machine Learning, UCL (http://web4.cs.ucl.ac.uk/research/csml/) Host: Ata Kaban Abstract: PAC-Bayes techniques provide bounds on generalisation error of learning systems that are inspired by Bayesian analysis. We will review earlier work and go on to describe extensions of the technology that enable two new applications. The first is to maximum entropy classification, a thresholded linear classifier that regularises by maximising the entropy of the weights. The second application is to Gaussian process regression and extensions to fitting non-linear stochastic differential equation (SDE) models to observations. For the SDE case, the analysis is inspired by a variational Bayesian approximate inference algorithm that models the posterior distribution by a time varying linear stochastic differential equation. 
The approach provides a lower bound on the expected value of the fit of new data to the posterior marginal distribution. -------------------------------- Date and time: Monday 20th April 2009 at 16:00 Location: UG40, School of Computer Science Title: All You Can Eat Ontology-Building: Feeding Wikipedia to Cyc Speaker: Cathy Legg (http://www.waikato.ac.nz/wfass/staff/phils/clegg) Institution: University of Waikato (http://www.waikato.ac.nz/) Host: Aaron Sloman Abstract: In order to achieve a genuinely intelligent World Wide Web, it seems that building some kind of general machine-readable ontology is an inescapable task. Yet the past 20 years have shown that hand-coding formal ontologies is not practicable. A recent explosion of free user-supplied knowledge on the Web has led to great strides in automatic ontology-building (e.g. YAGO, DBpedia), but here quality-control is still a major issue. Ideally one should automatically build onto an already intelligent base. I suggest that the long-running Cyc project can finally come into its own here, describing methods developed at the University of Waikato over the past summer whereby 35K new concepts mined from Wikipedia were added to appropriate Cyc collections, and automatically categorized as instances or subcollections. Most importantly, Cyc itself was leveraged for ontological quality control by ‘feeding’ assertions to it one by one, allowing it to ‘regurgitate’ those that are ontologically unsound. Cyc is arguably the only ontology currently sophisticated enough to be able to perform such a ‘digestive’ function, using its principled taxonomic structure and purpose-built inference engine. It is suggested that a traditional fixation of AI researchers on realizing the intelligence of the brain has perhaps caused us to overlook more humble yet genuine steps towards the AI vision which might be gained by realizing the intelligence of the stomach. 
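The 'digestive' quality-control loop, feeding candidate assertions to a knowledge base one at a time and letting it regurgitate the ontologically unsound ones, can be caricatured in a few lines. Everything here (the toy taxonomy, the disjointness check, the candidate facts) is invented for illustration and is not Cyc's actual API or inference engine:

```python
# Toy knowledge base: each instance has known ancestor collections, and
# some collection pairs are declared mutually disjoint (hypothetical data).
ANCESTORS = {"Dog": {"Mammal", "Animal"}, "Rock": {"InanimateObject"}}
DISJOINT = {("Animal", "InanimateObject"), ("InanimateObject", "Animal")}

def is_consistent(instance, new_collection, ancestors, disjoint):
    """Reject 'instance isa new_collection' if it clashes with a
    disjointness constraint on anything already known about it."""
    return all((known, new_collection) not in disjoint
               for known in ancestors.get(instance, set()))

def digest(candidates, ancestors, disjoint):
    """Feed assertions one by one; keep the sound, regurgitate the rest."""
    accepted, rejected = [], []
    for instance, collection in candidates:
        if is_consistent(instance, collection, ancestors, disjoint):
            ancestors.setdefault(instance, set()).add(collection)
            accepted.append((instance, collection))
        else:
            rejected.append((instance, collection))
    return accepted, rejected

accepted, rejected = digest(
    [("Dog", "Pet"), ("Dog", "InanimateObject")], dict(ANCESTORS), DISJOINT)
```

The point of the caricature is the architecture, not the check itself: quality control comes from asserting into an already-principled taxonomy rather than from vetting the mined text.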
-------------------------------- Date and time: Monday 27th April 2009 at 16:00 Location: UG40, School of Computer Science Title: Finding and Exploiting Symmetries in MDPs Speaker: Ravi Balaraman Institution: Indian Institute of Technology - Chennai Host: Jeremy Wyatt Abstract: In this work we address the question of finding symmetries of a given MDP. We show that the problem is Isomorphism Complete, that is, the problem is polynomially equivalent to verifying whether two graphs are isomorphic. Apart from the theoretical importance of this result, it has an important practical application. The reduction presented can be used together with any off-the-shelf Graph Isomorphism solver, which performs well in the average case, to find symmetries of an MDP. In fact, we present results of using NAutY (the best Graph Isomorphism solver currently available) to find symmetries of MDPs. If time permits, I will talk about some earlier work on efficiently using symmetries with factored MDPs. -------------------------------- Date and time: Monday 11th May 2009 at 16:00 Location: UG40, School of Computer Science Title: Minds and their Places in Nature Speaker: Peter Simons (http://people.tcd.ie/psimons) Institution: Department of Philosophy, Trinity College Dublin (http://www.tcd.ie/Philosophy/) Host: Aaron Sloman and Darragh Byrne Abstract: The talk attempts to place mentality in its various manifestations in the framework of a systematic naturalistic metaphysics that the author has been developing since the 1990s. The metaphysical position involves an abandonment of the logico-linguistic methods of 20th century analytic philosophy in favour of a factored approach to ontological taxonomy combined with a systematic approach to its applications, with chiefly biological precedents. 
In this context the main functional phenomena characteristic of mentality, including agency, representation, awareness and signification, are anatomized and situated in an evolutionary framework in which their appearance is considered as a series of phylogenetic apomorphies. Minds have often been said to be emergent, but it is shown that while this is true in temporal and epistemic senses, it is contrary to naturalism to suppose that emergence is ontic in nature. The senses in which this view is or is not reductionistic are considered. The final section speculates on the future of minds and research into the forms of mentality. -------------------------------- Date and time: Monday 1st June 2009 at 16:00 Location: UG40, School of Computer Science Title: Robotic hand-eye coordination without global reference: A biologically inspired learning scheme Speaker: Martin Huelse (http://users.aber.ac.uk/msh) Institution: Department of Computer Science, University of Aberystwyth (http://www.aber.ac.uk/compsci/public/) Host: Nick Hawes Abstract: Understanding the mechanism mediating the change from inaccurate pre-reaching to accurate reaching in infants may confer advantage from both a robotic and biological research perspective. In this work, we present a biologically meaningful learning scheme applied to the coordination between reach and gaze within a robotic structure. The system is model-free and does not utilize a global reference system. The integration of reach and gaze emerges from the learned cross-modal mapping between reach and vision space as it occurs during the robot-environment interaction. The scheme showed high learning speed and plasticity compared with other approaches due to the low level of training data required. We discuss our findings with respect to biological plausibility and from an engineering perspective, with emphasis on autonomous learning and re-learning. 
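The cross-modal mapping idea, pairing each gaze configuration with the arm configuration experienced at the same moment and later answering reach queries by recall rather than through a shared world frame, can be sketched as follows. The representation, coordinates and distance measure are invented for illustration and are not the model from the talk:

```python
# Memory of (gaze, reach) pairs gathered during robot-environment
# interaction; no global coordinate frame is ever constructed.
def learn(memory, gaze, reach):
    """Store one co-occurring gaze/arm configuration pair."""
    memory.append((gaze, reach))

def reach_for(memory, gaze):
    """Answer a reach query by recalling the arm configuration whose
    associated gaze configuration is closest to the current fixation."""
    def dist(g):
        return sum((a - b) ** 2 for a, b in zip(g, gaze))
    nearest_gaze, nearest_reach = min(memory, key=lambda pair: dist(pair[0]))
    return nearest_reach

memory = []
# Hypothetical training pairs: gaze (pan, tilt) seen together with arm (x, y).
for gaze, reach in [((0.0, 0.0), (10, 0)), ((0.5, 0.0), (12, 3)),
                    ((0.0, 0.5), (10, 5))]:
    learn(memory, gaze, reach)

arm = reach_for(memory, (0.45, 0.05))  # closest stored gaze is (0.5, 0.0)
```

Because the mapping is built entirely from co-occurrence during interaction, re-learning after a change to the arm or camera amounts to overwriting memory entries rather than re-deriving a calibration.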
-------------------------------- Date and time: Monday 8th June 2009 at 16:00 Location: UG40, School of Computer Science Title: Back-propagation as reinforcement in prediction tasks Speaker: Andre Gruning (http://www.computing.surrey.ac.uk/personal/st/A.Gruning/) Institution: Department of Computing, University of Surrey (http://www.cs.surrey.ac.uk/) Host: Peter Tiňo Abstract: The back-propagation (BP) training scheme is widely used for training network models despite its well-known technical and biological shortcomings. In this talk we present a method to make the BP training scheme more acceptable from a biological point of view in cognitively motivated prediction tasks, overcoming one of its major drawbacks. Traditionally, recurrent neural networks in symbolic time series prediction (e.g. language) are trained with gradient-descent-based learning algorithms, notably with back-propagation (BP) through time. A major drawback for the biological plausibility of BP is that it is a supervised scheme in which a teacher has to provide a fully specified target answer. Yet, agents in natural environments often receive only summary feedback about the degree of success or failure, a view adopted in reinforcement learning schemes. In this talk we show that for simple recurrent networks in prediction tasks for which there is a probability interpretation of the network's output vector, Elman BP can be reimplemented as a reinforcement learning scheme for which the expected weight updates agree with the ones from traditional Elman BP. -------------------------------- Date and time: Monday 28th September 2009 at 16:00 Location: UG40, School of Computer Science Title: Causal Analysis Speaker: Prof. 
Jianfeng Feng (http://www2.warwick.ac.uk/fac/sci/dcs/people/jianfeng_feng/) Institution: Department of Computer Science, The University of Warwick (http://www2.warwick.ac.uk/fac/sci/dcs/) Host: Prof Xin Yao Abstract: Causality is probably one of the key and most controversial notions in AI and statistics. In the talk, I will concentrate on the idea proposed originally by Granger, a Nobel laureate who graduated from Nottingham and passed away a few months ago. Detailed comparisons with traditional approaches such as ODEs, Bayesian networks and information theory are reviewed. Granger causality and its extensions are then applied to gene data (microarray), protein data (image) and neuronal data (multi-electrode array), aiming to answer biological questions based upon experimental data. -------------------------------- Date and time: Monday 5th October 2009 at 16:00 Location: UG40, School of Computer Science Title: Fast Learning of Stimulus-Response Associations. Speaker: Dr Guido Bugmann (http://www.tech.plym.ac.uk/soc/Staff/GuidBugm/Bugmann.htm) Institution: School of Computing, Communications and Electronics, University of Plymouth (http://www.plymouth.ac.uk/pages/view.asp?page=7491) Host: Dr Jeremy Wyatt Abstract: Fast responses generated in time-constrained situations, such as a tennis game, are most likely based on a set of Stimulus-Response (SR) associations encoded along the shortest neural route linking visual input to motor output. Such SR associations can be learnt by various methods, e.g. from examples or from practice. However, the most fascinating and fastest method is to learn from verbal instructions. This fast process cannot be modelled by existing neural network training algorithms. A new architecture and learning algorithm will be proposed to address this problem. Capabilities and open problems will be discussed, covering capacity, selectivity, dynamics, spiking neuron implementation and concepts of event-driven computing. 
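One-shot learning of a stimulus-response association from an instruction, as opposed to gradual weight updates over many examples, can be caricatured by writing the stimulus pattern directly into the weight vector of the instructed response unit the first time the rule is heard. This is a deliberately crude sketch with invented stimulus codes, not the architecture proposed in the talk:

```python
def instruct(weights, stimulus, response):
    """One-shot encoding of 'when you see <stimulus>, do <response>':
    copy the stimulus pattern into the response unit's weight vector,
    with no iterative training."""
    weights[response] = list(stimulus)

def react(weights, stimulus):
    """Fast feed-forward pass: pick the response whose weight vector
    matches the stimulus most strongly."""
    def activation(w):
        return sum(wi * si for wi, si in zip(w, stimulus))
    return max(weights, key=lambda r: activation(weights[r]))

weights = {}
# Two verbal instructions, each encoded in a single step:
instruct(weights, (1, 0, 0, 1), "backhand")   # hypothetical stimulus codes
instruct(weights, (0, 1, 1, 0), "forehand")
action = react(weights, (1, 0, 0, 1))
```

The contrast with gradient-trained networks is the point: the association is usable immediately after one instruction, which is the property the talk argues existing training algorithms fail to capture.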
-------------------------------- Date and time: Monday 12th October 2009 at 16:00 Location: UG40, School of Computer Science Title: Feature Selection by Filters: A Unifying Perspective Speaker: Dr Gavin Brown (http://www.cs.man.ac.uk/~gbrown/) Institution: School of Computer Science, University of Manchester (http://www.cs.manchester.ac.uk/) Host: Dr Chris Bowers Abstract: Feature Selection is an essential aspect of many fields - from computer vision, to data mining, to probabilistic modelling. The principle is to eliminate irrelevant or redundant variables from a dataset, given the requirement to predict a target. This has the dual advantage of reducing computation time, and increasing interpretability. Datasets with thousands to millions of variables require fast methods for selection---these are known as "filters". The last 15 years have seen a huge publication surge of candidate filter methods, with no common way to relate them or pick the right one for the right task. We focus on filters based on mutual information. This talk will give an overview of information theoretic methods, and present a recent unifying framework that shows the existence of a continuous space of filters. Each paper over the last 15 years corresponds to a point in the space. Most of the space has never been explored. Based on recent work in AI-STATS 2009 [http://jmlr.csail.mit.edu/proceedings/papers/v5/brown09a/brown09a.pdf]. -------------------------------- Date and time: Friday 16th October 2009 at 16:00 Location: UG40, School of Computer Science Title: Reducing the time complexity of chemical reaction-network simulations to O(N) Speaker: Prof Ivo F. 
Sbalzarini and Rajesh Ramaswamy (http://www.mosaic.ethz.ch/people/ivos) Institution: Institute of Theoretical Computer Science and Swiss Institute of Bioinformatics, ETH Zurich, Switzerland (http://www.inf.ethz.ch/) Host: Dr Pietro Oliveto Abstract: We present an alternative formulation of the exact stochastic simulation algorithm (SSA) for sampling trajectories of the chemical master equation for a well-stirred system of coupled chemical reactions. Our formulation is based on factored-out, partial reaction propensities. This novel exact SSA, called the partial propensity direct method (PDM), is highly efficient and has a computational cost that scales at most linearly with the number of chemical species, irrespective of the degree of coupling of the reaction network. In addition, we propose a sorting variant, SPDM, which is especially efficient for multiscale reaction networks. -------------------------------- Date and time: Monday 26th October 2009 at 16:00 Location: UG40, School of Computer Science Title: Data Visualisation: Recent Progress and Applications Speaker: Prof Ian Nabney (http://www.ncrg.aston.ac.uk/~nabneyit/) Institution: Engineering & Applied Science, Aston University (http://www1.aston.ac.uk/eas/) Host: Dr Rami Bahsoon Abstract: Data visualisation is a powerful tool to help non-statisticians understand and analyse large quantities of data. Probabilistic methods for projecting high-dimensional data to a 2d visualisation space, such as the Generative Topographic Mapping, are now well established in practical use. In this talk I will discuss how the basic algorithms can be enhanced to provide more information. Interactive tools help users interpret and drill down interactively into their data; block-structured covariance allows for the incorporation of prior understanding of relationships between variables; and technical improvements in the learning process allow us to better capture complex non-linear structure. 
These (and other new features) will be illustrated on a range of applications, including human obesity measurement, geochemical data for oil exploration, and analysis of small ligands for pharmaceuticals. -------------------------------- Date and time: Monday 2nd November 2009 at 16:00 Location: UG40, School of Computer Science Title: Non-Markov Probabilistic Models for Sequence Data Speaker: Dr Yee Whye Teh (http://www.gatsby.ucl.ac.uk/~ywteh/) Institution: Gatsby Computational Neuroscience Unit, University College London (http://www.gatsby.ucl.ac.uk/) Host: Noel Welsh Abstract: In this talk I will present a new approach to modelling sequence data called the sequence memoizer. As opposed to most other sequence models, our model does not make any Markovian assumptions. Instead, we use a hierarchical Bayesian approach which enforces sharing of information across the different parts of the model to alleviate overfitting. To make computations with the model efficient, and to better model the power-law statistics often observed in sequence data, we use a Bayesian nonparametric prior called the Pitman-Yor process as building blocks in the hierarchical model. We show state-of-the-art results on language modelling and text compression. Joint work with Frank Wood, Jan Gasthaus, Cedric Archambeau and Lancelot James. -------------------------------- Date and time: Monday 9th November 2009 at 16:00 Location: UG40, School of Computer Science Title: Robotic hand-eye coordination without global reference: A biologically inspired learning scheme Speaker: Dr Martin Huelse (http://users.aber.ac.uk/msh/) Institution: Department of Computer Science, Aberystwyth University (http://www.aber.ac.uk/compsci/public/) Host: Dr Jeremy Wyatt Abstract: Understanding the mechanism mediating the change from inaccurate pre-reaching to accurate reaching in infants may confer advantage from both a robotic and biological research perspective. 
In this work, we present a biologically meaningful learning scheme applied to the coordination between reach and gaze within a robotic structure. The system is model-free and does not utilize a global reference system. The integration of reach and gaze emerges from the learned cross-modal mapping between reach and vision space as it occurs during the robot-environment interaction. The scheme showed high learning speed and plasticity compared with other approaches due to the small amount of training data required. We discuss our findings with respect to biological plausibility and from an engineering perspective, with emphasis on autonomous learning and re-learning. -------------------------------- Date and time: Monday 16th November 2009 at 12:00 Location: UG40, School of Computer Science Title: Job Scheduling in Grid Systems: Computational Models & Resolution Methods Speaker: Dr Fatos Xhafa (http://www.lsi.upc.edu/~fatos/) Institution: Department of Computer Science and Information Systems, Birkbeck, University of London (http://www.bbk.ac.uk/) Host: Prof Xin Yao Abstract: In this talk we will address the modelling and resolution methods for the problem of Job Scheduling in Computational Grids. The talk will focus on the following aspects: * Motivation for revisiting scheduling problems * New features of the scheduling problem in Grid systems * Computational models and resolution methods * Benchmarking and Grid simulators for evaluating scheduling algorithms. 
-------------------------------- Date and time: Monday 16th November 2009 at 12:30 Location: UG40, School of Computer Science Title: Hierarchic Genetic-Based Scheduler of Independent Jobs in Computational Grids Speaker: Dr Joanna Kolodziej (http://www.km.ath.bielsko.pl/jkolodziej) Institution: Department of Mathematics and Computer Science, University of Bielsko-Biala, Poland (http://www.ath.bielsko.pl/english/) Host: Prof Xin Yao Abstract: In this talk we will focus on resolution methods, namely meta-heuristic approaches for solving Grid Scheduling problems. Genetic Algorithms, and specifically an implementation of the Hierarchic Genetic Strategy (HGS) for Independent Job Scheduling in Computational Grids, will be presented. In the HGS approach, both makespan and flowtime are simultaneously optimized. Our objective is to present the results of a simple experimental and theoretical analysis of the HGS scheduler and compare its efficiency with some selected single-population genetic algorithms for the benchmark of static scheduling in Grids. -------------------------------- Date and time: Monday 16th November 2009 at 16:00 Location: UG40, School of Computer Science Title: Playing Games with Intelligence Speaker: Prof Simon M. Lucas (http://dces.essex.ac.uk/staff/lucas) Institution: School of Computer Science and Electronic Engineering, The University of Essex (http://www.essex.ac.uk/csee/Default.aspx) Host: Peter Lewis Abstract: Games provide a most satisfying and illuminating environment in which to study computational intelligence. This talk will begin with an overview of the field including some sample applications and the main learning algorithms: evolution and temporal difference learning. Despite each of these having a long history, there is still little agreement on which works best when, and why. I'll attempt to shed some more light on this including some insights from information theory, and the effects of the choice of function approximator. 
I'll also discuss some recent results on Monte Carlo Tree Search, and how this relates to game learning. -------------------------------- Date and time: Monday 23rd November 2009 at 16:00 Location: UG40, School of Computer Science Title: Scheduling Dynamic Job Shops Speaker: Dr. Jürgen Branke (http://www.wbs.ac.uk/faculty/members/Juergen/Branke) Institution: Warwick Business School, The University of Warwick (http://www.wbs.ac.uk/) Host: Dr Phillip Rohlfshagen Abstract: Most practical scheduling problems are dynamic and stochastic: new jobs arrive over time and need to be integrated into the schedule, machines break down, raw material is delivered late, etc. In this talk, we present two quite different approaches to tackle such dynamic scheduling problems. Both utilize evolutionary algorithms, but in very different ways. The first approach is to re-schedule whenever new information becomes available. As we show, it is then advantageous to search for solutions that are not only good with respect to the primary objective (e.g., minimising tardiness), but also flexible and easy to adapt when new information becomes available. Evolutionary algorithms can be modified easily to take this into account. The second approach renounces planning and uses simple priority rules to decide, based on local information, which job should be processed on a machine when this machine becomes available. Such an approach is very popular in practice, but it is quite challenging to design effective priority rules for a particular shop. Here, we demonstrate how evolutionary algorithms can support the design of such priority rules by generating difficult problem instances, highlighting a rule's weaknesses. 
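The priority-rule approach described in the abstract can be sketched in a few lines: when a machine becomes free, a rule ranks the jobs waiting in its local queue and the best-ranked job is dispatched. The job fields and the three rule names below (SPT, EDD, MDD) are standard textbook dispatching rules chosen for illustration, not the specific rules evolved in the talk.

```python
# Minimal sketch of priority-rule dispatching for dynamic job shops.
# Job fields and rules are illustrative assumptions, not the speaker's system.
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    proc_time: float   # processing time on this machine
    due_date: float    # absolute due date

def dispatch(queue, rule, now=0.0):
    """Pick the next job from the local queue using a priority rule."""
    rules = {
        "SPT": lambda j: j.proc_time,                        # shortest processing time
        "EDD": lambda j: j.due_date,                         # earliest due date
        "MDD": lambda j: max(j.due_date, now + j.proc_time)  # modified due date
    }
    return min(queue, key=rules[rule])

queue = [Job("A", 5, 20), Job("B", 2, 30), Job("C", 4, 8)]
print(dispatch(queue, "SPT").name)  # → B (shortest job first)
print(dispatch(queue, "EDD").name)  # → C (tightest due date first)
```

An evolutionary algorithm, as in the talk, would search over rules of this kind (or weighted combinations of such terms) rather than over complete schedules.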
-------------------------------- Date and time: Monday 30th November 2009 at 16:00 Location: UG40, School of Computer Science Title: Self-modifying Cartesian Genetic Programming Speaker: Dr Julian Miller (http://www.elec.york.ac.uk/staff/jfm7.html) Institution: The Department of Electronics, The University of York (http://www.elec.york.ac.uk/index.html) Host: Dr Jon Rowe Abstract: Cartesian Genetic Programming (CGP) is a graph-based form of Genetic Programming. A generalization of CGP has been devised called Self-modifying CGP (SMCGP). SMCGP is a developmental form of CGP that changes over time by modifying its own phenotype during execution of the evolved program. This is done by the inclusion of self-modification operators in the function set. The talk will discuss the application of the technique to several different design, sequence generation and regression problems. It is shown that SMCGP can evolve solutions to problems that cannot be solved using CGP and can also provide general solutions to classes of problems. -------------------------------- Date and time: Monday 7th December 2009 at 16:00 Location: UG40, School of Computer Science Title: Efficient Intelligent Diagnostics for Autonomous Systems Speaker: Prof Chris Price (http://users.aber.ac.uk/cjp/Home.html) Institution: Department of Computer Science, University of Wales, Aberystwyth (http://www.aber.ac.uk/compsci/public/) Host: Drs. Richard Dearden and Juhan Ernits Abstract: The prognostics and health management demands on autonomous systems are significantly greater than those on manned vehicles. There is no outside recourse - all relevant information on vehicle state needs to be available on-line, and all diagnoses, remedial actions and prognoses need to be available at all levels of the hierarchy of autonomy on board the vehicle. 
This talk will address the issues of producing intelligent diagnostics for autonomous systems at several levels: * the use of qualitative reasoning to efficiently produce diagnostics and prognostics * the ways in which those technologies can be used on board autonomous systems * the wider issue of the interactions needed between reasoning levels in order to ensure that all relevant information is available for problem monitoring, for diagnosis, and for mission planning. -------------------------------- Date and time: Wednesday 16th December 2009 at 16:00 Location: UG40, School of Computer Science Title: Searching parameter spaces by mapping likelihood Speaker: Dr David Young (http://www.cogs.susx.ac.uk/users/davidy/) Institution: Department of Informatics, University of Sussex (http://www.sussex.ac.uk/informatics/) Host: Prof Aaron Sloman Abstract: The efficient use of negative evidence in search problems has always been important: for example, string search algorithms such as Boyer-Moore make use of negative evidence to achieve greatly increased speed. However, it is not always clear how negative evidence can be exploited in a probabilistic framework. In this talk, I explore the accumulation of negative and positive statistical evidence by building a map of likelihood in parameter space, allowing a directed search of this space. I illustrate the approach with simple line detection and image matching examples, which emphasise the value of accurately modelling (or learning) image statistics. I discuss the conditions under which the method may be useful, and I propose that the correct framework for it is not a Bayesian one, but rather the likelihood method of A.W.F. Edwards. 
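The core idea of the abstract above — accumulating both positive and negative statistical evidence as a map of (log-)likelihood over parameter space, then directing the search toward its maximum — can be illustrated with a toy one-dimensional example. The grid, the Gaussian error model, and the toy "noisy readings" data are all assumptions for illustration, not Dr Young's actual line-detection formulation.

```python
# Illustrative sketch: build a map of log-likelihood over a 1-D parameter
# grid from a stream of observations, then pick the most likely parameter.
import math

grid = [i * 0.1 for i in range(101)]   # candidate parameter values in [0, 10]
loglik = [0.0] * len(grid)             # the "map of likelihood"

def update(observation, predict, sigma=0.5):
    """Add one observation's log-likelihood under every candidate value."""
    for i, theta in enumerate(grid):
        r = observation - predict(theta)
        # Gaussian error model: poor fits accumulate strongly negative
        # evidence, so negative evidence rules regions out just as the
        # positive evidence rules regions in.
        loglik[i] += -0.5 * (r / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

# Toy model: observations are noisy readings of the parameter itself.
for obs in [3.1, 2.9, 3.0]:
    update(obs, predict=lambda theta: theta)

best = grid[max(range(len(grid)), key=lambda i: loglik[i])]
print(round(best, 1))  # → 3.0
```

In practice the map would be over a multi-dimensional parameter space (e.g. line position and orientation) and the error model would be learned from image statistics, which is exactly the point the abstract emphasises.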
-------------------------------- Date and time: Wednesday 13th January 2010 at 11:00 Location: UG40, School of Computer Science Title: People, Sensors, Decisions: Customizable and Adaptive Technologies for Healthcare Speaker: Dr Jesse Hoey (http://www.computing.dundee.ac.uk/staff/jessehoey/index.php) Institution: School of Computing, University of Dundee (http://www.computing.dundee.ac.uk/) Host: Dr Richard W Dearden Abstract: The ratio of healthcare professionals to care recipients is dropping at an alarming rate, particularly for the older population. Further, patients are becoming more aware and involved in their own health care decisions. This is creating a void in which technology has an increasingly important role to play as a tool to connect providers with recipients. Examples range from telecare for remote regions to computer games promoting fitness in the home. Currently, such technologies are developed for specific applications, and are difficult to modify to suit individual user needs. The future potential economic and social impact of technology in the health care field therefore lies in our ability to make devices that are customizable by healthcare professionals and their clients, that are adaptive to users over time, and that generalize across tasks and environments. In this talk, I will describe my research addressing these three requirements, thereby increasing uptake by users and long-term efficiency and robustness of healthcare technology. 
I will present a general approach, followed by detailed descriptions of four ongoing projects that use this approach to build assistive technologies for persons with cognitive or physical disabilities: a device to help persons with dementia to wash their hands, a customizable tool for art therapists to engage clients in visual artwork, a haptic robotic system for upper-arm rehabilitation after stroke, and a prototype system to automatically build and tailor situated prompting systems for individuals based on minimal data. I will give an overview of current open problems and related projects. I will close with a discussion of the longer-term directions I foresee for this area of research. Biography: Jesse Hoey is a lecturer (assistant professor) in the School of Computing at the University of Dundee, Scotland, and an adjunct scientist at the Toronto Rehabilitation Institute in Toronto, Canada. He received the B.Sc. degree in physics (1992) from McGill University in Montreal, Canada, the M.Sc. degree in physics (1995) and the Ph.D. degree in computer science (2004) from the University of British Columbia in Vancouver, Canada. His postdoctoral research was carried out at the University of Toronto, jointly in the Department of Computer Science and the Department of Occupational Science and Occupational Therapy. His research goal is to build customizable and adaptive intelligent assistants for applications in healthcare. In pursuing this goal, he works on problems in probabilistic and decision theoretic planning, in human behaviour modelling using computer vision, and in user-centered design. He has worked extensively on systems to assist persons with cognitive and physical disabilities, and holds four current grants funding research in assistive technology. Dr. Hoey has published over thirty peer-reviewed scientific papers in highly visible journals and conferences. 
He won the Best Paper award at the International Conference on Vision Systems (ICVS) in 2007 for his paper describing an assistive system for persons with dementia during hand washing. The system also won a "Solution of the Year" Award in 2007 from Advanced Imaging Magazine, and was named one of the top 20 Science and Medicine Stories of the Year 2007 by The Toronto Star. He won the Microsoft/AAAI Distinguished Contribution Award at the 2009 IJCAI Workshop on Intelligent Systems for Assisted Cognition, for his paper on technology to facilitate creative expression in persons with dementia. He also works on devices for ambient assistance in the kitchen, on stroke rehabilitation devices, and on spoken dialogue assistance systems. -------------------------------- Date and time: Monday 25th January 2010 at 15:00 Location: NG08, School of Biosciences Title: Challenges in Temporal--Numeric Planning Speaker: Dr Andrew Coles (http://personal.cis.strath.ac.uk/~ac/) Institution: Department of Computer and Information Sciences, University of Strathclyde (http://www.strath.ac.uk/cis/) Host: Dr Charles Gretton Abstract: Solving temporal--numeric planning problems presents difficult challenges to planners, particularly in the presence of continuous numeric change, a key facet of many interesting problems. Here, the reasoning about time is inseparable from reasoning about numbers: the times at which we start and finish actions depend on the resource levels; but these, in turn, depend on when we start and finish the actions. In this talk, I will give an overview of the key challenges in this area, and then present the planner COLIN, an approach to solving a useful subset of such problems --- those where the continuous numeric change is linear --- using a combination of forwards-search heuristic planning and linear programming. 
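The time/resource coupling the abstract describes can be made concrete with a toy scenario (an assumption for illustration only, not an example from COLIN itself): a rover drains battery charge linearly while driving, then recharges linearly, and a follow-up task may only start once charge reaches a threshold. The start time of the task depends on the resource level, which depends on how long the earlier action ran — the interdependence COLIN resolves with linear programming.

```python
# Toy temporal-numeric coupling: when can a task start, given linearly
# changing charge?  Scenario and numbers are illustrative assumptions.
def earliest_start(initial_charge, drain_rate, recharge_rate, need):
    """Drive for a fixed 10 s (charge drains linearly), then recharge
    linearly until the task's charge threshold `need` is met.
    Returns the earliest start time of the task."""
    t_drive = 10.0
    charge_after_drive = initial_charge - drain_rate * t_drive
    if charge_after_drive >= need:
        return t_drive                       # no waiting needed
    t_wait = (need - charge_after_drive) / recharge_rate
    return t_drive + t_wait                  # wait until threshold is reached

print(earliest_start(initial_charge=50, drain_rate=3, recharge_rate=2, need=30))
# charge after driving = 50 - 30 = 20; wait (30 - 20) / 2 = 5 s; start at 15.0
```

With several interacting actions these constraints are no longer solvable one at a time, which is why a linear program over the start and end times is used instead.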
-------------------------------- Date and time: Monday 1st February 2010 at 16:00 Location: UG40, School of Computer Science Title: Automatic Fault Detection for Autosub6000: Why I Spent Three Weeks on a Boat last Semester Speaker: Dr Richard Dearden (http://www.cs.bham.ac.uk/~rwd/) Institution: School of Computer Science, University of Birmingham (http://www.cs.bham.ac.uk/) Host: Dr Per Kristian Lehre Abstract: AFDA (Automated Fault Detection for Autosub6000) is a three year NERC-funded project to provide fault detection technology for a deep-diving autonomous underwater vehicle operated by the National Oceanographic Centre. In this talk, I will describe the vehicle, the project, and what we've accomplished so far. In particular, I will describe and demonstrate Livingstone 2, the diagnosis approach we are applying, and talk about some of the novel problems in applying it to Autosub6000. Applying Livingstone 2 to Autosub6000 has required building a diagnosis model of the program the vehicle is executing, and I will talk about how we automatically generate this model. Finally, I will talk about my experience on-board the Royal Research Ship Discovery learning about Autosub and testing our technologies. -------------------------------- Date and time: Monday 8th February 2010 at 16:00 Location: UG40, School of Computer Science Title: Biomimetic Robotics Speaker: Dr Patrick van der Smagt (http://www.robotic.de/Smagt/) Institution: Institute of Robotics and Mechatronics, German Aerospace Center (DLR) (http://www.robotic.dlr.de/) Host: Dr Jeremy Wyatt Abstract: "Is it man? Or is it machine?" Alan Turing already addressed this problem in 1950, as he introduced an intelligence test with which the difference between human and computer intelligence could be measured. Passing this "Turing Test" still is "subject to research" and will remain so for quite some time. 
But while computers cannot keep pace with the human brain on this issue, copying human motion behaviour is slowly becoming a reality, in large part due to advances in mechatronic systems. Biology uses the concept of "embodied intelligence" and thus obtains a perfect integration of body and mind. How can we use the concept of embodied intelligence in the development of more advanced robotic systems, which can augment and replace their biological counterparts? -------------------------------- Date and time: Monday 15th February 2010 at 16:00 Location: UG40, School of Computer Science Title: Inferring the appropriate architecture of a student network learning from a hard teacher Speaker: Dr Juan Pablo Neirotti (http://www.ncrg.aston.ac.uk/~neirotjp/Welcome.html) Institution: Engineering and Applied Science, Aston University (http://www1.aston.ac.uk/eas/research/groups/ncrg/) Host: Dr Ata Kaban Abstract: We investigated the problem of finding the appropriate architecture of a feed-forward network learning from a generic Boolean teacher. We found that the complexity measure defined through the averaged discrepancy appears to provide a reasonable answer to this problem. In particular, we found that for balanced functions with continuous average discrepancy the student network can be represented by a layered committee with a hierarchical structure of synaptic vectors. -------------------------------- Date and time: Monday 22nd February 2010 at 16:00 Location: UG40, School of Computer Science Title: New perspectives of bacteria identification by mass spectrometry and machine learning Speaker: Dr. Frank-Michael Schleif (http://gaos.org/~schleif/) Institution: Medical Department, Leipzig University (http://www.medizin.uni-leipzig.de/) Host: Dr Peter Tino Abstract: The automatic identification of bacteria is a very important topic in many fields of medicine, and also in cases where impurities are an issue, such as in the food industry. 
Recent achievements in mass spectrometry have simplified this task, but the quick and safe identification of bacteria is still challenging. The talk gives a short introduction to mass-spectrometry-based identification of bacteria and presents an advanced machine learning approach to identify measured samples with respect to a database of known bacteria signatures. I will highlight some critical points in the current approaches and show how some of them can be overcome. The presented approach is based on the well-known tree-based self-organizing map, a prototype-based learning method, and is extended to fit the analysis task considered. Initial results are presented, along with potential future research directions. -------------------------------- Date and time: Monday 8th March 2010 at 16:00 Location: UG40, School of Computer Science Title: One-shot Learning of Poisson Distributions - Information Theory of Audic-Claverie Statistic for Analysing cDNA Arrays Speaker: Dr Peter Tino (http://www.cs.bham.ac.uk/~pxt/) Institution: School of Computer Science, University of Birmingham (http://www.cs.bham.ac.uk/) Host: Dr Per Kristian Lehre Abstract: It is of utmost importance for biologists to be able to analyse patterns of expression levels of selected genes in different tissues possibly obtained under different conditions or treatment regimes. Even subtle changes in gene expression levels can be indicators of biologically crucial processes such as cell differentiation and cell specialisation. Measurement of gene expression levels can be performed either via hybridisation to microarrays, or by counting gene tags (signatures) using e.g. Serial Analysis of Gene Expression (SAGE) or Massively Parallel Signature Sequencing (MPSS) methodologies. The SAGE procedure results in a library of short sequence tags, each representing an expressed gene. The key assumption is that every mRNA copy in the tissue has the same chance of ending up as a tag in the library. 
Selecting a specific tag from the pool of transcripts can be approximately considered as sampling with replacement. The key step in many SAGE studies is identification of `interesting' genes, typically those that are differentially expressed under different conditions/treatments. This is done by comparing the number of specific tags found in the two SAGE libraries corresponding to different conditions or treatments. Audic and Claverie were among the first to systematically study the influence of random fluctuations and sampling size on the reliability of digital expression profile data. For a transcript representing a small fraction of the library and a large number N of clones, the probability of observing x tags of the same gene will be well-approximated by the Poisson distribution parametrised by its mean (and variance) m>0, where the unknown parameter m signifies the number of transcripts of the given type (tag) per N clones in the cDNA library. When comparing two libraries, it is assumed that under the null hypothesis of not differentially expressed genes the tag count x in one library comes from the same underlying Poisson distribution as the tag count y in the other library. However, each SAGE library represents a single measurement only! From a purely statistical standpoint resolving this issue is potentially quite problematic. One can be excused for being rather sceptical about how much can actually be learned about the underlying unknown Poisson distribution from a single observation. The key instrument of the Audic-Claverie approach is a distribution P over tag counts y in one library informed by the tag count x in the other library, under the null hypothesis that the tag counts are generated from the same but unknown Poisson distribution. P is obtained by Bayesian averaging (infinite mixture) of all possible Poisson distributions with mixing proportions equal to the posteriors (given x) under the flat prior over m. 
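Under the flat prior over m described above, the Bayesian average of Poisson distributions has a well-known closed form: P(y | x) = C(x+y, y) / 2^(x+y+1). A short sketch of this statistic, with a sanity check that it is a proper distribution over y:

```python
# The Audic-Claverie statistic in closed form, under the flat prior over m
# stated in the abstract: mixing all Poisson(y | m) over the posterior of m
# given the count x in the other library.
from math import comb

def audic_claverie(y, x):
    """P(y | x) = C(x+y, y) / 2^(x+y+1): the probability of seeing y tags
    of a gene in one library given x tags in the other, under the null
    hypothesis of a shared (unknown) Poisson rate."""
    return comb(x + y, y) / 2 ** (x + y + 1)

# Sanity check: P(. | x) sums to 1 over y = 0, 1, 2, ... (tail truncated)
total = sum(audic_claverie(y, x=5) for y in range(200))
print(round(total, 6))  # → 1.0
```

The closed form follows because the posterior of m given x under the flat prior is a Gamma(x+1, 1) density, and a Gamma mixture of Poissons is a negative binomial — here with success probability 1/2.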
We ask: Given that the tag count samples from SAGE libraries are *extremely* limited, how useful actually is the Audic-Claverie methodology? We rigorously analyse the A-C statistic P that forms the backbone of the methodology and represents our knowledge of the underlying tag generating process based on one observation. We will show that the A-C statistic P and the underlying Poisson distribution of the tag counts share the same mode structure. Moreover, the K-L divergence from the true unknown Poisson distribution to the A-C statistic is minimised when the A-C statistic is conditioned on the mode of the Poisson distribution. Most importantly (and perhaps rather surprisingly), the expectation of this K-L divergence never exceeds 1/2 bit! This constitutes a rigorous quantitative argument, extending the previous empirical Monte Carlo studies, that supports the widespread use of the Audic-Claverie method, even though by their very nature, the SAGE libraries represent very sparse samples. Full paper: http://www.biomedcentral.com/1471-2105/10/310/ -------------------------------- Date and time: Monday 15th March 2010 at 16:00 Location: UG40, School of Computer Science Title: Easy Trees: from video a forest shall grow Speaker: Dr Peter Hall (http://www.cs.bath.ac.uk/~pmh/start/home.html) Institution: Department of Computer Science, University of Bath (http://www.cs.bath.ac.uk) Host: Dr Peter Tino Abstract: Trees are part of the general scenery around us, so it is important to model and animate them. Current solutions build individual trees then add wind; all build single trees and require skilled interaction, making them expensive. This talk explains how 'easy trees' allows users to create a forest of trees that look and move naturally. The only demand on the user is to outline a tree, in one frame of an ordinary video. 
The dynamic tree model acquired can then be used to populate a forest of moving individuals, at the touch of a few buttons. Simplicity for users rests on the use of probabilistic generative models to create new trees, and a Bayesian approach to reconstruct three-dimensional models from two-dimensional ones. Easy trees outputs high-quality tree models; they are unique in including motion at all modelling stages. All tree models can be rendered in a wide variety of ways, from photorealism to cartoons, in any season. Similarly, users have complete control over motion; wind strength is easy to change. -------------------------------- Date and time: Monday 22nd March 2010 at 16:00 Location: UG40, School of Computer Science Title: Multi-Sensing Artificial Perception -- A Probabilistic Approach supported by Robotic Technologies Speaker: Dr Jorge Miranda Dias (http://paloma.isr.uc.pt/testjdias/) Institution: Institute Systems & Robotics, University of Coimbra, Portugal (http://www.isr.uc.pt/home.php) Host: Dr Jeremy Wyatt Abstract: In this seminar, some results will be presented on the application of Bayesian models and approaches to developing artificial cognitive systems that can carry out complex tasks in real-world environments. These studies address the question of how information derived from different sensory modalities converges to form a coherent and robust percept, a question central to developing processes of artificial perception. These applications take inspiration from the brains of mammals including humans and apply our findings to the development of robotic systems. Contemporary robots and other cognitive artifacts are not yet ready to autonomously operate in complex real-world environments, and one of the major reasons for this failure in creating cognitive situated systems is the difficulty in the handling of incomplete knowledge and uncertainty. 
The development of these artificial perception systems focuses on multimodal and multi-sensory integration, using computational/statistical models supported by observations of biological systems and experimental evidence obtained in psychophysical studies. -------------------------------- Date and time: Wednesday 24th March 2010 at 16:00 Location: UG40, School of Computer Science Title: Selection, Fusion and Detection: Case Studies for Memory-based Architectures in Cognitive Robotics Speaker: Dr Sebastian Wrede (http://aiweb.techfak.uni-bielefeld.de/blog/sebastian-wrede) Institution: Research Institute for Cognition and Robotics, University of Bielefeld, Germany (http://www.cor-lab.de/corlab/cms/frontpage) Host: Drs Marc Hanheide and Nick Hawes Abstract: Research on cognitive robots that learn in interaction, as exemplified in the RoboCup@Home competition, is original in its aim to develop multi-modal active perception, social and action competence for robots as a truly joint effort. This joint process and the increasing technological complexity of the resulting systems therefore call for novel engineering approaches. In this talk I will present a recent approach developed and used in current EU research projects, which promotes the use of so-called memory architectures for the engineering of cognitive robotics systems. The approach enhances event-driven architectures such as ROS from Willow Garage by stateful interaction through memory spaces. It not only addresses technological challenges such as low coupling between software components, to facilitate rapid progress in development, but at the same time allows a flexible organization of learning processes around memory spaces. These information spaces provide temporal context, allow for reasoning about past events, and furthermore facilitate adaptation and learning. 
The concepts of the presented approach are exemplified through a number of use-cases and challenges for building memory-centered systems taken from ongoing collaborative EU integrated projects such as COGNIRON or ITALK. The presented examples range from bottom-up segmentation of multi-modal input data in a tutoring scenario using the iCub humanoid robot, through person anchoring and learning on a mobile robot, to an adaptive self-awareness model generally applicable to anomaly detection in event-based systems. -------------------------------- Date and time: Friday 26th March 2010 at 11:00 Location: Room 245, School of Computer Science Title: Reactive Search Optimization and Brain-Machine Optimization: Machine and Human Learning for Decision Making Speaker: Prof Roberto Battiti (http://rtm.science.unitn.it/~battiti/) Institution: Department of Information Engineering and Computer Science, University of Trento, Italy (http://disi.unitn.it/) Host: Xin Yao Abstract: Reactive Search Optimization (RSO) advocates the integration of sub-symbolic machine learning techniques into search heuristics for solving complex optimization problems. The word reactive hints at a ready response to events during the search through an internal online feedback loop for the self-tuning of critical parameters. As a concrete example, the case of Brain-Machine Optimization will be considered. The focus is on solving multiple-objective optimization problems through interaction with the final decision maker, where learning and optimization are deeply interconnected. 
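The "internal online feedback loop" of RSO can be illustrated with the classic reactive tabu search idea: the tabu tenure (how long a recent move stays forbidden) is increased automatically whenever the search revisits a configuration. Everything below — the toy OneMax objective, the doubling rule, and the bit-flip neighbourhood — is an illustrative assumption, not Prof Battiti's implementation.

```python
# Minimal sketch of the reactive feedback loop: a tabu search on bit
# strings whose tabu tenure self-tunes when configurations repeat.
def reactive_tabu_onemax(n=8, steps=60):
    x = [0] * n                        # start from all zeros; maximise sum(x)
    tenure, tabu, seen = 1, {}, set()
    best = x[:]
    for t in range(steps):
        key = tuple(x)
        if key in seen:
            tenure = min(n, tenure * 2)  # reaction: repetition => diversify more
        seen.add(key)
        # best admissible (non-tabu) single-bit flip
        moves = [i for i in range(n) if tabu.get(i, -1) < t]
        if not moves:
            continue
        i = max(moves, key=lambda i: sum(x) + (1 if x[i] == 0 else -1))
        x[i] ^= 1
        tabu[i] = t + tenure           # forbid reversing this move for `tenure` steps
        if sum(x) > sum(best):
            best = x[:]
    return sum(best)

print(reactive_tabu_onemax())  # → 8 on this toy OneMax instance
```

The point is the self-tuning line: no parameter is fixed in advance; the search adjusts its own diversification strength from the events it observes.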
-------------------------------- Date and time: Monday 12th April 2010 at 16:00 Location: UG40, School of Computer Science Title: Learning a high dimensional structured matrix and multi-task learning Speaker: Dr Massimiliano Pontil (http://www.cs.ucl.ac.uk/staff/m.pontil/) Institution: Department of Computer Science, University College London (http://www.cs.ucl.ac.uk/) Host: Dr Rami Bahsoon Abstract: This talk presents the problem of learning a high-dimensional matrix from noisy linear measurements. A main motivating application for this study is multi-task learning, in which the matrix columns correspond to different regression or binary classification tasks. Our learning method consists of solving an optimization problem which involves a data term and a penalty term. We will discuss three families of penalty terms: quadratic, structured-sparse and spectral. They implement different types of matrix structure. For example, the quadratic penalty may encourage certain linear relationships across the tasks, the structured-sparse penalty may favor tasks which share similar sparsity patterns, and the spectral penalty may favor low rank matrices. We will present an efficient algorithm for solving the optimization problem, and report on numerical experiments comparing the different methods. Finally we will discuss how these ideas can be extended to learn non-linear task functions by means of reproducing kernel Hilbert spaces. 
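The spectral penalty mentioned above is typically the nuclear norm (sum of singular values), whose proximal operator soft-thresholds the singular values of the task matrix and thereby biases it toward low rank. The sketch below shows that standard construction on a toy rank-1-plus-noise matrix; it is a generic illustration of the penalty family, not the speaker's specific algorithm, and the threshold value is an arbitrary assumption.

```python
# Hedged sketch of a "spectral" (nuclear-norm) penalty in multi-task
# learning: singular-value shrinkage pushes the task matrix toward low rank.
import numpy as np

def prox_nuclear(W, tau):
    """argmin_X 0.5*||X - W||_F^2 + tau*||X||_*  via singular-value shrinkage."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-threshold each singular value
    return U @ np.diag(s_shrunk) @ Vt

# A rank-1 task matrix (columns = two related tasks) plus small noise:
# shrinkage removes the weak noise direction, leaving a rank-1 estimate.
rng = np.random.default_rng(0)
W = np.outer([1.0, 2.0, 3.0], [1.0, 1.0]) + 0.01 * rng.standard_normal((3, 2))
X = prox_nuclear(W, tau=0.5)
print(int(np.linalg.matrix_rank(X, tol=1e-6)))  # → 1
```

In a full method, this operator would be applied inside a proximal-gradient loop alternating with gradient steps on the data term.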
-------------------------------- Date and time: Monday 19th April 2010 at 16:00 Location: UG40, School of Computer Science Title: The origins of a quantitative methodology for design of computer vision algorithms Speaker: Dr Neil Thacker (http://www.niac.man.ac.uk/~nat/) Institution: School of Medicine, The University of Manchester (http://www.medicine.manchester.ac.uk/imaging/) Host: Prof Ela Claridge Abstract: The seminar will start with basic properties of probability and quantitative use of conditional notation, including the restrictions on Bayesian methodologies when considered in a frequentist framework. I will then explain how the use of this approach can be used to understand Likelihood as a design principle, and the way that common errors in its use during algorithm design can be identified and avoided. I will give answers to simple problems which are regularly claimed either impossible or a reason for introducing subjective probability in standard AI texts. I will finish with practical illustrations in more complicated computer vision algorithms intended for robotic, scientific and medical applications. 
Slides (PDF) [http://www.cs.bham.ac.uk/~exc/Research/Presentations/qtalk.pdf] -------------------------------- Date and time: Monday 26th April 2010 at 16:00 Location: UG40, School of Computer Science Title: Learning Through Programming Games: Teaching AI With Pac-Man and Netlogo Speaker: Dr Jim Smith (http://www.cems.uwe.ac.uk/~jsmith/) Institution: Department of Computer Science, University of the West of England (http://www.uwe.ac.uk/cems/) Host: Dr Nick Hawes Abstract: Teaching Artificial Intelligence to students from across a range of degree programmes carries a number of problems: * first making the materials seem relevant * finding consistent example problems that can illustrate a range of topics, and * the desirability of providing practical experience without the overheads of learning many different s/w packages or the risk of turning tutorials into programming debugging sessions. I describe a series of practical exercises designed to aid the teaching of introductory topics in Artificial Intelligence using the metaphor of the well-known arcade game “Pac Man”. They are aimed at level one students from a range of disciplines and motivated by a view of Artificial Intelligence as a means of automating the problem solving process. Therefore each piece of practical coding is preceded by an exercise to illuminate the human cognitive activities involved. The first set of exercises start with search strategies and gradually build up via rule-based and expert-system approaches to create a pac-man player based on “traditional AI”. The second semester’s activities concentrate on how computational approaches such as artificial neural networks and evolutionary computation provide a radically different approach to generating and improving controllers. The exercises are all developed in the Netlogo package which provides an intuitive user interface and a simple language for agent-based programming. 
The results of this new approach have been greatly improved attendance and participation in lectures and tutorial sessions, and early indications are of improved assessment performance. -------------------------------- Date and time: Monday 17th May 2010 at 16:00 Location: UG40, School of Computer Science Title: Estimating the scale parameter for Quantum Clustering Speaker: Dr Adrian G. Bors (http://www-users.cs.york.ac.uk/~adrian/) Institution: Department of Computer Science, University of York (http://www.cs.york.ac.uk/) Host: Dr Peter Tino Abstract: After introducing quantum clustering as a probability-density-estimation-based method, we address the problem of scale estimation. The scale is estimated using a Bayesian approach, assuming a Gamma prior. The Gamma distribution is evaluated from the data set itself. The scale is applied to three different machine learning algorithms mainly used for data segmentation: scale-space, mean shift and quantum clustering. The proposed approach is applied to modulated signal classification and to terrain segmentation using distributions of surface normal orientations. -------------------------------- Date and time: Monday 24th May 2010 at 16:00 Location: UG40, School of Computer Science Title: Untangling the tangled bank: A connectionist view of biological complexity Speaker: Dr Richard Watson (http://www.ecs.soton.ac.uk/people/raw) Institution: School of Electronics and Computer Science, University of Southampton (http://www.sense.ecs.soton.ac.uk/) Host: Prof Xin Yao Abstract: For Darwin the network of relationships between species in an ecosystem is splendidly complex but 'tangled'. Each species may have numerous well-adapted interdependencies with other species, but because each species is independently or 'selfishly' motivated by natural selection, and the ecosystem as a whole is not a unit of selection, ecosystem structure cannot be holistically adapted. The work we present challenges this doctrine. 
We show that evolved changes to inter-species relationships 'wire together' species that commonly co-occur. This simple observation has an intuitive explanation but significant consequences for ecosystem behaviour, resilience and functional structure. This kind of change causes an ecosystem as a whole to develop an associative memory that can 'recall' past configurations, and under general conditions arrive at configurations of species that are globally adaptive even though each species is acting selfishly. We can understand how these results follow from this observation using associative learning theory from computational neuroscience. This implies that inter-species relationships are not merely tangled, but exhibit adaptive organisational principles in common with connectionist models of organismic learning. Whereas prior evolutionary theory treats ecological relationships and dynamics merely as the backdrop to the adaptation of the entities therein, we suggest a connectionist view of evolutionary adaptation where the relationships between entities and their dynamical interactions take the foreground. Understanding how system-level adaptation is possible in systems of selfish components, and how subsets of synergistic agents mutually reinforce conditions that are self-sustaining, sheds light on vital evolutionary questions such as the evolution of individuality, the major evolutionary transitions and the evolution of biological complexity. We also show how these insights lead to novel optimisation methods that are provably superior to conventional evolutionary algorithms in problems with a nearly-decomposable or modular structure. -------------------------------- Date and time: Monday 7th June 2010 at 16:00 Location: UG40, School of Computer Science Title: Theories of high level cognition in natural and artificial systems: can we agree on any underlying principles? 
Speaker: Prof John Fox (http://www.cossac.org) Institution: Department of Engineering Science, University of Oxford (http://www.eng.ox.ac.uk/) Host: Prof Aaron Sloman Abstract: Under the heading of "high level cognition" I include systems that are "responsible for perception, learning, reasoning, decision-making, communication and action" (www.foresight.gov definition of cognitive systems). By "we" I mean researchers interested in psychology and cognitive neuroscience, philosophy and logic, AI, autonomous agents and robotics. The talk will include examples from work on medical reasoning, decision-making and planning, and draw some lessons from lab experiments, formal "rational" theories, and the design of systems for supporting clinicians in their routine practice. My conclusion about underlying principles is broadly optimistic, but the aim of the talk is to stimulate discussion and invite alternative views. -------------------------------- Date and time: Monday 27th September 2010 at 16:00 Location: UG40, School of Computer Science Title: Towards Dynamic Cognitive Systems and Evolving Intelligence Speaker: Dr Plamen Angelov (http://www.lancs.ac.uk/staff/angelov/) Institution: Lancaster University (http://www.lancs.ac.uk/) Host: Dr Jeremy Wyatt Abstract: Traditionally, computational (artificial, machine) intelligence has been developed as a mapping of observations, measurements and sensed data onto constructs (models) which, albeit complicated, usually have a structure that is fixed over their time of exploitation. This is true for neural networks, fuzzy rule-based and probabilistic (e.g. Bayesian, Hidden Markov, particle filter) models. During the last decade or so a new trend has emerged which breaks these assumptions and addresses the problems and the potential of dynamically evolving structures and model constructs (fuzzy rule-based, neural-network, HMM-based, etc.). 
This emerging sub-discipline is called Evolving Systems and it differs from traditional Evolutionary Computation and Adaptive Systems. In this talk the problems and challenges, some approaches (specifically for the fuzzy rule-based and neuro-fuzzy cases) and a number of applications (including real-life industrial case studies) will be presented. They summarise the experience of the author during the last decade or so in the development of this emerging area of research. Special emphasis will be placed on the potential application of these new results to robotics, autonomous learning and cognition. -------------------------------- Date and time: Wednesday 6th October 2010 at 16:00 Location: UG05, Learning Centre Title: Meme-centric Approaches for Problem-Solving Speaker: Professor Meng-Hiot Lim (http://www3.ntu.edu.sg/home/emhlim/) Institution: Nanyang Technological University (Singapore) (http://www.ntu.edu.sg/) Host: Dr. Shan He Abstract: In our most recent work, we expanded on the notion of memetic computation as a framework embracing the metaphor of cultural and biological evolution. In this talk, we showcase the use of a memetic approach for allocating tasks among multiple unmanned aerial vehicles (UAVs). Here a task refers to the act of systematically scanning the region or area that is assigned to a UAV with a pre-specified capability. The problem considered can be defined as partitioning a polygonal plane into sub-regions whereby the area of each sub-region is proportionate to the relative capability of the UAV assigned. We consider each sub-region to be scanned either in a raster or circular manner, and the choice of which combing pattern to use is determined by the cost associated with the overall flight time of the UAV. In another example, we demonstrate the use of a meta-meme scheme for combinatorial optimization. 
In particular, a neural meta-meme scheme serves as a plan or blueprint to balance the search effort of multiple search algorithms integrated within a framework for problem-solving. -------------------------------- Date and time: Monday 18th October 2010 at 16:00 Location: UG40, School of Computer Science Title: Optimizing Monotone Functions with Standard Bit Mutations Speaker: Dr. Thomas Jansen (http://www.cs.ucc.ie/~tj2/) Institution: Department of Computer Science, University College Cork (http://www.cs.ucc.ie/) Host: Dr. Peter Oliveto Abstract: Randomized local search (RLS) and the simplest evolutionary algorithm, the (1+1) EA, differ only in the variation operators they employ. When optimizing pseudo-Boolean functions, RLS flips exactly one bit that is selected uniformly at random. The (1+1) EA decides for each bit independently whether the bit is flipped, each time with a fixed mutation probability p(n), where n is the length of the bit string. In most cases these so-called standard bit mutations are used with a mutation probability of p(n)=c/n, where c is some positive constant, often c=1. For linear functions it is known that RLS and the (1+1) EA both find a global optimum on average in O(n log n) steps. For the (1+1) EA this holds regardless of the constant c in the mutation probability. For unimodal functions it is known that RLS and the (1+1) EA can both need exponentially long to find a global optimum. For the (1+1) EA this also holds regardless of the constant c. Strictly monotone functions are a proper subset of unimodal functions and a proper superset of linear functions. RLS needs on average time O(n log n) to optimize an arbitrary strictly monotone function. We investigate the performance of the (1+1) EA on such functions and exhibit a surprising dependence on c. For c<1 the expected optimization time is also O(n log n) but for sufficiently large constants c it becomes exponential. 
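As a concrete illustration of the abstract above (a sketch, not code from the talk), the two variation operators can be written in a few lines of Python. OneMax (the number of one-bits, a linear and hence strictly monotone function) serves as the example objective, and both operators run inside the usual elitist (1+1) loop that accepts an offspring whenever it is at least as good as its parent:

```python
import random

def rls_step(x):
    """RLS variation: flip exactly one bit, chosen uniformly at random."""
    y = x[:]
    i = random.randrange(len(y))
    y[i] ^= 1
    return y

def ea_step(x, c=1.0):
    """(1+1) EA standard bit mutation: flip each bit independently
    with probability p(n) = c/n, where n is the bit-string length."""
    p = c / len(x)
    return [(b ^ 1) if random.random() < p else b for b in x]

def optimise(x, step, fitness, max_iters=100000):
    """Elitist (1+1) scheme: keep the offspring iff it is at least as good.
    Returns the number of iterations until the OneMax optimum is reached."""
    fx = fitness(x)
    for t in range(max_iters):
        if fx == len(x):          # all bits set: global optimum of OneMax
            return t
        y = step(x)
        fy = fitness(y)
        if fy >= fx:              # accept ties and improvements
            x, fx = y, fy
    return max_iters

onemax = sum                      # OneMax: count of one-bits

n = 50
print("RLS iterations:     ", optimise([0] * n, rls_step, onemax))
print("(1+1) EA iterations:", optimise([0] * n, ea_step, onemax))
```

On a linear function like this, both variants finish in roughly n log n steps, matching the O(n log n) bounds quoted in the abstract; the talk's point is that on general strictly monotone functions the (1+1) EA's behaviour depends critically on the constant c in p(n)=c/n.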
-------------------------------- Date and time: Thursday 21st October 2010 at 16:00 Location: LC-LG32, Learning Centre Title: Nao: a platform for research and education Speaker: Mr Marc Duruy Institution: Aldebaran Robotics Host: Dr. Jeremy Wyatt and Dr. Nick Hawes Abstract: This presentation will serve as an overview of the humanoid robot Nao, developed and manufactured by Aldebaran Robotics SA, a young European company based in Paris, France. The live demonstration will consist of showing the robot interacting autonomously and demonstrating the capabilities of high-level programming through the Choregraphe software. Nao stands tall in all points amongst its robotic brethren. Platform agnostic, it can be programmed and controlled using Linux, Windows or Mac OS. The hardware has been built from the ground up with the latest technologies, providing great fluidity in its movements and offering a wide range of sensors. Nao contains an open framework which allows distributed software modules to interact together seamlessly. Depending on the user’s expertise, Nao can be controlled via Choregraphe®, our user-friendly behavior editor, by programming C++ modules, or by interacting with a rich API from scripting languages. In addition to the high-level API which allows users to make Nao walk and balance, advanced users can take advantage of low-level access to sensors and actuators and can, if they wish, replace our code with custom adaptations. In order to allow users to validate motion sequences, simulators are available for Microsoft Robotics Studio and Webots. Company profile: ALDEBARAN ROBOTICS was founded in 2005 in Paris to develop and market humanoid robots. Since May 2008, Aldebaran has been shipping its first-generation robot. Nao is a 58cm tall friendly robot that includes a computer and networking capability at its core. Delivered with a full set of development tools, Nao addresses the needs of universities, including RoboCup players, and research labs around the world. 
It’s an evolving platform, which is unique in its ability to handle multiple applications. At the moment 500 Naos are spread around the world. Today Aldebaran employs 90 people, including more than 45 first-class engineers and PhDs involved in R&D and production. In January 2008, Aldebaran Robotics raised Series A financing of EUR 5 million led by CDC Innovation alongside I-Source Gestion. -------------------------------- Date and time: Monday 1st November 2010 at 16:00 Location: UG40, School of Computer Science Title: Full-Class Set Classification Using the Hungarian Algorithm Speaker: Dr. Ludmila Kuncheva (http://www.bangor.ac.uk/~mas00a/) Institution: School of Computer Science, Bangor University (http://www.bangor.ac.uk/) Host: Leandro Minku Abstract: Consider a set-classification task where c objects must be labelled simultaneously in c classes, knowing that there is only one object coming from each class (a full-class set). Such problems may occur in automatic attendance registration systems, simultaneous tracking of fast-moving objects, and more. A Bayes-optimal solution to the full-class set classification problem is proposed using a single classifier and the Hungarian assignment algorithm. The advantage of set classification over individually-based classification is demonstrated both theoretically and experimentally, using simulated, benchmark and real data. -------------------------------- Date and time: Monday 8th November 2010 at 16:00 Location: UG40, School of Computer Science Title: Document Engineering for Digital Libraries Speaker: Dr. Petr Sojka (http://www.fi.muni.cz/usr/sojka/) Institution: Faculty of Informatics, Masaryk University (http://www.fi.muni.cz/) Host: Dr. Volker Sorge Abstract: Several innovative document transformations and tools developed in the process of building the Digital Mathematical Library DML-CZ http://dml.cz are described. The main result is our new PDF re-compression tool, developed using an enhanced jbig2enc library. 
Together with pdfsizeopt.py by Péter Szabó, we have managed to decrease PDF storage size and transmission needs by 62%: using both programs we reduced the size of the original, already compressed PDFs to 38%. We briefly describe the workflow and tools developed for creating the digital library. The batch digital signature stamper, the document similarity metrics (which use four different methods), a [meta]data validation process and math OCR tools represent some of the main [by]products. Such document engineering, together with Google Scholar indexing optimization, has led to the success of serving digitized and born-digital scientific math documents to the public in DML-CZ, and is also being employed in The European Digital Mathematics Library, EuDML. -------------------------------- Date and time: Monday 6th December 2010 at 14:00 Location: LG33, Learning Centre Title: Semigroup Enumeration Speaker: Dr. Tom Kelsey and Dr. Andreas Distler Institution: School of Computer Science, University of St Andrews; and Centro de Álgebra da Universidade de Lisboa (CAUL) Host: Dr. Volker Sorge Abstract: We describe the problem of enumerating semigroups of order n up to isomorphism and anti-isomorphism. We outline previous approaches, and the approach we took to solve the problem for n=9. We describe our distributed-computation approach for solving for n=10 (which has a search space of 10^{100}), both in terms of the underlying algebraic structures and in terms of practical cloud and grid computation. -------------------------------- Date and time: Monday 6th December 2010 at 16:00 Location: UG40, School of Computer Science Title: How Hard is Competition for Rank? Speaker: Prof. Paul Goldberg (http://www.csc.liv.ac.uk/~pwg/) Institution: Computer Science Department, University of Liverpool (http://www.csc.liv.ac.uk/) Host: Dr. Peter Lewis Abstract: Competition for rank occurs whenever the outcome is a ranking, or league table, of the competitors. 
One can note that it is widespread throughout the plant and animal kingdoms, politics, higher education, and artificial contests. In the talk, I will describe a class of games that capture important aspects of this type of competition, and consider the problem of computing their Nash equilibria. An important background fact that motivated this study is the hardness of computing Nash equilibria of unrestricted games, which raises interest in more specific types of game for which the computational problem is tractable. I will also give a general overview of these hardness results, and how they arise. -------------------------------- Date and time: Monday 13th December 2010 at 16:00 Location: UG40, School of Computer Science Title: Analyzing Human Robot Interaction with the iCub robot to detect tutoring situations Speaker: Ms Katrin Lohan (http://www.cor-lab.de/users/klohan) Institution: CoR-Lab Research Institute for Cognition and Robotics, Bielefeld University (http://www.cor-lab.de/) Host: Dr. Marc Hanheide Abstract: The work I will present in my talk is about developing a tutoring spotter, based on knowledge gained from human-robot, adult-adult and adult-child interaction. In the first step, this project focuses on the analysis of tutoring behaviour. The goal is to derive feature sets that are suitable for detecting the tutoring behaviour of an interaction partner and that can then be used to implement a “tutoring spotter”. Multimodal analysis of the collected data at different levels of granularity gives insights into different levels of behavioural variability. In the second step of the project, a tutoring spotter will be developed and implemented on a robot system (iCub), where the effect of the robot’s reaction (i.e. signalling attention) upon the detection of tutoring behaviour on the behaviour of the tutor will be analysed in more detail and evaluated. In sum, my work tackles the following questions: Are robots tutored like infants? 
Does embodiment affect tutoring behaviour? How do children of different ages contribute to a tutoring interaction? -------------------------------- Date and time: Monday 24th January 2011 at 16:00 Location: UG40, School of Computer Science Title: Genetic Search Reinforced by the Population Hierarchy: Hierarchic Genetic Strategy (HGS) Speaker: Dr Joanna Kolodziej (http://www.km.ath.bielsko.pl/jkolodziej) Institution: Department of Mathematics and Computer Science, University of Bielsko-Biala (http://info.ath.bielsko.pl/) Host: Mr. Marcin Bogdanski and Prof. Xin Yao Abstract: As a result of their ability to deliver high-quality solutions in reasonable time, meta-heuristics are usually employed as effective methods to solve complex multi-objective optimization problems. One class of such meta-heuristics is the Hierarchic Genetic Strategy (HGS). A Genetic Algorithm variant, HGS differs from other genetic methods in its capability of concurrently searching the solution space. The HGS efficiency is produced by the simultaneous execution of many dependent evolutionary processes. Every single process is interpreted as a branch in a tree structure and can be defined as a sequence of evolving populations. The overall dependency relation among processes has a restricted number of levels. In this talk we present the theoretical and experimental evaluation of HGS in solving various complex multi-objective optimisation problems in discrete and continuous domains. In particular, the application of the strategy to scheduling independent tasks in Computational Grids is highlighted. Joanna Kolodziej’s Short Bio: Dr Joanna Kolodziej graduated in Theoretical Mathematics from the Jagiellonian University in Cracow (Poland) in 1992, where she also obtained her PhD in Theoretical Computer Science in 2004. She is an associate professor at the Department of Mathematics and Computer Science of the University of Bielsko-Biała (Poland), which she joined in 1997. 
Evolutionary computation, modelling of stochastic processes, Grid computing and global optimization meta-heuristics are the main topics of her research. She has served and is currently serving as PC Co-Chair, General Co-Chair and IPC member of several international conferences and workshops, including PPSN 2010, ECMS 2011, CISIS 2011, 3PGCIC 2011, CISSE 2006, CEC 2008, IACS 2008-2009 and ICAART 2009-2010. Dr Kolodziej was awarded the prize for the best MSc Thesis in Theoretical Mathematics by the Polish Mathematical Society in 1992 and for the best PhD Thesis in Computer Science, Physics and Mathematics by The Foundation for Polish Science in 2004. She has published in international journals, books and conference proceedings in her research area. She is Managing Editor of the IJSSC Journal and serves as an Editorial Board member and guest editor of several peer-reviewed international journals. -------------------------------- Date and time: Monday 21st February 2011 at 16:00 Location: UG40, School of Computer Science Title: Discussion of some themes in Piaget's two books on Possibility and Necessity Speaker: Prof. Aaron Sloman (http://www.cs.bham.ac.uk/~axs/) Institution: School of Computer Science, The University of Birmingham (http://www.cs.bham.ac.uk/) Abstract: It is not widely known that shortly before his death Jean Piaget and his collaborators produced a pair of books on Possibility and Necessity, exploring questions about how two linked sets of abilities develop: (a) The ability to think about how things might be, or might have been, different from the way they are. (b) The ability to notice limitations on possibilities, i.e. what is necessary or impossible. I believe Piaget had deep insights into important problems for cognitive science that have largely gone unnoticed, and are also important for research on intelligent robotics, or more generally Artificial Intelligence (AI), as well as for studies of animal cognition and how various animal competences evolved and develop. 
The topics are also relevant to understanding biological precursors to human mathematical competences and to resolving debates in philosophy of mathematics, e.g. between those who regard mathematical knowledge as purely analytic, or logical, and those who, like Immanuel Kant, regard it as being synthetic, i.e. saying something about reality, despite expressing necessary truths that cannot be established purely empirically, even though they may be initially discovered empirically (as happens in children). It is not possible in one seminar to summarise either book, but I shall try to present an overview of some of the key themes and will discuss some of the experiments intended to probe concepts and competences relevant to understanding necessary connections. In particular, I hope to explain: (a) The relevance of Piaget's work to the problems of designing intelligent machines that learn the things humans learn. (Most researchers in both Developmental Psychology and AI/Robotics have failed to notice or have ignored most of the problems Piaget identified.) (b) How a deep understanding of AI, and especially the variety of problems and techniques involved in producing machines that can learn and think about the problems Piaget explored, could have helped Piaget describe and study those problems with more clarity and depth, especially regarding the forms of representation required, the ontologies required, the information processing mechanisms required and the information processing architectures that can combine those mechanisms in a working system -- especially architectures that grow themselves. That kind of computational or "design-based" understanding of the problems can lead to deeper, clearer specifications of what it is that children are failing to grasp at various stages in the first decade of life, and what sorts of transitions can occur during learning. I believe the problems, and the explanations, are far more complex than even Piaget thought. 
The potential connection between his work and AI was appreciated by Piaget himself only very shortly before he died. An expanded, growing abstract for the talk will be available here: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/piaget-possibility-necessity.html THE BOOKS: Piaget, Jean, et al., Possibility and Necessity, Vol. 1: The role of possibility in cognitive development, University of Minnesota Press, translated from the French by Helga Feider, 1987 (original 1981). Piaget, Jean, et al., Possibility and Necessity, Vol. 2: The role of necessity in cognitive development, University of Minnesota Press, translated from the French by Helga Feider, 1987 (original 1983). -------------------------------- Date and time: Monday 28th February 2011 at 16:00 Location: UG40, School of Computer Science Title: From T-cells to Robotic Sniffer Dogs Speaker: Prof Jon Timmis (http://www-users.cs.york.ac.uk/jtimmis/) Institution: Department of Computer Science and Department of Electronics, University of York (http://www.york.ac.uk/) Host: Dr. Peter Oliveto Abstract: There are many areas of bio-inspired computing where inspiration is taken from a biological system and 'magically' transplanted into some engineered system. In this talk, I will explore thoughts on a slightly more principled approach to bio-inspired system development, one that hopefully does not include any magic, and discuss, in the context of immune-inspired systems, some of the potential and pitfalls of using biological systems as inspiration. To help ground the talk, we will explore a case study from our recent work with DSTL on the development of an immune-inspired robotic sniffer-dog detection system, inspired by a signalling mechanism in the T-cells of the immune system. -------------------------------- Date and time: Friday 11th March 2011 at 12:00 Location: Room 245, School of Computer Science Title: Computers, the Division of Labor and Crowdsourcing Speaker: Dr. 
David Alan Grier (http://elliott.gwu.edu/faculty/grier.cfm) Institution: Elliott School of International Affairs, George Washington University (http://elliott.gwu.edu/) Host: Prof. Xin Yao Abstract: In the past four years, we have seen the growth of large systems that combine both computer and human labor. Often identified as “artificial artificial intelligence”, these systems are used for tasks as diverse as creating metadata, object recognition, translation, editing and other complicated tasks. Examples include systems as diverse as those supported by Amazon Mechanical Turk and Wikipedia. Though they are often identified as a new step in computer science, they actually have a long history and represent the next step in the processes that have created divided labor. Short biography: David Alan Grier is an Associate Professor at the George Washington University and is the First Vice President of the IEEE Computer Society. He writes the monthly column “The Known World” in IEEE Computer and is the author of two books, When Computers Were Human (Princeton, 2005) and Too Soon To Tell (Wiley, 2009). -------------------------------- Date and time: Monday 14th March 2011 at 16:00 Location: UG40, School of Computer Science Title: Automating Biology using Robot Scientists Speaker: Prof. Ross King (http://users.aber.ac.uk/rdk/) Institution: Department of Computer Science, Aberystwyth University (http://www.aber.ac.uk/en/cs/) Host: Dr. Jeremy Wyatt Abstract: A Robot Scientist is a physically implemented robotic system that applies techniques from artificial intelligence to execute cycles of automated scientific experimentation. A Robot Scientist can automatically execute cycles of: hypothesis formation, selection of efficient experiments to discriminate between hypotheses, execution of experiments using laboratory automation equipment, and analysis of results. 
We have developed the Robot Scientist “Adam” to investigate yeast (Saccharomyces cerevisiae) functional genomics. Adam has autonomously identified genes encoding locally “orphan” enzymes in yeast. This is the first time a machine has discovered novel scientific knowledge. To describe Adam's research we have developed an ontology and logical language. Use of these produced a formal argument involving over 10,000 different research units that relates Adam's 6.6 million biomass measurements to its conclusions. We are now developing the Robot Scientist “Eve” to automate drug screening and QSAR development. -------------------------------- Date and time: Monday 21st March 2011 at 16:00 Location: UG40, School of Computer Science Title: A novel mathematical and computational paradigm to compute stability boundaries and bifurcations directly from data in closed-loop experiments Speaker: Dr. Serafim Rodrigues (http://www.bris.ac.uk/contact/person/getDetails?personKey=Ji7O7eCP1KoO2pu3fhyoDycCRckZpo) Institution: Department of Engineering Mathematics, Bristol University (http://www.enm.bris.ac.uk/) Host: Dr. Jeremy Wyatt Abstract: I will present a novel and unprecedented paradigm that enables the tracking of unstable states, and of transitions between qualitatively different dynamics, from noisy experimental data. These data are recorded from real-time computer-controlled closed-loop experiments such as dynamic clamp in electrophysiology, closed-loop robots, hardware-in-the-loop (HIL) systems in electronics and hybrid testing in mechanical engineering. This technique does not assume any underlying model, nor does it rely on inverse problems; instead, it relies on the combined application of dynamical systems theory and feedback control theory. This result opens new avenues of research, raising the possibility of implementing intelligent closed-loop machine-brain interfaces that make it possible to efficiently control and explain both normal and pathological brain states (e.g. Deep Brain Stimulation devices). 
Additionally, other interfaces will efficiently communicate with central pattern generators and sensory systems (e.g. vision). Consequently, this result has broad relevance to the medical community, clinical neuroscience, public health and theoretical neuroscience in general. -------------------------------- Date and time: Monday 4th April 2011 at 16:00 Location: UG40, School of Computer Science Title: Emergence of social networks from cooperative interactions Speaker: Dr. Steve Phelps (http://www.essex.ac.uk/ccfea/staff/profile.aspx?ID=205) Institution: Centre for Computational Finance and Economic Agents, University of Essex (http://www.essex.ac.uk/ccfea/) Host: Dr. Peter Lewis Abstract: Traditional game-theoretic models of cooperative behaviour assume complete mixing: the probability that x interacts with y is the same for all y. In contrast, recent models emphasise the importance of interactions occurring over networks and the resulting effect on cooperative outcomes. Many of these models assume that the process of network formation is exogenous (e.g. preferential attachment), or alternatively that the network structure is endogenous but explicit: agents have full knowledge of their own edges, which they can manipulate strategically. In contrast, in this talk I introduce a model of cooperation in which network structures emerge from the low-level interactions between agents. This model gives rise to networks whose properties change dynamically over time, which is consistent with longitudinal studies of social networks in human societies. -------------------------------- Date and time: Monday 11th April 2011 at 16:00 Location: UG40, School of Computer Science Title: Planning under uncertainty for real-world multiagent systems Speaker: Dr. 
Matthijs Spaan (http://users.isr.ist.utl.pt/~mtjspaan/) Institution: Intelligent Robot and Systems Group, Institute for Systems and Robotics, Instituto Superior Técnico, Lisbon (http://welcome.isr.ist.utl.pt/labs/irsgroup/) Host: Dr. Jeremy Wyatt Abstract: As distributed intelligent systems are becoming more ubiquitous in society, the need for intelligent decision making in multiagent systems grows. The decision-making problem is particularly challenging when uncertainty is involved, and has not yet been solved satisfactorily. In this talk I will give an overview of our recent work, which targets planning in real-world multiagent systems. For an agent in isolation, planning under uncertainty has been studied using decision-theoretic models like Partially Observable Markov Decision Processes (POMDPs). I will discuss how we use POMDPs for problems involving active cooperative perception, in which a mobile robot interacts with a network of surveillance cameras. In the multiagent case, related models such as Decentralized POMDPs have been gaining popularity. I will present recent algorithmic advances in optimal Dec-POMDP solving, as well as techniques for exploiting local interactions between agents. Finally, I will detail how multiagent planning under uncertainty can be applied to networks of robots and sensors. -------------------------------- Date and time: Friday 6th May 2011 at 14:00 Location: UG06, Learning Centre Title: Some inter-disciplinary projects in computer vision, robotics and sensor systems Speaker: Dr. Rustam Stolkin (http://www.cs.bham.ac.uk/~stolkinr/) Institution: School of Computer Science, The University of Birmingham (http://www.cs.bham.ac.uk/) Host: Dr. Jeremy Wyatt Abstract: Rustam Stolkin is an interdisciplinarian, with broad research interests that have spanned many areas of science and engineering, as well as the arts and humanities. 
Much of his work addresses problems in robotics, computational vision and sensor systems, as well as the issues of transferring the impact of this research to society through industry collaborations, educational outreach, and the public communication of science. In this talk, Rustam will provide an overview of several of his projects and try to highlight some common connecting themes. He will begin by describing his work on computational vision. This has focused on tracking algorithms characterized, firstly, by techniques for rapid and continuous relearning of models of the tracked target and its background scene, and, secondly, by techniques for probabilistic fusion of observed image data with various kinds of prior knowledge of tracked objects, and predictions of their behaviours, as well as fusion of data from multiple imaging modalities. Since coming to Birmingham, Rustam has become increasingly involved in projects on robotic manipulation with arms, hands and fingers. He will provide a brief overview of the EU GeRT project (Generalising Robotic manipulation Tasks), for which he is part of a Birmingham team who are developing novel algorithms for grasping and manipulation with humanoid robots. Work in robotics has also included planning algorithms, used for trajectory planning on underwater robotic vehicles as well as for trajectory planning of fingers and work-pieces in robotic manipulation problems. Related ideas will be explored in new projects, now underway, on navigation and planning for outdoor all-terrain robot vehicles. Rustam’s work on underwater robots also links to his other research on marine sensing systems, both for oceanography and harbour security. Rustam will also talk about ongoing efforts to promote robotics research at Birmingham through industrial collaborations, which have recently led to funding from the defence and nuclear industries.
He will also describe recent and ongoing work in science education, with efforts to use robots and sensor networks as motivating vehicles for classroom teaching of science, engineering and mathematics. -------------------------------- Date and time: Monday 9th May 2011 at 16:00 Location: UG40, School of Computer Science Title: Controlled Permutations for Testing Adaptive Classifiers Speaker: Dr. Indrė Žliobaitė (http://sites.google.com/site/zliobaite/) Institution: Smart Technology Research Center, Bournemouth University (http://www.bournemouth.ac.uk/strc/) Host: Dr. Leandro Minku Abstract: The talk will address evaluation of online classifiers that are designed to adapt to changes in data distribution over time (concept drift). A standard procedure to evaluate such classifiers is test-then-train, which iteratively uses the incoming instances for testing and then for updating a classifier. Such learning risks overfitting, since a dataset is processed only once, in a fixed sequential order, while every output of the classifier depends on the instances seen so far. The problem is particularly serious when several classifiers are compared, since the same test set arranged in a different order may indicate a different winner. To reduce this risk we propose to run multiple tests with permuted data. The proposed procedure allows us to assess robustness of classifiers when changes happen unexpectedly. -------------------------------- Date and time: Wednesday 11th May 2011 at 12:00 Location: Room 245, School of Computer Science Title: Online learning for Tracking-by-Detection using P/N-Constraints Speaker: Mr. Georg Nebehay and Dr. Roman Pflugfelder Institution: Austrian Institute of Technology Host: Dr. Peter Lewis Abstract: A new Tracking-by-Detection method for general objects visible in a single video stream appeared recently in the Computer Vision community.
Given an initial sample of a particular object, the method is able to learn on-line a detector of the underlying object's appearance while simultaneously tracking the object frame by frame. The detector shows promising performance in terms of computation time, precision and recall. This talk will focus on the semi-supervised principle used in this method to learn on-line the object detector given so-called Positive (P) and Negative (N) constraints. The former P-constraint allows unlabeled samples of the object's appearance to be labelled, while the latter N-constraint enables reliable pruning of false positives. It has been shown empirically, and under certain assumptions also analytically, that the error-cancelling property of the N-constraint prevents the learning process from drifting away over time from the underlying set of possible object appearances. At the end of this talk we would like to critically discuss with the audience the novelty of this learning principle seen from the viewpoint of Machine Learning experts and to potentially identify promising existing or even new learning mechanisms in the paradigm of Tracking-by-Detection for multiple camera views. -------------------------------- Date and time: Thursday 12th May 2011 at 10:00 Location: UG40, School of Computer Science Title: Interaction - Opportunities and Challenges for Autonomous Robots Speaker: Dr. Marc Hanheide (http://www.cs.bham.ac.uk/~hanheidm/) Institution: School of Computer Science, University of Birmingham & Center of Excellence "Cognitive Interaction Technology", Bielefeld University (http://www.cs.bham.ac.uk/) Host: Dr. Jeremy Wyatt Abstract: The role of a human in robotics is mostly limited to either being the engineer who builds and trains a robot or the user who receives the services it performs.
This holds true for various applications in industry, edutainment, and domestic services; the robot is designed or customised to specific needs and then runs autonomously fulfilling its task. However, with robots becoming more general tools and at the same time more common in various application domains, this approach does not scale. As the number of autonomous robots deployed in the world grows, they need to be equipped with adaptive behaviours and general knowledge that can be specialised and instantiated for dedicated tasks and specific situations without requiring consultation of the manufacturer or tedious customisation by a technician. Hence, the end-user herself has to interact with a robotic system with the aim of providing it with the knowledge required and helping it adapt to the situation. In this sense interaction with humans is an opportunity for a robot to achieve a given task. However, interaction with non-expert human users is also tremendously challenging for a robot, sometimes making it difficult to exploit the existing opportunities. In this talk, I will present different aspects of my work carried out in a variety of different collaborative projects related to the long-term objective of improving the behaviour of robotic systems through the interaction with non-expert users. I will present different domestic robotic systems, means of interaction specifically considering the embodiment of robots, and system architectures facilitating learning and adaptation by interaction. Also, I will shed some light on the specific challenges that arise from interaction with robots and how these can be studied and tackled as part of an interdisciplinary endeavour. -------------------------------- Date and time: Thursday 12th May 2011 at 11:00 Location: UG40, School of Computer Science Title: Structured Prediction and Inference for Scene Analysis Speaker: Dr.
Matthew Blaschko (http://www.robots.ox.ac.uk/~blaschko/) Institution: Visual Geometry Group, University of Oxford (http://www.robots.ox.ac.uk/~vgg/) Host: Dr. Jeremy Wyatt Abstract: Learning methods have been widely applied in computer vision to solve tasks such as image classification, regression, dimensionality reduction, and clustering. This is a great simplification of the original goal of enabling machines to process visual data in unconstrained environments with similar sophistication to humans, and is largely the result of the application of black box learning algorithms that do not have specific knowledge of the problem structure. Much research in computer vision over the past two decades has been devoted to fitting part of a computer vision problem into one of these existing paradigms rather than directly predicting the desired output. Structured output learning promises to provide a more domain-aware learning paradigm that can help overcome these shortcomings. The problem of predicting structured data is central to vision problems, in which the outputs to be predicted are not simply binary labels or scalar values, as in classification and regression, respectively, but encode the rich structure of scene understanding. In this talk, I will discuss the application of the structured output learning paradigm to object detection, a core component of scene understanding. In order to feasibly apply this strategy, we must solve a number of challenges, in particular in relation to efficient inference strategies for object detection. Object detection is in general a highly non-convex problem with many local optima. I show how a branch-and-bound strategy can be used for efficient and optimal inference both at test time and in a cutting plane optimization loop for structured output support vector machines. I further develop an extension of the structured output SVM objective to ranking with weak supervision.
This enables the structured output learning framework to incorporate highly imbalanced data for which the majority of training samples have no correct structured output prediction, and training data with heterogeneous levels of supervision, e.g. a mixture of binary labels and correct object detections. These examples indicate that structured output learning is an effective strategy for efficient and accurate object detection, as well as a flexible framework that is readily extensible to many useful output spaces and heterogeneous sources of training data that may be assembled at reduced cost to human labelers. Joint work with Christoph Lampert, Thomas Hofmann, Andrea Vedaldi, and Andrew Zisserman. -------------------------------- Date and time: Monday 16th May 2011 at 10:00 Location: UG40, School of Computer Science Title: Algorithmic Robotics: Enabling Autonomy in Challenging Environments Speaker: Dr. Ioannis Rekleitis (http://www.cim.mcgill.ca/~yiannis/) Institution: Centre for Intelligent Machines, McGill University (http://www.cim.mcgill.ca/) Host: Dr. Jeremy Wyatt Abstract: In the last few years, robots have moved from the pages of science fiction books into our everyday reality. Currently, robots are used in scientific exploration, manufacturing, entertainment, and household maintenance. While the above advances were made possible by recent improvements in sensors, actuators, and computing elements, the research of today is focused on the computational aspects of robotics. In particular, methodologies for utilizing the vast volumes of data that can be generated by a robotic mission, together with techniques that would allow a robot to respond adequately in unforeseeable circumstances are the challenges of tomorrow. This talk presents an overview of algorithmic problems related to robotics, with a particular focus on increasing the autonomy of robotic systems in challenging environments.
Cooperative Localization, Mapping and Exploration employs teams of robots in order to construct accurate representations of the environment and of the robots' poses. The problem of coverage has found applications ranging from vacuum cleaning to humanitarian mine removal. A family of algorithms will be presented that solve the coverage problem optimally in terms of distance travelled. The robotic exploration of other planets presents numerous challenges. During my work at the Canadian Space Agency, I developed algorithms for over-the-horizon navigation on Mars-like terrains. Different laser range finding sensors were employed to provide accurate models of the terrain in the form of irregular triangular meshes. Path planning algorithms based on the A* graph search algorithm were used to navigate to target positions much further than the local sensing horizon of the rover. On the other end of the spectrum from planetary exploration is underwater exploration. Interestingly, the two fields share many challenges when viewed from an algorithmic point of view. An a priori unknown environment and limited communications are among the most obvious. I will present current research in underwater robotics together with future plans. The work that I will present has a strong algorithmic flavour, while being validated on real hardware. Experimental results from several testing campaigns will be presented. -------------------------------- Date and time: Monday 16th May 2011 at 11:00 Location: UG40, School of Computer Science Title: Insights from robotics methodology applied to human motor control Speaker: Dr. Michael Mistry (http://www-clmc.usc.edu/Main/MichaelMistry) Institution: Disney Research (http://www.disneyresearch.com/) Host: Dr. Jeremy Wyatt Abstract: Recently, roboticists have made significant strides in the theory and development of artificial and human-like machines.
While these robots have yet to achieve any level of dexterity or elegance to be considered on par with human motion, we can begin to learn from the insights obtained, in order to enrich our understanding of human motor control. For example, we asked if humans may employ operational-space control, a methodology from robotics developed for the control of redundant manipulators. To do so, we developed a novel exoskeleton platform that permits us to alter human arm dynamics during reaching in full 3-D space. Results of our experiment suggest that humans are able to adapt to novel dynamics by learning predictive internal models, but importantly, humans plan their reaching motion in an operational space, utilizing available redundancy to assure the task is achieved efficiently. In a second experiment, we looked deeper into the issue of human motion planning, and specifically asked, what role, if any, is there for an a priori desired trajectory? To test our subjects, we created a dynamic environment that affects motor effort, but not accuracy: If a force perturbation first pushes the hand off-course in one direction and then subsequently back in the opposite direction, the reaching task may still be achieved with minimal correction, but a strongly curved trajectory. Under such conditions, we found subjects are willing to unnecessarily increase motor effort in order to maintain straightness. Modeling insights from stochastic optimal control theory suggest that the CNS is willing to trade off energy efficiency in order to maintain robustness. Finally, I will address the control of whole-body motion, in particular, when the human body or robot must be considered as an underactuated or a "floating-base" system unattached to the world. I will show how we can solve the ill-posed problem of floating base inverse dynamics, or whole-body internal model control, by planning for operational-space motion in a sufficiently constrained space, e.g.
in a space independent of constraint forces. I will conclude by suggesting additional areas where robotics methodology can be applied to neuroscience, and discuss the possible implications. -------------------------------- Date and time: Monday 16th May 2011 at 12:00 Location: UG40, School of Computer Science Title: Human-Centered Robotics - Bridging between control, robotics, psychology, and neuroscience Speaker: Dr. Angelika Peer (http://www.lsr.ei.tum.de/staff/detail/angelikapeer) Institution: Institute of Automatic Control Engineering and Institute of Advanced Studies of the Technische Universität München () Host: Dr. Jeremy Wyatt Abstract: In the past, working spaces of humans and robots were strictly separated, but recent developments have sought to bring robots into closer interaction with humans. Besides verbal and nonverbal communication, physical (haptic) interaction is especially challenging because of the bilateral signal and energy exchange and the mutual adaptation that takes place. Starting from teleoperation, one of the earliest examples in human-robot interaction, and then moving on to physical robot assistants and social haptic interaction partners, this talk will emphasize typical challenges faced in designing robotic systems that stay in close interaction with humans. Issues like robust stability despite uncertain human behaviour, recognition of human intention from haptic signals, adaptation to the interaction partner as well as challenges in evaluating performance of human-centred robotic systems will be discussed. Angelika Peer is currently a senior researcher and lecturer at the Institute of Automatic Control Engineering and Junior Fellow of the Institute of Advanced Studies of the Technische Universität München, Munich, Germany. She received the engineering degree in Electrical Engineering and Information Technology in 2004 and the Doctor of Engineering degree in Electrical Engineering in 2008 from Technische Universität München.
From 2004 to 2008 she was a research assistant at the Institute of Automatic Control Engineering of the same university. Her research interests include robotics, haptics, teleoperation, human-human and human-robot interaction with a focus on multi-user telepresence and teleaction systems, haptic assistance systems, human motor control and brain and body computer interfaces. -------------------------------- Date and time: Tuesday 31st May 2011 at 16:00 Location: UG40, School of Computer Science Title: Managing Uncertainty in Robotics: From Control to Planning Speaker: Dr. Seth Hutchinson (http://www-cvr.ai.uiuc.edu/~seth/) Institution: University of Illinois (http://illinois.edu/) Host: Dr. Jeremy Wyatt Abstract: Robots never know exactly where they are, what they see, or what they're doing. They live in dynamic environments, and must coexist with other, sometimes adversarial agents. All of these factors contribute to the uncertainty that is inherent in any real-world robotic task. In this talk, I will describe a full range of methods that can be used to cope effectively with these uncertainties, from robust sensor-based controllers, to game theoretic motion strategies, to general models such as partially observable Markov decision processes (POMDPs). In each case, it is important to choose a solution strategy that is sufficiently powerful to cope with the level of uncertainty inherent in the task, while employing the minimal acceptable level of generality. This is particularly important in the face of real-time performance demands (e.g., for sensor-based manipulation tasks), or when fully general solutions may be intractable (e.g., finding optimal policies for POMDPs). Algorithms and experimental verification will be presented.
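Several of the talks above rest on POMDPs, whose core operation is the Bayes belief update: after taking action a and observing o, the new belief is b'(s') ∝ O(o | s', a) Σ_s T(s' | s, a) b(s). A minimal sketch follows; the two-state model and all its numbers are purely illustrative, not taken from any of the talks:

```python
def belief_update(belief, action, obs, trans, obs_model):
    """One Bayes-filter step: b'(s') is proportional to
    O(obs | s', action) * sum_s T(s' | s, action) * b(s)."""
    # Prediction step: push the belief through the transition model.
    predicted = {s2: sum(trans[(s, action)].get(s2, 0.0) * p
                         for s, p in belief.items())
                 for s2 in belief}
    # Correction step: weight by the observation likelihood, then normalise.
    unnorm = {s2: obs_model[(s2, action)].get(obs, 0.0) * p
              for s2, p in predicted.items()}
    z = sum(unnorm.values())
    if z == 0.0:
        raise ValueError("observation has zero probability under the model")
    return {s2: p / z for s2, p in unnorm.items()}

# Illustrative two-state example: a robot is in room A or B; "listen"
# leaves the state unchanged, and a noisy sensor reports the true room
# 85% of the time.
states = ["A", "B"]
trans = {(s, "listen"): {s: 1.0} for s in states}
obs_model = {("A", "listen"): {"hearA": 0.85, "hearB": 0.15},
             ("B", "listen"): {"hearA": 0.15, "hearB": 0.85}}

b = belief_update({"A": 0.5, "B": 0.5}, "listen", "hearA", trans, obs_model)
# b["A"] == 0.85: the belief shifts towards the room the sensor reported
```

Solving a POMDP then means choosing actions as a function of this belief, which is exactly where the tractability concerns raised in the abstract arise.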
-------------------------------- Date and time: Wednesday 1st June 2011 at 14:00 Location: UG40, School of Computer Science Title: Combining compositional shape hierarchy and multi-class object taxonomy for efficient object categorisation Speaker: Dr. Ales Leonardis (http://vicos.fri.uni-lj.si/alesl/) Institution: Visual Cognitive Systems Laboratory, University of Ljubljana (http://vicos.fri.uni-lj.si/) Host: Dr. Jeremy Wyatt Abstract: Visual categorisation has been an area of intensive research in the vision community for several decades. Ultimately, the goal is to efficiently detect and recognize an increasing number of object classes. The problem entangles three highly interconnected issues: the internal object representation, which should compactly capture the visual variability of objects and generalize well over each class; a means for learning the representation from a set of input images with as little supervision as possible; and an effective inference algorithm that robustly matches the object representation against the image and scales favorably with the number of objects. In this talk I will present our novel approach which combines a learned compositional hierarchy, representing (2D) shapes of multiple object classes, and a coarse-to-fine matching scheme that exploits a taxonomy of objects to perform efficient object detection. Our framework for learning a hierarchical compositional shape vocabulary for representing multiple object classes takes simple contour fragments and learns their frequent spatial configurations. These are recursively combined into increasingly more complex and class-specific shape compositions, each exhibiting a high degree of shape variability. At the top-level of the vocabulary, the compositions represent the whole shapes of the objects. The vocabulary is learned layer after layer, by gradually increasing the size of the window of analysis and reducing the spatial resolution at which the shape configurations are learned.
The lower layers are learned jointly on images of all classes, whereas the higher layers of the vocabulary are learned incrementally, by presenting the algorithm with one object class after another. However, in order for recognition systems to scale to a larger number of object categories, and achieve running times logarithmic in the number of classes, building visual class taxonomies becomes necessary. We propose an approach for speeding up recognition times of multi-class part-based object representations. The main idea is to construct a taxonomy of constellation models cascaded from coarse-to-fine resolution and use it in recognition with an efficient search strategy. The structure and the depth of the taxonomy are built automatically in a way that minimizes the number of expected computations during recognition by optimizing the cost-to-power ratio. The combination of the learned taxonomy with the compositional hierarchy of object shape achieves efficiency both with respect to the representation of the structure of objects and in terms of the number of modeled object classes. The experimental results show that the learned multi-class object representation achieves a detection performance comparable to the current state-of-the-art flat approaches with both faster inference and shorter training times. -------------------------------- Date and time: Friday 3rd June 2011 at 14:00 Location: UG40, School of Computer Science Title: Better learning algorithms for neural networks Speaker: Prof. Geoffrey Hinton (FRS) (http://www.cs.toronto.edu/~hinton/) Institution: Department of Computer Science, University of Toronto (http://www.cs.toronto.edu) Host: Prof. Aaron Sloman Abstract: Neural networks that contain many layers of non-linear processing units are extremely powerful computational devices, but they are also very difficult to train.
In the 1980s there was a lot of excitement about a new way of training them that involved back-propagating error derivatives through the layers, but this learning algorithm never worked very well for deep networks that have many layers between the input and the output. I will describe a way of using unsupervised learning to create multiple layers of feature detectors and I will show that this allows back-propagation to beat the current state of the art for recognizing shapes and phonemes. I will then describe a new way of training recurrent neural nets and show that it beats the best other single method at modeling strings of characters. About the speaker Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at Sussex University and the University of California San Diego and spent five years as a faculty member in the Computer Science department at Carnegie-Mellon University. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years from 1998 until 2001 setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto where he is a University Professor. He is the director of the program on "Neural Computation and Adaptive Perception" which is funded by the Canadian Institute for Advanced Research. He is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence. He is an honorary foreign member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society. He received an honorary doctorate from the University of Edinburgh in 2001. He was awarded the first David E.
Rumelhart prize (2001), the IJCAI award for research excellence (2005), the IEEE Neural Network Pioneer award (1998), the ITAC/NSERC award for contributions to information technology (1992), and the NSERC Herzberg Medal, which is Canada's top award in Science and Engineering. He investigates ways of using neural networks for learning, memory, perception and symbol processing and has over 200 publications in these areas. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts and deep belief nets. His current main interest is in unsupervised learning procedures for multi-layer neural networks with rich sensory input. -------------------------------- Date and time: Monday 6th June 2011 at 16:00 Location: UG40, School of Computer Science Title: Robotic Mapping into the Fourth Dimension Speaker: Dr Tom Duckett (http://www.lincoln.ac.uk/socs/staff/1928.asp) Institution: University of Lincoln (http://www.lincoln.ac.uk/home/) Host: Dr. Nick Hawes Abstract: Future service robots will be required to run autonomously in dynamic environments for very long periods of time. These robots will be required to live together with people and adapt to the changes that people make to the world. This includes the problem of learning and adapting a robot’s spatial knowledge – in the form of a map – throughout the lifetime of the robot. However, almost all past research on robotic mapping addresses only the initial learning of an environment, a phase which will only be a short moment in the lifetime of a service robot that may be expected to operate for many years.
This talk will explain the research challenges and the state-of-the-art methods for mapping and localisation by mobile robots in dynamic environments, including the well-known problem of “simultaneous localisation and mapping” (SLAM). First I will outline the major sub-problems and the corresponding solutions developed so far by the robotics research community for mapping and self-localisation in static environments. Then I will explain the special challenges for mobile robotic mapping and navigation in dynamic and changing environments, and present the novel solutions developed within our research group over the past few years to meet these challenges. -------------------------------- Date and time: Friday 10th June 2011 at 16:00 Location: Room 245, School of Computer Science Title: Automatic Algorithm Configuration: Tools and Applications Speaker: Dr. Thomas Stützle (http://iridia.ulb.ac.be/~stuetzle) Institution: IRIDIA, Université Libre de Bruxelles (http://code.ulb.ac.be/iridia.home.php) Host: Prof. Xin Yao Abstract: Virtually all algorithms for tackling computationally hard problems can be seen as being composed of a set of different algorithmic components. Additionally, many of these algorithmic components have further parameters that influence their behaviour. If one wants to design and develop effective algorithms, one is faced with an algorithm configuration problem: which of the available algorithmic components should be chosen, and how should their parameters be set such that some measure of performance is maximised? Traditionally, this algorithm configuration problem has been tackled by a manual trial-and-error process. In our research, we have developed approaches that allow us to automate this algorithm configuration process. In particular, we have developed the F-race approach and, more recently, iterated F-race. In the first part of the talk, I will give an introduction to F-race and iterated F-race and briefly discuss related work.
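The racing idea behind F-race can be sketched as follows: candidate configurations are evaluated instance by instance, and candidates that are clearly worse than the incumbent are discarded as evidence accumulates. This toy version uses a crude mean-plus-margin elimination rule with invented parameter names, whereas the actual F-race discards candidates using the Friedman test and pairwise post-tests:

```python
import statistics

def simple_race(configs, instances, cost, min_instances=4, margin=0.05):
    """Race configurations over a stream of problem instances.

    cost(config, instance) returns the cost of running that configuration
    on that instance; lower is better. Candidates whose mean cost exceeds
    the best mean by more than `margin` are eliminated (a stand-in for
    F-race's statistical tests).
    """
    alive = {c: [] for c in configs}        # surviving configs -> observed costs
    for i, inst in enumerate(instances, 1):
        for c in alive:
            alive[c].append(cost(c, inst))
        # Only start eliminating once each survivor has enough observations.
        if i >= min_instances and len(alive) > 1:
            means = {c: statistics.mean(r) for c, r in alive.items()}
            best = min(means.values())
            alive = {c: r for c, r in alive.items()
                     if means[c] <= best + margin}
        if len(alive) == 1:
            break                            # a single winner remains
    return min(alive, key=lambda c: statistics.mean(alive[c]))

# Toy usage: tune a single numeric parameter against a stream of instances,
# where cost is the distance between the parameter and the instance value.
winner = simple_race([0.0, 0.5, 1.0],
                     instances=[0.9, 1.1, 1.0, 0.95, 1.05],
                     cost=lambda c, i: abs(c - i))
# winner == 1.0: the configuration closest to the instance stream survives
```

The payoff of racing is that poor configurations consume only a few instance evaluations before being dropped, which is what makes exploring large configuration spaces affordable.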
In the second part of the talk, I will give an overview of research projects in which we have designed and developed high-performing algorithms using principled algorithm design steps and automatic algorithm configuration techniques. -------------------------------- Date and time: Monday 13th June 2011 at 16:00 Location: UG40, School of Computer Science Title: Natural Language Analysis and Synthesis based on Dependency Structures Speaker: Dr. Bernd Bohnet (http://www.ims.uni-stuttgart.de/~bohnetbd/) Institution: Institute for Natural Language Processing, Universität Stuttgart (http://www.ims.uni-stuttgart.de/) Abstract: A key problem in Natural Language Processing is to map from a natural sentence to syntactic and semantic structures and vice versa (i.e. text analysis and synthesis). Over the past decade, this problem has been progressively approached using dependency structures that describe syntax in terms of the directed relations between two words within a sentence. My research has developed systems for text analysis and synthesis that employ stochastic learning methods based on annotated corpora and are guided by linguistic principles. In the field of text analysis, I have demonstrated that syntactic and semantic parsing systems can be rendered more accurate and efficient by combining support vector machines with a hash kernel. Conversely, I have introduced elements that have proved powerful in parsing systems (e.g. the maximum spanning tree algorithm) into text generation, to enable the mapping from semantic graphs onto syntactic trees. Since multilevel annotated corpora have become increasingly available for multiple languages, my multilingual deep stochastic sentence realiser allows text generation directly from semantic inputs without additional syntactic information (as previously required). I will conclude by highlighting directions in which text analysis and generation systems can be usefully applied in Dialog and Translation systems.
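As a minimal illustration of the dependency structures the talk builds on (my own sketch, not code from the speaker): a dependency parse of an n-word sentence can be stored as an array of head indices, and well-formedness reduces to requiring exactly one root and no cycles:

```python
def is_dependency_tree(heads):
    """heads[i] is the 1-based index of the head of token i+1; 0 marks the
    root. A well-formed dependency structure has exactly one root and no
    cycles, so every token's chain of heads must reach the root."""
    n = len(heads)
    if sum(1 for h in heads if h == 0) != 1:
        return False                      # no root, or more than one
    for start in range(1, n + 1):
        seen = set()
        node = start
        while node != 0:                  # follow heads up towards the root
            if node in seen or not (0 <= heads[node - 1] <= n):
                return False              # cycle, or head index out of range
            seen.add(node)
            node = heads[node - 1]
    return True

# "John saw Mary": 'saw' (token 2) is the root and heads the other tokens.
assert is_dependency_tree([2, 0, 2])
# A structure where tokens 1 and 2 head each other contains a cycle:
assert not is_dependency_tree([2, 1, 0])
```

Parsing then amounts to searching this space of head assignments for the highest-scoring tree, which is where algorithms such as maximum spanning tree search come in.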
-------------------------------- Date and time: Friday 9th September 2011 at 15:00 Location: LG34, Learning Centre Title: DIARC - Steps Towards an Integrated Architecture for Human-Robot Interactions in Natural Language Speaker: Dr. Matthias Scheutz (http://hrilab.cs.tufts.edu/people/matthias.php) Institution: Human-Robot Interaction Laboratory, Tufts University (http://hrilab.cs.tufts.edu/) Host: Dr. Jeremy Wyatt Abstract: Perception, action, and language processing in humans are all tightly intertwined, involving complex patterns of actions, utterances and responses, where meaningful linguistic fragments result from their context together with prosodic, temporal, task and goal information. As a result, robots that have to interact with humans in natural language have to respect the human modes of processing language and need to be able to detect meaningful linguistic fragments which are often not aligned with sentence boundaries, to extract the intended meaning. In this talk, we present the latest version of our incremental NLP system integrated into our robotic DIARC architecture, which is heavily based on findings in psycholinguistics about the nature of lexical, syntactic, semantic, and pragmatic processing and its relation to action execution. We demonstrate with examples from human-robot interactions how our proposed architecture can quickly and robustly deal with natural dialogues that contain incomplete, ungrammatical, and ambiguous utterances, interleave actions with language processing, and provide the kind of feedback that humans expect. -------------------------------- Date and time: Wednesday 21st September 2011 at 16:00 Location: UG40, School of Computer Science Title: An Overview on Ordinal Regression Speaker: Dr. Pedro Antonio Gutiérrez (http://www.uco.es/ayrna/index.php?option=com_jresearch&view=member&task=show&id=1&Itemid=55) Institution: Department of Computer Science and Numerical Analysis, University of Córdoba () Host: Dr.
Peter Tino Abstract: Ordinal classification (also called ranking, sorting or ordinal regression) is a supervised learning problem of predicting categories on an ordinal scale. The samples are labelled by a set of ranks with an ordering among the different categories. In contrast to the usual classification, there is an ordinal relationship among the categories, and it is different from regression in that the number of ranks is finite and the exact amounts of difference among the ranks are not defined. In this way, ordinal classification lies somewhere between classification and regression. Ordinal classification problems are important, since they are very common in our everyday life, where many problems require the classification of items into naturally ordered classes: selecting the best route to work, where to stop, which product to buy, and where to live are just examples of daily ordinal decision-making. In this talk, we will try to summarize the main current trends in ordinal regression, presenting and comparing some of the methods that try to improve classification performance under these constraints. -------------------------------- Date and time: Monday 10th October 2011 at 16:00 Location: UG40, School of Computer Science Title: Visual Motion Estimation and Tracking of Rigid Bodies by Physical Simulation Speaker: Damien Jade Duff (http://www.cs.bham.ac.uk/~djd/) Institution: School of Computer Science, The University of Birmingham (http://www.cs.bham.ac.uk) Host: Dr. Jeremy Wyatt Abstract: This talk is based on my PhD at Bham SoCS, submitted a matter of weeks ago. The thesis applies knowledge of the physical dynamics of objects to estimating object motion from vision when estimation from vision alone fails. It differentiates itself from existing physics-based vision by building in robustness to situations where existing visual estimation tends to fail: fast motion, blur, glare, distractors, and partial or full occlusion. 
A real-time physics simulator is incorporated into a stochastic framework by adding several different models of how noise is injected into the dynamics. Several different algorithms are proposed and experimentally validated on two problems: motion estimation and object tracking. The performance of visual motion estimation from colour histograms of a ball moving in two dimensions is improved when a physics simulator is integrated into a MAP procedure involving non-linear optimisation and RANSAC-like methods. Process noise or initial condition noise in conjunction with physics-based dynamics results in improved robustness on hard visual problems. A particle filter applied to the task of full 6D visual tracking of the pose of an object being pushed by a robot in a table-top environment is improved on difficult visual problems by incorporating a simulator as a dynamics model and injecting noise as forces into the simulator. Here are some of the papers covered by the talk: http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=5509590 http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=5980535 -------------------------------- Date and time: Monday 31st October 2011 at 16:00 Location: UG40, School of Computer Science Title: Architectures for life (Including emotional life) Speaker: Prof. Aaron Sloman (http://www.cs.bham.ac.uk/~axs/) Institution: School of Computer Science, The University of Birmingham (www.cs.bham.ac.uk) Abstract: An introduction to some themes concerning: architectures, informed control, and ontologies, in meta-morphogenesis (MM). I use the term "Meta-morphogenesis", suggested by reading Turing's 1952 paper on Morphogenesis, to refer to the study of changes in information-processing and the environmental and other influences on those changes, across evolutionary transitions, during individual development and learning, and in social-cultural evolution and development. 
These changes are partly provoked by environmental features and changes (including features and changes in other organisms), and partly by consequences of earlier evolution and development leading to new sensory-motor morphologies, needs and competences. The transitions relevant to MM involve information processing architectures, forms of representation, ontologies, types of learning, types of perception, types of problem solving, reasoning, planning, thinking, types of control, and types of communication and collaboration (among others). Last week I presented some of these ideas to the Language and Cognition group, emphasising a subset of developmental transitions related to some of Annette Karmiloff-Smith's ideas about "Representational Redescription" (in Beyond Modularity, 1992) and Piaget's work on Necessity and Possibility. This talk will instead focus mainly on evolutionary and developmental transitions in types of ARCHITECTURE, which will include some of the other topics mentioned above, especially types of control -- e.g. motive formation, motive processing, and meta-management (including some of the ideas introduced in Luc Beaudoin's PhD thesis here in 1994). I believe most current investigations of affect (including motivation, emotions, moods, preferences, inclinations, attitudes, values, ideals, morals, etc.) are impoverished because they ignore these architectural issues and focus too much on shallow behaviours, lean too much on ill-informed common-sense ontologies for mental states and processes, and leave out most of the aspects of affect that are important for intelligent animals. (Probably best understood at present by novelists and other artists.) 
Some parts of the presentation will probably relate closely to Jackie Chappell's departmental seminar (Thurs 26th Oct 4pm) on "Acting on the world: understanding how animals use information to guide their action" http://www.cs.bham.ac.uk/events/seminars/seminar_details.html?seminar_id=863 Some of the background to my presentation (still in a very messy and incomplete form) can be found in: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html http://www.cs.bham.ac.uk/research/projects/cogaff/#overview http://www.cs.bham.ac.uk/research/projects/cogaff/misc/toddler-theorems.html (includes links to philosophy of mathematics) -------------------------------- Date and time: Monday 7th November 2011 at 16:00 Location: UG40, School of Computer Science Title: Search-based approaches for Software Development Effort Estimation Speaker: Federica Sarro (http://www.dmi.unisa.it/people/sarro/www/) Institution: University of Salerno, Italy (http://www3.unisa.it/) Host: Dr. Leandro Minku Abstract: Software development effort estimation is a critical activity for planning and monitoring software project development and for delivering the product on time and within budget. Indeed, significant over- or under-estimates expose a software project to several risks. Thus, the competitiveness of a software company heavily depends on the ability of its project managers to accurately predict in advance the effort required to develop a software system. However, several challenges exist in making accurate estimates, and several techniques have been proposed in the literature to support project managers in this activity. In recent years, the use of Search-Based techniques has been suggested to this end. These techniques are meta-heuristics able to find optimal or near optimal solutions to problems characterized by large search spaces. 
In the context of effort estimation Search-Based (SB) approaches can be exploited to build estimation models or to enhance the effectiveness of other methods. In the first case the problem of building an estimation model is reformulated as an optimization problem where the SB method builds many possible models - exploiting past project data - and tries to identify the best one, i.e., the one providing the most accurate estimates. In the second case, SB methods can be exploited in combination with other estimation techniques to improve critical steps of their application (e.g., feature subset selection or the identification of critical parameters) aiming to obtain better estimates. The main aim of the talk is to provide insight into the use of Search-Based techniques for effort estimation, reporting some recent results achieved with these approaches for both of the uses mentioned above. -------------------------------- Date and time: Monday 14th November 2011 at 16:00 Location: UG40, School of Computer Science Title: The Evolution of Neural Generative Models Speaker: Dr. Chrisantha Fernando (http://www.cogs.susx.ac.uk/users/ctf20/dphil_2005/index.htm) Institution: Department of Informatics, Sussex University (http://www.sussex.ac.uk/) Host: Dr. Jon Rowe Abstract: Within the paradigm of unsupervised learning it is becoming increasingly clear that the brain makes generative models of the environment that can explain observations in a parsimonious way. This helps with inference about the current environmental state and predictions of future environmental states given observations. What kinds of generative model are there in principle? What methods exist that allow these generative models to be learned? I propose a set of major transitions in the evolution of cognition in which new classes of generative model have evolved, along with new methods for learning such models. I consider the neural correlates of these transitions. 
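As a toy illustration of what "a learned generative model that explains observations and predicts future states" can mean in the simplest case (this is purely illustrative, not one of the model classes proposed in the talk), a first-order Markov chain can be fitted to an observation sequence and then queried for its most likely next state:

```python
from collections import defaultdict

def fit_markov(sequence):
    """Estimate transition probabilities P(next | current) from an observation
    sequence, i.e. learn a minimal generative model of the environment."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    return {s: {t: c / sum(row.values()) for t, c in row.items()}
            for s, row in counts.items()}

def predict(model, state):
    """Most probable next environmental state under the fitted model."""
    return max(model[state], key=model[state].get)

obs = list("ababababac")      # hypothetical observation stream
model = fit_markov(obs)       # model['a'] = {'b': 0.8, 'c': 0.2}
```

Richer generative models (mixtures, hierarchies, models with latent state) generalise this idea; the point is only that the model both assigns probabilities to what was observed and supports prediction.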
-------------------------------- Date and time: Monday 21st November 2011 at 16:00 Location: UG40, School of Computer Science Title: Perceptual Similarity, Visual Metaphor and Creativity Speaker: Prof. Bipin Indurkhya (http://faculty.iiit.ac.in/~bipin/) Institution: Cognitive Lab, International Institute of Information Technology, Hyderabad (http://www.iiit.ac.in/) Host: Dr. John Barnden Abstract: Researchers who study creativity in real-world situations have found that a primary hurdle facing human creativity lies in stepping outside of our habitual conceptual associations. A few techniques have been suggested to aid this process: making-the-familiar-strange and de-conceptualization, for example. The main objective of these techniques is to help the cognitive agent break the bonds of conceptual association it has acquired culturally and through a lifetime of experiences. In this process, imagery, perceptual similarities, and visual metaphors play a key role, as we will demonstrate using several examples. We are following a number of research veins to explore the role of imagery and visual metaphor in creativity and cognition, and in this talk I will present some of our results. In particular, we will present some experiments to compare and contrast mono-modal (text-text or image-image) vs. cross-modal (text-image) metaphors. We have also investigated how explicit imagery, presented before or after the metaphor, influences the comprehension of metaphor. We have argued, and demonstrated, in our past research, that computer-based systems can be very helpful in stimulating creativity. For instance, we found that incorporating one familiar but unrelated object in a picture setting, or presenting unrelated pairs of objects to a cognitive agent, and asking them to make a story out of it, or to make sense of it in some other way, stimulates their creativity and imagination. 
In another line of research, we are exploring the hypothesis that low-level perceptual similarity — that is, similarity based on low-level perceptual features like color, shape and texture — plays a key role in the creation of novel conceptual associations. In other words, we are claiming that if the unrelated object that is introduced into a picture or paired with another object bears some low-level perceptual similarity with other objects in the picture, or with the paired object, it is likely to be more effective in stimulating creativity than a random unrelated object. We will present here the results of some of our preliminary experiments to explore this hypothesis. As the similarity with respect to low-level perceptual features is determined algorithmically using image-based pattern matching, our approach can be used to design more effective computer-based creativity-support systems. We will present an outline of such an architecture and mention a number of possible application domains for such systems. -------------------------------- Date and time: Monday 5th December 2011 at 16:00 Location: UG40, School of Computer Science Title: Multi-objective Evolutionary Algorithms: Applications and Technology Speaker: Dr. Lyndon While (http://www.csse.uwa.edu.au/~lyndon) Institution: Walking Fish Group, The University of Western Australia (http://www.csse.uwa.edu.au/) Host: Prof. Xin Yao and Dr. Leandro Minku Abstract: Evolutionary algorithms have been applied successfully to a wide range of multi-objective optimisation problems, but the technology is still maturing. In this talk I will discuss some of the problems to which the WFG has applied MOEAs, including scheduling problems and problems from the mining industry, and some of the contributions that we have made to the underlying technology. Lyndon While leads the Walking Fish Group in Perth, Western Australia, which performs research using MOEAs in collaboration with local industry and other academics around the world. 
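The comparison operator at the heart of any MOEA is Pareto dominance: with several objectives there is generally no single best solution, only a front of trade-offs. A minimal sketch, assuming all objectives are minimised (illustrative only, not WFG code):

```python
def dominates(a, b):
    """a dominates b if a is no worse on every objective and strictly better
    on at least one (minimisation assumed for all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto front: points dominated by no other point."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (3, 3) is dominated by (2, 2); the other three points are incomparable.
front = non_dominated([(1, 5), (2, 2), (4, 1), (3, 3)])
```

An MOEA's selection and archiving machinery (and quality indicators such as hypervolume) are built on top of exactly this dominance relation.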
-------------------------------- Date and time: Monday 12th December 2011 at 16:00 Location: UG40, School of Computer Science Title: EPS College Research and Knowledge Transfer Support Speaker: Dr Paul Marshall Institution: Head-Research and Knowledge Transfer, College of Engineering and Physical Sciences, The University of Birmingham Host: Dr Leandro Minku Abstract: *** Note: non-AINC topic *** The talk will be an overview of the College's research support team and will inform us of the support which will be provided to us. -------------------------------- Date and time: Monday 23rd January 2012 at 16:00 Location: UG40, School of Computer Science Title: Crime and Punishment: studies in the evolution of co-operation Speaker: Prof. Peter Ross (http://www.soc.napier.ac.uk/~peter/) Institution: School of Computing, Edinburgh Napier University (http://www.napier.ac.uk/soc/Pages/Home.aspx) Host: Prof. Xin Yao Abstract: How did co-operative behaviour evolve? Theories abound: group selection, kin selection, evolutionary games, indirect reciprocity and so on. Within the game-theoretic strand, Axelrod's famous Iterated Prisoner's Dilemma studies had suggested that co-operative behaviour could successfully invade a population of non-co-operators. However, the payoffs typically involved facilitated such an invasion: co-operators could benefit each other significantly even when they encountered each other rarely. But in the Continuous Prisoner's Dilemma, payoffs may be very small, and co-operative behaviour then has huge difficulty invading a non-co-operative population. Shutters studied a form of third-party retaliation in which punishment of non-co-operators, even at a cost to the punisher, could promote co-operative behaviour, but this depended on the network (social) structure of interactions. In particular, scale-free networks seemed to produce anomalous results. 
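To fix ideas, here is a toy sketch of costly punishment in a one-shot Prisoner's Dilemma. The payoff numbers are hypothetical, chosen only to satisfy the usual ordering T > R > P > S, and this is a simplified second-party variant, not Shutters' third-party model:

```python
# Hypothetical payoffs: temptation 5, reward 3, punishment 1, sucker 0.
# Punishing a defector costs the punisher 1 and the punished defector 4
# (all numbers illustrative only).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(a, b, punish=False):
    """One encounter; if punish is set, a co-operator pays to punish a defector."""
    pa, pb = PAYOFF[(a, b)]
    if punish:
        if a == "C" and b == "D":
            pa, pb = pa - 1, pb - 4
        elif a == "D" and b == "C":
            pa, pb = pa - 4, pb - 1
    return pa, pb
```

With these numbers a defector exploiting a punishing co-operator nets 1 rather than 5, the same as mutual defection, so the temptation advantage disappears; whether this suffices for co-operation to spread depends, as the abstract notes, on payoff magnitudes and on the network structure of interactions.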
This talk presents some ongoing computational studies, in collaboration with Shutters, that explore this model further. The results presented are based on trillions of simulated encounters. Interestingly, it turns out that in this model co-operation can arise 'spontaneously' by means of small mutations even when starting with a totally non-cooperating and non-punishing population. -------------------------------- Date and time: Monday 6th February 2012 at 16:00 Location: UG40, School of Computer Science Title: 3-Dimensional Random Assignment Problems Speaker: Prof. Gregory Sorkin (http://www2.lse.ac.uk/management/people/gsorkin.aspx) Institution: Department of Management, London School of Economics and Political Science (http://www2.lse.ac.uk/management/home.aspx) Host: Prof. Xin Yao Abstract: The 2-dimensional assignment problem (minimum cost matching) is solvable in polynomial time, and it is known that a random instance of size n, with entries chosen independently and uniformly at random from [0,1], has expected cost tending to pi^2/6. In dimensions 3 and higher, there are natural Axial and Planar assignment generalizations. Both are NP-complete, but what is the expected cost for a random instance, and how well can a heuristic do? The asymptotic behavior remains an open question. For 3-dimensional Planar assignment, we give a ln n approximation algorithm, and for Axial assignment, an unrelated n^eps approximation algorithm. In higher dimensions, both algorithms fail dismally. Joint work with Alan Frieze. -------------------------------- Date and time: Monday 20th February 2012 at 16:00 Location: UG40, School of Computer Science Title: A Distributed Associative Memory for Cognitive Systems Speaker: Dr. Paul Baxter (http://www.plymouth.ac.uk/staff/pebaxter) Institution: School of Computing and Mathematics, Plymouth University (http://www.plymouth.ac.uk/) Host: Dr. 
Nick Hawes Abstract: Memory entails the capacity for the prior experiences of an agent to influence its ongoing and future behaviour. As such, it is necessarily a central feature of any account of a cognitive system. The typical use of memory in such architectures tends to focus on what memory does, rather than what memory is, leading to memory being viewed as a storage device that may be written to, and read from. Conversely, increasing indications from neuroscientifically-founded theory point to memory as having an inherently distributed associative network structure, which is intertwined with cognitive function. This leads to a reconsideration of memory, not solely as storage, but as an active component in its own right. Given that biological memory+cognition is the substrate for the best examples of the type of flexible and robust behaviour desired for synthetic agents, the questions are: what features of such an interpretation of what memory *is* may support such desired behaviour, and how may they be applied to augment the competencies of current cognitive systems? I will present some initial steps in the development of a memory system for cognitive architectures based on such principles, describing preliminary investigations with mobile robots, and discussing its ongoing application to a far more unconstrained and complex domain: human-robot interaction. In doing so, the notion of developmental systems will be drawn on, as a means of exploring an integrated account of the ontogeny of a memory-centred cognitive system. -------------------------------- Date and time: Monday 19th March 2012 at 16:00 Location: UG40, School of Computer Science Title: Adaptive Video Analytics Speaker: Dr. Mohamed Sedky (http://www.staffs.ac.uk/directory/viewperson.php?staffid=3550) Institution: Faculty of Computing, Engineering and Technology, University of Staffordshire (http://www.staffs.ac.uk/) Host: Dr. 
Rustam Stolkin Abstract: Whilst used in a broad range of applications, Video Surveillance still has a number of practical operational and cost constraints. Screen-to-camera ratios range from 1:4 up to 1:78, and the operator-to-screen ratio can be as high as 1:16. Studies show that, for a relatively empty scene, an operator watching two screens misses 95% of actions after 22 minutes. Simply, there are too many cameras to monitor, and video analysis (computer software) does a much better job of constantly watching them all than operators do. One of the main challenges of video analytics solutions is how the software can adapt itself to different illumination conditions, scene structures and object materials. Environmental factors can be detrimental to the functionality of analytics software. At Staffordshire University, we have developed Spectral-360, a novel physics-based object detection technology which emulates human vision to detect camouflaged objects under rapid illumination changes and in the presence of severe environmental conditions. Leveraging Spectral-360's advanced detection capabilities, the research team at Staffordshire University is currently developing a fully integrated end-to-end suite for advanced CCTV-based perimeter protection, asset monitoring, fall detection, and condition monitoring. -------------------------------- Date and time: Tuesday 10th July 2012 at 13:00 Location: School of Psychology, Hills building, room 3.24 Title: Understanding Human Hand Use to Motivate Design of Low-Dimensional Mechanical Hands Speaker: Dr. Aaron Dollar (http://www.seas.yale.edu/faculty-detail.php?id=30) Institution: Yale School of Engineering and Applied Science (http://www.seas.yale.edu/home.php) Host: Dr. Rustam Stolkin Abstract: “Despite decades of research, current robotic systems are unable to reliably grasp and manipulate a wide range of unstructured objects in human environments. 
Traditional approaches attempt to copy the immense mechanical complexity of the human hand in a stiff “robotic” mechanism along with complicated sensing and control schemes. Alternatively, by careful inclusion of adaptive underactuated transmissions and tuned compliance, we have been able to achieve a level of dexterity and reliability as yet unseen in the robotics community. I will describe our ongoing efforts to study human grasping and manipulation during the activities of daily living as well as work towards developing robust, open-loop grasping and dexterous manipulation capabilities in engineered systems including robotics, prosthetics, and small aerial vehicles.” Bio: Aaron Dollar is an Assistant Professor of Mechanical Engineering and Materials Science at Yale University. He is the Director of the GRAB Lab, which conducts research into robotic grasping and manipulation, prosthetics, and assistive and rehabilitation devices. Prof. Dollar is co-founder and editor of RoboticsCourseWare.org, an open repository for robotics pedagogical materials, and the recipient of a number of young investigator awards, including the 2010 Technology Review TR35 Young Innovator Award and the 2010 NSF CAREER Award. -------------------------------- Date and time: Tuesday 10th July 2012 at 15:00 Location: School of Psychology, Hills building, room 3.24 Title: TBA Speaker: Christoph Borst (http://www.robotic.dlr.de/Christoph.Borst) Institution: DLR German Aerospace Research Centre () Host: Dr. Rustam Stolkin Abstract: TBA: Robotic manipulation research at DLR -------------------------------- Date and time: Thursday 12th July 2012 at 15:00 Location: Mechanical Engineering, room B04 Title: TBA Speaker: Dr. Peter Kyberd (http://www.unb.ca/research/institutes/biomedical/people/faculty/peter-kyberd.html) Institution: Inst. Biomedical Engineering, University of New Brunswick () Host: Dr. 
Rustam Stolkin Abstract: TBA: Prosthetic grasping -------------------------------- Date and time: Thursday 12th July 2012 at 16:00 Location: School of Mechanical Engineering, room B04 Title: TBA Speaker: Dr. Marco Gabiccini (http://www.dimnp.unipi.it/gabiccini-m/) Institution: University of Pisa () Host: Dr. Rustam Stolkin Abstract: TBA: Haptics and dexterous manipulation research at Pisa -------------------------------- Date and time: Monday 20th August 2012 at 16:00 Location: Room 245, School of Computer Science Title: Computational Explorations in Alzheimer's Disease Speaker: Mark Rowan (http://www.tamias.co.uk/) Institution: School of Computer Science, The University of Birmingham (http://www.cs.bham.ac.uk/) Abstract: In this talk (no background medical knowledge required!), I will give an overview of my recent research in the field of computational modelling of neurological disorders. I am interested in using artificial neural network models to understand more about Alzheimer's disease in the brain. In particular, I am investigating the way in which reinforcement of learned memories ("synaptic compensation") during network damage leads to distinctive cascades of degradation, and the use of information theory to characterise the pathways and mechanisms of cognitive decline. I will present findings which suggest that current care methods focusing on recalling early memories may actually do more harm than good, and explain one reason why Alzheimer's disease may not be detected in patients until it's already too late to begin preventative treatment. Medical science is constantly looking for early-stage bio-markers of Alzheimer's disease so that it can be detected earlier and treatment can be more effective: I will introduce work currently in progress to apply these findings to realistic models of parts of the brain in order to look for such early-stage bio-markers. 
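A toy version of the synaptic deletion-and-compensation idea, in the spirit of classic Hopfield-network damage models rather than the talk's actual networks: store patterns Hebbian-style, delete a fraction of synapses, and optionally rescale the survivors so each unit's expected input strength is preserved. All parameters here are hypothetical.

```python
import random

def train_hopfield(patterns, n):
    """Hebbian weights for binary (+1/-1) patterns; zero self-connections."""
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / n
    return w

def damage(w, frac, compensate, rng):
    """Delete a fraction frac of synapses; if compensate, rescale survivors by
    1/(1-frac) so each unit's expected total input stays roughly constant."""
    n = len(w)
    scale = 1.0 / (1.0 - frac) if compensate else 1.0
    for i in range(n):
        for j in range(n):
            if i != j:
                w[i][j] = 0.0 if rng.random() < frac else w[i][j] * scale
    return w

def recall(w, state, steps=5):
    """Synchronous updates: each unit takes the sign of its weighted input."""
    n = len(w)
    for _ in range(steps):
        state = [1 if sum(w[i][j] * state[j] for j in range(n)) >= 0 else -1
                 for i in range(n)]
    return state

p = [1, -1, 1, -1, 1, -1, 1, -1]   # one stored pattern
w = train_hopfield([p], 8)
```

In models of this family, compensation preserves recall under moderate deletion, but because it reinforces whatever the damaged network currently retrieves, it can also lock in errors as damage accumulates, which is one route to the cascades of degradation mentioned in the abstract.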
-------------------------------- Date and time: Monday 1st October 2012 at 16:00 Location: UG05, Learning Centre Title: CANCELLED -- How do people make choices about what to do given limits on what they can do? (In terms of human information gathering behaviours) Speaker: Xiuli Chen Institution: School of Computer Science, The University of Birmingham Abstract: I am interested in how people gather information in service of making decisions. People gather information for a variety of reasons, e.g. medical diagnosis, purchasing, etc. A critical concern is understanding how people balance the potential advantage gained by gathering information against the time or money spent. For example, when people gather information in service of, say, choosing a holiday, they are selective about what information they read, balancing the potential benefits of reading customer reviews, for example, against the required time. My starting point is the assumption that what people choose to do is the solution to an optimal control problem defined by an adaptation environment, the information processing bounds and a utility function. I seek to test the extent to which what information people choose to gather can be explained as the solution to an optimal control problem defined by these three factors. One well-known limit on what information to gather is the bound on how that information can be integrated. However, ‘information integration’ usually refers to ‘external information integration’, e.g. finding an average of the sampled numbers. In contrast, there is another level of information integration, internal information integration, e.g. integrating learned information from experience. In our experiments, we isolated the internal information integration from the external information integration by providing participants with the external information integration. 
The data analysis provides evidence that people are able to integrate the useful information from their experience for their goals. An experiment focusing on the external information integration and future work will be discussed. -------------------------------- Date and time: Monday 22nd October 2012 at 16:00 Location: UG05, Learning Centre Title: Genetic Improvement Programming Speaker: Dr. William Langdon (http://www0.cs.ucl.ac.uk/staff/w.langdon/) Institution: CREST, Department of Computer Science, University College London (http://crest.cs.ucl.ac.uk/) Host: Dr. Leandro Minku Abstract: Evolutionary computing, particularly genetic programming, can optimise software and software engineering, including evolving test benchmarks, search meta-heuristics, protocols, composing web services, improving hashing and garbage collection, redundant programming and even automatically fixing bugs. Often there are many potential ways to balance functionality with resource consumption, but a human programmer cannot try them all. Also, the optimal trade-off may be different on each hardware platform and may vary over time or as usage changes. It may be that genetic programming can automatically suggest different trade-offs for each new market. Recent results include substantial speed-ups gained by generating a new version of a program for a special case. -------------------------------- Date and time: Monday 5th November 2012 at 16:00 Location: UG05, Learning Centre Title: Robust Background Subtraction for Automated Detection and Tracking of Targets in Wide Area Motion Imagery Speaker: Dr. Simon Maskell (http://www.simonmaskell.com/) Institution: QinetiQ (http://www.qinetiq.com/Pages/default.aspx) Host: Dr. Jeremy Baxter Abstract: Performing persistent surveillance of large populations of targets is increasingly important in both the defence and security domains. In response to this, Wide Area Motion Imagery (WAMI) sensors with wide FoVs are growing in popularity. 
Such WAMI sensors simultaneously provide high spatial and temporal resolutions, giving extreme pixel counts over large geographical areas. The ensuing data rates are such that either very high-bandwidth data links are required (e.g. for human interpretation) or close-to-sensor automation is required to down-select salient information. For the latter case, we use an iterative quad-tree optical-flow algorithm to efficiently estimate the parameters of a perspective deformation of the background. We then use a robust estimator to simultaneously detect foreground pixels and infer the parameters of each background pixel in the current image. The resulting detections are referenced to the coordinates of the first frame and passed to a multi-target tracker. The multi-target tracker uses a Kalman filter per target and a Global Nearest Neighbour approach to multi-target data association, thereby including statistical models for missed detections and false alarms. We use spatial data structures to ensure that the tracker can scale to analysing thousands of targets. We demonstrate that real-time processing (on modest hardware) is feasible on an unclassified WAMI infra-red dataset consisting of 4096 by 4096 pixels at 1Hz simulating data taken from a wide FoV sensor on a UAV. With low latency and despite intermittent obscuration and false alarms, we demonstrate persistent tracking of all but one (low-contrast) vehicular target, with no false tracks. -------------------------------- Date and time: Wednesday 14th November 2012 at 16:00 Location: UG10, Learning Centre Title: Decoding complex facial expressions - from cognition to computation Speaker: Dr. Christian Wallraven (http://cogsys.korea.ac.kr/People.html) Institution: Cognitive Systems Lab, Dept. of Brain & Cognitive Engineering, Korea University (http://cogsys.korea.ac.kr/Cognitive_Systems.html) Host: Prof. 
Ales Leonardis Abstract: The face is capable of producing an astonishing variety of movements ranging from larger scale head movements to minute muscle twitches that are barely visible. Equally astonishing are the perceptual and cognitive processes with which we humans decode these signals in order to identify someone's particular smile, read the mood a person is in, or detect whether a comment was meant seriously or ironically. In both the cognitive and the computational sciences, however, the focus of research has been largely on the so-called universal expressions - expressions such as sad and happy that carry strong emotional content and that are commonly identified across cultures. While important, in daily life these universal expressions occur relatively rarely, with conversational and communicative facial expressions such as slight smiles, or bored faces being much more common. Incidentally, these expressions are usually also much more subtle in terms of the facial movements involved, making them much harder to detect and process computationally, for example. In this talk, I will describe our recent research in two areas: first, a summary of cognitive, cross-cultural studies investigating the processes underlying the decoding of complex, conversational facial expressions. Second, I will discuss our initial work on computational approaches towards processing and interpreting conversational facial expressions using dynamic graph models such as Conditional Random Fields. Short bio: Christian Wallraven is head of the Cognitive Systems Lab (http://cogsys.korea.ac.kr) in the Department of Brain and Cognitive Engineering at Korea University. His research focuses on understanding the algorithms employed by the human cognitive system using machine learning and computer graphics, coupled with perceptual and cognitive experiments. 
In addition, the lab also works on transferring this knowledge to implementations of intelligent, artificial cognitive systems that can be used in robotics, computer vision, computer animation, and clinical applications. Active research areas in the Cognitive Systems Lab are: face perception, facial expressions for communication, multisensory object perception, object and scene classification, and computational aesthetics. -------------------------------- Date and time: Monday 19th November 2012 at 16:00 Location: UG05, Learning Centre Title: Evaluating Machine Consciousness Speaker: Prof. Igor Aleksander (http://www3.imperial.ac.uk/people/i.aleksander) Institution: Department of Electrical and Electronic Engineering, Imperial College London (http://www3.imperial.ac.uk/electricalengineering) Host: Prof. Xin Yao Abstract: The paradigm of machine consciousness is now over 10 years old. The talk will review the evolution of the topic so far and consider what might usefully happen in the future. Models that have arisen will be discussed and an evaluative methodology based on our intuitive grasp of what consciousness is will be suggested. -------------------------------- Date and time: Wednesday 21st November 2012 at 16:00 Location: UG09, Learning Centre Title: Systems Engineering for Visual Cognition Speaker: Prof. Visvanathan Ramesh (http://fias.uni-frankfurt.de/neuro/ramesh/) Institution: Frankfurt Institute of Advanced Studies & Goethe University, Frankfurt am Main (http://fias.uni-frankfurt.de/) Host: Prof. Aaron Sloman Abstract: Rapid advances in sensing, computing, communication, machine learning, artificial intelligence algorithms, brain sciences, and allied fields have changed the landscape of the type of intelligent system functions that can be developed and demonstrated. We have been following a systematic engineering approach for design, analysis, validation and performance quantification of intelligent vision systems for over 20 years. 
In our methodology, system design is seen as the process by which user specifications are translated to Bayesian models of application contexts and the subsequent design of approximate inference engines. Vision is seen as a two-stage process: “Indexing” followed by “Estimation”. “Indexing” is the step wherein hypotheses and feature descriptors may be generated through the design of specific filters that exploit contextual regularities. “Estimation” is a detailed step that takes as input the hypotheses and feature descriptors to produce a world state estimate. The state estimate, along with online estimates of uncertainty, is then used for active planning and control. The architecture that we have used has relationships to dual-system mental models for cognition in the psychology literature. We will elaborate our design philosophy through concrete examples in video monitoring and discuss systems-level issues such as: granularity of world models, choice and specification of prior distributions, choice of representational schemes for features, choice of tools and methodologies for inference etc. === His research interests are very broad and part of his reason for coming here is to explore possible increased research collaboration with us, including collaboration with people in AI/Robotics/Neuroscience and biology. (Informal collaboration started about a year ago.) He used to work at Siemens research in the USA http://www.siemens.com/innovation/apps/pof_microsite/_pof-spring-2011/_html_en/portrait-of-ramesh-visvanathan.html If anyone would like to meet and talk to him during his visit, or join him for dinner on Wed 21st please email a.sloman@cs.bham.ac.uk === -------------------------------- Date and time: Monday 26th November 2012 at 16:00 Location: UG05, Learning Centre Title: Towards a Unified Framework for Intelligent Robotics Speaker: Prof.
Honghai Liu (http://www.liuh.myweb.port.ac.uk/Home.html) Institution: School of Creative Technologies, University of Portsmouth (http://www.port.ac.uk/departments/academic/ct/) Host: Dr. Jeremy Wyatt Abstract: This talk introduces research conducted in the Intelligent Systems and Biomedical Robotics Group at the University of Portsmouth, focusing on a theoretical framework for bridging the gap among different types of data representations and its application to human skill transfer. After introducing the theoretical framework, in which conventional robotics is generalized, three case studies are presented in which the framework has been implemented. First, a three-layered solution is presented to recognize human hand gestures and transfer them to prosthetic manipulation in terms of numeric values, Gaussian mixtures and data dependency structure. Then, generalized robot kinematics is employed to form templates to filter suspicious human motion behavior in the context of surveillance. Finally, a case study on speed estimation is presented, addressing the effectiveness of the implementation in intelligent transportation systems. The speaker believes that contextual information is the key to efficient human skill transfer. ========= Biosketch of Honghai Liu http://userweb.port.ac.uk/~liuh Honghai Liu received the PhD in Intelligent Robotics from King's College London, UK. Dr. Liu is Professor of Intelligent Systems at the University of Portsmouth, UK, where he heads the Intelligent Systems and Biomedical Robotics Group. He previously held research appointments at King's College London and the University of Aberdeen, and project-leader appointments in the large-scale industrial control and systems integration industry.
His main research interests include approximate computation, pattern recognition, multi-sensor based information analytics, intelligent robotics and their practical applications, especially in cognition-driven biomechatronics and information abstraction. He has (co)edited one book and five conference proceedings, and (co)authored over 250 peer-reviewed journal and conference papers. He is a Senior Member of the IEEE and a Fellow of the IET. -------------------------------- Date and time: Monday 3rd December 2012 at 16:00 Location: UG05, Learning Centre Title: Multiple Criteria Decision Making in Control and Systems Design Speaker: Prof. Peter Fleming (http://www.shef.ac.uk/acse/staff/peter_fleming) Institution: Department of Automatic Control and Systems Engineering, The University of Sheffield (http://www.shef.ac.uk/acse/index) Host: Prof. Xin Yao Abstract: Designs arising in control and systems can often be conveniently formulated as multi-criteria decision-making problems. Inevitably, these problems often comprise a relatively large number of criteria. Many-objective optimisation poses difficulties for multiobjective optimisation algorithms, which have been designed to solve problems with two or three objectives, and alternative approaches for addressing many objectives will be described. Through close association with designers in industry, a range of machine learning tools and associated techniques have been devised to address the special requirements of many-criteria decision-making. These include visualisation and analysis tools to aid the identification of conflicting and non-conflicting criteria, interactive preference articulation techniques to assist in interrogating the search region of interest, and methods for exploring design options for cases where constraints may be relaxed or tightened. Industrial design exercises will demonstrate these approaches.
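One reason many objectives are hard, as the abstract notes, is that Pareto dominance loses its discriminating power as the number of objectives grows: randomly drawn solutions become mutually non-dominated, so dominance-based selection barely ranks them. A minimal illustrative sketch (not from the talk; `dominates` and `fraction_non_dominated` are hypothetical helper names):

```python
import random

def dominates(a, b):
    """True if solution a Pareto-dominates b (minimisation):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def fraction_non_dominated(n_objectives, n_points=200, seed=0):
    """Estimate the fraction of random points dominated by no other point."""
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(n_objectives)] for _ in range(n_points)]
    nd = sum(1 for p in pts if not any(dominates(q, p) for q in pts if q is not p))
    return nd / n_points

# With 2 objectives only a small fraction of random points is non-dominated;
# with 10 objectives almost all are, so dominance alone barely discriminates.
```

Running `fraction_non_dominated` for increasing objective counts makes the motivation for the alternative many-objective approaches in the talk concrete.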
------ Peter Fleming is Professor of Industrial Systems and Control in the Department of Automatic Control and Systems Engineering and was Director of the Rolls-Royce University Technology Centre for Control and Systems Engineering at the University of Sheffield, UK from 1993 to 2012. His control and systems engineering research interests include control system design, system health monitoring, multi-criteria decision-making, optimisation and scheduling, and applications of e-Science. He is a Fellow of the Royal Academy of Engineering, a Fellow of the International Federation of Automatic Control, a Fellow of the Institution of Engineering and Technology, a Fellow of the Institute of Measurement and Control, and is Editor-in-Chief of the International Journal of Systems Science. Further details may be found at http://www.shef.ac.uk/acse/staff/peter_fleming -------------------------------- Date and time: Monday 10th December 2012 at 16:00 Location: UG05, Learning Centre Title: Knowledge gap detection for interactive continuous learning of categorical knowledge Speaker: Dr. Danijel Skocaj (http://vicos.fri.uni-lj.si/danijels/) Institution: Visual Cognitive Systems Laboratory, Faculty of Computer and Information Science, University of Ljubljana (http://vicos.fri.uni-lj.si/) Host: Prof. Ales Leonardis Abstract: Interactive continuous learning is an important characteristic of a cognitive agent that is supposed to operate and evolve in an ever-changing environment. Especially when acquiring categorical knowledge, communication with a human tutor can significantly facilitate the incremental learning processes. In this talk I will present our work on this research topic. I will address this problem from both the theoretical and the implementation point of view. The representations and mechanisms that facilitate continuous learning of visual concepts in dialogue with a tutor will be presented.
Since an important part of the active learning process is detection of knowledge gaps, a formal model for knowledge gap detection in the case of acquiring categorical knowledge will be presented. Once the gaps are detected, plans for filling these gaps can be created and executed. I will present an integrated system that implements this interactive continuous learning paradigm on a real robot. I will show how the beliefs about the world are created by processing visual and linguistic information and how they are used for planning the system behaviour aiming at satisfying its internal drive - to extend its knowledge. -------------------------------- Date and time: Monday 14th January 2013 at 16:00 Location: UG05, Learning Centre Title: It doesn't matter what you do, only who does it Speaker: Prof. Martin Shepperd (http://people.brunel.ac.uk/~csstmms/) Institution: School of Information Systems, Computing & Maths, Brunel University (http://www.brunel.ac.uk/about/acad/siscm/) Host: Dr. Leandro Minku and Prof. Xin Yao Abstract: The reach and complexity of computationally-intensive research methods have grown greatly in recent years. This is particularly true of machine learning usage by scientists. Typically machine learning induces prediction models and classifiers from known cases to solve new, unseen cases[1]. To achieve this many competing algorithms have been proposed, yet no one approach dominates, i.e., consistently outperforms all other techniques[2]. Here we show by means of a meta-analysis of 600 experimental results that the choice of algorithm has little impact upon performance (1.3%). By contrast, the major (31%) explanatory factor is the research group. Put crudely, it matters twenty-five times more who does the work than what is done. This is problematic if we are seeking reproducibility[3].
Having explored the data for confounders we propose that this researcher bias is the product of (i) differences in researcher expertise, (ii) incomplete reporting and (iii) preference for particular types of result. To overcome these sources of bias we argue that computational researchers need (i) more intergroup studies and intergroup exchange, (ii) improved reporting protocols and (iii) blind analysis[4]. Keywords: machine learning, classifier, prediction system, meta-analysis, researcher bias. References: [1] Michie, D., D.J. Spiegelhalter, and C.C. Taylor, eds. Machine learning, neural and statistical classification. Ellis Horwood Series in Artificial Intelligence, Ellis Horwood: Chichester, Sussex, UK, 1994. [2] Menzies, T. and M. Shepperd, "Editorial: Special issue on repeatable results in software engineering prediction," Empirical Software Engineering, vol. 17, pp. 1-17, 2012. [3] Fomel, S. and J. Claerbout, "Guest Editors' Introduction: Reproducible Research," Computing in Science and Engineering, vol. 11, pp. 5-7, 2009. [4] Heinrich, J. "Benefits of Blind Analysis Techniques," University of Pennsylvania CDF/MEMO/STATISTICS/PUBLIC/6576 Version 1, 2003. Biography: Martin Shepperd received a PhD in computer science from the Open University in 1991 for his work in measurement theory and its application to empirical software engineering. He is professor of software technology at Brunel University, London, UK. He has published more than 150 refereed papers and three books in the areas of software engineering and machine learning. He was editor-in-chief of the journal Information & Software Technology (1992-2007) and was Associate Editor of IEEE Transactions on Software Engineering (2000-4). Presently he is an Associate Editor of the journal Empirical Software Engineering. He was Program Chair for Metrics 2001 and 2004 and ESEM 2011.
He has supervised more than 20 PhD students to completion and been external examiner for more than 25, including for universities in Germany, Sweden, Malta, Norway and Australia. He also previously worked for a number of years as a software developer for a major UK bank. He is a Fellow of the BCS. Recent Journal Publications: Menzies, T. and Shepperd, M. “Special issue on repeatable results in software engineering prediction”, Empirical Software Engineering, 17(1-2) pp1-17, 2012. Shepperd, M. and MacDonell, S. “Evaluating prediction systems in software project estimation”, Information and Software Technology, 54(8), pp 820–827, 2012. Song, Q. and Shepperd, M., “Predicting software project effort: A grey relational analysis based method,” Expert Systems with Applications, 38(6) pp 7302-7316, 2011. Q. Song, Z. Jia, M. Shepperd, S. Ying, and J. Liu, "A General Software Defect-Proneness Prediction Framework," IEEE Transactions on Software Engineering, 37(3), pp356-370, 2011. Nasseri, E. Counsell, S. and Shepperd, M. “Class Movement and Re-location: an Empirical Study of Java Inheritance Evolution”, J. of Systs & Softw., 83(2), pp303-315, 2010. MacDonell, S. Shepperd, M. Kitchenham, B. Mendes, E. “How Reliable Are Systematic Reviews in Empirical Software Engineering?” IEEE Transactions on Software Engineering, 36(5), pp676-687, 2010. Wang, Y., Song, Q., MacDonell, S., Shepperd, M.J. and Shen, J. “Integrate the GM(1,1) and Verhulst Models to Predict Software Stage-Effort”. IEEE Trans on Systems, Man & Cybernetics (Part C). 39(6), pp647-658, 2009. Jayal, A., and Shepperd, M.J. “The Problem of Labels in e-Assessment of Diagrams”. ACM J. of Educational Resources in Computing, 8(4), pp1-13, 2009. Song, Q. and Shepperd, M.J. Chen, X. Liu, J. “Can k-NN imputation improve the performance of C4.5 with small software project data sets? A comparative evaluation,” J. of Systs & Softw., 81(12), pp2361-2370, 2008. Song, Q. and Shepperd, M.J., ‘Missing Data Imputation Techniques’.
Intl J. of Bus Intelligence & Data Mining, 2(3), pp261-291, 2007. Jørgensen, M. and Shepperd, M.J. ‘A Systematic Review of Software Development Cost Estimation Studies’, IEEE Transactions on Software Engineering, 33(1), pp33-53, 2007. -------------------------------- Date and time: Monday 28th January 2013 at 16:00 Location: UG05, Learning Centre Title: Salient Features and Snapshots in Time: an interdisciplinary perspective on object representation Speaker: Veronica Arriola and Zoe Demery (http://www.cs.bham.ac.uk/~vxa855/) Institution: School of Computer Science, and School of Biosciences, The University of Birmingham (http://www.birmingham.ac.uk/) Host: Dr. Leandro Minku Abstract: Faced with a vast, dynamic environment, some animals and robots often need to acquire and segregate information about objects. The form of their internal representation depends on how the information is utilised. Sometimes it should be compressed and abstracted from the original, often complex, sensory information, so it can be efficiently stored and manipulated, for deriving interpretations, causal relationships, functions or affordances. We discuss how salient features of objects can be used to generate compact representations, later allowing for relatively accurate reconstructions and reasoning. Particular moments in the course of an object-related process can be selected and stored as ‘key frames’. Specifically, we consider the problem of representing and reasoning about a deformable object from the viewpoint of both an artificial and a natural agent. -------------------------------- Date and time: Monday 11th February 2013 at 16:00 Location: UG05, Learning Centre Title: Information-selectivity in Alzheimer's disease progression Speaker: Mark Rowan (http://www.tamias.co.uk) Institution: School of Computer Science, The University of Birmingham (http://www.cs.bham.ac.uk) Abstract: In this talk, I will discuss the role of homeostatic synaptic scaling in neural networks. 
Scaling has two complementary roles: moderating activity during learning to prevent runaway activation, and compensating for damage to other parts of the network. However, synaptic scaling may have a darker side: it could be a means for progression of disease pathology in Alzheimer's disease. Using a neural network model, I will show how Alzheimer's disease may actually attack neurons based on their significance to the network. Neurons which are less significant die off first, leading to the deaths of other low-significance neurons. This could explain why the symptoms of Alzheimer's disease take so long to appear after disease onset. I will then present work using a biologically-realistic spiking neural simulation which demonstrates the principles in a clinically-relevant way. This is a short talk (~30 min) which I will be giving to an external audience on Wednesday 13th Feb, so any critical feedback and discussion at the end of my talk will be very welcome! -------------------------------- Date and time: Monday 18th February 2013 at 16:00 Location: UG05, Learning Centre Title: Cooperative bare bones swarm optimisation Speaker: Dr Tim Blackwell (http://www.gold.ac.uk/computing/staff/t-blackwell/) Institution: Department of Computing, Goldsmiths, University of London (http://www.gold.ac.uk/computing/) Host: Dr. Peter Lewis Abstract: Although particle swarm optimisation, a heuristic global optimisation technique, is almost twenty years old, the complex dynamics of the stochastic particle motion remain mysterious. Bare bones PSO simplifies particle motion by removing velocity and basing the update rule on normal sampling, and is consequently easier to understand. However, BB does not perform well. I will demonstrate how comparable performance to standard PSO can be obtained by uncovering a hidden parameter, assumed to be unity, and with a subtle jumping mechanism.
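As background to the velocity-free update just described: in Kennedy's original bare bones formulation, each particle is resampled from a Gaussian centred midway between its personal best and the swarm's global best, with standard deviation equal to their separation. A minimal sketch of that baseline rule only (the hidden parameter and jumping mechanism from the talk are not reproduced; `bare_bones_step` is a hypothetical name):

```python
import random

def bare_bones_step(personal_bests, global_best, rng):
    """One bare-bones PSO iteration: each particle is resampled per
    dimension from N((p + g) / 2, |p - g|), where p is the particle's
    personal best and g the swarm's global best (Kennedy, 2003)."""
    new_positions = []
    for p in personal_bests:
        new_positions.append([
            rng.gauss((pi + gi) / 2.0, abs(pi - gi))
            for pi, gi in zip(p, global_best)
        ])
    return new_positions
```

Note that when a particle's personal best coincides with the global best the sampling variance collapses to zero, one source of the stagnation that the talk's jumping mechanism is designed to counter.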
I will also consider optimisation in high dimensions and demonstrate how subspace search allows BB to optimise (for some classes of function) in high dimensional spaces. -------------------------------- Date and time: Wednesday 6th March 2013 at 12:00 Location: B01, Mechanical Engineering Building Title: Shared Control of Multiple Mobile Robots Speaker: Dr. Antonio Franchi (http://antoniofranchi.com/robotics/) Institution: Max Planck Institute for Biological Cybernetics (http://www.kyb.mpg.de/research/dep/bu/hri/) Host: Dr. Jeremy Wyatt Abstract: This talk will show some recent theoretical and experimental results in the multi-robot field, with special attention to the UAV (Unmanned Aerial Vehicle) case. The main topic will be the presentation of a novel control framework in which a team of human assistants is able to interact with a group of semi-autonomous mobile robots by using haptic interfaces. Within this framework, special attention will be given to the following topics: formation control by means of camera sensors, generalized connectivity and rigidity maintenance of the robotic network, mutual localization without identity measurements, and human/multi-robot collaboration. The talk will present both the theoretical methodologies used in the control algorithms and the practical aspects behind the application to a group of multiple quadrotor UAVs. ------ Bio: Antonio Franchi is the project leader of the "Autonomous Robotics and Human-machine Systems" group at the Max Planck Institute for Biological Cybernetics, Germany. He received the Laurea degree "summa cum laude" in Electronic Engineering in 2005 and the Ph.D. degree in control and system theory in 2009, both from Sapienza University of Rome, Italy. He was a visiting student at the University of California at Santa Barbara in 2009 and a senior research scientist with the Max Planck Institute for Biological Cybernetics from 2010 to 2012.
His main research interests include autonomous systems and robotics, with a special regard to planning, control, estimation, human-machine interaction, haptics, and robotics software architectures. WebSites: http://www.kyb.mpg.de/research/dep/bu/hri/, http://antoniofranchi.com/robotics/ -------------------------------- Date and time: Monday 11th March 2013 at 16:00 Location: UG05, Learning Centre Title: 101 Uses for a Non-Directed Probabilistic Graphical Model Speaker: Prof. John McCall (http://www.rgu.ac.uk/dmstaff/mccall-john) Institution: Robert Gordon University, Aberdeen (http://www.rgu.ac.uk/) Host: Dr. Huanhuan Chen and Dr. Shuo Wang Abstract: Markov Network EDAs use a non-directed Probabilistic Graphical Model (PGM) based on the Gibbs-Boltzmann Distribution to estimate the distribution of good solutions in a bit-string search space. Iterative sampling and re-estimation yield a metaheuristic search which has proved successful and competitive with other EDAs on some important problems, including Ising Spin Glass and Maximum Satisfaction. The Markov Network PGM has a number of interesting and unique theoretical properties and practical applications including: use as a surrogate fitness model; transferable learning embedded in model parameters; interpretations and refinements from the perspective of information geometry and potential uses in problem classification. The talk will explore these themes and outline current research directions. -------------------------------- Date and time: Monday 18th March 2013 at 16:00 Location: UG05, Learning Centre Title: The Dendritic Cell Algorithm: Review and Evolution Speaker: Dr. Julie Greensmith (http://www.cs.nott.ac.uk/~jqg/) Institution: School of Computer Science, University of Nottingham (http://www.nottingham.ac.uk/computerscience/index.aspx) Host: Dr. Christine Zarges Abstract: The Dendritic Cell Algorithm (DCA) is the newest of the mainstream artificial immune system algorithms. 
It is a data fusion and classification algorithm used primarily for anomaly detection problems. It provides a mechanism of associative classification in ordered or time-series data. It has been applied to a number of real world applications including computer network security, autonomous robotics, embedded systems and standard machine learning datasets. The DCA is inspired by the dendritic cells of the human immune system, and is derived from an abstract model of natural cell function. The major criticisms of the DCA to date have included the fact that if a single DC is used then the system function equates computationally to a filtered linear classifier. Additionally, the mapping process between domain and signal/antigen requires a considerable amount of expert knowledge. Thirdly, numerous variants of the DCA now exist, all with slightly different setups, parameter makeups and data mapping processes. This also includes the recent hybrid fuzzy DCA, which further confuses the issue. As part of this talk I will examine examples of DCA applications to date, in addition to presenting a clear definition of the most recent deterministic DCA. The basics of artificial immune systems will be presented, placing the evolution of the DCA into context. Details of the algorithm are presented, from its initial conception as an abstract model through to an applied algorithm. As part of this talk I will also present conjecture as to the next steps for the development of an archetypal DC-based algorithm. -------------------------------- Date and time: Friday 26th April 2013 at 11:00 Location: UG06, Learning Centre Title: Active exploration and adaptation in autonomous robot learning Speaker: Dr. Ales Ude (http://www.ijs.si/~aude/) Institution: Jozef Stefan Institute, Ljubljana, Slovenia () Host: Prof. Ales Leonardis Abstract: Understanding how robots can autonomously acquire new sensorimotor knowledge is a long-standing issue in cognitive robotics.
In the first part of my talk I will present a new methodology for skill learning that combines ideas from statistical generalization and reinforcement learning. By associating the known motor patterns with observable parameters describing the task, task-specific generalization of motor knowledge can be achieved. This way we obtain a manifold of movements, whose dimensionality is usually much smaller than the dimensionality of the full space of movements. Next we refine the policy by means of reinforcement learning on the approximating manifold, which results in a learning problem constrained to a low dimensional manifold. We developed a reinforcement learning algorithm with an extended parameter set, which combines learning in the constrained domain with learning in the full space of parametric movement primitives, to explore also actions outside of the initial approximating manifold. In the second part of this talk I will present an approach that combines object manipulation with visual processing techniques originating in robust statistics. The developed system can autonomously learn visual models of unknown objects, which can later be used for recognition. The proposed approach is robust because, firstly, it enables the robot to reliably segment unknown objects from the background; secondly, it does not require long term feature tracking; and thirdly, it enables the acquisition of state-of-the-art statistical models of object appearance. By making use of force sensing, the robot can react to unexpected collisions with other objects during exploration, thus expanding the variety of scenes that can be dealt with by the proposed system. The approach used for segmentation at the time of exploration is also used when the task is to recognize an object. This way segmentation becomes beneficial as a preprocessing step for object recognition.
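The statistical generalization step described in the first part of the abstract (predicting policy parameters for a new task instance from previously learned task/policy pairs) can be caricatured as a kernel-weighted regression over demonstrated movements. A toy sketch under that assumption; `generalize_policy`, the Gaussian kernel, and the bandwidth are illustrative choices, not the talk's actual method:

```python
import numpy as np

def generalize_policy(task_params, policy_params, query, bandwidth=1.0):
    """Predict policy parameters for a new task parameter `query` as a
    Gaussian-kernel weighted average of previously learned (task, policy)
    pairs. Nearby demonstrated tasks contribute most to the prediction."""
    T = np.asarray(task_params, dtype=float)    # (n, d_task)
    P = np.asarray(policy_params, dtype=float)  # (n, d_policy)
    q = np.asarray(query, dtype=float)
    d2 = np.sum((T - q) ** 2, axis=1)           # squared distances to query
    w = np.exp(-d2 / (2 * bandwidth ** 2))      # kernel weights
    return (w[:, None] * P).sum(axis=0) / w.sum()
```

In the framework the abstract describes, a prediction like this would only initialise the policy; reinforcement learning on and around the approximating manifold then refines it.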
-------------------------------- Date and time: Monday 29th April 2013 at 16:00 Location: UG05, Learning Centre Title: Real Time Model Adaptation for Non-Stationary Systems Speaker: Mr. Hao Chen (http://www.linkedin.com/pub/hao-chen/28/901/257) Institution: School of Systems Engineering, University of Reading (http://www.reading.ac.uk/sse/) Host: Prof. Xin Yao Abstract: In this talk, I will present three new modelling approaches for non-linear and non-stationary data sets, which are commonly generated in many systems including Radar, Sonar, communications, instrumentation, seismic exploration, speech processing and recognition, etc. Signal processing functions usually operate based on a pre-set model, or the system structure is fixed. Although this provides a simple solution, it is highly inefficient, especially for non-stationary systems that are common in practice. This makes it highly desirable to adapt the model so that it can capture the true underlying dynamics and accurately predict the output for unseen data. This talk will particularly focus on on-line real time model adaptation approaches, which are an important class of model construction algorithms that deal with model structure and/or parameter updating on the arrival of new data. Firstly, I will describe the tunable radial basis function (RBF) network for on-line system identification. Based on an RBF network with individually tunable nodes and a fixed small model size, the weight vector is adjusted online using the multi-innovation recursive least square (MRLS) algorithm. When the residual error of the RBF network becomes large despite the weight adaptation, an insignificant node with little contribution to the overall system is replaced by a new node. Structural parameters of the new node can be optimized in two ways (a powerful version and a fast version). I will then describe a new adaptive multiple modelling approach. Simulation results will be given in comparison with some typical alternatives.
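The recursive least-squares weight adaptation underlying the tunable RBF approach can be sketched for a fixed set of nodes. This is a generic RLS-on-RBF sketch under stated assumptions: the class name `OnlineRBF`, the forgetting factor, and the initialisation are illustrative, and the node-replacement and multi-innovation extensions described in the talk are omitted:

```python
import numpy as np

class OnlineRBF:
    """Fixed-size Gaussian RBF network whose output weights are adapted
    online by standard recursive least squares (RLS) with exponential
    forgetting. Only the weight-update part of the talk's approach is
    sketched here."""

    def __init__(self, centres, width=1.0, forgetting=0.99):
        self.centres = np.asarray(centres, dtype=float)
        self.width = width
        self.lam = forgetting
        n = len(centres)
        self.w = np.zeros(n)
        self.P = np.eye(n) * 1e3  # inverse correlation matrix estimate

    def _phi(self, x):
        # Gaussian basis function activations for input x
        d2 = np.sum((self.centres - x) ** 2, axis=1)
        return np.exp(-d2 / (2 * self.width ** 2))

    def predict(self, x):
        return float(self._phi(np.asarray(x, dtype=float)) @ self.w)

    def update(self, x, y):
        phi = self._phi(np.asarray(x, dtype=float))
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)  # RLS gain
        self.w = self.w + k * (y - phi @ self.w)            # innovation step
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
```

Streaming (x, y) samples through `update` tracks a slowly drifting mapping, which is the point of the forgetting factor in non-stationary settings.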
-------------------------------- Date and time: Friday 17th May 2013 at 16:00 Location: UG09, Learning Centre Title: Online Kernel Density Estimation for learning and classification Speaker: Dr. Matej Kristan (http://www.vicos.si/matejk/) Institution: Visual Cognitive Systems Laboratory, Faculty of Computer and Information Science, University of Ljubljana (http://www.vicos.si/Main_Page) Host: Prof. Ales Leonardis Abstract: Many practical applications require building probabilistic models of a perceived environment, or of an observed process, in the form of probability density functions (pdf) over some moderate-dimensional feature space. Building the pdf models from large batches of data may be computationally infeasible. Moreover, all the data may not be available in advance. We might want to observe some process for an indefinite duration, while continually providing the best estimate of the pdf from the data observed so far. To guarantee the model's low computational complexity at its application, the model should remain simple enough even after observing many new data-points. In this talk I will present an approach to online estimation of probability density functions, which is based on kernel density estimation (KDE). The approach is called online Kernel Density Estimation (oKDE) and generates a Gaussian mixture model from a stream of data-points. It retains the nonparametric quality of a KDE and allows adaptation from as little as a single data-point at a time, while keeping the model's complexity low. The oKDE allows estimation of stationary as well as nonstationary distributions and is extendable to online construction of classifiers. I will show the main results of comparing the oKDE to competing batch KDEs and support vector machines on problems of online estimation of generative models as well as classifiers.
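The core idea, a Gaussian mixture grown from a stream and kept small by merging components, can be illustrated with a deliberately simplified 1-D sketch. The class `OnlineKDE1D`, its fixed bandwidth, and its nearest-pair merge rule are assumptions for illustration; oKDE's actual bandwidth selection and compression are considerably more sophisticated:

```python
import math

class OnlineKDE1D:
    """Toy 1-D sketch of online kernel density estimation: each data point
    adds a Gaussian component, and once a component budget is exceeded the
    two components with the closest means are merged by moment matching,
    keeping the mixture small while it is grown from a stream."""

    def __init__(self, bandwidth=0.5, max_components=10):
        self.h2 = bandwidth ** 2
        self.max_components = max_components
        self.components = []  # list of (weight, mean, variance)

    def add(self, x):
        self.components.append((1.0, x, self.h2))
        if len(self.components) > self.max_components:
            self._merge_closest()

    def _merge_closest(self):
        cs = self.components
        i, j = min(((a, b) for a in range(len(cs)) for b in range(a + 1, len(cs))),
                   key=lambda ab: abs(cs[ab[0]][1] - cs[ab[1]][1]))
        (wa, ma, va), (wb, mb, vb) = cs[i], cs[j]
        w = wa + wb
        m = (wa * ma + wb * mb) / w
        v = (wa * (va + (ma - m) ** 2) + wb * (vb + (mb - m) ** 2)) / w
        del cs[j]          # remove j first (j > i, so i stays valid)
        cs[i] = (w, m, v)  # replace i with the moment-matched merge

    def pdf(self, x):
        total = sum(w for w, _, _ in self.components)
        return sum(w / total * math.exp(-(x - m) ** 2 / (2 * v)) /
                   math.sqrt(2 * math.pi * v)
                   for w, m, v in self.components)
```

Even this toy version shows the trade-off the abstract describes: the estimate adapts one data-point at a time, yet its evaluation cost is bounded by the component budget rather than by the number of points seen.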
-------------------------------- Date and time: Friday 31st May 2013 at 16:00 Location: UG06, Learning Centre Title: Solving the distal reward problem with rare neural correlations: from theory to robots Speaker: Dr. Andrea Soltoggio Institution: Research Institute for Cognition and Robotics (CoR-Lab), Bielefeld, Germany Host: Dr. John Bullinaria Abstract: When learning by trial and error, the results of actions, manifested as rewards or punishments, often occur seconds after the actions that caused them. How can a reward be associated with an earlier action when the neural activity that caused that action is no longer present in the network? This problem is referred to as the distal reward problem. A recent study suggests that the rarity of neural correlations may play a pivotal role in neural computation and in the regulation of plasticity. This talk will focus on general plasticity mechanisms such as Hebbian plasticity, neuromodulation and eligibility traces, and will then suggest how those mechanisms can be combined to produce useful neural dynamics and solve the distal reward problem. The efficacy of the model is shown in classical and operant conditioning tests in human-robot interactions. The proposed model also points to open questions over the nature of eligibility traces and the function of short- and long-term plasticity. I would also like to take this opportunity, in particular given the strong expertise in natural computation, machine learning and robotics of scientists at the School of Computer Science, to discuss implications of the proposed method and open questions in the field of neural plasticity and learning. -------------------------------- Date and time: Wednesday 12th June 2013 at 14:00 Location: UG09, Learning Centre Title: Understanding Brain Networks using Functional Imaging Speaker: Peter Zeidman Institution: Wellcome Trust Centre for Neuroimaging, University College London Host: Prof.
Jeremy Wyatt Abstract: Understanding how individual brain structures contribute to our everyday lives is a key objective of cognitive neuroscience. Using functional Magnetic Resonance Imaging (fMRI), researchers generally perform a statistical test at each point in the brain to determine whether its activity was affected by an experimental manipulation. Although informative, this kind of analysis ignores a key consideration: parts of the brain do not act in isolation, but operate as large functional networks. In this presentation I will show how a ‘systems’ approach can contribute to our understanding of the human brain. I will draw on examples from the study of aphasia, where we have shown how the brain’s reading pathways can be disrupted by stroke, and from studies on memory and imagination, where measuring the connectivity of the hippocampus has given us fresh insights into its role. I will discuss the significant computational challenges that connectivity analyses pose, and present work capitalising on recent developments in the modelling of brain networks (Stochastic Dynamic Causal Modelling), which enables connectivity to be inferred during internally-driven cognitive processes, such as episodic memory. -------------------------------- Date and time: Thursday 13th June 2013 at 13:00 Location: UG07, Learning Centre Title: Using spatio-temporal knowledge for activity recognition and monitoring in Human-Robot Interaction Speaker: Michael Karg (http://hcai.in.tum.de/members/kargm) Institution: Technische Universität München, Department of Computer Science Host: Dr. Nick Hawes Abstract: Personal robots that are useful helpers for humans need to know about the common tasks and activities of their human partners to be able to react adequately to human behavior. Thus, knowledge about human task performance becomes an indispensable part of any robotic system intended to work together with humans in human-centered environments like a household. 
In this talk I will show an approach that enables robots to acquire general, spatio-temporal plan representations (STPRs) from human motion tracking data in different environments, given a semantically annotated map of the environment. These models can successfully be used to create hierarchical Hidden Markov Models (HHMMs) and perform activity recognition and monitoring with inexpensive depth cameras in spatially limited environments. This enables a robot to maintain a probabilistic belief state about human task performance, allowing it to react to human actions and even anticipate possible next actions of its human partner. The integration of this belief state into human-aware planning can enable a robot to better adapt its behavior to its human partner, thus becoming a more effective helper for the human. -------------------------------- Date and time: Monday 12th August 2013 at 16:00 Location: UG05, Learning Centre Title: Pervasive Technologies for Depression Early Stage Prediction and Intervention (PerDESPI) Speaker: Professor Bin Hu (http://uais.lzu.edu.cn/a/team/tutors/2011/0818/713.html) Institution: School of Information Science and Engineering, Lanzhou University, China Host: Jizheng Wan Abstract: Mental disease is one of the greatest personal, societal and economic problems of the modern world. With mental health care already representing over a third of the cost of health care in the EU, health services struggle to keep up. The most common of all mental disorders is Depression, a disease that causes immense individual and family suffering. The goal of the research is to contribute to the prevention of Depression. PerDESPI aims to develop pervasive technologies to monitor subjects’ physiological and cognitive state in their natural environment. These include wearable sensors to measure EEG, ECG, physical activity and sleep, as well as tools for analysing voice (depression is associated with changes in voice tone) and activities. 
It also develops innovative intervention solutions, e.g. computerised Cognitive Behavioural Therapy, which may intervene in Depression effectively. Short bio: Dr. Bin Hu, Professor, Dean of the School of Information Science and Engineering, Lanzhou University, China; IET Fellow; Director of the Technical Committee of Cooperative Computing, China Computer Federation; Member at Large of ACM China; Director of the International Society for Social Neuroscience (China Committee); Guest Professor, Department of Computer Science, ETH Zurich, Switzerland. His research interests include Cognitive Computing, Context-Aware Computing and Pervasive Computing, and he has published about 100 papers in peer-reviewed journals, conferences, and book chapters, e.g. PLoS One, PLoS Computational Biology, IEEE Intelligent Systems, IEEE Transactions on Information Technology in Biomedicine, IEEE Transactions on Systems, Man and Cybernetics, World Wide Web Journal, AAAI, UbiComp, ICONIP. His work has been funded by international funding bodies, e.g. EU FP7, HEFCE UK, NSFC China, the “973” Program in China, and industry. He has served on the committees of more than 60 international conferences, given about 40 keynotes/talks, and acted as editor/guest editor for about 10 peer-reviewed journals in Computer Science. -------------------------------- Date and time: Monday 7th October 2013 at 16:00 Location: UG05, Learning Centre Title: TBA Speaker: Dr. Kieran Alden (http://www.kieranalden.info) Institution: School of Biosciences, University of Birmingham -------------------------------- Date and time: Monday 18th November 2013 at 16:00 Location: UG05, Learning Centre Title: TBA Speaker: Prof. Andy Tyrrell (http://www.elec.york.ac.uk/intsys/users/amt/index.shtml) Institution: Department of Electronics, University of York (http://www.elec.york.ac.uk) Host: Prof. Xin Yao