DEPARTMENTAL (OLD) SEMINARS
NOTE: Seminars in this series prior to Spring 2004 are listed on a separate archive page.
Visit http://events.cs.bham.ac.uk/seminar-archive/compsem for more information.
--------------------------------
Date and time: Thursday 15th January 2004 at 16:00
Location: UG40, School of Computer Science
Title: Humanoids with Intelligence and Imagination
Speaker: Murray Shanahan
Institution: Imperial College, London
Abstract:
This talk will discuss some ongoing work with an upper-torso humanoid
robot at Imperial College. The first project involves using abductive
reasoning to facilitate high-level active vision. The robot nudges an
object in its workspace to obtain a new view of it, and uses the
information gained to improve its set of hypotheses about what the
object might be. The second project involves the use of analogical
representations to predict the trajectories of moving objects in the
robot's workspace, thus endowing it with a rudimentary visual
imagination.
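
The hypothesis-refinement loop sketched in the abstract can be pictured
in a few lines of Python. This is a minimal illustration under assumed
interfaces (nudge, observe and is_consistent are hypothetical
stand-ins), not the Imperial College implementation:

def active_vision_loop(hypotheses, nudge, observe, is_consistent,
                       max_nudges=5):
    """Keep only the hypotheses consistent with every view gathered so
    far, nudging the object whenever more evidence is needed."""
    observations = [observe()]
    for _ in range(max_nudges):
        if len(hypotheses) <= 1:
            break                        # identity settled (or exhausted)
        nudge()                          # act to obtain a new viewpoint
        observations.append(observe())
        hypotheses = [h for h in hypotheses
                      if all(is_consistent(h, o) for o in observations)]
    return hypotheses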
--------------------------------
Date and time: Thursday 22nd January 2004 at 16:00
Location: UG40, School of Computer Science
Title: TBA
Speaker: David Supple
Institution: Corporate Web Team, The University of Birmingham
--------------------------------
Date and time: Thursday 5th February 2004 at 16:00
Location: UG40, School of Computer Science
Title: Language Engineering and Assistive Computing: the case of
Patients with Limited English
Speaker: Professor Harold Somers
Institution: UMIST
Abstract:
Immigrants, asylum seekers and other speakers of non-indigenous
minority languages often have a level of English which is sufficient
for their day-to-day needs, but is inadequate for more formal
situations like a visit to the doctor. This talk will present a design
for a prototype of a computer-based system to support this need.
The CAMELS (Computer Aids for Minority Language Speakers)
system employs a variety of Language Engineering tools and
methods in an integrated environment aimed, in this proof-of-
concept prototype, at helping Somali or S speakers with
respiratory problems. The system operates in various situations: as a
self-help tool for an initial enquiry, as a means of conducting a
computer-mediated interview to establish the patient's history, and as
a desktop translation/interpretation assistant during the
doctor-patient interview. At the heart of the system is multi-engine
MT (example-based, rule-based and simple lexical look-up), but
there are important issues of user-friendliness for more or less
experienced computer users with languages using a non-Roman
alphabet, not to mention the cultural, ethical, sociological and
linguistic issues. This talk will explore these and other aspects of
the project.
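
As a rough illustration of the multi-engine idea, the sketch below
dispatches a sentence to several engines and keeps the most confident
answer; the engine interface and the toy lexical-lookup engine are
assumptions for illustration, not the actual CAMELS components:

def translate(sentence, engines):
    """Ask each engine in turn and keep the most confident answer.
    Each engine returns (translation, confidence) or None if it
    cannot handle the input."""
    candidates = [r for r in (engine(sentence) for engine in engines) if r]
    if not candidates:
        return None        # fall back to a human interpreter
    return max(candidates, key=lambda pair: pair[1])[0]

def lexical_lookup_engine(lexicon):
    """Simplest engine: word-for-word dictionary substitution."""
    def engine(sentence):
        words = sentence.lower().split()
        if not all(w in lexicon for w in words):
            return None
        return " ".join(lexicon[w] for w in words), 0.3  # low confidence
    return engine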
--------------------------------
Date and time: Thursday 12th February 2004 at 16:00
Location: UG40, School of Computer Science
Title: Mobile Resource Guarantees
Speaker: Don Sannella
(http://www.dcs.ed.ac.uk/home/dts/)
Institution: University of Edinburgh
(http://www.dcs.ed.ac.uk)
Abstract:
The Mobile Resource Guarantees (MRG) project is building
infrastructure for endowing mobile bytecode programs with
independently verifiable certificates describing their resource
consumption. These certificates will be condensed and formalised
mathematical proofs of a resource-related property which are by their
very nature self-evident and unforgeable. Arbitrarily complex methods
may be used to construct such a certificate, but once constructed its
verification will always be a simple computation. This makes it
feasible for the recipient to check that the proof is valid, and so
the claimed property holds, before running the code.
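
The asymmetry between constructing and checking a certificate is the
crux; a schematic consumer-side check (with hypothetical verify and
execute interfaces, not the MRG infrastructure itself) might look like:

def run_if_certified(bytecode, certificate, bound, verify, execute):
    """Consumer side of proof-carrying code: check the (cheap) proof of
    the resource bound before running anything. `verify` and `execute`
    are hypothetical host interfaces."""
    if not verify(bytecode, certificate, bound):
        raise ValueError("certificate does not establish the claimed bound")
    return execute(bytecode)   # safe: the bound provably holds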
This work falls within an area known as "proof carrying code". Our
focus in MRG on quantitative resource guarantees is different from the
traditional PCC focus, which is security. Another novelty is the
method used to generate proofs: a "linear" type system that classifies
programs according to their resource usage as well as according to the
kinds of values they consume and produce.
The intention is to generate proofs of resource usage from typing
derivations.
The MRG project (IST-2001-33149), which is a collaboration between the
University of Edinburgh and LMU Munich, is funded by the EC under the
FET proactive initiative on Global Computing.
--------------------------------
Date and time: Thursday 19th February 2004 at 16:00
Location: UG40, School of Computer Science
Title: Thesauruses for Natural Language Processing
Speaker: Adam Kilgarriff
Institution: University of Brighton
Abstract:
A thesaurus is a resource that groups words according to similarity. We
argue that manual thesauruses, like Roget and WordNet, and automatic,
distributional thesauruses produced from corpora are alternative
resources for the same language processing tasks. We discuss the tasks
they are relevant for and the roles of words and word senses. The WASPS
thesaurus is presented. Ways of evaluating thesauruses are proposed.
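
For readers unfamiliar with distributional thesauruses, the sketch
below builds one from raw text by comparing words through the contexts
they share; the window size and cosine measure are common illustrative
choices, not necessarily those of WASPS:

from collections import Counter
from math import sqrt

def context_vectors(tokens, window=2):
    """Map each word to a bag of the words seen within `window` of it."""
    vectors = {}
    for i, w in enumerate(tokens):
        ctx = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        vectors.setdefault(w, Counter()).update(ctx)
    return vectors

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm = (sqrt(sum(c * c for c in u.values()))
            * sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

def thesaurus_entry(word, vectors, k=5):
    """The k distributionally most similar words to `word`."""
    scored = ((w, cosine(vectors[word], v))
              for w, v in vectors.items() if w != word)
    return sorted(scored, key=lambda p: p[1], reverse=True)[:k]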
--------------------------------
Date and time: Thursday 26th February 2004 at 16:00
Location: UG40, School of Computer Science
Title: Grid Middleware: Dressing the Emperor!
Speaker: Gordon Blair
Institution: Lancaster University
Host: Behzad Bordbar
Abstract:
There has recently been a major investment in the UK and elsewhere in
the computational Grid and e-Science generally. A part of this
investment has been directed towards appropriate middleware for the Grid
with current thinking favouring an application of the web services
approach in this area. This seminar will discuss such developments and
also associated developments in the greater middleware community
(distributed objects, components, etc). It will be argued that the basic
web services approach is insufficient to meet the needs of all classes
of e-Science applications. The talk will conclude with a short presentation
of ongoing research addressing more open and flexible middleware
architectures for Grid computing.
--------------------------------
Date and time: Thursday 4th March 2004 at 16:00
Location: UG40, School of Computer Science
Title: Time and Action lock detection via Rational Presburger
sentences
Speaker: Behzad Bordbar
Institution: University of Birmingham, School of Computer Science
Host: Achim Jung
Abstract:
A Time Action Lock (TAL) is a state of a network of timed automata at
which neither time can progress nor an action can occur. TALs are often
seen as inconsistencies in the specification of the model. The seminar
presents a geometric method for detecting TALs in behavioural models of
Real-Time Distributed Systems, expressed as networks of Timed Automata.
Based on our theory, we can identify the part of the specification that
can result in a TAL. Pointing the designer to the source of the TAL
helps in producing a TAL-free system, avoiding a class of design faults. We
have developed a CASE tool called TALC (Time Action Lock Checker), which
can be used in conjunction with the model checker UPPAAL. TALC conducts
static analysis of the UPPAAL model and provides feedback to the
designer. I shall present a short demo of the current version of TALC.
--------------------------------
Date and time: Thursday 11th March 2004 at 16:00
Location: UG40, School of Computer Science
Title: Strategic Enterprise Integration with Model-Driven
Architecture
Speaker: Andrew Watson
Institution: OMG
Host: Behzad Bordbar
Abstract:
IT systems are indispensable in running a modern business, but with
their success and ubiquity have come many problems. Building strategic
enterprise systems is a risky undertaking, with 15% of all IT projects
failing to deliver anything at all, and another 50% falling short of
their original specifications in some way. The huge rate of Information
Technology churn means that every new application seems to be built on a
different language and operating system, with Java replacing C++, Linux
challenging traditional Unix, and a dozen different operating systems
all called Windows. Re-engineering last year's application to use this
year's implementation technology is too risky and expensive, so users
are forced to make all their various strategic applications work
together. OMG has been working to solve the Enterprise Integration
problem for over ten years, using both middleware and model-driven
development techniques. This talk will outline these initiatives,
describe how they complement each other in solving Enterprise
Integration problems, and present some recent case studies and
independently-gathered statistics that show how successful they can be.
--------------------------------
Date and time: Thursday 18th March 2004 at 16:00
Location: UG40, School of Computer Science
Title: Multiple Cause Markov Analysis with Applications to User
Activity Profiling
Speaker: Ata Kaban
(http://www.cs.bham.ac.uk/~axk/)
Institution: University of Birmingham
(http://www.cs.bham.ac.uk)
Host: Peter Tino
Abstract:
This talk will present a distributed Markov model for the analysis of
symbolic sequences. Each observation sequence is assumed to be generated
by several 'basis' Markov chains that may interleave randomly in a
sequence-specific proportion of participation. Both the set of
basis-transitions and the sequence-specific mixing proportions are then
inferred by a linear-time algorithm.
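
Read generatively, the model can be sketched as follows; the parameter
shapes and toy chains are illustrative assumptions, not the inference
algorithm of the talk:

import random

def sample_sequence(bases, mixing, start, length):
    """bases: list of transition tables {state: {next_state: prob}};
    mixing: this sequence's proportions over the basis chains."""
    seq, state = [start], start
    for _ in range(length - 1):
        k = random.choices(range(len(bases)), weights=mixing)[0]
        step = bases[k][state]            # row of the chosen basis chain
        state = random.choices(list(step), weights=list(step.values()))[0]
        seq.append(state)
    return seq

# Two toy basis chains over symbols "a"/"b", mixed 70/30 for one user.
stay = {"a": {"a": 0.9, "b": 0.1}, "b": {"b": 0.9, "a": 0.1}}
flip = {"a": {"b": 1.0}, "b": {"a": 1.0}}
print("".join(sample_sequence([stay, flip], [0.7, 0.3], "a", 20)))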
An important application of this model is profiling of individuals'
sequential activity within a group. The possibly heterogeneous behaviour
of individuals is represented in terms of a relatively small number of
low complexity common behavioral patterns which may interleave randomly
according to individual-specific mixing proportions in order to
reconstruct more complex individual activity behaviors.
The results of an extensive empirical study on three different
application domains indicate that this modelling approach is potentially
an efficient compression scheme for temporal sequences that provides
useful human-interpretable representations as well as improved
prediction performance over existing comparable models.
--------------------------------
Date and time: Friday 19th March 2004 at 14:00
Location: UG40, School of Computer Science
Title: Call-by-value is dual to call-by-name
Speaker: Phil Wadler
(http://homepages.inf.ed.ac.uk/wadler/)
Institution: University of Edinburgh
(http://www.inf.ed.ac.uk/)
Host: Paul Levy
Abstract:
The rules of classical logic may be formulated in pairs corresponding to
De Morgan duals: rules about "and" are dual to rules about "or". A line
of work, including that of Filinski (1989), Griffin (1990), Parigot
(1992), Danos, Joinet, and Schellinx (1995), Selinger (1998,2001), and
Curien and Herbelin (2000), has led to the startling conclusion that
call-by-value is the de Morgan dual of call-by-name.
This paper presents a dual calculus that corresponds to the classical
sequent calculus of Gentzen (1935) in the same way that the lambda
calculus of Church (1932,1940) corresponds to the intuitionistic natural
deduction of Gentzen (1935). The paper includes crisp formulations of
call-by-value and call-by-name that are obviously dual; no similar
formulations appear in the literature. The paper gives a CPS translation
and its inverse, and shows that the translation is both sound and
complete, strengthening a result in Curien and Herbelin (2000).
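
As a small aside (not taken from the paper), a CPS translation makes
the evaluation strategy explicit even in an everyday language: under
call-by-value the argument is forced before the call, while under
call-by-name it is passed as a suspended thunk. The function names
below are illustrative:

def cbv_apply(f, arg_thunk, k):
    arg = arg_thunk()        # call-by-value: force the argument first
    return f(arg, k)

def cbn_apply(f, arg_thunk, k):
    return f(arg_thunk, k)   # call-by-name: pass the suspended computation

def diverges():
    raise RuntimeError("argument was forced")

const_one_cbv = lambda arg, k: k(1)     # ignores its (evaluated) argument
const_one_cbn = lambda thunk, k: k(1)   # ignores its (suspended) argument

print(cbn_apply(const_one_cbn, diverges, lambda x: x))  # -> 1
# cbv_apply(const_one_cbv, diverges, lambda x: x) would raise:
# call-by-value forces `diverges` before `const_one_cbv` can ignore it.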
--------------------------------
Date and time: Thursday 25th March 2004 at 16:00
Location: UG40, School of Computer Science
Title: Chemoinformatics: an introduction and some applications
of genetic algorithms
Speaker: Peter Willett
Institution: University of Sheffield
Host: Xin Yao
Abstract:
Chemoinformatics is the name given to a body of techniques that are
used for the storage, retrieval and processing of information about the
structures (either two-dimensional or three-dimensional) of chemical
compounds. This presentation will commence with an introduction to
chemoinformatics, explaining what it is and how its techniques are used
to support the discovery of novel bioactive molecules for the
pharmaceutical and agrochemical industries. Many of the problems that
need to be addressed are inherently combinatorial in nature, and thus
amenable to investigation using non-deterministic approaches such as
genetic algorithms. The second part of the presentation will discuss
briefly several applications of genetic algorithms in chemoinformatics,
including flexible ligand docking, field-based similarity searching and
the design of structurally diverse combinatorial libraries.
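
As a flavour of how a genetic algorithm applies to such combinatorial
problems, the toy sketch below evolves a maximally diverse subset of
"molecules"; the set-based fingerprints and Tanimoto distance are
illustrative stand-ins for real chemical descriptors:

import random

def tanimoto_distance(a, b):
    union = len(a | b)
    return 1.0 - (len(a & b) / union if union else 0.0)

def fitness(subset, fps):
    """Average pairwise distance of the chosen fingerprints (diversity)."""
    pairs = [(i, j) for i in subset for j in subset if i < j]
    return sum(tanimoto_distance(fps[i], fps[j]) for i, j in pairs) / len(pairs)

def make_child(a, b, size, n, mutation_rate=0.1):
    child = set(random.sample(sorted(set(a) | set(b)), size))  # recombine
    for g in list(child):                                      # mutate
        if random.random() < mutation_rate:
            child.discard(g)
            child.add(random.randrange(n))
    while len(child) < size:                                   # repair
        child.add(random.randrange(n))
    return sorted(child)

def ga_diverse_subset(fps, size=4, pop_size=30, generations=100):
    n = len(fps)
    pop = [sorted(random.sample(range(n), size)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda s: fitness(s, fps), reverse=True)
        parents = pop[:pop_size // 2]                # truncation selection
        pop = parents + [make_child(*random.sample(parents, 2), size, n)
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=lambda s: fitness(s, fps))

# Toy "fingerprints": each molecule is a set of feature bits.
fps = [frozenset(random.sample(range(32), 8)) for _ in range(20)]
print(ga_diverse_subset(fps))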
Clark, D.E. (ed.) (2000). Evolutionary Algorithms in Molecular Design.
Weinheim: Wiley-VCH.
Gasteiger, J. & Engel, T. (eds.) (2003). Chemoinformatics: A Textbook.
Weinheim: Wiley-VCH.
Leach, A.R. & Gillet, V.J. (2003). An Introduction to Chemoinformatics.
Dordrecht: Kluwer.
--------------------------------
Date and time: Thursday 1st April 2004 at 16:00
Location: UG40, School of Computer Science
Title: Gödel Machines and other Wonders of the New, Rigorous,
Universal AI
Speaker: Juergen Schmidhuber
(http://www.idsia.ch/~juergen/goedelmachine.html)
Institution: Istituto Dalle Molle di Studi sull'Intelligenza
Artificiale
(http://www.idsia.ch)
Host: Aaron Sloman & Jeremy Wyatt
Abstract:
An old dream of computer scientists is to build an optimally efficient
universal problem solver. We show how to solve arbitrary computational
problems in an optimal fashion inspired by Kurt Gödel's celebrated
self-referential formulas (1931). Our Gödel machine's initial software
includes an axiomatic description of: the problem, the hardware, known
aspects of its environment, costs of actions and computations, and the
initial software itself (this is possible without introducing
circularity). It also includes an asymptotically optimal proof searcher
searching the space of computable proof techniques--that is, programs
whose outputs are proofs. The Gödel machine will rewrite any part of its
software, including the proof searcher, as soon as it has found a proof
that this will improve its future performance. We show that
self-rewrites are globally optimal--no local minima!--since provably
none of all the alternative rewrites and proofs (those that could be
found by continuing the proof search) are worth waiting for.
Further details available here:
http://www.idsia.ch/~juergen/goedelmachine.html
http://www.idsia.ch/~juergen/gmsummary.html
--------------------------------
Date and time: Thursday 29th April 2004 at 16:00
Location: UG40, School of Computer Science
Title: Computational natural selection and the 'ALife Test'
Speaker: Alastair Channon
(http://www.channon.net/alastair/)
Institution: School of Computer Science
(http://www.cs.bham.ac.uk)
Host: Achim Jung
Abstract:
Computational natural selection, in which the phenotype to fitness
mapping is an emergent property of the evolving environment and
competition is biotic rather than abiotic, is a paradigm that aims
towards the creation of open-ended evolutionary systems. Within such
an environment, increasingly complex behaviours can emerge. Bedau,
Snyder & Packard's statistical classification system for long-term
evolutionary dynamics provides a test for open-ended evolution. Making
this test more rigorous, and passing it, are two of the most important
open problems for research into the unbounded evolution of novel
behaviours.
In this talk I will give an introduction to computational natural
selection and describe the application of the 'Artificial Life (ALife)
Test' to Geb, a system designed to verify and extend theories behind
the generation of evolutionary emergent systems. The result is that,
according to these statistics, Geb exhibits unbounded evolutionary
dynamics, making it the first autonomous artificial system to pass the
test. I will also briefly describe how computational natural selection
systems might be used for future applications.
--------------------------------
Date and time: Thursday 6th May 2004 at 16:00
Location: UG40, School of Computer Science
Title: Graph Colouring and other optimization tasks on random
graphs
Speaker: Jort van Mourik
(http://www.ncrg.aston.ac.uk/~vanmourj/)
Institution: Aston University, Neural Computing Research Group
(http://www.ncrg.aston.ac.uk)
Host: Peter Tino
Abstract:
Recent developments in statistical physics based algorithms for solving
hard optimization problems on large sparse graphs will be discussed.
We will concentrate on the graph colouring problem in particular, and
show how insights from statistical physics have led to novel algorithms
that are competitive with existing methods.
--------------------------------
Date and time: Thursday 13th May 2004 at 16:00
Location: UG40, School of Computer Science
Title: TBA
Speaker: John Barnden
(http://www.cs.bham.ac.uk/~jab)
Institution: The University of Birmingham, School of Computer Science
(http://www.cs.bham.ac.uk)
Abstract:
TBA
--------------------------------
Date and time: Thursday 20th May 2004 at 16:00
Location: UG40, School of Computer Science
Title: Biologically inspired mechanisms for robot learning
Speaker: Yiannis Demiris
(http://www.iis.ee.ic.ac.uk/yiannis)
Institution: Department of Electrical and Electronic Engineering,
Imperial College London
(http://www.iis.ee.ic.ac.uk)
Host: Jeremy Wyatt
Abstract:
Within societies, an individual learns not only on its own, but to a
large extent from other individuals, by observation and imitation. At
the heart of an agent's ability to imitate there is a mechanism that
matches perceived external behaviours with equivalent internal
behaviours of its own, recruiting information from the perceptual, motor
and memory systems. The talk will present my research in developing
computational models of this mechanism and applying them to robotic
systems and real-physics based simulations, with a dual purpose:
(a) developing robots that can imitate and learn from humans
(b) developing plausible explanations and testable predictions
regarding the experimental data available on the behaviour and
performance of imitation mechanisms in primates. It will argue that
classical sense-think-act decompositions of the imitation process do not
correlate well with biological data, and will put forward an approach
where the motor systems of an observer are actively involved in the
perception of the demonstration using prediction as the main driving
force.
--------------------------------
Date and time: Thursday 27th May 2004 at 16:00
Location: UG40, School of Computer Science
Title: Generating appropriate text for poor readers
Speaker: Ehud Reiter
(http://www.csd.abdn.ac.uk/~ereiter/research.html)
Institution: Computing Science, University of Aberdeen
(http://www.csd.abdn.ac.uk)
Host: Alan Wallington
Abstract:
Many people in the UK and elsewhere have limited reading skills; for
example, 20% of UK adults have a reading age of 10 or less. People with
limited reading ability need texts with simple words and short
sentences; they also need texts that hold their interest (otherwise they
will give up on reading them).
I will explore these issues in the context of SkillSum, a new project at
Aberdeen whose goal is to generate feedback reports for people taking
assessments of their basic skills (literacy and numeracy).
--------------------------------
Date and time: Thursday 3rd June 2004 at 16:00
Location: UG40, School of Computer Science
Title: Integrating Model Checking and Theorem Proving for
Industrial Hardware Verification in a Reflective
Functional Language
Speaker: Tom Melham
(http://web.comlab.ox.ac.uk/oucl/people/tom.melham.html)
Institution: Computing Laboratory, Oxford University
(http://www.ox.ac.uk/)
Abstract:
Forte is a formal verification system developed by Intel's Strategic CAD
Labs for applications in hardware design and verification. Forte
integrates model checking and theorem proving within a functional
programming language, which both serves as an extensible specification
language and allows the system to be scripted and customized.
The latest version of this language, called reFLect, has quotation and
antiquotation constructs that build and decompose expressions in the
language itself. This provides a combination of pattern-matching and
reflection features tailored especially for the Forte approach to
verification. This talk will describe the design philosophy and
architecture of the Forte system and give an account of the role of
reFLect in the system.
This is joint work with John O'Leary and Jim Grundy of Intel
Corporation.
--------------------------------
Date and time: Monday 7th June 2004 at 16:00
Location: UG40, School of Computer Science
Title: From wireless to sensor networks and beyond
Speaker: P. R. Kumar
(http://black.csl.uiuc.edu/~prkumar)
Institution: University of Illinois, Urbana-Champaign
(http://www.uiuc.edu/index.html)
Host: Marta Kwiatkowska
Abstract:
We begin by addressing the question: How much information can wireless
networks transport, and what is an appropriate architecture for
information transfer? We provide an information theory which is
designed to shed light on these issues.
Next we consider three protocols for ad hoc networks: the COMPOW
protocol for power control, the SEEDEX protocol for media access
control, and the STARA protocol for routing and load balancing. Then we
turn to sensor networks and address the issue of how to organize their
harvesting. Finally, we turn to what could be the next phase of the
information technology revolution: the convergence of control with
communication and computing. We highlight the importance of
architecture, and describe our efforts in developing an application
testbed and an appropriate middleware.
--------------------------------
Date and time: Thursday 10th June 2004 at 16:00
Location: UG40, School of Computer Science
Title: Domain Theory for Concurrency
Speaker: Glynn Winskel
(http://www.cl.cam.ac.uk/~gw104/)
Institution: Computer Laboratory, University of Cambridge
(http://www.cl.cam.ac.uk)
Host: Achim Jung
--------------------------------
Date and time: Thursday 17th June 2004 at 16:00
Location: UG40, School of Computer Science
Title: Satan's Computer, Revisited
Speaker: Ross Anderson
(http://www.cl.cam.ac.uk/users/rja14)
Institution: University of Cambridge Computer Laboratory
(http://www.cl.cam.ac.uk)
Host: Prof. Achim Jung
Abstract:
Designers of distributed systems have spent twenty-five years struggling
with security protocols. These protocols, which are used to authenticate
users and authorise transactions, may involve the exchange of 3-5
messages, and one would think that programs of this complexity would be
easy to get right. But bugs keep on being discovered in protocols, even
years after they were first published. Almost ten years ago, Roger
Needham and I described protocol design as Programming Satan's Computer
[http://www.cl.cam.ac.uk/ftp/users/rja14/satan.pdf] : the problem is the
presence of a hostile opponent, who can alter messages at will. In
effect we're trying to program a computer which gives answers that are
subtly and maliciously wrong at the most inconvenient possible moment.
Four years ago, I started applying protocol ideas to study the security
of application programming interfaces (APIs). In applications from
banking through utility metering to defence, some critical operations
are delegated to tamper-resistant cryptographic processors. These
devices are driven by a stream of transactions from a host computer, and
the set of possible transactions constitutes their API. I found
combinations of transactions that broke security; Mike Bond found many
more, and further API attacks have been discovered by Jolyon Clulow and
Eli Biham. Mike, Jolyon and I have discovered that most security
processors on the market can be defeated by sending them combinations of
transactions which their designers had not anticipated. The early
attacks are described in our paper API Level Attacks on Embedded Systems
[http://www.cl.cam.ac.uk/users/mkb23/research/API-Attacks.pdf] .
Where now? I will argue
[http://www.cl.cam.ac.uk/ftp/users/rja14/bond-anderson.pdf] that API
security problems are not just important to designers of
cryptoprocessors. First, Microsoft's new Longhorn operating system will
invite all application programmers to decompose their code into a
trusted part (the NCA) and a larger untrusted part; this will bring API
trust issues into the mainstream. Second, API security connects protocol
analysis with the composition problem - the problem that connecting two
systems that are secure in isolation can give a composite system that
leaks. This had previously been seen as a separate issue, tackled with
different tools. Finally, there is a link emerging between protocol
analysis and secure multiparty computation. How can a computation be
shared between a number of parties, if some concerted minority of them
may collude to disrupt the computation in a way that leaks private data?
--------------------------------
Date and time: Thursday 1st July 2004 at 16:00
Location: UG40, School of Computer Science
Title: Realtime Traffic Monitoring and Containment of DOS
attacks
Speaker: A L Narasimha Reddy
(http://ee.tamu.edu/%7Ereddy/)
Institution: Electrical Engineering Dept, Texas A&M University
(http://ee.tamu.edu/htmlIntro.htm)
Host: Prof Uday Reddy
Abstract:
Recent attacks on network infrastructure in the form of denial of
service attacks and worms have raised the need for realtime traffic
monitoring. In this talk, we will present simple and effective
approaches to realtime traffic monitoring.
In the first part of the talk, we will discuss network elements based
on partial state to detect and contain DOS attacks. The limited amount
of state can be efficiently managed to capture the significant or
dominant flows in the traffic. We will show that the partial state
network elements allow effective resource management to contain Denial
of Service attacks on network infrastructure, and allow fairer
distribution of network resources. We will report on our experience
and results in building a partial-state router based on a Linux-PC.
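
The flavour of "partial state capturing the dominant flows" can be
conveyed with a classical small-memory heavy-hitter summary; the
Misra-Gries sketch below illustrates the general idea, and is not
necessarily the scheme implemented in the router:

def misra_gries(packet_flow_ids, k):
    """Summarise a packet stream with at most k-1 counters. Any flow
    sending more than len(stream)/k packets is guaranteed to survive."""
    counters = {}
    for flow in packet_flow_ids:
        if flow in counters:
            counters[flow] += 1
        elif len(counters) < k - 1:
            counters[flow] = 1
        else:
            for f in list(counters):      # charge one packet to everyone
                counters[f] -= 1
                if counters[f] == 0:
                    del counters[f]
    return counters

# A stream dominated by flow "a": the summary keeps "a" despite the noise.
stream = ["a"] * 50 + ["b", "c", "d", "e"] * 5 + ["a"] * 30
print(misra_gries(stream, k=4))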
In the second part of the talk, we will discuss a signal/image
processing approach to aggregate analysis of network traffic
data. Aggregate analysis is necessary for detecting DDOS attacks and
may be the only option for realtime traffic monitoring at high link
speeds. Our approach generates signals from aggregate packet header
data and applies statistical analyses to detect abnormalities. We will
report on our experience in building a traffic analysis tool based on
such an approach.
--------------------------------
Date and time: Monday 6th September 2004 at 14:00
Location: UG40, School of Computer Science
Title: Interleaved Visual Object Categorization and Segmentation
in Real-World Scenes
Speaker: Bernt Schiele
(http://www.mis.informatik.tu-darmstadt.de/schiele)
Institution: Darmstadt University of Technology
(http://www.mis.informatik.tu-darmstadt.de)
Host: Aaron Sloman and Jeremy Wyatt
Abstract:
We present a method for object categorization in real-world scenes.
Following a common consensus in the field, we do not assume that a
figure-ground segmentation is available prior to recognition. However,
in contrast to most standard approaches for object class recognition,
our approach effectively segments the object as a result of the
categorization. This combination of recognition and segmentation into
one process is made possible by our use of an Implicit Shape Model,
which integrates both into a common probabilistic framework. In
addition to the recognition and segmentation result, it also generates
a per-pixel confidence measure specifying the area that supports a
hypothesis and how much it can be trusted. We use this confidence to
derive a natural extension of the approach to handle multiple objects
in a scene and resolve ambiguities between overlapping hypotheses with
an MDL-based criterion. In addition, we present an extensive evaluation
of our method on a standard dataset for car detection and compare its
performance to existing methods from the literature. Our results show a
significant improvement over previously published methods. Finally, we
present results for articulated objects, which show that the proposed
method can categorize and segment unfamiliar objects in different
articulations and with widely varying texture patterns. Moreover, it
can cope with significant partial occlusion and scale changes.
--------------------------------
Date and time: Thursday 23rd September 2004 at 16:00
Location: UG40, School of Computer Science
Title: Nonparametric Inference: Gaussian Processes
Speaker: Lehel Csato
(http://www.kyb.mpg.de/~csatol)
Institution: Max Planck Institute for Biological Cybernetics,
Tuebingen, Germany
(http://www.kyb.mpg.de/)
Host: Ata Kaban
Abstract:
Nonparametric methods in machine learning are very popular. Some of them
are being used with success in a wide variety of applications - from
pattern recognition to approximate model inversions. I will talk about
inference in a non-parametric setting, namely Gaussian processes.
Gaussian processes are random functions; they will be used as latent
variables which aim to model the data. Given a likelihood model for the
data, one can build a posterior process and assess the fit to the data.
The analytic computations involved are usually not tractable, so I will
present an approximation method which makes the posterior computable,
and consider examples of likelihoods designed to fit classification,
regression and inversion problems.
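
For the analytically tractable case (a Gaussian likelihood), the
posterior process has a closed form, which the following minimal numpy
sketch computes; the approximation methods of the talk are needed
precisely when the likelihood is not Gaussian:

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential covariance between two sets of inputs."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale ** 2)

def gp_posterior(X, y, X_star, noise=0.1):
    """Posterior mean and pointwise variance at test inputs X_star."""
    K = rbf_kernel(X, X) + noise ** 2 * np.eye(len(X))
    K_s = rbf_kernel(X, X_star)
    mean = K_s.T @ np.linalg.solve(K, y)
    cov = rbf_kernel(X_star, X_star) - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

# Fit noisy samples of sin(x) and query the posterior at a new point.
X = np.linspace(0, 5, 10)[:, None]
y = np.sin(X).ravel() + 0.1 * np.random.randn(10)
mean, var = gp_posterior(X, y, np.array([[2.5]]))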
--------------------------------
Date and time: Friday 1st October 2004 at 14:00
Location: UG40, School of Computer Science
Title: Planning with Uncertainty in Continuous Domains
Speaker: Richard Dearden
(http://is.arc.nasa.gov/AR/tasks/PrbDet.html)
Institution: NASA Ames Research Center, USA
(http://is.arc.nasa.gov/)
Host: Aaron Sloman
Abstract:
We examine the problem of planning with resources where the quantity
of resource consumed by each action is uncertain. We look at two
possible approaches to such problems, one based on classical planning,
and the other on Markov Decision Problems (MDPs). In the classical
approach we use a plangraph to select goals, based on a heuristic
estimate of the resources needed to reach the goals. The MDP approach
is motivated by the observation that even computing the value function
for a plan is difficult. We show that by exploiting structure in the
problem, the state space can be dynamically partitioned into regions
where the value function is constant, thus avoiding naive
discretization of the continuous dimensions. We show that we can
efficiently compute the optimal policy and its value function, while
maintaining a representation of value function structure. We apply the
techniques to problems motivated by Mars rover activity planning.
--------------------------------
Date and time: Thursday 7th October 2004 at 16:00
Location: UG40, School of Computer Science
Title: Using computers to make new kinds of books
Speaker: David Parker, Peter Robinson, Barbara Bordalejo
(http://theology.bham.ac.uk/parker/index.htm)
Institution: Department of Theology
(http://www.theology.bham.ac.uk/)
Host: Uday Reddy
Abstract:
Digital editing is the biggest revolution in book production since
Gutenberg, and arguably even more significant. In the next years, we can
expect texts of every major literary and historical work to be turned
into electronic form. Currently two important players on the world scene
are co-operating closely. The Centre for Technology and the Arts (CTA)
is located at de Montfort University. The Centre for the Editing of
Texts in Religion (CETR) is at Birmingham University. It is planned to
amalgamate the two at the UoB as an Institute for Textual Scholarship
and Electronic Editing. The paper will demonstrate the text-editing
software that has been developed at CTA, and describe the further
developments that are possible, with particular reference to themes that
are of interest in the CS community. The CTA has had successful
collaborations with evolutionary biologists devoted to the further
understanding of how phylogenetic software works with manuscript
traditions. Future research will involve the use of pattern recognition
to link texts and images and perhaps to help identify scribes.
--------------------------------
Date and time: Thursday 14th October 2004 at 16:00
Location: UG40, School of Computer Science
Title: Game Theory and Mechanism Design For Agent Based Systems
Speaker: Alex Rogers
(http://www.ecs.soton.ac.uk/people/acr)
Institution: University of Southampton
(http://www.ecs.soton.ac.uk)
Host: Vibhu Walia
Abstract:
There is currently much interest in designing open agent based systems,
whereby the resources of the system are contributed and owned by
different stakeholders, and yet these resources contribute to some
common goal. We are particularly interested in data fusion applications,
where distributed sensors are connected through some bandwidth-limited
communication network and each individual sensor is seeking to improve
its own individual 'view of the world' by fusing data from other
sources. In many such systems, we would like to ensure some overall
global performance despite the autonomous selfish actions of the
individuals within the system. In this talk, I will discuss how game
theory, and specifically mechanism design, allow us to address some of
these issues, and also how we are attempting to overcome some of the
additional limitations that this methodology imposes upon such systems.
--------------------------------
Date and time: Thursday 21st October 2004 at 16:00
Location: UG40, School of Computer Science
Title: Positive Usability
Speaker: Paul Cairns
(http://www.uclic.ucl.ac.uk/paul/)
Institution: University College London
(http://www.uclic.ucl.ac.uk/)
Host: Volker Sorge
Abstract:
Within HCI there is a move towards designing for user experience. I
describe a way of understanding this move in terms of usability as a
privative. This leads to understanding how to move away from the
privative idea to positive usability. However, what constitutes a
positive user experience? To this end, I describe three projects that I
have supervised trying to better understand immersion in interactive
systems. Immersion is a well-used term in describing software,
particularly games, but we have tried to find out what exactly
immersion means to users both in games and in an interactive,
educational exhibit. Experiences of immersion and properties of the
system are, naturally, not easy to link, but the notion of narrative
seems to be very important.
--------------------------------
Date and time: Thursday 4th November 2004 at 16:00
Location: UG40, School of Computer Science
Title: Biped Robots, Industrial Robots, Evolutionary Robots, and
Intelligent Artificial Legs
Speaker: Guan-zheng Tan
(http://www.csu.edu.cn)
Institution: Central South University, The People's Republic of China
(http://www.csu.edu.cn)
Host: Aaron Sloman
Abstract:
The talk has four parts.
Part 1: Research on biped robots
I will briefly introduce the NAIWR-1 biped robot, which was designed in
1991 by me, Prof. Zhong-Xin Wei, and Prof. Jian-Ying Zhu, covering its
mechanical structure, control system, kinematical and dynamical
modelling, and gait planning method.
Part 2: Research on optimal trajectory planning for industrial robots
I will introduce my research work on industrial robots, including an
optimal hand path tracking strategy and a time-optimal joint trajectory
planning method for robotic manipulators. Using this method, the
working efficiency of an industrial robot can be raised and its life
span can be extended.
Part 3: Research on competitive co-evolution strategies for intelligent
robots
I will briefly introduce the research work done by my MS student
Lianmin Liu and me on competitive co-evolution strategies for
intelligent robots, based on a genetic algorithm with complex-valued
encoding and an ant system algorithm.
Two mobile robots with neural-network control structures were put into
an unknown simulation environment. One of them played the hunter and
the other played the prey; the hunter always tried to catch the prey.
The genetic algorithm with complex-valued encoding was mainly used to
evolve the neural-network controllers of the robots. The experimental
results showed that it has better evolutionary ability than the general
genetic algorithm. The ant algorithm was mainly used to search for the
best weights of the robots' neural networks. The experimental results
showed that the ant algorithm has better evolutionary ability than the
genetic algorithm with complex-valued encoding.
Part 4: Research on intelligent artificial legs
I will introduce my research work on the CIP-I intelligent artificial
leg, which was fabricated this June; research on its control system is
under way. The project is supported by The National Natural Science
Foundation of China and The Foundation of the Robotic Laboratory of the
Chinese Academy of Sciences. I will briefly introduce the mechanical
structure and control system of the CIP-I leg, which consists of a knee
joint, a shank, and a foot. Among these, the knee joint is the most
important component; it contains an air cylinder with a D.C. motor, a
microprocessor, a walking speed sensor, and two batteries. The air
cylinder is the actuating mechanism that controls the bending and
stretching movements of the knee joint. The walking speed sensor
measures the walking speed of the leg in real time. The motor controls
the opening of a throttle valve in the cylinder; regulating this
opening changes the bending and stretching speeds of the knee joint,
and thereby the walking speed of the leg. The microprocessor controls
the motion of the motor according to the measured walking speed.
For more information about the research please see:
http://www.cs.bham.ac.uk/~gzt/
--------------------------------
Date and time: Thursday 11th November 2004 at 16:00
Location: UG40, School of Computer Science
Title: SPARK: Solving Large Non-uniform Systems of Differential
and Algebraic Equations
Speaker: Andrew Moshier
(http://www1.chapman.edu/cpsc/faculty/moshier/)
Institution: Chapman University, USA
(http://www1.chapman.edu/cpsc/)
Host: Achim Jung
Abstract:
SPARK is a system for solving systems of algebraic and differential
equations, originally motivated by problems in modelling mechanical
systems on which traditional solvers perform poorly. These mechanical
systems, such as large office buildings, are governed by algebraic and
differential equations that are highly non-uniform. That is, the
underlying equations are not interconnected by a simple geometry, are
typically non-linear, and have widely differing numerical behaviors.
SPARK uses several simple graph-theoretic techniques to decompose a
system of equations at compile time into independently solvable
sub-systems and to determine significantly smaller effective
dimensions for the separate sub-systems. The result is significant
performance gains when compared to other available solvers.
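
The graph-theoretic step can be illustrated in isolation: treat each
equation as a node, add an edge where one equation needs a variable
another determines, and the strongly connected components are the
independently solvable sub-systems, to be processed in topological
order. A sketch using Tarjan's algorithm on a made-up dependency graph
(this illustrates the decomposition idea only, not SPARK itself):

from itertools import count

def tarjan_scc(graph):
    """graph: {node: [successors]} -> SCCs in reverse topological order."""
    index, low, stack, on_stack = {}, {}, [], set()
    counter, sccs = count(), []

    def visit(v):
        index[v] = low[v] = next(counter)
        stack.append(v); on_stack.add(v)
        for w in graph.get(v, []):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:           # v is the root of an SCC
            comp = []
            while True:
                w = stack.pop(); on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in graph:
        if v not in index:
            visit(v)
    return sccs

# An edge eq_a -> eq_b means eq_a needs a variable that eq_b determines;
# each SCC below is a sub-system to solve as a block.
deps = {"e1": ["e2"], "e2": ["e1"], "e3": ["e1"], "e4": []}
print(tarjan_scc(deps))   # [['e2', 'e1'], ['e3'], ['e4']]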
In this talk, we will discuss in more detail the sorts of performance
problems that arise from modelling mechanical systems, and will describe
the SPARK methods for dealing with these performance problems. We will
provide experimental data comparing SPARK to other systems, including
standard sparse matrix packages and HVACSIM+, the most widely used
commercial system available for modelling of buildings.
--------------------------------
Date and time: Thursday 18th November 2004 at 16:00
Location: UG40, School of Computer Science
Title: Strength or Accuracy? Credit assignment in Classifier
Systems
Speaker: Tim Kovacs
(http://www.cs.bris.ac.uk/%7Ekovacs/index.html)
Institution: University of Bristol
(http://www.cs.bris.ac.uk/)
Host: Achim Jung, Manfred Kerber
Abstract:
This talk reviews some work on the problem of crediting individual
components of a complex adaptive system for their often subtle effects
on the world. For example, in a game of chess, how did each move (and
the reasoning behind it) contribute to the outcome? Application of
adaptive methods, whether to classification or control tasks, requires
effective approaches to this sort of credit assignment problem.
A fundamental approach is to evaluate components of solutions, rather
than complete solutions, with the intention of simplifying the credit
assignment problem. This is the approach taken by Michigan Learning
Classifier Systems, which combine Evolutionary Algorithms (to generate
solution components) with Reinforcement Learning methods (to evaluate
them). Unfortunately, as will be outlined, serious complications arise
from this attempt at simplification. Most significantly, it will be
shown that both of the main approaches (strength-based and
accuracy-based systems) have difficulties with certain tasks which the
other type does not. The talk will also outline the causes of the main
difficulties each type of system faces, the types of tasks which cause
these difficulties, and prospects for addressing them.
--------------------------------
Date and time: Thursday 25th November 2004 at 16:00
Location: UG40, School of Computer Science
Title: Interactive Applications for Teaching Basic Concepts of
Database and Internet Programming
Speaker: Richard Cooper
(http://www.dcs.gla.ac.uk/~rich)
Institution: University of Glasgow
(http://www.dcs.gla.ac.uk)
Abstract:
Everyday use of databases and the web necessarily obscures the
underlying mechanisms which these systems use. Therefore, teaching
the concepts involved in such systems cannot wholly be achieved by
giving the students practice with commercial software. I have been
developing applications which give students access to the underlying
processes. These include permitting students to: enter relational
algebra and calculus programs; see how ER diagrams are turned into a
set of relations; step through normalisation; observe how XML programs
and web services work; and explore the information being transmitted
over HTTP.
Experience with the development of these applications has led to the
identification of a number of software structures which may be
extracted. These include modules which: support interaction with
(i.e. select, edit and move) textual documents and diagrams;
highlight the correspondence between two representations; step through
a process; and provide hyperlinked feedback and help systems. The
talk concludes with the aspiration of extracting abstract versions of
these modules in order to accelerate the development of further
applications.
--------------------------------
Date and time: Thursday 2nd December 2004 at 16:00
Location: UG40, School of Computer Science
Title: Analysis and Synthesis of Logic Controllers
Speaker: Jean-Marc Faure
(http://www.lurpa.ens-cachan.fr/membre.html?membre_id=JMF)
Institution: LURPA Cachan, France
(http://www.lurpa.ens-cachan.fr/)
Host: Hayo Thielecke
Abstract:
This talk will present some recent research results obtained by the
Automation Engineering team of LURPA. The overall objective of this
work is to provide formal means for developing controllers that comply
with the dependability requirements of the application. Two verification
techniques will be addressed: model-checking and theorem-proving, the
latter using a specific algebra for binary signals. This algebra is also
the underlying theory of our work on the synthesis of controllers. The
application requirements are stated
formally thanks to a partial order relation; then consistency checking
of the set of formal statements is performed and control laws are
derived from the consistent set obtained. Finally some prospects will
be given and preliminary results of ongoing research on networked
control systems will be presented.
--------------------------------
Date and time: Thursday 16th December 2004 at 16:00
Location: UG40, School of Computer Science
Title: Cognitive Architecture for Software Application
Architectures
Speaker: John Knapman
Institution: Previously IBM Hursley [now doing independent research on
architectures]
Host: Aaron Sloman
Abstract:
Prior work has applied simple probabilities and uncertainty to models
of software application architectures that use the Unified Modeling
Language (UML). An interesting extension would be to apply techniques
developed in AI research for mapping analogies; for example, stating
that a change to system X is required that is like the change C made to
system Y. Such ideas have been attempted by others. However, careful
consideration suggests that more fundamental investigation is needed
if we are ever to build tools that encapsulate worthwhile amounts of
knowledge.
Attempts to solve the problems in isolation may forever lead only to
brittle, incomplete systems with limited usefulness, either as
explanatory models or as applications of AI.
I want to investigate the integration of methods and representations
in a cognitive architecture with a view to applying them to software
application architectures.
--------------------------------
Date and time: Thursday 13th January 2005 at 16:00
Location: UG40, School of Computer Science
Title: TeraHertz Imaging
Speaker: Dr Liz Berry
Institution: University of Leeds
Host: Ela Claridge
Abstract:
TBA
--------------------------------
Date and time: Thursday 20th January 2005 at 16:00
Location: UG40, School of Computer Science
Title: The requirements and challenges of automated intrusion
response
Speaker: Steven Furnell
(http://ted.see.plym.ac.uk/nrg/people/sfurnell.asp)
Institution: University of Plymouth
(http://www.plymouth.ac.uk/pages/view.asp?page=7491)
Host: Catriona Kennedy
Abstract:
The continual problem of Internet attacks has served to make the
intrusion detection system an increasingly common security
countermeasure. However, whereas detection technologies have received
extensive research for many years, the issue of intrusion response has
received relatively little attention - particularly in the context of
automated and active response systems, which may be considered
ever more desirable given the volume and speed of Internet-based
attacks. Unfortunately, effective automation of responses is
complicated by the potential for issuing severe actions in a false
positive scenario. Addressing this problem leads to requirements such
as the ability to adapt decisions according to changes in the
environment, the facility to offer escalating levels of response, and
the capability to evaluate response decisions. The presentation will
explore these issues and discuss how some of the required concepts are
achieved within a Flexible Automated Intelligent Responder (FAIR)
architecture and an accompanying prototype.
--------------------------------
Date and time: Thursday 3rd February 2005 at 16:00
Location: UG40, School of Computer Science
Title: The Computer Ate My Vote
Speaker: Peter Y A Ryan
(http://www.cs.ncl.ac.uk/people/home.php?id=105)
Institution: University of Newcastle upon Tyne
(http://www.cs.ncl.ac.uk/)
Host: Mark Ryan
Abstract:
For centuries, in the UK at least, we have taken democracy for granted
and placed trust in the paper ballot approach to casting and counting
votes. In reality, the process of casting and counting votes is one of
considerable fragility. This has been recognized since the dawn of
democracy: the Ancient Greeks perceived the threat and devised
mechanical devices to try to sidestep the need to place trust in
officials. For over a century, the US has been using technological
approaches to recording and counting votes: lever machines, punch
cards, optical readers, touch screen machines, largely in response to
widespread corruption with paper ballots. In the last few years, the
UK has been experimenting with alternative voting technologies.
In this talk I will discuss approaches to achieving assurance of
accuracy and ballot secrecy in electoral systems. In particular I
will present a cryptographic scheme, based on an earlier scheme due to
Chaum, that has the remarkable property of providing voters with the
opportunity to verify that their vote is accurately counted whilst
still ensuring the secrecy of their ballot. At the same time, minimal
trust need be placed in the technology or officials.
--------------------------------
Date and time: Thursday 10th February 2005 at 16:00
Location: UG40, School of Computer Science
Title: Complexity in Predicative Arithmetic
Speaker: Stan Wainer
(http://www.amsta.leeds.ac.uk/pure/staff/wainer/wainer.html)
Institution: University of Leeds
(http://www.amsta.leeds.ac.uk/)
Host: Achim Jung
Abstract:
Complexity classes between polynomial and (iterated) exponential time
are characterised in terms of provable termination in a theory
formalising basic principles of Nelson's Predicative Arithmetic, and
based on the Bellantoni-Cook "normal/safe" variable separation.
Extensions by inductive definitions enable full arithmetic and higher
systems to be recaptured in a setting where the natural bounding
functions are "slow" rather than "fast" growing.
--------------------------------
Date and time: Thursday 17th February 2005 at 16:00
Location: UG40, School of Computer Science
Title: A Strongly Typed Approach to User Interfaces of
Submit/Response Style Systems
Speaker: Dirk Draheim
(http://www.inf.fu-berlin.de/inst/ag-pr/draheim/)
Institution: Freie Universität Berlin
(http://www.inf.fu-berlin.de/index.en.html)
Host: Uday Reddy
Abstract:
Submit/Response style systems range from simple Web shops to complex
enterprise resource planning systems. Form-oriented analysis is a
holistic, domain-specific approach to the development of such systems.
Submit/Response style systems are modeled as typed, coarse-grained,
bipartite state-machines that are accessed through conceptual browsers.
From this abstract viewpoint concrete concepts can be derived at
different levels of methodology and technology. In this talk we discuss
strong type system support - both from a source code analysis viewpoint
and a generative programming viewpoint. Angie is a forward engineering
tool for web-based presentation layers. JSPick is a server pages design
recovery tool. A formal semantics of this tool is given as
pseudo-evaluation. Revangie is a source code independent reverse
engineering tool for dynamic web sites. NSP is a statically typed server
pages approach. The defined notions of type correctness ensure the
type-safe interplay of dynamically generated web forms and targeted
software components.
--------------------------------
Date and time: Thursday 24th February 2005 at 16:00
Location: UG40, School of Computer Science
Title: Journeys in Non-Classical Computation -- a UK Grand
Challenge in Computing Research
Speaker: Susan Stepney
(http://www-users.cs.york.ac.uk/~susan/)
Institution: University of York
(http://www.cs.york.ac.uk)
Host: Aaron Sloman
Abstract:
Today's computing, classical computing, is an extraordinary success
story. However, there is a growing appreciation that it encompasses
an extremely small subset of all computational possibilities.
The Grand Challenge of Non-Classical Computation seeks to bring
about a reconceptualisation of computation itself. The various forms
of non-classical computation -- bio-inspired algorithms, open complex
adaptive systems, embodied computation, quantum computation, and
more -- will not supersede classical computation, however: they will
augment and enrich it. The Grand Challenge seeks to explore,
generalise, and unify all the many diverse non-classical
computational paradigms, to produce a fully mature and rich science
of all forms of computation, that unifies the classical and
non-classical computational paradigms.
--------------------------------
Date and time: Thursday 3rd March 2005 at 16:00
Location: UG40, School of Computer Science
Title: A Bigraphical Programming Environment - contributing to
the UK Grand Challenge on Science for Global Ubiquitous
Computing
Speaker: Thomas Hildebrandt
(http://www.it-c.dk/people/hilde/)
Institution: IT University of Copenhagen, Denmark
(http://www.it-c.dk/)
Host: Marta Kwiatkowska
Abstract:
The UK Grand Challenge on Science for Global Ubiquitous Computing
(GUC) is to develop a coherent science for descriptive and predictive
analysis of GUC systems at each level of abstraction, that will be the
sole foundation of the GUC systems and languages constructed 15 years
from now.
The Bigraphical Programming Languages (BPL) project at IT University
of Copenhagen addresses many of the issues raised by the GUC
challenge.
The project aims to develop a prototype programming environment for
GUC based on the theory of Bigraphical Reactive Systems and tested on
experimental applications developed at Laboratory for
Context-dependent Mobile Communication (LaCoMoCo.itu.dk) at ITU.
The BPL research group covers programming language technology and
semantics, distributed systems and concurrency theory. It collaborates
with Robin Milner and is working on establishing further
collaborations with external research groups.
In this talk we will describe one of the activities in the BPL
project, called Distributed Reactive XML, which aims at building a
prototype, distributed implementation of a bigraphical programming
environment based on XML technologies.
Distributed Reactive XML is joint work with Henning Niss and Martin
Olsen at ITU.
--------------------------------
Date and time: Thursday 10th March 2005 at 16:00
Location: UG40, School of Computer Science
Title: Sampling Graph Colourings
Speaker: Leslie Goldberg
(http://www.dcs.warwick.ac.uk/people/academic/Leslie.Goldberg/)
Institution: University of Warwick
(http://www.dcs.warwick.ac.uk/)
Host: Jon Rowe
Abstract:
A "colouring" of a graph is a function from the set of vertices of the
graph to a set of q colours. The colouring is "proper" if it does not
assign adjacent vertices the same colour. "Glauber dynamics" is a
simple process which is often used to choose a proper colouring
uniformly at random. The idea is simple --- keep repeating the
following step: choose a vertex uniformly at random, and "randomly"
recolour the vertex. This approach (and others based on it) is used
frequently in statistical physics. It is fairly easy to see that the
limiting distribution (if you run Glauber dynamics forever) is uniform
on proper colourings. That is, if you run Glauber dynamics long enough
and output the resulting colouring, then each proper colouring is
(almost) equally likely to be output. If the number of colours, q, is
large enough, then Glauber dynamics is "rapidly mixing", meaning
that the distribution converges to (close to) the uniform distribution
in polynomial time. It is interesting to know, for a given graph G,
how many colours is enough. This question is not completely solved,
even for simple graphs such as the square lattice! In this talk, I
will tell you what is known about the problem. I will describe
the "coupling" method, which is a useful method for determining
whether a dynamics is rapidly mixing, and I will describe connections
to the "uniqueness" problem in statistical physics.
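
The dynamics itself is only a few lines; here is a direct sketch of the
step just described (starting, for simplicity, from an arbitrary
colouring, whereas the standard analysis starts from a proper one):

import random

def glauber_step(graph, colouring, q):
    """graph: {vertex: set(neighbours)}; colouring: {vertex: 0..q-1}."""
    v = random.choice(list(graph))
    forbidden = {colouring[u] for u in graph[v]}
    allowed = [c for c in range(q) if c not in forbidden]
    if allowed:                 # non-empty whenever q > max degree
        colouring[v] = random.choice(allowed)

def sample_colouring(graph, q, steps=100_000):
    """Run the chain long enough; if the dynamics is rapidly mixing,
    the output is close to uniform over proper colourings."""
    colouring = {v: random.randrange(q) for v in graph}
    for _ in range(steps):
        glauber_step(graph, colouring, q)
    return colouring

# A 4-cycle with q = 4 colours.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(sample_colouring(cycle, q=4))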
--------------------------------
Date and time: Thursday 17th March 2005 at 16:00
Location: UG40, School of Computer Science
Title: WHAT DOES A MOBILE ROBOT ACTUALLY DO?
Quantitative Analysis and Description of Mobile Robot
Behaviour
Speaker: Ulrich Nehmzow
(http://cswww.essex.ac.uk/staff/udfn/)
Institution: University of Essex
(http://cswww.essex.ac.uk/)
Host: Jeremy Wyatt
Abstract:
The fundamental rules that determine the behaviour of a mobile robot are
still little understood. As a consequence, developing task-achieving
robot controllers still involves a lot of trial and error
experimentation and "gut feeling" - a clear theory of robot-environment
interaction that could be used instead is not yet available.
In this seminar, which will present practical examples of the mobile
robotics and chaos theory research conducted at Essex, I will address
the following questions:
* How can robot-environment interaction be measured quantitatively?
* How can we develop a theory of robot-environment interaction?
* Can robot-environment interaction be modelled precisely?
While the examples of this talk are taken from mobile robotics, the
methods presented are equally applicable to any other "behaving" agent,
and the seminar addresses problems that are equally relevant to
robotics, psychology, ethology or biology.
--------------------------------
Date and time: Thursday 24th March 2005 at 16:00
Location: UG40, School of Computer Science
Title: Promises and Challenges of CERCIA
Speaker: Xin Yao
(http://www.cs.bham.ac.uk/~xin)
Institution: School of Computer Science, The University of Birmingham
(http://www.cs.bham.ac.uk)
Host: Volker Sorge
Abstract:
CERCIA (The Centre of Excellence for Research in
Computational Intelligence and Applications) was set up two
years ago to create and transfer cutting edge technology in
computational intelligence and natural computation to the
advantage of industry and business. This talk traces the
origin of CERCIA, reviews its work and presents its future
plan.
--------------------------------
Date and time: Thursday 9th June 2005 at 16:00
Location: UG40, School of Computer Science
Title: CHASSIS: A new Inverse Algorithm to Characterise Relaxed
Systems
Speaker: Dalia Chakrabarty
(http://www.physics.rutgers.edu/people/pips/Chakrabarty.html)
Institution: Rutgers University, USA
(http://www.physics.rutgers.edu/)
Host: Ata Kaban
Abstract:
A new non-parametric algorithm is discussed. This scheme is designed to
identify the phase space density distribution function as well as the
potential of a relaxed system, using the measured positions and
kinematic data of individual system members. A synopsis of the scheme
is presented. Applications of this algorithm to assorted astrophysical
problems are discussed. Further improvements on the existing code are
suggested.
--------------------------------
Date and time: Tuesday 21st June 2005 at 14:00
Location: UG04, Learning Center
Title: The Model Evolution Calculus -- a First-Order DPLL
Procedure
Speaker: Peter Baumgartner
(http://www.mpi-inf.mpg.de/~baumgart/)
Institution: Max-Planck-Institut für Informatik, Saarbrücken, Germany
(http://www.mpi-inf.mpg.de)
Host: Manfred Kerber
Abstract:
The DPLL procedure, due to Davis, Putnam, Logemann, and Loveland, is the
basis of some of the most successful propositional satisfiability
solvers to date. Although originally devised as a proof-procedure for
first-order logic, it has been used almost exclusively for propositional
logic so far because of its highly inefficient treatment of quantifiers,
based on instantiation into ground formulas. Starting from this
observation, I developed a "proper" first-order DPLL method (FDPLL). It
is motivated by lifting the very effective techniques developed for
the propositional part of DPLL to the first-order level in
conjunction with exploiting successful first-order theorem proving
techniques like unification and subsumption. The FDPLL calculus has been
refined and improved by, e.g., incorporating DPLL style simplification
rules. The resulting method we call the Model Evolution Calculus (it is
joint work with Prof. Cesare Tinelli from the University of Iowa, USA).
In the talk I will focus on the Model Evolution Calculus as such, but I
will also report on performance results obtained with its
implementation, the Darwin system (
http://www.mpi-inf.mpg.de/~baumgart/DARWIN/ ). Further I will sketch
recent work on extending the calculus with dedicated inference rules for
equality reasoning.
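As background, the propositional procedure being lifted can be stated
in a few lines; the following is an unoptimised, illustrative sketch of
classical DPLL (splitting plus unit propagation, without the modern
refinements), not the FDPLL or Model Evolution calculus itself.

    def dpll(clauses, assignment=frozenset()):
        # Clauses are sets of integer literals (negative = negated).
        # Returns a satisfying set of literals, or None if unsatisfiable.
        simplified = []
        for clause in clauses:
            if clause & assignment:
                continue                      # clause already satisfied
            rest = {l for l in clause if -l not in assignment}
            if not rest:
                return None                   # empty clause: conflict
            simplified.append(rest)
        if not simplified:
            return assignment                 # every clause satisfied
        for clause in simplified:
            if len(clause) == 1:              # unit propagation
                (lit,) = clause
                return dpll(simplified, assignment | {lit})
        lit = next(iter(simplified[0]))       # the splitting rule
        return (dpll(simplified, assignment | {lit})
                or dpll(simplified, assignment | {-lit}))

    # dpll([{1, 2}, {-1, 3}, {-3}]) returns a model: {2, -1, -3}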
--------------------------------
Date and time: Monday 27th June 2005 at 16:00
Location: UG40, School of Computer Science
Title: Formalizing DPLL-based Solvers for Propositional
Satisfiability and for Satisfiability Modulo Theories
Speaker: Cesare Tinelli
(http://www.cs.uiowa.edu/~tinelli/)
Institution: University of Iowa, USA
(http://www.cs.uiowa.edu/)
Host: Volker Sorge
Abstract:
This talk introduces Abstract DPLL, a general and simple rule-based
formulation of the Davis-Putnam-Logemann-Loveland (DPLL) procedure, the
most successful decision procedure for propositional satisfiability to
date. Abstract DPLL allows one to model and formally reason about
several DPLL variants and enhancements in a simple way. Its main
properties such as soundness, completeness or termination immediately
carry over to modern DPLL implementations with such advanced features as
non-chronological backtracking, lemma learning, and restarts.
In the second part of the talk I will extend the framework to
Satisfiability Modulo Theories (SMT), the problem of determining the
satisfiability of quantifier-free formulas in the context of logical
theories of interest. Abstract DPLL Modulo Theories allows one to model
and formally reason about state-of-the-art SMT techniques based on DPLL.
Specifically, I will show how it models several so-called lazy
approaches to SMT, including our own DPLL(T) scheme.
This is joint work with Robert Nieuwenhuis and Albert Oliveras of the
Technical University of Catalonia.
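The flavour of the rule-based formulation can be conveyed by a toy
rendering of states (M, F), where M is the current sequence of
assigned literals and F the clause set; the sketch below implements
UnitPropagate, Decide and Fail, with plain chronological backtracking
standing in for Backjump, and all names are invented for illustration.

    def step(M, F):
        # One transition. M is a list of (literal, tag) pairs, with tag
        # 'd' for decision literals and 'u' for forced ones.
        assigned = {lit for lit, _ in M}
        for clause in F:
            if any(l in assigned for l in clause):
                continue                            # clause true under M
            free = [l for l in clause
                    if l not in assigned and -l not in assigned]
            if not free:                            # clause false under M
                if all(tag != 'd' for _, tag in M):
                    return 'fail'                   # Fail
                return backtrack(M), F              # backtrack and flip
            if len(free) == 1:
                return M + [(free[0], 'u')], F      # UnitPropagate
        for clause in F:                            # Decide
            if any(l in assigned for l in clause):
                continue
            for l in clause:
                if l not in assigned and -l not in assigned:
                    return M + [(l, 'd')], F
        return None                                 # M satisfies F

    def backtrack(M):
        # Undo everything after the most recent decision and flip it.
        i = max(j for j, (_, tag) in enumerate(M) if tag == 'd')
        return M[:i] + [(-M[i][0], 'u')]

    def solve(F):
        state = ([], F)
        while True:
            nxt = step(*state)
            if nxt == 'fail':
                return None                          # unsatisfiable
            if nxt is None:
                return [lit for lit, _ in state[0]]  # a model
            state = nxt

    # solve([{1}, {-1, 2}, {-2, -1}]) -> None
    # solve([{1, 2}, {-1, 2}]) -> [1, 2]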
--------------------------------
Date and time: Thursday 7th July 2005 at 15:00
Location: UG40, School of Computer Science
Title: Electricity market design and auction theory
Speaker: Robert Marks
(http://www.agsm.edu.au/~bobm)
Institution: Australian Graduate School of Management, Sydney,
Australia
(http://www.agsm.edu.au/)
Host: Xin Yao
Abstract:
This paper explores the state of the emerging practice of
designing markets by the use of agent-based modeling. The paper
first reviews the use of evolutionary and agent-based techniques
of analyzing market behaviors and market mechanisms, and economic
models of learning, comparing genetic algorithms with
reinforcement learning. Ideal design would be direct
optimization of an objective function, but in practice the
complexity of markets and traders' behavior prevents this, except
in special circumstances. Instead, iterative analysis, subject
to design criteria trade-offs, using autonomous self-interested
agents, mimics the bottom-up evolution of historical market
mechanisms by trial and error. The paper discusses recent
progress in agent-based evolutionary analysis and design of
electricity markets in silico. It closes with a brief discussion of
the profession's skepticism towards agent-based modelling.
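To give a feel for the learning agents involved, here is a hedged
sketch of a simplified Roth-Erev reinforcement learner over a discrete
set of bids, of the kind commonly compared with genetic algorithms in
this literature; the class, parameter values and update rule are
illustrative simplifications rather than the paper's model.

    import random

    class RothErevBidder:
        # Propensities grow with realised profit; bids are drawn with
        # probability proportional to propensity, so profitable bids
        # are played more often while the rest are slowly forgotten.
        def __init__(self, bids, recency=0.1):
            self.bids = list(bids)
            self.propensity = {b: 1.0 for b in self.bids}
            self.recency = recency

        def choose(self):
            weights = [self.propensity[b] for b in self.bids]
            return random.choices(self.bids, weights=weights)[0]

        def update(self, bid, profit):
            for b in self.bids:
                self.propensity[b] *= 1.0 - self.recency  # forgetting
            self.propensity[bid] += max(profit, 0.0)      # reinforcement

In each simulated trading round, every agent chooses a bid, the market
clears, and each agent reinforces the bid it actually played with the
profit it realised.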
--------------------------------
Date and time: Thursday 7th July 2005 at 16:00
Location: UG40, School of Computer Science
Title: Delta-X: Producing Run-Time Checks from Integrity
Constraints
Speaker: Glenn Bruns
(http://cm.bell-labs.com/cm/cs/who/grb/)
Institution: Bell Laboratories, Lucent Technologies, USA
(http://cm.bell-labs.com/)
Host: Mark Ryan
Abstract:
Software applications are inevitably concerned with data
integrity, whether the data is stored in a database, files,
or program memory. An *integrity guard* is code
executed before a data update is performed. The guard
returns "true" just if the update will preserve data
integrity. The problem considered here is how
integrity guards can be produced automatically from data
integrity constraints. We seek a solution that can be
applied in general programming contexts, and that
leads to efficient integrity guards.
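As a small, hypothetical illustration of the problem (not the Delta-X
language or its algorithms): from a constraint such as 'every account
balance is non-negative', a guard specialised to one update need only
examine the data that update touches.

    # Integrity constraint over the whole state: no negative balances.
    def integrity_ok(accounts):
        return all(balance >= 0 for balance in accounts.values())

    # A guard generated for the update "withdraw(acct, amount)" checks
    # just the one row the update changes, not the whole table.
    def guard_withdraw(accounts, acct, amount):
        return accounts[acct] - amount >= 0

    def withdraw(accounts, acct, amount):
        if not guard_withdraw(accounts, acct, amount):
            raise ValueError("update would violate data integrity")
        accounts[acct] -= amount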
In this talk I will discuss a new integrity constraint language
and guard generation algorithms that are based on a rich
object data model. I will also discuss Delta-X, a tool based
on these ideas that is used at Lucent to generate hundreds
of thousands of lines of product code.
This is joint work with Michael Benedikt.
--------------------------------
Date and time: Monday 25th July 2005 at 16:00
Location: UG40, School of Computer Science
Title: The Traditional Approach to Undefinedness
Speaker: William Farmer
(http://imps.mcmaster.ca/wmfarmer/)
Institution: McMaster University, Canada
(http://www.cas.mcmaster.ca)
Host: Volker Sorge
Abstract:
Undefined terms are commonplace in mathematics and computer science.
The traditional approach to undefinedness in mathematical practice is
to treat undefined terms as legitimate, nondenoting terms that can be
components of meaningful statements. In the traditional approach,
statements about partial functions and undefined terms can be
expressed very concisely because conditions about definedness usually
do not need to be stated explicitly. Unfortunately, the traditional
approach cannot be easily employed in a standard logic, like
first-order logic or simple type theory, in which all functions are
total and all terms are defined. As a result, computer scientists --
who tend to be more formal than mathematicians -- have not embraced
this approach to handling undefinedness.
In this talk we will explain what the traditional approach to
undefinedness is and how it is employed in mathematical practice. We
will show that the traditional approach can actually be formalized in
a standard logic if the logic is modified slightly to admit undefined
terms and statements about definedness. And we will argue that, since
logics with undefinedness closely correspond to mathematical practice
and can be effectively implemented, they should be seriously
considered as a logical basis for mechanized mathematics systems.
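As a toy rendering of the traditional approach (an illustrative
sketch, not the formalisation discussed in the talk): terms may fail
to denote, and an atomic formula containing an undefined term is
simply false, so a statement like '1/x > 0' needs no explicit side
condition 'x is nonzero'.

    UNDEF = object()          # marker for a nondenoting term

    def div(x, y):
        return UNDEF if y == 0 else x / y

    def holds(pred, *terms):
        # An atomic formula about an undefined term is false, not an error.
        if any(t is UNDEF for t in terms):
            return False
        return pred(*terms)

    # "1/0 > 0" is false, and so is "1/0 = 1/0": no definedness
    # conditions need to be stated in the formulas themselves.
    assert holds(lambda a, b: a > b, div(1, 0), 0) is False
    assert holds(lambda a, b: a == b, div(1, 0), div(1, 0)) is False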
--------------------------------
Date and time: Monday 22nd August 2005 at 16:00
Location: UG40, School of Computer Science
Title: Defining Recognition Tasks as Imitation Games
Speaker: Richard Zanibbi
(http://www.cs.concordia.ca/~zanibbi/)
Institution: Concordia University, Montreal, Canada
(http://www.cs.concordia.ca/)
Host: Alan Sexton
Abstract:
Currently in pattern recognition research, the goal of a
recognition task is often defined informally using labelled
training samples. This makes it difficult to compare results
published in the literature. Also, researchers have often
observed that ambiguous patterns, such as poorly handwritten
digits, lead to problems with defining ground-truth (the set
of "correct" interpretation(s) for each pattern). Some
researchers might interpret ambiguous digits differently
than others, for example.
In this talk we propose that the answer to "What is
correct?" depends on who we are asking, and present an
imitation game that captures this dependency. In each round
of the game, a pattern is shown to a set of experts that
produce the goal interpretations for the pattern. Each
player then tries to match the (hidden) expert
interpretations. The game is played on a "field" of
interpretations defined by legal move sequences for players
and experts; these moves are the operations of an explicit
recognition model. At the end of the game, players are
ranked by the distances between expert and player
interpretations as measured by a binary distance metric. For
example, players might be ranked by average interpretation
distance, or number of closest interpretations ("best out of
N"). Both experts and players may be persons or machines.
We describe how the game may be used to define and compare
recognition tasks, and how it places the terms of evaluation
within the problem definition, making it easier to compare
recognition algorithms.
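The ranking step admits a direct sketch; the data layout (one set of
expert interpretations per round, one interpretation per player per
round) and the function names below are illustrative assumptions, not
the paper's notation.

    def binary_distance(a, b):
        # The binary metric: 0 on an exact match, 1 otherwise.
        return 0 if a == b else 1

    def rank_players(expert_interps, player_interps):
        # expert_interps: per round, the set of goal interpretations.
        # player_interps: player -> per-round list of interpretations.
        # Rank players by average distance to the nearest expert
        # interpretation (one of the ranking rules mentioned above).
        scores = {}
        for player, interps in player_interps.items():
            dists = [min(binary_distance(i, e) for e in experts)
                     for i, experts in zip(interps, expert_interps)]
            scores[player] = sum(dists) / len(dists)
        return sorted(scores.items(), key=lambda item: item[1])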
--------------------------------
Date and time: Thursday 29th September 2005 at 16:00
Location: UG40, School of Computer Science
Title: Conditional Symmetry Breaking
Speaker: Tom Kelsey
(http://www.dcs.st-and.ac.uk/~tom/)
Institution: University of St Andrews
(http://www.dcs.st-and.ac.uk/)
Host: Volker Sorge
Abstract:
Symmetry breaking is an important aspect of the efficient solution of
Constraint Satisfaction Problems (the assignment of values to a set of
variables so that no member of a set of constraints is violated).
For example, if we assign colours to graph vertices so that no
connected vertices share the same colour, then any solution is
symmetrically equivalent to another with some or all of the colours
permuted: N colours lead to N! symmetrically equivalent solutions. We
generally deal with symmetries present before search starts. This talk
sets out approaches to the harder problem of dealing with symmetries
that become present at some point during search.
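For symmetries known before search, one standard device is a
value-precedence constraint, sketched below for graph colouring by
brute-force enumeration (an illustration, not a contribution of the
talk): colour c may be introduced only after colours 0..c-1, which
keeps exactly one representative of each class of up to N! symmetric
solutions.

    from itertools import product

    def value_precedence(col):
        # Colour c may appear only after all of 0..c-1 have appeared.
        introduced = 0
        for c in col:
            if c > introduced:
                return False
            introduced = max(introduced, c + 1)
        return True

    def proper_colourings(adj, q, break_symmetry=True):
        # adj: list of neighbour lists for vertices 0..n-1 (symmetric).
        n = len(adj)
        for col in product(range(q), repeat=n):
            if any(col[u] == col[v]
                   for u in range(n) for v in adj[u] if u < v):
                continue                  # some edge is monochromatic
            if break_symmetry and not value_precedence(col):
                continue                  # not the class representative
            yield col

    # A triangle has 3! = 6 proper 3-colourings; with the precedence
    # constraint only (0, 1, 2) remains:
    # list(proper_colourings([[1, 2], [0, 2], [0, 1]], q=3))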
--------------------------------
Date and time: Thursday 6th October 2005 at 16:00
Location: UG40, School of Computer Science
Title: Modelling Complex Interactions: A logic-based approach
Speaker: Juliana Küster-Filipe
(http://www.cs.bham.ac.uk/~jkf/)
Institution: The University of Birmingham
(http://www.cs.bham.ac.uk)
Host: Volker Sorge
Abstract:
In UML 2.0, interaction diagrams have been considerably modified, and
new notation has been introduced to express and structure complex
interactions. Remaining limitations include not being able to
distinguish between messages that may or must be received, or enforce
progress of an instance along its lifeline. Within UML these issues
can be addressed combining interaction diagrams with liveness
constraints expressed in UML's Object Constraint Language (OCL).
In this talk, I describe the main features of sequence diagrams in
UML2.0 and present a semantics based on labelled event structures.
Further, I describe a proposed OCL template to specify liveness
properties and show how it can be used. Formally, the
liveness-enriched sequence diagrams can be specified as a collection
of formulae in a true-concurrent two-level logic interpreted over
labelled event structures. The top level logic, called communication
logic, is used to describe inter-object specification, whereas the
lower level logic, called home logic, describes intra-object
behaviour. Time permitting, I will describe ways of extending this
logic to address different needs for dependability.
--------------------------------
Date and time: Thursday 13th October 2005 at 16:00
Location: UG40, School of Computer Science
Title: A new theory of vision
Speaker: Aaron Sloman
(http://www.cs.bham.ac.uk/~axs)
Institution: The University of Birmingham
(http://www.cs.bham.ac.uk)
Host: Volker Sorge
Abstract:
For many years I have been working on a collection of related problems
that are usually studied separately. In the last few weeks I have come
across a way of thinking about them
o that seems to be new, though it combines several old ideas
o seems to have a lot of explanatory power
o opens up a collection of new research issues in psychology,
(including animal psychology), neuroscience, AI, biological
evolution, linguistics and philosophy.
The issues I have been concerned with include the following:
o what are the functions of vision in humans, other animals and
intelligent robots -- and what mechanisms, forms of
representation and architectures make it possible for those
requirements to be met?
o how do we do spatial/visual reasoning, both about spatial
problems, and also about non-spatial problems (e.g. reasoning
about search strategies, or about transfinite ordinals, or
family relationships)?
o what is the role of spatial reasoning capabilities in
mathematics?
o what are affordances, how do we see them and how do we use them
including both positive and negative affordances? I.e. how do we
see which actions are possible and what the constraints are,
before we perform them?
o what is causation, how do we find out what causes what, and how
do we reason about causal connections?
o how much of all this do we share with other animals?
o what are the relationships between being able to understand and
reason about what we see, and being able to perform actions on
things we see?
o how do all these visual and other abilities develop within an
individual?
o how did the abilities evolve in various species?
Examples of things I wrote about these topics nearly 30 years ago can
be found in my 1978 book, e.g. these chapters (of which the first was
originally a paper at IJCAI 1971 attacking logicist AI).
http://www.cs.bham.ac.uk/research/cogaff/crp/chap7.html
http://www.cs.bham.ac.uk/research/cogaff/crp/chap9.html
The first presents a theory of diagrammatical reasoning as 'formal'
and rigorous in its own way, and the second reports a theory of vision
as involving perception of structure at different levels of
abstraction, using different ontologies, with information flowing both
bottom up (driven by the data) and top down (driven by problems,
expectations, and prior knowledge).
But most of what I wrote over many years was very vague and did not
specify mechanisms. Many other people have speculated about
mechanisms, but I don't think the mechanisms proposed have the right
capabilities.
In particular, AI work on vision over the last 30 years has mostly
ignored the task of perceiving and understanding structure, and has
instead focused on classification, tracking, and prediction, which are
largely statistical, not visual processes.
Recently I was thinking about requirements for vision in a robot with
3-D manipulation capabilities, as required for the 'PlayMate' scenario
in the CoSy project
http://www.cs.bham.ac.uk/research/projects/cosy/PlayMate-start.html
Thinking about relations between 3-D structured objects made it plain
that besides obvious things like 'the pyramid is on the block' the
robot will have to perceive less obvious things such as that the
pyramid and block each has many parts (including vertices, faces,
edges, centres of faces, interiors, exteriors, etc.) that stand in
different relations to one another and to the parts of the other
object. I.e. the robot (like us) needs to be able to perceive
'multi-strand relationships'.
Moreover, when things move, whether as part of an action performed by
the robot or for some other reason, many of these relations change
*concurrently*.
E.g. one corner of the pyramid might move off the face of the block
while the other parts of the pyramid change relationships to other
parts of the block. If the object moving is flexible, internal
relationships can change also. So the robot needs to perceive
'multi-strand processes', in which multi-strand relationships change.
Thinking about this, and linking it up with a collection of older
ideas (e.g. 'vision is controlled hallucination' Max Clowes) led to
the following hypothesis:
Visual perception involves:
- creation and running of a collection of process *simulations*
- at different levels of abstraction
- some discrete, some continuous (at different resolutions)
- in (partial) registration with one another and with sensory data
(where available), and with motor output signals in some cases,
- using mechanisms capable of running with more or less sensory
input (e.g. as part of an object moves out of sight behind a wall,
etc.)
- selecting only subsets of possible simulations at each level
depending on what current interests and motivations are (e.g.
allowing zooming in and out)
- with the possibility of saving re-startable 'check-points' for use
when searching for a solution to a problem, e.g. a planning
problem.
So, paradoxically, perceiving a static scene involves running
simulations in which nothing happens.
The ability to run these simulations during visual perception may be
shared with many animals, but probably only a small subset have the
ability to use these mechanisms for representing and reasoning about
processes that are not currently being perceived, including very
abstract processes that could never be perceived, e.g. processes
involving transformations of infinite ordinals.
In the talk I shall attempt to explain all this in more detail and
identify some of the unanswered questions arising out of the theory.
There are many research questions raised by all this.
I would welcome criticisms, suggestions for improvement of the theory,
and suggestions for implementation on computers and in brains.
"If a problem is too hard to solve, try a harder one".
(I have not found out who said that. If you know, please tell me.)
--------------------------------
Date and time: Thursday 20th October 2005 at 16:00
Location: UG40, School of Computer Science
Title: Irony processing: Expectation versus salience-based
inferences
Speaker: Rachel Giora
(http://www.tau.ac.il/~giorar)
Institution: Tel Aviv University, Israel
(http://www.tau.ac.il/)
Host: John Barnden
Abstract:
Results from 4 experiments support the view that, regardless of
contextual information, when an end-product interpretation of an
utterance does not rely on the salient (lexicalized and prominent)
meanings of its components (e.g., words), it will not be faster to
derive than when it does. To test this view, we looked into
salience-based (literal) and nonsalient (ironic) interpretations in
contexts inducing an expectation for irony. In
Experiment 1, expectancy was manipulated by introducing an ironic
speaker in vivo who also uttered the target utterance. Findings show
that ironic targets were slower to read than literal
counterparts. Experiment 2 shows that ironies took longer to read than
literals and that response times to ironically related probes were
longer than to literally related probes, regardless of
context. Experiments 3 and 4 show that, even when participants were
given extra processing time and were presented exclusively with ironically
biasing contexts, the expectancy for irony acquired throughout such
exposure did not facilitate irony interpretation.
This is joint work with:
Ofer Fein,
Department of Behavioral Sciences, The Academic College of Tel Aviv
Yaffo
Dafna Laadan, Joe Wolfson, Michal Zeituny, Ronie Kaufman, Ran Kidron,
and Ronit Shaham
Department of Linguistics, Tel Aviv University
--------------------------------
Date and time: Thursday 27th October 2005 at 16:00
Location: UG40, School of Computer Science
Title: The challenges and promises of autonomic distributed
software
Speaker: Fabrice Saffre
Institution: BT Research
Host: Padma Reddy
Abstract:
The sheer size of modern applications dictates that they are
effectively developed as a collection of interacting modules. Because
the internal dynamics of these modules are largely hidden from one
another but at the same time affect each other, even software running
in isolation on
a single machine can exhibit complex emergent properties. However, an
even bigger challenge is to identify ways for individual modules to
self-organise into custom distributed applications. Autonomic design
principles may hold the key to seamless integration of service
components in the presence of exogenous perturbations like network
latency, transient resource availability or fluctuating usage patterns
("who needs what, where and when"). In the right conditions, local
rules adapted from those found in many biological systems have proven
capable of structuring a population of co-operative devices so that
supply consistently matches a changing and unpredictable
demand. This decentralised approach currently appears as the best
candidate solution to the problem of managing distributed modular
software and providing reliable and ubiquitous access to services.
--------------------------------
Date and time: Thursday 3rd November 2005 at 16:00
Location: UG40, School of Computer Science
Title: A Rational-Emotive Behavior Therapy-Based Automated
Dialogue System For Exercise Behavior Change
Speaker: Marco De Boni
(http://www-users.cs.york.ac.uk/~mdeboni/)
Institution: Unilever Corporate Research
(http://research.unilever.com/1_0/1_0_1-f.html)
Host: Ata Kaban
Abstract:
We describe an automated intervention system for encouraging users to
increase their exercise levels. The system is centred on automated
dialogue modelled on Rational Emotive Behaviour Therapy, a very
structured therapeutic approach which lends itself well to
computerization; in addition to this we personalise dialogue language
along a number of dimensions, related to psychological type, which
have been shown to increase persuasiveness. The system helps users
identify and overcome mental barriers to exercise, by helping them
think more flexibly about what they perceive to be stopping them from
exercising; this flexible way of thinking in turn enables them to
exercise more, leading to a healthier lifestyle.
--------------------------------
Date and time: Thursday 10th November 2005 at 16:00
Location: UG40, School of Computer Science
Title: Cw
Speaker: Gavin Bierman
(http://research.microsoft.com/~gmb)
Institution: Microsoft Research, Cambridge
(http://research.microsoft.com/)
Host: Volker Sorge, Manfred Kerber
Abstract:
Cw is an experimental language based on C# targeting the
three-tier applications common for web programmers. It includes tight
integration with the relational and semi-structured data models,
offering type-safe support for data creation and querying. In addition,
Cw provides a simple model of asynchronous (one-way) concurrency based
on the join calculus. Our intention is that Cw is a language
well-suited for the increasingly common distributed, data-intensive
web application.
In this talk, I'll give an informal introduction to the language and
demo some code. Time permitting, I'll briefly discuss some related
features that Microsoft is proposing for C# 3.0/VB 9.0.
--------------------------------
Date and time: Thursday 17th November 2005 at 16:00
Location: UG40, School of Computer Science
Title: Improving Design Approaches
Speaker: Russell Beale
(http://www.cs.bham.ac.uk/~rxb)
Institution: The University of Birmingham
(http://www.cs.bham.ac.uk)
Host: Volker Sorge
Abstract:
Creating effective, usable interactive systems is not easy, and
numerous approaches have been proposed. One approach to supporting
developers and designers is through the use of HCI design patterns that
capture the key elements of a design, providing a library of approaches
that are known to work. However, most design patterns are at best only
semi-formal, providing outline structures that are filled in with
discursive text and/or images.
We want an approach that makes patterns a much more accessible part of
the user interface design process. We focus on modelling a design
pattern at a high level of abstraction. We present an approach that
captures not only the software architectural characteristics of the
system but also its interface realisation, and hence forms a formal
representation of many of the elements of an HCI design pattern. We
discuss the approach and provide examples of how it can be used.
--------------------------------
Date and time: Thursday 24th November 2005 at 16:00
Location: UG40, School of Computer Science
Title: Towards provenance based reasoning in e-Science
Speaker: Luc Moreau
(http://www.ecs.soton.ac.uk/~lavm/)
Institution: University of Southampton
(http://www.ecs.soton.ac.uk/)
Host: Behzad Bordbar
Abstract:
The importance of understanding the process by which a result was
generated in an experiment is fundamental to science. Without
such information, other scientists cannot reproduce, analyse or
validate experiments. Provenance is therefore important to
enable a scientist to trace how a particular result has been
arrived at.
Based on the common sense definition of provenance, we propose a
new definition of provenance that is suited to the computational
model underpinning service oriented architectures: the provenance
of a piece of data is the process that led to the data. Since
our aim is to conceive a computer-based representation of
provenance that allows us to perform useful reasoning about the
origin of results, we examine the nature of such representation,
which is articulated around the documentation of execution.
We then examine the architecture of a provenance system, centered
around the notion of a provenance store designed to support the
provenance lifecycle: during a recording phase some documentation
of execution is archived in the provenance store, whereas a
reasoning phase operates over the archived documentation. Then,
we successively discuss a protocol for recording execution
documentation, a query facility to gain access to the contents of
the store, and a reasoning system to make inferences. The
realisation of such an architecture is particularly challenging
in the presence of e-Science experiments since it must be
scalable.
The presentation will draw upon our experience in the PASOA
(www.pasoa.org) and EU Provenance (www.gridprovenance.org)
projects and will rely on explicit use cases derived from
e-Science applications in the domain of bioinformatics, high
energy physics, organ transplant management and aerospace
engineering.
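A toy sketch of the record/query lifecycle may help fix ideas; the
class and method names below are invented for illustration and bear no
relation to the actual PASOA or EU Provenance interfaces.

    class ProvenanceStore:
        # Records process documentation as assertions of the form
        # (output, process, inputs) and answers provenance queries by
        # walking back through the recorded derivations.
        def __init__(self):
            self.assertions = {}          # output -> (process, inputs)

        def record(self, output, process, inputs):
            self.assertions[output] = (process, tuple(inputs))

        def provenance(self, item):
            # Return the full derivation tree that led to `item`.
            if item not in self.assertions:
                return item               # a primary input
            process, inputs = self.assertions[item]
            return (item, process, [self.provenance(i) for i in inputs])

    store = ProvenanceStore()
    store.record("aligned.fa", "align", ["seq1.fa", "seq2.fa"])
    store.record("tree.nwk", "build_tree", ["aligned.fa"])
    print(store.provenance("tree.nwk"))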
--------------------------------
Date and time: Thursday 1st December 2005 at 16:00
Location: UG40, School of Computer Science
Title: Text Extraction from Web Images - Making the Web More
Accessible
Speaker: Dimosthenis Karatzas
(http://www.ecs.soton.ac.uk/~dk3/)
Institution: University of Southampton
(http://www.ecs.soton.ac.uk/)
Host: Alan Sexton, Volker Sorge
Abstract:
Text on Web pages is routinely created in image form as an
attempt to overcome the stylistic limitations of HTML. The text
embedded in images (titles, headers, banners, menu items etc.)
has a potentially high semantic value in terms of indexing and
searching for the corresponding Web pages. Nevertheless, as
current search engine technology is unable to perform text
extraction and recognition in images, any text in image form is
simply ignored. Moreover, it is often desirable to obtain a
uniform representation of all visible text of a Web page for
applications such as voice browsing or automated content
analysis, yet without methods to extract text from web images,
this is particularly difficult to achieve.
Existing text extraction methods are not able to cope with the
special characteristics of web images (low resolution,
compression artefacts, anti-aliasing etc). As a result a
considerable percentage of text on Web pages is effectively
inaccessible to automated processes. This talk will highlight the
difficulties and outline research carried out to address the
problem of text extraction from Web images, and will give an
outlook on the future of Web Document Analysis.
--------------------------------
Date and time: Thursday 8th December 2005 at 16:00
Location: UG40, School of Computer Science
Title: From Problem Frames to Architectural Models: a
Coordination-based Approach
Speaker: José Fiadeiro
(http://www.fiadeiro.org/jose/)
Institution: University of Leicester
(http://www.cs.le.ac.uk/)
Host: Juliana Kuester-Filipe
Abstract:
We bring together Jackson's Problem Frames approach to problem
analysis and description and the architectural modelling approach
that we have been developing based on the separation between
computations and the coordination of interactions in component-based
systems. The idea is that we can use architectural connectors (what
we call coordination laws) and roles (coordination interfaces) to
describe the machine and problem domains, as well as their
interactions, that result from using problem frames during analysis.
We focus on the way dynamic composition of problem frames operates
over architectural configurations, and the support that it provides
for incremental development and evolution.
This is joint work with Leonor Barroca and Michael Jackson at the Open
University.
--------------------------------
Date and time: Friday 9th December 2005 at 14:00
Location: UG40, School of Computer Science
Title: Dynamics and generalization ability of Learning Vector
Quantization
Speaker: Michael Biehl
(http://www.cs.rug.nl/~biehl/)
Institution: Rijksuniversiteit Groningen, The Netherlands
(http://www.rug.nl/informatica)
Host: Peter Tino
Abstract:
Learning schemes such as Competitive Learning and Learning Vector
Quantization (LVQ) are based on the representation of data by
appropriately chosen prototype vectors. While intuitively clear and
widely used in a variety of classification problems, most algorithms
of LVQ are heuristically motivated and lack, for instance, the
relation to a well-defined cost function.
Nevertheless, methods borrowed from Statistical Physics allow for a
systematic study of such learning processes. Model situations in which
the training is based on high-dimensional, randomized data can be
studied analytically. It is possible, for instance, to compute typical
learning curves, i.e. the success of learning vs. the number of
example data. Besides the analysis and comparison of standard
algorithms, the aim of these studies is to devise novel, more
efficient training prescriptions.
This talk summarizes our recent results concerning several
unsupervised and supervised schemes of Vector Quantization and gives
an outlook on forthcoming projects.
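For reference, the basic LVQ1 prescription from which such analyses
start can be written in a few lines of NumPy; the learning rate and
epoch count are illustrative.

    import numpy as np

    def lvq1_train(X, y, prototypes, proto_labels, eta=0.05, epochs=10):
        # Basic LVQ1: the winning (closest) prototype moves towards a
        # correctly labelled example and away from a wrongly labelled one.
        W = np.array(prototypes, dtype=float)
        for _ in range(epochs):
            for x, label in zip(X, y):
                j = np.argmin(np.linalg.norm(W - x, axis=1))  # winner
                sign = 1.0 if proto_labels[j] == label else -1.0
                W[j] += sign * eta * (x - W[j])
        return W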
--------------------------------
Date and time: Thursday 15th December 2005 at 16:00
Location: UG40, School of Computer Science
Title: A Brain-Inspired Architecture for Cognitive Robotics
Speaker: Murray Shanahan
(http://www.doc.ic.ac.uk/~mpsha/)
Institution: Imperial College London
(http://www.doc.ic.ac.uk/)
Host: Nick Hawes
Abstract:
This seminar will present a brain-inspired cognitive architecture that
incorporates approximations to the concepts of consciousness,
imagination, and emotion. To emulate the empirically established
cognitive efficacy of conscious as opposed to non-conscious
information processing in the mammalian brain, the architecture adopts
a model of information flow from global workspace theory. Cognitive
functions such as anticipation and planning are realised through
internal simulation of interaction with the environment. Action
selection, in both actual and internally simulated interaction with
the environment, is mediated by affect. An implementation of the
architecture is described which is based on weightless neurons and is
used to control a simulated robot.
--------------------------------
Date and time: Thursday 12th January 2006 at 16:00
Location: UG40, School of Computer Science
Title: Scheduling Under Uncertain Resource Consumption And
Utility
Speaker: Richard Dearden
(http://www.cs.bham.ac.uk/~rwd)
Institution: School of Computer Science, The University of Birmingham
Abstract:
Consider a hypothetical space mission designed to observe objects of
different characteristics with an instrument. The spacecraft has a
limited amount of onboard data storage and power. Each observation
requires an uncertain amount of power and data storage, and has
uncertain scientific value. Data can be transmitted back to Earth, but
transmission rates are uncertain. Finally, there may be dependencies
among observations. Some observations may depend on calibration
observations or other events. This both induces precedence constraints
and means that if the calibration fails, the dependent observation
need not be performed. Given this problem, we would like to find
schedules that exceed a lower bound on the expected utility when
executed. We describe two different event execution models that
motivate different formulations of the expected value of a schedule.
These models are very general, and can handle discrete or continuous
resource consumption and utility distributions with few limitations.
We show that the problem of finding schedules exceeding a lower bound
on the expected utility is NP-complete. Finally, we present results
that characterize the behavior of some simple scheduling heuristics
over a variety of problem classes.
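Although the execution models in the talk are more general, the
expected utility of a fixed schedule can at least be estimated by
Monte Carlo simulation; the sketch below assumes one simple execution
model (an observation is skipped when resources run out or its
calibration failed), and all names are illustrative.

    import random

    def expected_utility(schedule, draw, capacity, deps=None, runs=10000):
        # schedule: ordered list of observation names.
        # draw(obs): one sample of (resource_cost, utility, succeeded).
        # deps: optional map obs -> calibration obs that must succeed.
        deps = deps or {}
        total = 0.0
        for _ in range(runs):
            remaining, done, value = capacity, set(), 0.0
            for obs in schedule:
                pre = deps.get(obs)
                if pre is not None and pre not in done:
                    continue              # calibration failed: skip
                cost, utility, succeeded = draw(obs)
                if cost > remaining:
                    continue              # not enough resource: skip
                remaining -= cost
                if succeeded:
                    done.add(obs)
                    value += utility
            total += value
        return total / runs

    # e.g. draw = lambda obs: (random.gauss(2.0, 0.5), random.random(), True)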
--------------------------------
Date and time: Thursday 19th January 2006 at 16:00
Location: UG40, School of Computer Science
Title: Adventures in Systems Biology
Speaker: Jane Hillston
(http://www.dcs.ed.ac.uk/home/jeh/)
Institution: University of Edinburgh
(http://www.dcs.ed.ac.uk/)
Host: Juliana Kuester-Filipe
Abstract:
The stochastic process algebra PEPA is well-established as a formalism
for performance modelling of computer and communication systems.
However in recent years we have been investigating the potential of
PEPA, or a similar stochastic process algebra, for modelling signal
transduction pathways within cells.
In this talk I will present some of our initial work in this area. I
will also explain how this work has changed our perspectives on
performance modelling.
This is joint work with Muffy Calder (Glasgow) and Stephen Gilmore
(Edinburgh).
--------------------------------
Date and time: Thursday 26th January 2006 at 16:00
Location: UG40, School of Computer Science
Title: 1024, 888, 1066 and all that - the world's most popular
numbers.
Speaker: Dr Colin Frayn
(http://www.cs.bham.ac.uk/~cmf)
Institution: School of Computer Science, The University of Birmingham
(http://www.cs.bham.ac.uk)
Host: Richard Dearden
Abstract:
The all-knowing search engine Google is such an important component of
the internet that it has become a verb in its own right. Given such a
substantial repository of knowledge ripe for the picking, it would seem
irresponsible not to dedicate a few days of light-hearted work to some
frivolous studies of human psychology.
With this in mind, I present a Google-empowered investigation into the
most popular numbers in the world. This work has led to some
interesting discoveries (at least for me) in the field of mathematics
and data processing, as well as providing an entirely unpredicted
application for genetic programming. We investigate the way in which
the internet reflects 21st century society, and how a freeform
collection of 8 billion web pages can give an overview of the numbers
and dates considered most important by the people of the world.
--------------------------------
Date and time: Thursday 2nd February 2006 at 16:00
Location: UG40, School of Computer Science
Title: The new website explained
Speaker: Dr. Russell Beale
(http://www.cs.bham.ac.uk/~rxb)
Institution: School of Computer Science, The University of Birmingham
(http://www.cs.bham.ac.uk)
Host: Richard Dearden
Abstract:
The school is soon to transfer across to a new website: everything is
different - the architecture, the security, the page design, the
navigation, the technologies used. This talk will present a picture of
what we've done, and why, and will then introduce the bits that people
who need to edit and maintain the pages will need to know. It's not a
research seminar, but if you're interested in the site, or are likely to
have to maintain major parts of it, then this may be of interest to you.
--------------------------------
Date and time: Thursday 16th February 2006 at 16:00
Location: UG40, School of Computer Science
Title: Archaeology and Computing at Birmingham - natural
partners?
Speaker: Dr. Vince Gaffney
(http://www.iaa.bham.ac.uk/staff/gaffney.htm)
Institution: Institute of Archaeology and Antiquity, The University of
Birmingham
(http://www.iaa.bham.ac.uk/)
Host: Richard Dearden
Abstract:
From geophysics to the creation of virtual worlds computing technologies
are transforming how archaeology views the past. This should, perhaps,
not be such a surprise as archaeology sits at the interface between the
arts and natural science and has a propensity to generate large amounts
of spatial/numeric data that demands significant computing power to
process. The complex nature of human societies in analytical terms and a
requirement for a range of visualisation technologies for the purpose of
representation, interpretation, restoration or aesthetic display also
provide a challenging environment for the development or application of
a wide range of technologies. This seminar will present some of the
work currently being carried out in the Institute of Archaeology and
Antiquity, from the Pyramids at Giza through to the lost prehistoric
landscapes beneath the North Sea, and discuss the use of computing in
archaeology within these projects and more generally.
--------------------------------
Date and time: Thursday 23rd February 2006 at 16:00
Location: UG40, School of Computer Science
Title: Formal Analysis of Security APIs
Speaker: Graham Steel
(http://homepages.inf.ed.ac.uk/gsteel/)
Institution: University of Edinburgh
(http://www.inf.ed.ac.uk/)
Host: Volker Sorge, Manfred Kerber
Abstract:
Cash machines (ATMs) and other critical parts of the electronic
payment infrastructure contain tamper-proof hardware security modules
(HSMs), which protect highly sensitive data such as the keys used to
obtain personal identification numbers (PINs). These HSMs have a
restricted API that is designed to prevent malicious intruders from
gaining access to the data. However, several attacks have been found
on these APIs, as the result of painstaking manual analysis by experts
such as Mike Bond and Jolyon Clulow.
At the University of Edinburgh, a project is underway to formalise and
mechanise the analysis of these APIs. This talk will present some API
attacks, and our efforts to generalise them and capture them formally,
using theorem provers and the PRISM probabilistic model checker.
--------------------------------
Date and time: Thursday 2nd March 2006 at 16:00
Location: UG40, School of Computer Science
Title: Orthogonal recombinable competences in humans, robots and
other animals
Speaker: Aaron Sloman
(http://www.cs.bham.ac.uk/~axs)
Institution: School of Computer Science, The University of Birmingham
(http://www.cs.bham.ac.uk)
Host: Richard Dearden
Abstract:
This is a sequel to my talk on vision as process simulation last
October.
The work on orthogonal competences arose both out of my work on the
CoSy
project (http://www.cs.bham.ac.uk/research/projects/cosy/)
and ongoing work with Jackie Chappell (Biosciences) on different
tradeoffs between innate and learnt competences in organisms.
Processes perceived by humans (and many other animals) can differ in
many different dimensions, most, though not all of them, dependent on
what is in the environment (i.e. 'objective' environmental invariants,
not just sensori-motor invariants), e.g. various kinds of 3-D surface
curvature and surface discontinuities, rigid vs flexible objects,
different kinds of stuff: compressible, elastic, plastic, flexible like
paper or flexible like cloth, strings, rods, sheets, differences in
viscosity, kinds of texture, stickiness, etc. etc. These, in different
combinations, make possible an *enormous* variety of types of 3-D
process, including many types of actions -- far more than a child can
encounter in 5 years: hence the importance of orthogonality and
recombinability.
Investigating implications of all this contrasts with the excessive
emphasis on sensory-motor contingencies that loom large in 'embodiment'
and 'dynamical systems' approaches, which focus on a tiny subset of
human competences, such as maintaining balance or turning a crank
handle.
Being able to see all the things a five year old child can see requires
being able to identify which process components are involved in the
child's environment and how they interact. It seems that multiple
independent competences have to be acquired through early exploration
and play, and represented in ways that allow them to be creatively
*recombined* in perceiving novel scenes, and also in creatively acting,
planning, reasoning and explaining, including forming new, ever more
complex units to support subsequent learning.
This seems to require powerful innate 'syntactic' (i.e.
structure-manipulating) mechanisms, perhaps implemented in kinds of
neural mechanisms that have not yet been thought of.
Examples of this ability evolved before human language (since they seem
to be manifested in chimps and corvids, for example, as well as
prelinguistic children, if there are such things). But perhaps through
the usual biological trick (in evolution) of duplication followed by
differentiation, the pre-linguistic mechanisms could have provided a
basis for human language, simultaneously providing both linguistic
mechanisms and semantic content -- after which positive feedback and
cultural evolution rapidly enhanced both the non-linguistic and
linguistic competences after they started co-evolving.
These ideas generate many research questions, e.g. the obvious ones
about which sorts of virtual and physical machines can support such
capabilities, and less obvious questions about varieties of genetic
defects or brain damage that could prevent development of specific
aspects of the ability to acquire and deploy orthogonal competences,
varieties of defect that might occur later in life, and above all what
sorts of neural mechanisms can support creative controlled
hallucinations as required for normal visual perception. Drug and
damage-induced hallucinations and synaesthesia may provide some
pointers
to the mechanisms.
My talk will present a sample of these ideas from which I hope
creative and intelligent listeners will be able to reconstruct the rest
by recombining their own orthogonal competences.
Some examples and speculations can be found in an online version that
is still under development:
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#dp0601
I think the ideas can be used to explain and develop some of Piaget's
ideas about the construction of reality in the child.
Aaron
--------------------------------
Date and time: Thursday 9th March 2006 at 16:00
Location: UG40, School of Computer Science
Title: Seminar Cancelled this week
Speaker: TBA
--------------------------------
Date and time: Thursday 23rd March 2006 at 16:00
Location: UG40, School of Computer Science
Title: Semantics-directed abstraction and refinement
Speaker: Dan Ghica
(http://www.cs.bham.ac.uk/~drg)
Institution: School of Computer Science, The University of Birmingham
(http://www.cs.bham.ac.uk)
Host: Richard Dearden
Abstract:
Game semantics provides a new method for extracting finite-state
models from open programs that exhibit non-trivial interactions
between procedures and state. However, the problem of state-space
growth is as acute in the case of game models as it is in the case of
models obtained using traditional, operational, methods. Two of the
most successful techniques dealing with this problem are abstract
interpretation and counterexample-guided refinement. I will present
recent advances in applying these ideas within the game-semantic
framework, from foundations to tool implementations.
--------------------------------
Date and time: Thursday 30th March 2006 at 16:00
Location: UG40, School of Computer Science
Title: What's new? Support from Information Services for
teaching, learning and research
Speaker: Tracy Kent
Institution: Information Services, The University of Birmingham
Host: Richard Dearden
Abstract:
The seminar is intended to alert staff and researchers to new resources
and services available from Information Services which have become
available over the past 6 months in order to support learning, teaching
and research. Such services include new purchases, trials which have
been set up for the School of Computer Science to consider and new
features made available on existing resources. It is also an
opportunity to ask questions about how Information Services supports
teaching and research through provision of services, training
opportunities and subject information funds.
--------------------------------
Date and time: Thursday 1st June 2006 at 16:00
Location: LG32, Learning Centre
Title: Dynamic Data Driven Applications Systems (DDDAS)
Speaker: Dr. Frederica Darema
Institution: Computing and Information Sciences and Engineering (CISE)
Directorate at NSF
Host: Georgios Theodoropoulos
Abstract:
This talk will discuss the Dynamic Data Driven Applications
Systems (DDDAS) concept, and the capabilities, research challenges and
opportunities for enabling DDDAS. DDDAS entails the ability to
incorporate additional data into an executing application (these data
can be archival or collected on-line), and, in reverse, the ability of
the application to dynamically steer the measurement process.
Such capabilities offer the promise of augmenting the analysis and
prediction capabilities of application simulations and the effectiveness
of measurement systems, with a potential major impact in many science
and engineering application areas. Enabling DDDAS requires advances in
the application modeling methods and interfaces, in algorithms tolerant
to perturbations of dynamic data injection and steering, in measurement
systems, and in systems software to support the dynamic environments of
concern here. Research and development of such technologies requires
synergistic multidisciplinary collaboration in the applications,
algorithms, software systems, and measurement systems areas, and
involving researchers in basic sciences, engineering, and computer
sciences. The talk will address specifics of such technology challenges
and opportunities, and will also provide examples from ongoing DDDAS
research projects.
Bio: Frederica Darema, Ph. D., Fellow IEEE, Senior Executive Service
Member
Dr. Darema is the Senior Science and Technology Advisor in CNS and CISE,
and Director of the Computer Systems Research (CSR) Program, and Lead of
the multi-agency DDDAS Program. Dr. Darema's interests and technical
contributions span the development of parallel applications, parallel
algorithms, programming models, environments, and performance methods
and tools for the design of applications and of software for parallel
and distributed systems. Dr. Darema received her BS degree from the
School of Physics and Mathematics of the University of Athens - Greece,
and MS and Ph. D. degrees in Theoretical Nuclear Physics from the
Illinois Institute of Technology and the University of California at
Davis, respectively, where she attended as a Fulbright Scholar and a
Distinguished Scholar. After Physics Research Associate positions at the
University of Pittsburgh and Brookhaven National Lab, she received an
APS Industrial Fellowship and became a Technical Staff Member in the
Nuclear Sciences Department at Schlumberger-Doll Research. Subsequently,
in 1982, she joined the IBM T. J. Watson Research Center as a Research
Staff Member in the Computer Sciences Department and later-on she
established and became the manager of a research group at IBM Research
on parallel applications. While at IBM she also served in the IBM
Corporate Strategy Group examining and helping to set corporate-wide
strategies. Dr. Darema was elected IEEE Fellow for proposing in 1984 the
SPMD (Single-Program-Multiple-Data) computational model that has become
the popular model for programming today's parallel and distributed
computers. Dr. Darema has been at NSF since 1994, where she has
developed initiatives for new systems software technologies (the Next
Generation Software Program), and research at the interface of
neurobiology and computing (the Biological Information Technology and
Systems Program). She has led the DDDAS (Dynamic Data Driven
Applications Systems) efforts including the synonymous cross-Directorate
and cross-agency competition. She has also been involved in other
cross-Directorate efforts such as the Information Technology Research,
the Nanotechnology Science and Engineering, the Scalable Enterprise
Systems, and the Sensors Programs. During 1996-1998 she completed a
two-year assignment at DARPA where she initiated a new thrust for
research on methods and technology for performance engineered systems.
--------------------------------
Date and time: Monday 5th June 2006 at 13:00
Location: UG05, Learning Centre
Title: Autonomy from the Heavens Down to the Depths: A Personal
View
Speaker: Dr. Kanna Rajan
(http://www.mbari.org/staff/kanna/)
Institution: Monterey Bay Aquarium Research Institute
(http://www.mbari.org/)
Host: Richard Dearden
Abstract:
In Jan 2004, MAPGEN became the first Artificial Intelligence (AI) based
system to command a vehicle on the surface of another planet when the
first surface plan for 'Spirit' was successfully built with MAPGEN,
radiated and then executed on-board. In May 1999, the Remote Agent
became the first AI based closed-loop control system to command the
Deep Space One (DS1) spacecraft, 65 million miles from Earth. These
two AI based systems were fundamentally different in their approaches
to command a NASA vehicle in deep space. Yet the lessons learned from
both of these missions were substantially similar.
Ocean Sciences the world over is at a cusp, with a move from the
Expeditionary to the Observatory mode of doing science. In the United
States, the President has recently approved $350 million for building
Global, Coastal and Regional scale observatories. Funded by the US
National Science Foundation, this will result in a substantial change
in how measurements are to be taken and, as a consequence, a
substantial impact on the computing sciences, specifically on autonomy
in the depths for exploration. I will attempt to lay out where
Oceanography is and where it needs to go to be able to deal with these
near-term challenges. I will also highlight why NASA's investments of
the last two decades will have an impact in understanding our own
oceans.
Bio:
----
Kanna is the Principal Researcher in Autonomy at the Monterey Bay
Aquarium Research Institute (www.mbari.org), a small privately funded
oceanographic institute which he joined in October 2005. Prior to that
he was a Senior Research Scientist and a member of the management team
of the 95-member Autonomous Systems and Robotics Area at NASA Ames
Research Center, Moffett Field, California.
As the Program Manager for Autonomy & Robotics for a $5M FY05 program
at Ames he was tasked with putting together a credible demonstration
of Human/Robotic collaboration on a planetary surface. The field
demonstration at the Ames Marscape at the end of September 2005
showcased how autonomous systems and EVA astronauts could "work"
together towards exploration tasks. Before this programmatic role, he
was the Principal Investigator on the MAPGEN Mixed-Initiative Planning
effort to command Spirit and Opportunity on the surface of the Red
Planet. MAPGEN continues to be used to this day, twice daily in the
mission-critical uplink process.
Kanna was one of the principals of the Remote Agent Experiment (RAX),
which designed, built, tested and flew the first closed-loop AI based
control system on a spacecraft. The RA was the co-winner of NASA's 1999
Software of the Year, the agency's highest technical award
(http://ic.arc.nasa.gov/projects/remote-agent/).
His interests are in Planning/Scheduling, modeling and representation
for real world planners, and agent architectures for Distributed
Control applications. Prior to joining NASA Ames, he was in the
doctoral program at the Courant Institute of Math Sciences at NYU.
Before that he was with the Knowledge Systems group at American
Airlines, helping build a Maintenance Routing scheduler (MOCA) which
continues to be used by the airline 365 days of the year.
MAPGEN has been awarded NASA's 2004 Turning Goals into Reality award
under the Administrators Award category, a NASA Space Act Award, a
NASA Group Achievement Award and a NASA Ames Honor Award. Kanna is
the recipient of the 2002 NASA Public Service Medal and the First NASA
Ames Information Directorate Infusion Award also in 2002. In Oct 2004,
JPL awarded him the NASA Exceptional Service Medal for his role on
MER.
He was the Co-chair of the 2005 International Conference on Automated
Planning and Scheduling (ICAPS), Monterey, California
(http://icaps05.icaps-conference.org/) and until recently the chair of
the Executive Board of the International Workshop on Planning and
Scheduling for Space. He continues to serve in review boards for NASA,
the Italian Space Agency and ESA.
--------------------------------
Date and time: Thursday 8th June 2006 at 16:00
Location: UG40, School of Computer Science
Title: EVIDENTIAL REASONING & INTELLIGENCE MANAGEMENT
Speaker: Dr Richard Leary
Institution: Forensic Pathways Ltd.
Host: Jin Li
Abstract:
A major problem facing law enforcement now, and acknowledged widely
since the terror attacks of September 11th 2001, is that we have
become far more effective at collecting data than we have at making
sense of it. Crime investigation and intelligence analysis involve
complex logical and psychological processes. Discovery, finding out
what we do not know, involves many complex processes: for example,
analysis, synthesis, questioning, reasoning, and the composition and
decomposition of facts, evidence and potential explanations. Analysis
and synthesis involve the skilled examination of facts and inferences
drawn from information we have as well as information we do not yet
have.
This talk will present issues of evidential reasoning and intelligence
management in forensic science. Combining sophisticated computing
technology with innate human skills in the interpretation of evidence
is arguably one of the best approaches in crime analysis, yet it is
not fully recognized. Advanced computer techniques such as machine
learning and data mining, which may discover knowledge hidden in crime
data such as serial crime links or crime networks, can improve
efficiency and reduce errors in crime analysis. However, for
sophisticated cases where merely questioning, querying, or data mining
a technology-based collection of data is not sufficient, imaginative
or creative reasoning that rests upon complex mental tasks is highly
desirable for eradicating uncertainty, generating new lines of
inquiry, and producing new evidence linking hypotheses and evidence
together.
*Biography:*
Richard Leary, MBE, LLB (Hons), PhD, is the Managing Director of
Forensic Pathways Ltd. He is a former Senior Detective Police Officer,
Senior Fellow of University College London and former Assistant
Director of the Jill Dando Institute. He was invested into the Order
of the British Empire for services to policing and forensic science
for work on the use of information systems for evidential reasoning.
He has worked in the USA, Canada, Europe and the Middle East building
and using information systems in security and risk related fields. In
1998 he invented and deployed the Forensic Led Intelligence System
(FLINTS), now operating in a number of police services in the United
Kingdom. This was the first systematic and integrated approach to the
management of forensic intelligence and continues to be responsible
for the automated identification of offenders 24 hours per day, 7 days
per week, in the UK. Since then he has developed approaches to the
routine management and analysis of complex datasets to automate the
process of "discovery". The system, called MAVERICK, has been
nominated a Case Study Project by Microsoft. The system is operating
on the World Wide Web, managing large and complex data sets for UK and
international organisations where crime and fraud is suspected. In
autumn 2006, he will deliver a report to EUROPOL, commissioned by the
Homicide Working Group, on the feasibility of a Pan-European
Ballistics Intelligence system. In 2007 he will launch a co-authored
book, to be published by Wiley & Son, highlighting the benefits of the
use of advanced computing in crime investigation and intelligence
analysis.
Forensic Pathways, one of the first companies to become a signatory to
the United Nations Global Compact, has received the following awards and
nominations:
1. 2002 Award for Design & Innovation. (Department of Trade &
Industry).
2. Business of the Year 2003 Awarded by Mustard.uk.com. (Business Link,
PriceWaterhouseCoopers, NatWest Bank).
3. Most Innovative Business 2003 Awarded by Mustard.uk.com. (Business
Link, PriceWaterhouseCoopers, NatWest Bank).
4. Innovation & Technology Award 2003 (Awarded by Business Link).
5. Microsoft Case Study 2005 (Awarded by Microsoft UK).
6. Forensic Pathways CEO Awarded Inventor of the Year 2005.
7. Forensic Pathways CEO Nominated for European Achievement Award 2006.
8. Nominated for the Microsoft Technology Lighthouse Award 2006
(Nominated by Microsoft, Seattle, USA).
--------------------------------
Date and time: Tuesday 18th July 2006 at 16:00
Location: UG40, School of Computer Science
Title: Synthesis from Temporal Specifications
Speaker: Nir Piterman
Institution: EPFL, Lausanne
Host: Marta Kwiatkowska
Abstract:
One of the most ambitious goals in the field of verification is to
automatically produce designs from their specifications, a process
called synthesis. We are interested in reactive systems: systems that
continuously interact with other programs, users, or their environment
(such as operating systems or CPUs). The complexity of reactive systems
does not necessarily arise from computing complicated functions but
rather from the fact that they have to be able to react to all possible
inputs and maintain their behaviour forever. The most appropriate way to
consider synthesis is as a two-player game between the system and the
environment. The environment tries to falsify the specification and the
system tries to satisfy it. Every system move (the way the system
handles its internal variables) has to match all possible future moves
of the environment. The system wins the game if it has a strategy such
that all infinite traces satisfy the specification.
When the specification is a linear temporal logic formula or a
nondeterministic Büchi automaton, the problem is reduced to solving
simpler games by constructing deterministic automata. However,
determinization for automata on infinite words is extremely complicated.
Here we show how to construct nondeterministic automata that can replace
deterministic automata in the context of games and synthesis. The fact
that our automata are nondeterministic makes them surprisingly simple,
amenable to symbolic implementation, and allows a natural hierarchy of
automata of increasing complexity that leads to the full solution.
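As a minimal illustration of the game-theoretic view (a generic sketch,
not the construction of the talk; the toy graph, node names and player
assignment are invented), the set of states from which the system can
force the play into a target set can be computed as an attractor
fixpoint:
    # Toy two-player reachability game solved by attractor computation.
    def attractor(nodes, edges, system_nodes, target):
        # Nodes from which the system can force a visit to `target`:
        # at system nodes one winning successor suffices; at environment
        # nodes every successor must already be winning.
        attr = set(target)
        changed = True
        while changed:
            changed = False
            for v in nodes - attr:
                succs = edges.get(v, [])
                if v in system_nodes:
                    ok = any(s in attr for s in succs)
                else:
                    ok = bool(succs) and all(s in attr for s in succs)
                if ok:
                    attr.add(v)
                    changed = True
        return attr
    nodes = {"a", "b", "goal", "trap"}
    edges = {"a": ["b", "goal"], "b": ["a", "trap"],
             "goal": ["goal"], "trap": ["trap"]}
    print(attractor(nodes, edges, system_nodes={"a"}, target={"goal"}))
Here the system wins from "a" (it can move straight to "goal"), but not
from "b", where the environment may move to "trap".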
--------------------------------
Date and time: Thursday 31st August 2006 at 16:30
Location: UG40, School of Computer Science
Title: Evolutionary Development for Structure and Design
Optimisation
Speaker: Till Steiner
Institution: Honda Research Germany
Host: Thorsten Schnier
Abstract:
I will present an approach to model an artificial developmental
system based on cells that interact through gene regulatory networks
for design or structure optimisation. Cell differentiation is
accomplished by positional information provided by transcription
factors that diffuse inside a simulated environment. Different
actions such as cell division, apoptosis and cell-cell communication
are implemented.
We believe that a complex genotype-to-phenotype mapping facilitates
the search for and the representation of complex shapes while the
number of object parameters remains relatively low.
--------------------------------
Date and time: Thursday 14th September 2006 at 16:00
Location: UG40, School of Computer Science
Title: Application of Aspect-Oriented Techniques to Web Services
Speaker: G. Ortiz
Institution: Computer Science Department, University of Extremadura,
Spain
Host: Behzad Bordbar
Abstract:
Web Service technologies offer a successful route to interoperability
among web applications. However, current approaches to adding
extra-functional properties require direct modification of the code,
which is very costly. This seminar presents a new approach which makes
use of Model Driven Development (MDD) and Aspect-Oriented Programming
(AOP) techniques to address the problem: MDD assists with automated code
generation, and AOP ensures correct decoupling of the system.
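As a rough illustration of the aspect-oriented half of this idea (a
sketch only; the talk targets web services via MDD, and the service
operation and aspect below are invented), a cross-cutting
extra-functional property such as logging can be attached without
touching the operation's code:
    import functools
    import time
    def logged(func):
        # Cross-cutting 'aspect': adds timing/logging around an
        # operation without modifying its body.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.time()
            result = func(*args, **kwargs)
            print(f"{func.__name__} took {time.time() - start:.4f}s")
            return result
        return wrapper
    @logged
    def get_quote(symbol):
        # Stand-in for a web-service operation (invented).
        return {"ACME": 42.0}.get(symbol, 0.0)
    print(get_quote("ACME"))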
--------------------------------
Date and time: Thursday 12th October 2006 at 16:00
Location: UG40, School of Computer Science
Title: TBA
Speaker: Dr. Kusum Deep ?
Host: Xin Yao
--------------------------------
Date and time: Thursday 9th November 2006 at 16:00
Location: UG40, School of Computer Science
Title: Testing Executable Design Models
Speaker: Robert France
Institution: Computer Science Department, Colorado State University
Host: Behzad Bordbar
Abstract:
Practical model validation techniques are needed for
model-driven development (MDD) techniques to succeed in an
industrial setting. If models are to be used as the basis for
generating machine-executable implementations one must be able
to validate that the models are correct before they are
transformed to code. This is particularly
important for critical systems. In this presentation I will describe an
approach and a tool, called UMLAnT, that supports animation and testing
of design models consisting of UML class, sequence and activity models.
--------------------------------
Date and time: Friday 17th November 2006 at 16:00
Location: UG40, School of Computer Science
Title: Assurance Techniques for Code Generators
Speaker: Bernd Fischer
Institution: School of Electronics and Computer Science, University of
Southampton
Host: Richard Dearden
Abstract:
Automated code generation is an enabling technology for model-based
software development and promises many advantages, but the reliability
of the generated code is still often considered a weak point,
particularly in safety-critical domains. Traditionally,
correctness-by-construction techniques have been seen as the "right" way
to assure reliability, but these techniques remain difficult to
implement and to scale up, and have not seen widespread use. Currently,
generators are validated primarily by testing, which cannot guarantee
reliability and quickly becomes excessively expensive. In this talk, we present two
related alternative assurance approaches that use Hoare-style safety
proofs to ensure that the generated code does not "go wrong", i.e., does
not violate specific safety conditions during its execution.
The first approach is based on the insight that the code generator
itself can be extended to produce all annotations (i.e., pre-/postconditions
and loop invariants) required to enable the safety proofs for each
individual generated program, without compromising the assurance
provided by the subsequent verification phase. This is achieved by
embedding annotation templates into the code templates, which are then
instantiated in parallel by the generator. This is feasible because the
structure of the generated code and the possible safety properties are
known when the generator is developed. It does not compromise the
provided assurance because the annotations only serve as auxiliary
lemmas and errors in the annotation templates ultimately lead to
unprovable safety obligations. We have implemented this approach in the
AutoBayes and AutoFilter code generators and used it to fully
automatically prove that the generated code satisfies both
language-specific properties such as array-bounds safety or proper
variable initialization-before-use and domain-specific properties such
as vector normalization, matrix symmetry, or correct sensor input
usage.
The second approach is based on the insight that the output of a code
generator is highly idiomatic, so that we can use patterns to describe
all code constructs that require annotations and templates to describe
the required annotations. We use techniques similar to aspect-oriented
programming to add the annotations to the generated code: the patterns
correspond to (static) point-cut descriptors, while the introduced
annotations correspond to advice. The resulting annotation inference
algorithm is generic with respect to the safety property and can run
completely separately from the generator, which can thus be treated as a
black box. This allows us to apply it to third-party generators like
Real-Time Workshop as well.
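As a loose sketch of this second, pattern-based approach (the loop
pattern and annotation template below are invented for illustration,
not the project's actual patterns): scan idiomatic generated code for
constructs that need annotations and splice in templated assertions
acting like 'advice':
    import re
    # Invented pattern: an index loop is a construct needing a bounds
    # annotation; the inserted invariant is an invented template.
    LOOP_PATTERN = re.compile(r"^(\s*)for (\w+) in range\((\w+)\):")
    def annotate(code):
        out = []
        for line in code.splitlines():
            m = LOOP_PATTERN.match(line)
            if m:
                indent, var, bound = m.groups()
                # 'Advice': templated invariant inserted before the loop.
                out.append(f"{indent}# invariant: 0 <= {var} < {bound}")
            out.append(line)
        return "\n".join(out)
    generated = "for i in range(n):\n    a[i] = 0"
    print(annotate(generated))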
Joint work with Ewen Denney, USRA/RIACS, NASA Ames Research Center, USA.
--------------------------------
Date and time: Thursday 23rd November 2006 at 16:00
Location: UG40, School of Computer Science
Title: Improved Data Security Using Template-Free Biometric-Based
Encryption
Speaker: Dr Gareth Howells
Institution: Dept of Electronics, University of Kent
Host: Behzad Bordbar
Abstract:
The digital revolution has transformed the way we create, destroy,
share, process and manage information, bringing many benefits in its
wake. However, such technology has also increased the opportunities for
fraud and other related crimes to be committed. Therefore, as the
adoption of such technologies expands, it becomes vital to ensure the
integrity and authenticity of digital documents and to manage and
control access to their shared contents. This talk introduces an
exciting new approach to template-free biometric encoding which exploits
the potential of biometric identity information to authenticate
activation of the encryption process and hence significantly reduce both
fraudulent authoring of information and fraudulent access to
confidential documents. The novelty of the proposed techniques lies in
the development of algorithms for the direct encryption of data
extracted from biometric samples which characterise the identity of the
individual. Such a system offers the following significant advantages:
* The removal of the need to store any form of template for
validating the user, hence directly addressing the disadvantage noted
above.
* The security of the system will be as strong as the biometric and
encryption algorithm employed (there is no back door). The only
mechanisms to gain subsequent access are to provide another sample of
the biometric or to break the cipher employed by the encryption
technology.
The compromise of the system does not release sensitive biometric
template data which would allow unauthorised access to other systems
protected by the same biometric, or indeed to any system protected by
any other biometric templates.
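To give a flavour of deriving cryptographic material directly from
biometric data (a toy sketch only: the feature values, quantisation
step and salt are invented, and a real system must cope with sample
noise far more robustly, e.g. via error-correcting codes):
    import hashlib
    def biometric_key(features, step=8):
        # Toy sketch: quantise the feature vector so small
        # sample-to-sample noise maps to the same codeword, then
        # derive a key from the codeword with a KDF.
        codeword = bytes(int(f) // step for f in features)
        return hashlib.pbkdf2_hmac("sha256", codeword, b"demo-salt", 100_000)
    sample1 = [101.2, 55.9, 203.4, 87.1]  # invented features, scan 1
    sample2 = [102.8, 54.3, 201.9, 86.0]  # same person, noisy scan 2
    print(biometric_key(sample1) == biometric_key(sample2))  # True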
--------------------------------
Date and time: Thursday 25th January 2007 at 16:00
Location: UG40, School of Computer Science
Title: TBA
Speaker: John Derrick
Institution: University of Sheffield
Host: Behzad Bordbar
--------------------------------
Date and time: Thursday 1st February 2007 at 16:00
Location: UG40, School of Computer Science
Title: Everything you always wanted to know about UoB finance
but were afraid to ask
Speaker: John Kreeger
Institution: School of Computer Science, University of Birmingham
Host: Richard Dearden
Abstract:
All members of staff are required to be familiar with and adhere to the
University of Birmingham's Manual of Financial Rules and Procedures.
The School of Computer Science accounts team, under the guidance of the
Head of School and the School Manager, works with staff, University
departments, and students as well as outside agencies in fulfilling the
University's fiscal responsibility in support of its mission. The School
of Computer Science accounts team is committed to maintaining an
atmosphere of continuous improvement in service to those whose work it
supports.
This seminar will focus on, but not be limited to, the two main areas
where School Staff will have the most contact with Finance matters
(Ordering Goods & Services and Expense Claims).
--------------------------------
Date and time: Thursday 15th February 2007 at 16:00
Location: UG40, School of Computer Science
Title: Policy Specification and System Evolution
Speaker: Peter Linington
Institution: University of Kent
Host: Behzad Bordbar
Abstract:
There has been a great deal of interest in recent years in the use of
policies to simplify system management and to reduce costs. The
ideas have been applied to network management, security and
various forms of resource management. However, the major focus
so far has been on the development of techniques with the
greatest expressive power possible, generally viewing policy
authoring as a self-contained activity performed by experts who
understand the objectives and constraints of the system being managed.
This talk uses an ODP perspective to look at policy specification
as a step in the incremental design of systems, and examines how
the writing of policies needs to be constrained in
order to preserve the overall design objectives for the system being
managed. It proposes a specification architecture for policies and
considers how well suited existing specification languages and
tools are to supporting this architecture.
--------------------------------
Date and time: Thursday 1st March 2007 at 16:00
Location: UG40, School of Computer Science
Title: Semantics of Model Transformations
Speaker: Reiko Heckel
Institution: University of Leicester
Host: Behzad Bordbar
Abstract:
At the heart of model-driven engineering are activities like
maintaining
consistency, refactoring, translation, and execution of models. These
are examples of model transformations. Semantic foundations are required
for stating and verifying, e.g., the correctness of translations with
respect to the semantics of source and target languages or the
preservation of behaviour by refactoring. This lecture is about the use
of graph transformations as one such foundation.
After introducing the basic concepts by means of an example, we will
focus on the problem of verifying model transformations with respect to
the semantic relation they establish between the models transformed. We
will consider two cases, one in which the semantics is expressed
operationally, by an abstract machine, and another where a denotational
semantics (mapping to an external semantic domain) is considered.
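To give a flavour of graph transformation as a foundation (a
deliberately tiny sketch with an invented relabelling rule, far simpler
than the formalism in the references below): applying a rule means
finding a match for its left-hand side in the graph and rewriting it to
the right-hand side:
    # Tiny sketch of one graph-transformation step on a labelled edge
    # set. The encoding and the example rule are invented.
    def apply_rule(edges, lhs_label, rhs_label):
        # Rule: one edge labelled `lhs_label` is relabelled
        # `rhs_label`. Returns one result graph per match.
        results = []
        for (src, lab, dst) in sorted(edges):
            if lab == lhs_label:
                rewritten = set(edges)
                rewritten.remove((src, lab, dst))
                rewritten.add((src, rhs_label, dst))
                results.append(rewritten)
        return results
    g = {("p1", "waiting", "res"), ("p2", "waiting", "res")}
    for h in apply_rule(g, "waiting", "holding"):
        print(sorted(h))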
References:
R. Heckel. Graph transformation in a nutshell. In Proceedings of the
School on Foundations of Visual Modelling Techniques (FoVMT 2004) of the
SegraVis Research Training Network, volume 148 of Electronic Notes in
TCS, pages 187-198. Elsevier, 2006.
http://www.cs.le.ac.uk/people/rh122/papers/2006/Hec06Nutshell.pdf
Luciano Baresi, Karsten Ehrig and Reiko Heckel. Verification of Model
Transformations: A Case Study with BPEL. Proc. Second Symposium on
Trustworthy Global Computing, TGC'06, November 2006.
http://www.pst.informatik.uni-muenchen.de/projekte/Sensoria/month_12_mainpublications/BEH06TGC.pdf
--------------------------------
Date and time: Thursday 15th March 2007 at 16:00
Location: UG40, School of Computer Science
Title: Specification, Refinement and Approximations
Speaker: John Derrick
(http://www.dcs.shef.ac.uk/~jd/)
Institution: Department of Computer science, University of Sheffield
(http://www.dcs.shef.ac.uk/)
Host: Behzad Bordbar
Abstract:
In practice it is rare for an implementation to be an exact (formal)
refinement of a specification. It is natural to ask how close the
implementation is to the original specification, and how a development
process interacts with the process of making a compromise. In this
talk we look at some of the issues that arise in this context.
Specifically, we discuss notions of convergence defined over a metric
space, and look at how refinement interacts with the sequences of
approximate specifications converging to an ideal. We try to answer
the following questions:
- which metrics are appropriate to measure convergence of
specifications?
- how can we determine convergence of a sequence of specifications?
- how does convergence of a sequence of specifications fit into a
development process based around refinement?
- what properties of specifications are preserved by making a
compromise?
--------------------------------
Date and time: Thursday 29th March 2007 at 16:00
Location: UG40, School of Computer Science
Title: Multispectral Imaging: Techniques and Challenges
Speaker: Elli Angelopoulou
Institution: Stevens Institute of Technology
Host: Ela Claridge
Abstract:
In 1972 NASA launched its first airborne multispectral sensor,
LANDSAT-1. Since then the field of remote hyper-(multi-)spectral sensing
has evolved worldwide. Though many aspects of multispectral imaging have
already been addressed, especially within the context of remote
sensing, its employment in combination with regular digital cameras
raises new challenges in the fields of medical imaging and computer
vision.
This talk will give a brief background on multispectral imaging,
followed by its use within the visible range. In order to demonstrate
the advantages of multispectral imaging, two cases will be presented
that show how multispectral analysis within the visible spectrum can
extract imperceptible information. More specifically, it will be shown
that by computing spectral gradients one can isolate the albedo of a
surface (a material property dependent on its chromophores), as well as
extract a more accurate color description of specular highlights from a
physically correct definition (based on the Fresnel reflection
coefficient). The talk will also address one of the challenges of
multispectral imaging: handling a large amount of data (e.g. 73MB per
hyper-image) which often includes redundant information. A new band
selection algorithm will be presented which is invariant to geometry and
illumination.
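The spectral-gradient point can be illustrated with a small sketch
(assuming the gradient is taken as differences of log-intensities
between neighbouring wavelength bands, so a band-independent
geometry/illumination factor cancels; the image stack below is
invented):
    import numpy as np
    rng = np.random.default_rng(0)
    albedo = rng.uniform(0.2, 0.9, size=(4, 4, 8))   # per-band albedo
    shading = rng.uniform(0.5, 1.5, size=(4, 4, 1))  # band-independent
    image = albedo * shading                         # invented stack
    # Spectral gradient: difference of log-intensities between adjacent
    # bands; the band-independent shading term cancels in the
    # subtraction, leaving a pure material signature.
    spectral_gradient = np.diff(np.log(image), axis=2)
    print(np.allclose(spectral_gradient, np.diff(np.log(albedo), axis=2)))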
--------------------------------
Date and time: Thursday 19th April 2007 at 16:00
Location: UG05, Learning Centre
Title: Models versus Ontologies - What's the Difference and
where does it Matter?
Speaker: Colin Atkinson
Institution: University of Mannheim, Germany
Host: Behzad Bordbar
Abstract:
As models and ontologies assume an increasingly central role in
enterprise systems engineering, the question of how they compare and can
be used together assumes growing importance. On the one hand, the
semantic web / knowledge engineering community is increasingly promoting
ontologies as the key to better software engineering methods, while on
the other hand the software engineering community is enthusiastically
pursuing the vision of Model Driven Development as the core solution.
Superficially, however, ontologies and models are very similar, and in
fact are often visualized using the same language (e.g. UML). So what's
going on? Are models and ontologies basically the same thing sold from
two different viewpoints, or is there some fundamental difference
between them beyond the idiosyncrasies of current tools and languages?
If so, what is this difference, and how should one choose which
technology to use for which purpose?
Bio
Colin Atkinson has been the leader of the Software Engineering Group at
the University of Mannheim since April 2003. Before that he was a
professor at the University of Kaiserslautern and a project leader at
the affiliated Fraunhofer Institute for Experimental Software
Engineering. From 1991 until 1997 he was an Assistant Professor of
Software Engineering at the University of Houston Clear Lake.
His research interests are focused on the use of model-driven and
component-based approaches in the development of dependable computing
systems. He received a Ph.D. and M.Sc. in computer science from Imperial
College, London, in 1990 and 1985 respectively, and received his B.Sc.
in Mathematical Physics from the University of Nottingham in 1983.
--------------------------------
Date and time: Thursday 3rd May 2007 at 16:00
Location: UG40, School of Computer Science
Title: Woven Sound and other Textures
Speaker: Tim Blackwell
(www.goldsmiths.ac.uk/departments/computing/staff/TB.html)
Institution: Goldsmiths University of London
Host: Will Byrne
Abstract:
Woven Sound is part of a larger project - A Sound You Can Touch - that
aims to study texture across sensory modalities. Woven Sound itself is a
technique for the real-time generation of images from music and sound.
Micro-textures, as chosen by a particle swarm which flies over the
patterned image, are sent to a synthesizer in a process known as
granulation. This presentation will discuss aspects of texture and the
Woven Sound/Swarm Granulator program, and will demonstrate the use of
the system in improvised performance.
--------------------------------
Date and time: Thursday 5th July 2007 at 16:00
Location: UG40, School of Computer Science
Title: Evolving Teams of UAVs as Agents: Alternative
Evolutionary Algorithms
Speaker: Prof. Darrell Whitley
Institution: Colorado State University
Host: Per Kristian Lehre
Abstract:
This talk will be presented in two parts. In the first part, an
empirical study is presented of a system that evolves behaviours for
teams of Unmanned Aerial Vehicles (UAVs) using Genetic
Programming. The UAVs must act as cooperative agents. A highly
effective and flexible system was evolved. The talk will include
simple videos of simulated agents in action. In the process of
studying different evolutionary algorithms for this problem, we found
that traditional Genetic Programming did not always result in the best
performance. The second part of this talk presents a follow-up study,
where we looked at several different forms of evolutionary algorithms
for several well-known benchmarks. Again we found that Genetic
Programming, while competitive, did not always result in the best
performance. The talk will conclude with a discussion of what this
means about the search space induced when evolving programs.
--------------------------------
Date and time: Thursday 19th July 2007 at 16:00
Location: UG40, School of Computer Science
Title: Using Scene Appearance for Loop Closing in Simultaneous
Localisation and Mapping
Speaker: Paul Newman
(http://www.robots.ox.ac.uk/~pnewman/)
Institution: Department of Engineering Science, Oxford
(http://www.eng.ox.ac.uk/)
Host: Jeremy Wyatt
Abstract:
This talk considers "loop closing" in mobile robotics. Loop closing is
the problem of correctly asserting that a robot has returned to a
previously visited area. It is a particularly hard but important
component of the Simultaneous Localization and Mapping (SLAM) problem.
Here a mobile robot explores an a priori unknown environment, performing
on-the-fly mapping while the map is used to localize the vehicle. Many
SLAM implementations look to internal map and vehicle estimates
(p.d.f.s) to make decisions about whether a vehicle is revisiting a
previously mapped area or is exploring a new region of the workspace. We
suggest that one of the reasons loop closing is hard in SLAM is
precisely because these internal estimates can, despite best efforts, be
in gross error. The "loop closers" we propose, analyze and demonstrate
make no recourse to the metric estimates of the SLAM system they support
and aid --- they are entirely independent. We demonstrate the technique
supporting a SLAM system driven by scan-matching laser data and using
video sequences for appearance-based loop closing in a variety of
outdoor settings "ground truthed" with GPS data.
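The appearance-only idea can be caricatured in a few lines (invented
descriptors and threshold; real systems use much richer image
representations such as visual vocabularies): compare the current
view's descriptor against those of all previously visited places,
independently of any metric map:
    import numpy as np
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    def detect_loop(descriptor, past, threshold=0.95):
        # Return the index of the best-matching past place, or None.
        if not past:
            return None
        sims = [cosine(descriptor, p) for p in past]
        best = int(np.argmax(sims))
        return best if sims[best] >= threshold else None
    rng = np.random.default_rng(1)
    places = [rng.random(64) for _ in range(5)]  # invented descriptors
    revisit = places[2] + 0.01 * rng.random(64)  # noisy re-observation
    print(detect_loop(revisit, places))          # -> 2: a loop closure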
--------------------------------
Date and time: Thursday 26th July 2007 at 16:00
Location: UG40, School of Computer Science
Title: Formalising Physical Computations
Speaker: Elham Kashefi
Institution: Oxford University
Host: Uday Reddy
Abstract:
Measurement-based quantum computation (MQC) has emerged
from the physics community as a new approach to quantum computation
where the notion of measurement is the main driving force of
computation. This is in contrast with the more traditional circuit
model which is based on unitary operations. I present a rigorous
mathematical model underlying the MQC and a concrete syntax and
operational semantics for programs, called patterns, and an algebra
of these patterns derived from a denotational semantics. More
importantly, I introduce a calculus for reasoning locally and
compositionally about these patterns together with a rewrite theory
with a general standardization theorem which allows all patterns to
be put in a semantically equivalent standard form. Finally I
describe the notion of information flow which fully characterizes the
determinism structure in the MQC and provides insight into depth
complexity. As an application, I present a logarithmic
separation in terms of quantum depth between the quantum circuit
model and the MQC.
--------------------------------
Date and time: Thursday 27th September 2007 at 16:00
Location: UG40, School of Computer Science
Title: Objective Measures of Salience, Quality, and Diagnostic
Value in Medical Images
Speaker: Professor Murray H. Loew
Institution: George Washington University
Host: Ela Claridge
Abstract:
This work derives and assesses the usefulness of an objective image
quality measure. The measure is correlated with perceived image quality
as a function of the most salient features contained within a given
image. They are determined by combining aspects of both visual
discrimination theory and signal detection theory to define a new
measure that quantifies the importance of contrast-based features as a
function of spatial frequency. We discuss the development of a
perceptually-correlated metric that is useful for quantifying the
conspicuity of local, low-level or bottom-up visual cues, and the
identification of those spatial frequencies that are most distinct and
perhaps most relied upon by radiologists for decision-making. A
parsimonious analysis of variance model is developed that accounts for
the variance in the salience metric. The model is generalizable to a
population of readers and to a population of cases. This work has
application to the development of techniques to quantitatively assess
breast density, to classify radiographic parenchymal patterns in
mammograms, and to optimize compression techniques for task-based
performance. The salience measure can be used also to assess data set
difficulty for use in development and testing of
computer-assisted-diagnosis algorithms, and to determine conspicuous
regions-of-interest, which can be used to identify regions for higher
compression.
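As a very rough illustration of "contrast as a function of spatial
frequency" (this is not the talk's perceptual metric; the test image
and band count are invented), one can bin an image's Fourier power
spectrum into radial frequency bands:
    import numpy as np
    def band_energy(image, n_bands=4):
        # Crude frequency-resolved contrast: power of the centred 2-D
        # FFT summed over radial spatial-frequency bands.
        f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
        h, w = image.shape
        yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
        radius = np.hypot(yy, xx)
        edges = np.linspace(0, radius.max() + 1, n_bands + 1)
        power = np.abs(f) ** 2
        return [power[(radius >= lo) & (radius < hi)].sum()
                for lo, hi in zip(edges[:-1], edges[1:])]
    image = np.random.default_rng(2).random((64, 64))  # invented image
    print([round(e, 1) for e in band_energy(image)])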
--------------------------------
Date and time: Thursday 4th October 2007 at 16:00
Location: UG40, School of Computer Science
Title: Has computer science anything to contribute to answering
ultimate questions of philosophy?
Speaker: Manfred Kerber
(http://www.cs.bham.ac.uk/~mmk)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
It seems that there are questions which humans have asked
themselves at least for as long as we have records: Why are there
human beings? Does God exist? What is the meaning of life?
Darwinism, for instance, gives some answer to the first
question. However, this answer can be disputed and is disputed,
for instance, by so-called creationists. Creationists believe
that God has created the world with all the living beings in it
and that something as complex as a human being could never have
evolved. In this talk some of the arguments will be presented
and it will be explored whether science in general and computer
science in particular can contribute to giving coherent answers
to such questions.
--------------------------------
Date and time: Thursday 11th October 2007 at 16:00
Location: UG40, School of Computer Science
Title: Using Models of Rodent Hippocampus for Robot Navigation
Speaker: Gordon Wyeth
(http://www.itee.uq.edu.au/~wyeth/)
Institution: University of Queensland
(http://www.uq.edu.au/)
Host: Jeremy Wyatt
Abstract:
The brain circuitry involved in encoding space in rodents has
been extensively tested over the past thirty years, with an
ever increasing body of knowledge about the components and
wiring involved in navigation tasks. The learning and recall
of spatial features is known to take place in and around the
hippocampus of the rodent, where there is clear evidence of
cells that encode the rodent's position and heading. Many
components of hippocampus have been modelled by computer
simulation, and there exist some well understood
computational models that exhibit similar characteristics to
the recordings from the hippocampal complex.
This talk addresses two questions:
1. Can models of rodent hippocampus match the state of the
art in robot mapping?
2. Can models of rodent hippocampus embodied in a robot
inform biology?
The questions are addressed in the context of a system called
RatSLAM which is based on current models of the rodent
hippocampus. RatSLAM is demonstrated performing real time,
real world, simultaneous localisation and mapping from
monocular vision, showing its effectiveness as a robot
mapping and localisation tool. Furthermore, some of the
modifications necessary to make the models of hippocampus
work effectively in large and ambiguous environments
potentially raise some new questions for further biological study.
--------------------------------
Date and time: Thursday 18th October 2007 at 16:00
Location: UG40, School of Computer Science
Title: Computing with infinite objects in finite time (and
sometimes fast)
Speaker: Martín Escardó
(http://www.cs.bham.ac.uk/~mhe)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
This talk is intended to enable people in the school to
understand what I am up to in terms of research. Although members of
the theory group are welcome (and urged to attend to support me, or to
challenge me, as the case may be), this talk is addressed to
non-members of the theory group.
For a number of years, I have been applying a branch of mathematics
known as topology to understand computation with infinite objects,
e.g. real numbers (infinitely many digits) and program behaviours
(infinitely many possibilities). In both examples, and in general, the
question is what can be said in finite time, and mechanically, about
finitely presented infinite objects.
In the classical theory of computation (both computability and
complexity), we learn that a number of natural, desirable tasks cannot
be performed by computers (think of the Halting problem and of the
P != NP conjecture/hypothesis and their implications).
My work, on the other hand, is about tasks that we wouldn't expect to
be able to perform mechanically (in principle, let alone efficiently),
but that actually are possible (sometimes efficiently). I'll discuss
both theoretical and experimental results.
The main challenge of this talk, however, is not how to compute with
infinite objects, but rather how to communicate the research programme
and the results obtained so far to a general audience in finite time
and efficiently. I'll report partial theoretical results in this
direction, and I hope you won't mind being guinea pigs for my
experimental talk.
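For a concrete taste of the theme, here is a Python rendering of a
well-known construction from this line of work (the example predicates
are invented): infinite binary sequences are represented as functions,
and any total predicate that inspects only finitely many bits can be
searched, and even universally quantified, in finite time.
    # Infinite binary sequences as functions from index to bit.
    def cons(bit, tail_thunk):
        # Prepend `bit` to a lazily produced sequence.
        return lambda n: bit if n == 0 else tail_thunk()(n - 1)
    def find(p):
        # Return a sequence satisfying p if one exists (else an
        # arbitrary one); terminates because any total p inspects only
        # finitely many bits of its argument.
        def extend(bit):
            return cons(bit, lambda: find(lambda rest: p(cons(bit, lambda: rest))))
        candidate = extend(0)
        return candidate if p(candidate) else extend(1)
    def forall(p):
        # p holds everywhere iff it holds on a candidate counterexample.
        return p(find(lambda x: not p(x)))
    p = lambda x: x(0) == 1 and x(2) == x(5)     # invented predicate
    witness = find(p)
    print([witness(i) for i in range(6)], p(witness))
    print(forall(lambda x: x(1) * x(1) == x(1)))  # True: bits idempotent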
--------------------------------
Date and time: Thursday 25th October 2007 at 16:00
Location: UG40, School of Computer Science
Title: Commercial Web Application Development in Scheme
Speaker: Dave Gurnell and Noel Welsh
(http://www.untyped.com/about/dave.php)
Institution: untyped
(http://www.untyped.com/)
Host: Manfred Kerber
Abstract:
Web application development is a very competitive domain. Most web
developers rely on extensive libraries to achieve the productivity
necessary to compete in this field. We have taken a different
approach at Untyped. In the last year we have developed a variety of
web-based applications to automate administration tasks for Queen Mary
University of London, and all our software is written in the
functional language Scheme.
Choosing Scheme has forced us to develop our own suite of web
development libraries for database interaction, rendering HTML, and so
forth. However, we have been able to use the unique features of
functional languages in general, and Scheme in particular, to our
advantage. We will show how continuations, higher-order procedures,
macros, and other abstractions have allowed us to create a simple and
highly productive web development model, which compares favourably in
many ways to Java and Ruby alternatives. We will then discuss some
ideas for future development that we believe are novel and will give
us a significant productivity boost.
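The talk's continuation-based style is specific to Scheme, but a rough
Python analogue using generators (a toy, not the speakers' framework)
conveys the appeal: a multi-page interaction reads as one linear
conversation, suspended at each page and resumed on the next request:
    # Toy analogue of continuation-based web flow: each `yield` is a
    # point where the 'server' suspends the handler and later resumes.
    def signup_flow():
        name = yield "Page 1: what is your name?"
        colour = yield f"Page 2: hello {name}, favourite colour?"
        yield f"Page 3: registered {name}, who likes {colour}."
    session = signup_flow()
    print(next(session))          # serve page 1
    print(session.send("Ada"))    # user submits page 1's form
    print(session.send("green"))  # user submits page 2's form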
--------------------------------
Date and time: Thursday 1st November 2007 at 16:00
Location: UG40, School of Computer Science
Title: The Effect of Learning on Life History Evolution
Speaker: John Bullinaria
(http://www.cs.bham.ac.uk/~jxb)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
An Artificial Life approach is described that is aimed at exploring
the effect lifetime learning can have on the evolution of certain
life history traits, in particular the periods of protection that
parents offer their children, and the age at first reproduction of
those children. It involves simulating the evolution of a range of
simple artificial neural network systems that must learn quickly to
perform well on simple classification tasks, and studying whether
extended periods of parental protection emerge. It is concluded that
longer periods of parental protection of children do emerge and offer
clear learning advantages and better adult performance, but only if
the children are not allowed to procreate themselves while being
protected. When procreation is prevented during the protection period, a
compromise protection period evolves that balances the improved
learning performance against the reduced procreation period. When it is
not prevented, much longer protection periods evolve, but the adult
performance is worse. The implications of these results for more
realistic scenarios are discussed.
--------------------------------
Date and time: Thursday 8th November 2007 at 16:00
Location: UG40, School of Computer Science
Title: Talking with Robots: A Case Study in Architectures for
Cognitive Robotics
Speaker: Jeremy Wyatt
(http://www.cs.bham.ac.uk/~jlw)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Host: Aaron Sloman
Abstract:
Humans and other biological systems are very adept at performing fast,
complicated control tasks in spite of large sensorimotor delays while
being fairly robust to noise and perturbations. For example, one is able
to react accurately and fast to catch a speeding ball while at the same
time being flexible enough to give-in when obstructed during the
execution of a task.
The key to this is the ability to learn 'internal models' that are able
to predict the consequences of your action without waiting for sensory
feedback as well as generate appropriate feedforward commands rather
than merely compensating for target errors. I will talk about some key
non-parametric techniques that (i) allow efficient learning of internal
models in real-time, online scenarios, (ii) have the ability to exploit
low-dimensional manifolds in real movement (robot or human) data and
(iii) scale up to learning in real-world anthropomorphic robots of up to
30 DOFs.
While acquiring dynamics is important, another key ingredient of
adaptive control is flexible trajectory planning. Based on the same
nonparametric fundamentals, I will present a dynamical system based
trajectory encoding scheme that allows movements to be scaled spatially
and temporally without explicit time indexing. This is used with an
adaptive optimal feedback control (OFC) framework to optimally resolve
redundancies.
Videos of learning in high dimensional movement systems like humanoid
robots will serve to validate the effectiveness of these nonparametric
techniques.
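One classic non-parametric ingredient can be sketched briefly (generic
locally weighted regression with invented data and bandwidth, not
necessarily the exact algorithm behind the talk): each prediction fits
a small weighted least-squares model around the query point:
    import numpy as np
    def lwr_predict(x_query, X, y, bandwidth=0.3):
        # Locally weighted linear regression at a single query point.
        Xb = np.column_stack([np.ones_like(X), X])        # bias + input
        w = np.exp(-((X - x_query) ** 2) / (2 * bandwidth ** 2))
        W = np.diag(w)
        beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
        return beta[0] + beta[1] * x_query
    rng = np.random.default_rng(3)
    X = np.sort(rng.uniform(0, 2 * np.pi, 200))   # invented training set
    y = np.sin(X) + 0.1 * rng.standard_normal(200)
    print(round(lwr_predict(np.pi / 2, X, y), 2))  # close to sin(pi/2)=1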
--------------------------------
Date and time: Thursday 29th November 2007 at 16:00
Location: UG40, School of Computer Science
Title: Why symbol-grounding is both impossible and unnecessary,
and why theory-tethering is more powerful anyway
Speaker: Aaron Sloman
(http://www.cs.bham.ac.uk/~axs)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Concept empiricism is an old, very tempting, and mistaken theory, going
back to David Hume and his precursors, recently re-invented as
"symbol-grounding" theory and endorsed by many researchers in AI and
cognitive science, even though it was refuted long ago by the
philosopher Immanuel Kant (in his Critique of Pure Reason, 1781).
Roughly, concept empiricism states:
* All concepts are ultimately derived from experience of instances
* All simple concepts have to be abstracted directly from experience of
instances
* All non-simple (i.e. complex) concepts can be defined in terms of
simple concepts using logical and mathematical methods of composition.
Symbol grounding theories may add extra requirements, such as that the
experience of instances must use sensors that provide information in a
structure that is close to the structure of the things sensed. This is
closely related to sensory-motor theories of cognition, which work well
for most insect cognition. People are tempted by concept empiricism
because they cannot imagine any way of coming to understand concepts
except by experiencing instances or defining new concepts explicitly in
terms of old ones.
My talk will explain how Kant's refutation was elaborated by
philosophers of science attempting to explain how theoretical terms like
'electron', 'gene', 'valence', etc. could have semantic content, and
will go on to show how there is an alternative way of providing semantic
content: using theories to provide implicit definitions of the undefined
symbols they use. The meanings are partly indeterminate insofar as a
theory can have more than one model. The indeterminacy can be reduced by
'tethering' the theory using 'bridging rules' that play a role in
linking the theory to evidence. This does not require symbols in the
theory to be 'grounded'. A tutorial presentation of these ideas is
available here:
http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#models
[http://www.cs.bham.ac.uk/research/projects/cogaff/talks/#models]
It turns out that there is a growing community of researchers who reject
symbol grounding theory and are moving in this direction. This has
implications for forms of learning and development in both robots and
animals, including humans.
--------------------------------
Date and time: Thursday 6th December 2007 at 16:00
Location: UG40, School of Computer Science
Title: Usable authorization policy languages and tools
Speaker: Moritz Becker
(http://research.microsoft.com/~moritzb/)
Institution: Microsoft Research
(http://research.microsoft.com)
Host: Mark Ryan
Abstract:
Managing the access control and authorization policy in a distributed,
decentralized setting is a challenging task: each collaborating domain
sets its own individual policy; these policies may be updated frequently
and involve federated delegation, separation of duty and other complex
constraints. Many existing authorization mechanisms lack expressiveness,
are not formally specified and are hard to use. This talk will give an
overview of our work on authorization policy at Microsoft Research
Cambridge. I will discuss the design and implementation of SecPAL, a
high-level language for specifying and enforcing decentralized
authorization policies that strikes a careful balance between syntactic
and semantic simplicity, policy expressiveness, and execution
efficiency. I will also describe how SecPAL and similar languages can be
extended to express policies that depend on and update the state, and
algorithms for computing effective permissions and for explaining access
denials.
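To convey the flavour of declarative authorization with delegation (an
invented toy encoding, not SecPAL's syntax or semantics): permissions
are facts, and a resource owner's trust in another principal lets that
principal's assertions take effect:
    # Toy authorization check with one level of delegation. All
    # principal names, relations and resources are invented.
    facts = {
        ("FileServer", "trusts", "HR"),         # owner delegates to HR
        ("HR", "canread", "alice:report.doc"),  # HR asserts a permission
    }
    def authorized(owner, action, resource, facts):
        if (owner, action, resource) in facts:
            return True
        trusted = {p for (o, rel, p) in facts
                   if o == owner and rel == "trusts"}
        return any((p, action, resource) in facts for p in trusted)
    print(authorized("FileServer", "canread", "alice:report.doc", facts))
    print(authorized("FileServer", "canread", "bob:report.doc", facts))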
--------------------------------
Date and time: Thursday 13th December 2007 at 16:00
Location: UG40, School of Computer Science
Title: Regional accents and speech and language technology
Speaker: Martin Russell
(http://www.eee.bham.ac.uk/russellm/mjr1.htm)
Institution: Electronic, Electrical & Computer Engineering, University
of Birmingham
(http://www.eece.bham.ac.uk/)
Host: William Edmondson
Abstract:
There is a large amount of 'folklore' which suggests that
regional accents are a problem for automatic speech recognition
technology. However, there is surprisingly little hard experimental
evidence to support this. The main reason for this is a lack of
suitable data. For this reason, in 2003 the University of Birmingham
was funded to collect the 'Accents of the British Isles' (ABI) corpus
of
regional accented speech. The ABI corpora now contain approximately 200
hours of speech from nearly 600 subjects representing 27
different local accents of the British Isles. In this talk I will
discuss current speech and language technology research related to
regional accents, describe the ABI corpora, and present the results of
some analyses of this data.
--------------------------------
Date and time: Thursday 10th January 2008 at 16:00
Location: UG40, School of Computer Science
Title: Could a child robot grow up to be a mathematician and
philosopher?
Speaker: Aaron Sloman
(http://www.cs.bham.ac.uk/~axs)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Some old problems going back to Immanuel Kant (and earlier) about the
nature of mathematical knowledge can be addressed in a new way by asking
what sorts of developmental changes in a human child make it possible
for the child to become a mathematician.
This is not the question many developmental psychologists attempt to
answer by doing experiments to find out at what ages children
demonstrate various abilities e.g. distinguishing a group of three items
from a group of four items.
Rather, we need to understand how information-processing architectures
can develop (including the forms of representation used, and the
algorithms and other mechanisms) that make it possible not only to
acquire empirical information about the environment and the agent, but
also to acquire non-empirical information, for example:
* counting a set of objects in two different orders must give the same
result (under certain conditions);
* some collections of objects can be arranged in a rectangular array of
K rows and N columns where both K and N > 1, while others cannot (e.g. a
group of 7 objects cannot);
* optical flow caused entirely by your own sideways motion is greater
for nearer objects than for more distant objects;
* when manipulating two straight rigid rods (where 'straightness'
refers to a collection of visual properties and a set of affordances) it
is possible to have at most one point where they cross over each other,
whereas with a straight rod and a rigid wire circle it is possible to
get one or two cross-over points, but not three;
* if you go round an unchanging building and record the order in which
features are perceived, then if you go round the building in the
opposite direction the same features will be perceived in the reverse
order;
* if one of a pair of rigid meshed gear wheels each on a fixed axle is
rotated the other will rotate in the opposite direction.
Some of what needs to be explained is how the learner's ontology grows
(e.g. discovering notions like 'counting', 'straight', 'order'), in such
a way that new empirical and non-empirical discoveries can be made that
are expressed in the expanded ontology.
I shall try to show how these ideas can provide support for the claim
that many mathematicians and scientists have made, that visualisation
capabilities are important in some kinds of mathematical reasoning, in
contrast with those who claim that only logical reasoning can be
mathematically valid.
Some aspects of the architecture that make these mathematical
discoveries possible depend on self-monitoring capabilities that also
underlie the ability to do philosophy, e.g. being able to notice that a
rigid circular object can occupy an elliptical region of the visual
field, even though the object still looks circular.
Although demonstrating all this in a working robot that models the way a
human child develops mathematical and philosophical abilities will
require significant advances in Artificial Intelligence, I think I can
specify some features of the design required.
There are also implications for biology, because the notion of an
information-processing architecture that grows itself as a result of
creative and playful exploration of the environment and itself can
change our ideas about nature-nurture tradeoffs and interactions.
No claim is made or implied that every mathematician in the universe has
to be a human-like mathematician. Some could use only logic-engines, for
example.
See also http://www.cs.bham.ac.uk/~axs/liv.html .
--------------------------------
Date and time: Thursday 17th January 2008 at 16:00
Location: UG40, School of Computer Science
Title: Towards Stochastic Refinement of Logic Programs
Speaker: Stephen H. Muggleton
(http://www.doc.ic.ac.uk/~shm/)
Institution: Department of Computing, Imperial College London
(http://www3.imperial.ac.uk/computing/)
Host: Aaron Sloman
Abstract:
Much of the theory of Inductive Logic Programming centres around
refinement of definite clauses. In this talk we discuss a new method of
integrating the refinement graph, Bayes' prior over the hypothesis
space, background knowledge, examples and hypotheses. The approach is
based around an explicit representation of the prior as a Stochastic
Logic Program. The posterior is developed by using the examples to guide
the unfolding of the prior and associated background knowledge. This
approach of using unfolding as a refinement operator is reminiscent of
Bostrom's SPECTRE algorithm. However, Bostrom's unfolding approach does
not involve prior probabilities. We present an algorithm which, given a
new example, can incrementally update the posterior by amending a
structure referred to as the "hypothesis tree". An initial
implementation of the approach is described along with worked examples.
--------------------------------
Date and time: Thursday 24th January 2008 at 16:00
Location: UG40, School of Computer Science
Title: Semantics of nondeterminism
Speaker: Paul B Levy
(http://www.cs.bham.ac.uk/~pbl)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
_Semantics_ Denotational semantics is a powerful way of reasoning about
programs and programming languages. It is based on the principle of
compositionality: if a program is built up from components, then the
meaning of the program can be computed from the meaning of the
components. Before setting up such a semantics, we need to decide what
it means for programs to be considered equivalent, i.e. to have the same
meaning.
_Nondeterminism_ We often want to regard software systems as if they
were programs that are nondeterministic i.e. have several legitimate
behaviours. If they are run several times, they might behave differently
each time. The actual factors that determine the choice (such as
schedulers) are low-level and we'd rather not think about them.
Is it possible to adapt the theory of denotational semantics to
nondeterministic programs? One problem is that nondeterminism causes a
proliferation of different notions of equivalence. But even once we've
decided which equivalence we want to model, there are many problems that
arise. We shall look at some of these difficulties, particularly the
interaction of nondeterminism and recursion.
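One way to see compositionality at work for nondeterminism (a standard
textbook-style sketch, not the talk's own semantics; the expression
syntax is invented): interpret each program as the set of values it may
produce, computed from the sets denoted by its parts:
    # Denotation of a nondeterministic expression as its set of
    # possible results (requires Python 3.10+ for match/case).
    def denote(expr):
        match expr:
            case int(n):
                return {n}
            case ("amb", a, b):   # nondeterministic choice
                return denote(a) | denote(b)
            case ("plus", a, b):  # meaning built from parts' meanings
                return {x + y for x in denote(a) for y in denote(b)}
    # (1 or 2) + (10 or 20) may produce 11, 12, 21 or 22.
    print(sorted(denote(("plus", ("amb", 1, 2), ("amb", 10, 20)))))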
--------------------------------
Date and time: Thursday 31st January 2008 at 16:00
Location: UG40, School of Computer Science
Title:
Speaker: CANCELLED DUE TO ILLNESS
--------------------------------
Date and time: Thursday 7th February 2008 at 16:00
Location: UG40, School of Computer Science
Title: Machine Learning in Astronomy: Time Delay Estimation in
Gravitational Lensing
Speaker: Peter Tino
(http://www.cs.bham.ac.uk/~pxt/)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
A ray of light (or any other form of electromagnetic radiation, e.g.
radio or X-rays) travels along a geodesic, which can be locally curved
due to the gravitational effect of clumps of matter like stars or
galaxies. This is known as gravitational lensing. Gravitational lensing,
caused by intervening matter along the line of sight, can give rise to
interesting cosmic illusions such as magnified and seriously distorted
images of distant sources, sometimes splitting into multiple images.
Since the distortion of the images depends on the distribution of matter
in the lensing object, this is the most direct method of measuring
matter (which is often dark) in the Universe.
Quasar Q0957+561, an ultra-bright galaxy with a super-massive central
black hole, was the first lensed source to be discovered and is the
most studied so far. The gravitational lens creates two distinct images
of Q0957+561.
We attempt to recover the phase shift between the two lensed images of
Q0957+561 using a model-based approach formulated within the framework
of kernel regression. In a set of controlled experiments emulating the
presence of realistic observational gaps, irregular observation times
and noisy observations, we compare our method with other
state-of-the-art statistical methods currently used in astrophysics. We
then apply the method to actual observations of the doubly imaged quasar
Q0957+561 at several radio and optical frequencies.
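A bare-bones sketch of the delay-estimation idea (generic kernel
smoothing plus a grid search over candidate delays; the data, noise
level and bandwidth are invented, not the authors' model):
    import numpy as np
    def kernel_smooth(t_query, t, y, h=5.0):
        # Nadaraya-Watson kernel regression estimate at the query times.
        w = np.exp(-0.5 * ((t_query[:, None] - t[None, :]) / h) ** 2)
        return (w * y).sum(axis=1) / w.sum(axis=1)
    def estimate_delay(t1, y1, t2, y2, delays):
        # Pick the delay best aligning curve 2 with a smooth of curve 1.
        errors = [np.mean((kernel_smooth(t2 - d, t1, y1) - y2) ** 2)
                  for d in delays]
        return delays[int(np.argmin(errors))]
    rng = np.random.default_rng(4)
    t1 = np.sort(rng.uniform(0, 400, 120))        # irregular sampling
    y1 = np.sin(t1 / 30.0) + 0.05 * rng.standard_normal(120)
    t2 = np.sort(rng.uniform(60, 400, 120))
    y2 = np.sin((t2 - 50.0) / 30.0) + 0.05 * rng.standard_normal(120)
    print(estimate_delay(t1, y1, t2, y2, np.arange(0.0, 100.0, 2.0)))
The printed estimate should land near the true (invented) delay of 50.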
Joint work with:
Juan C. Cuevas-Tello, Engineering Faculty, Autonomous University of San
Luis Potosi, Mexico
and
Somak Raychaudhury, School of Physics and Astronomy, University of
Birmingham, UK
--------------------------------
Date and time: Thursday 21st February 2008 at 16:00
Location: UG40, School of Computer Science
Title: Open Source Virtualisation
Speaker: Malcolm Herbert
Institution: Red Hat
Host: Manfred Kerber
Abstract:
Virtualisation is not new in terms of thinking or technology. What is
different is that there is an increasing need for enterprise users to be
more aware of power consumption and hardware costs, coupled with the
emergence of solutions using commodity hardware.
This talk discusses the current open source virtualisation
technologies,
their potential usages and the challenges for their deployment and
management. It will look at both leading edge technologies and the
practical reality of what is currently being deployed.
--------------------------------
Date and time: Thursday 28th February 2008 at 16:00
Location: UG40, School of Computer Science
Title: 3 = 4 ?
Speaker: Achim Jung
(http://www.cs.bham.ac.uk/~axj)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
In three-valued logic one considers a third truth value besides
the usual "true" and "false", namely, "unknown". This is a useful
concept
when one is dealing with situations where knowledge is partial (as in
many
AI applications) or uncomputable.
In four-valued logic, a further value is considered, representing
"contradiction". This, too, arises naturally when one tries to formalise
the knowledge that one holds (or that one has been told) about
aspects of the real world.
For a logic we need more than the truth values, however, and one can
wonder what logical connectives would be appropriate in these
multi-valued
settings, and what their proof rules should be. In this talk I will
present a point of view (developed jointly with Drew Moshier) which is
strongly model-theoretic. By studying sets of models, one is led fairly
naturally to consider axiomatisations of three- and four-valued logic
which make a clear distinction between "logic" and "information".
Furthermore, it emerges that there is in fact a 1-1 translation between
the three- and four-valued approach.
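A small sketch of the four values in Belnap's style (a standard
evidence-pair encoding, offered for orientation rather than as the
talk's axiomatisation): each value records whether a proposition has
been 'told true' and 'told false', so "unknown" is neither and
"contradiction" is both:
    # Four truth values as evidence pairs (told_true, told_false).
    TRUE, FALSE, UNKNOWN, BOTH = (1, 0), (0, 1), (0, 0), (1, 1)
    NAME = {TRUE: "true", FALSE: "false",
            UNKNOWN: "unknown", BOTH: "contradiction"}
    def conj(a, b):
        # True needs both told-true; false needs either told-false.
        return (a[0] & b[0], a[1] | b[1])
    def neg(a):
        # Negation swaps the evidence for truth and falsity.
        return (a[1], a[0])
    print(NAME[conj(TRUE, UNKNOWN)])  # unknown
    print(NAME[conj(FALSE, BOTH)])    # false
    print(NAME[neg(BOTH)])            # contradiction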
--------------------------------
Date and time: Thursday 6th March 2008 at 16:00
Location: UG40, School of Computer Science
Title: Figurative Language and Artificial Intelligence
Speaker: John Barnden
(http://www.cs.bham.ac.uk/~jab)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
With colleagues in the Figurative Language Research Group in the
School,
I have for some time been developing an AI system called ATT-Meta for
handling core aspects of the reasoning needed to understand
metaphorical
utterances. This talk will outline recent developments, both
theoretical
and implementational, in a way that is intended to be accessible to a
multi-disciplinary audience. In particular I will summarize the state
of
play on our novel approach to handling the inter-domain mappings in
metaphor, sketch one of the central techniques used for handling
potential reasoning conflicts, and mention a special application of
this
technique to serially-compounded (chained) metaphor. Time permitting I
will also briefly describe a new departure of the project, namely the
deconstruction of the notion of metaphor, and a sister notion called
metonymy, into more fundamental dimensions where the action really is.
--------------------------------
Date and time: Thursday 13th March 2008 at 16:00
Location: UG40, School of Computer Science
Title: If CHR is the solution, what was the problem?
Speaker: Peter Hancox
(http://www.cs.bham.ac.uk/~pjh)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
The logic programming enterprise has developed in two main directions:
the exploration of concurrent processing of logic programs and the
replacement of unification by constraint solving. While concurrent
Prolog has remained the plaything of the laboratory, constraint logic
programming (CLP) has moved into practical applications, particularly
those implemented with constraints over finite domains (CLP(FD)).
CLP languages have usually been implemented in Prolog and thus they
have many similarities, especially of syntax. Powerful though they
may be, CLP languages are restricted to specific domains, for instance
constraints on integers. Constraint Handling Rules (CHR) differs from
CLP languages by providing the programmer with the tools to write
specialized constraint solvers, from the very small (for instance for
>=/2) to the very much larger (e.g. the re-implementation of CLP(FD)).
In CHR, constraints are posted to a central store and rules are used
to rewrite that store. (Given that there are no obvious start and end
states, this makes CHR particularly suited to the implementation of
reactive rather than transformational systems.) Rules are fired when
the store contains constraints that match (rather than unify) with
their heads, and either propagate new constraints or simplify the
contents of the store. Constraints that do not immediately fire their
rules are suspended until some change, perhaps argument instantiation,
allows them to match and thus fire their rules.
Constraints can be seen as independent agents, capable of initiating
concurrent processes with information communicated through shared
variables. As processes are asynchronous, the programmer is forced to
use shared variables to impose synchronization. In practice, CHR is
less valued for its concurrency than for the convenience of writing
multi-head rule-based systems. It is argued that this is largely due
to a lack of atomic unification on the one hand and, on the other, a
misinterpretation of CHR's execution strategy.
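The store-rewriting execution model can be caricatured in Python (a toy
interpreter with two hard-coded rules of the classic
less-than-or-equal solver; CHR itself is usually embedded in Prolog,
and this sketch ignores matching, guards and suspension):
    # Toy CHR-style store rewriting. Constraints are ('leq', x, y)
    # tuples; one simplification and one propagation rule apply.
    def solve(store):
        store, equalities, changed = set(store), set(), True
        while changed:
            changed = False
            # Simplification: leq(x,y), leq(y,x) <=> x = y
            for (_, x, y) in list(store):
                if ("leq", y, x) in store and x != y:
                    store -= {("leq", x, y), ("leq", y, x)}
                    equalities.add((min(x, y), max(x, y)))
                    changed = True
            # Propagation: leq(x,y), leq(y,z) ==> leq(x,z)
            for (_, x, y) in list(store):
                for (_, y2, z) in list(store):
                    if y == y2 and x != z and ("leq", x, z) not in store:
                        store.add(("leq", x, z))
                        changed = True
        return store, equalities
    _, eqs = solve({("leq", "a", "b"), ("leq", "b", "c"), ("leq", "c", "a")})
    print(sorted(eqs))  # the leq-cycle collapses to equalities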
--------------------------------
Date and time: Thursday 20th March 2008 at 16:00
Location: UG40, School of Computer Science
Title: Automatic Resolution of Figurative Language: The example
of metonymy
Speaker: Katja Markert
(http://www.comp.leeds.ac.uk/markert/)
Institution: School of Computing, University of Leeds
(http://www.engineering.leeds.ac.uk/comp/)
Host: John Barnden
Abstract:
Figurative language is ubiquitous in everyday conversation and
texts. Metaphor allows a target concept to be equated to a different
concept: for example, a situation can be conceptualised as a difficult
lie of the land as in "The Iraq war is a quagmire" or "The working
place is a snake pit". In contrast, metonymy allows an expression to
be replaced by a related one, as in the use of "Lockerbie" for the air
disaster near the town in Sentence (1).
(1) Because of Lockerbie, the United States still shunned Qaddafi
The recognition and interpretation of figurative language is important
for many applications such as geographical information retrieval or
opinion mining systems. However, traditionally, natural language
processing systems dealing with figurative language get bogged down in
knowledge-intensive treatment of individual examples. Similarly,
larger datasets for shared evaluation frameworks hardly exist. My talk
addresses these problems for the phenomenon of metonymy and presents a
reliably annotated larger dataset for metonymy as well as an automatic
metonymy recognition system evaluated on that dataset. In addition, I
have organised a common evaluation competition for metonymy in
conjunction with SemEval 2007 and will present the results and
approaches of the 5 participating industrial and academic systems.
--------------------------------
Date and time: Thursday 1st May 2008 at 16:00
Location: UG40, School of Computer Science
Title: Precision, Local Search and Unimodal Functions
Speaker: Jon Rowe
(http://www.cs.bham.ac.uk/~jer)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
A local search algorithm maintains a current point of the search
space, and updates this point by trying to find an improving
"neighbour" of the current one. Local search algorithms vary in how
they generate such neighbours, and in how they decide to move (e.g. by
evaluating all neighbours and choosing the best, or by evaluating
random neighbours until an improving one is found). We consider the
performance of a variety of such algorithms on unimodal functions of a
single variable (that is, functions with a single optimum) in terms of
the precision used (that is, the number of points n used to represent
the domain). There are efficient O(log n) deterministic and randomised
algorithms for this problem. There are other slightly less efficient
algorithms (running in O((log n)^2) time) which represent the search
space as binary strings using a Gray code. This leads us to the idea
of a randomised algorithm which generates neighbours at a distance
from the current point using a fixed probability distribution. This
also runs in O((log n)^2) time, which is optimal in the sense that
there are unimodal functions for which Omega((log n)^2) is necessary
regardless of the probability distribution chosen. An advantage of
such an algorithm is that, empirically, it also works well on
multimodal problems and for functions of several variables. However,
it is possible to construct a unimodal function of two variables for
which it can be proved that no black box algorithm can work
efficiently.
This work is joint with Martin Dietzfelbinger, Ingo Wegener and
Philipp Woelfel, and will be presented at this year's Genetic and
Evolutionary Computation conference.
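Purely as an illustration of the kind of algorithm described, here is
a Python sketch of improvement-only local search in which jump
distances are drawn as random powers of two; the precise distribution
analysed in the paper may differ:

    import random

    def local_search(f, n, steps=10_000, seed=0):
        # Maximise f on {0, ..., n-1}: propose a neighbour at a random
        # distance (a power of two, random sign) and move only if it
        # improves the current value.
        rng = random.Random(seed)
        x = rng.randrange(n)
        for _ in range(steps):
            d = 2 ** rng.randrange(n.bit_length())   # 1, 2, 4, ..., ~n
            y = x + d * rng.choice((-1, 1))
            if 0 <= y < n and f(y) > f(x):
                x = y
        return x

    f = lambda k: -abs(k - 700)          # unimodal: single peak at 700
    print(local_search(f, 1024))         # typically returns 700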
--------------------------------
Date and time: Thursday 8th May 2008 at 16:00
Location: UG40, School of Computer Science
Title: Programming Verifiable Heterogeneous Agent Systems
Speaker: Louise Dennis
(http://www.csc.liv.ac.uk/~lad/)
Institution: Department of Computer Science, University of Liverpool
(http://www.csc.liv.ac.uk)
Host: Manfred Kerber
Abstract:
The overall aim of the Model Checking Agent Programming Languages
(MCAPL) project is to provide a verification framework for practical
multi-agent systems. To achieve practicality, we must be able to
describe and implement heterogeneous multi-agent systems (i.e., systems
where individual agents may be implemented in a number of different
agent programming languages). To achieve verifiability, we must define
semantics appropriately for use in formal verification. In this talk I
will give a general outline of the MCAPL project and then focus on the
problem of implementing heterogeneous multi-agent systems in a
semantically clear, and appropriate, way.
--------------------------------
Date and time: Thursday 15th May 2008 at 16:00
Location: UG40, School of Computer Science
Title: Epsilon: A Platform for Building Model Management
Languages & Tools
Speaker: Richard Paige
(http://www-users.cs.york.ac.uk/~paige/)
Institution: Department of Computer Science, University of York
(http://www.cs.york.ac.uk/)
Host: Behzad Bordbar
Abstract:
Models are abstract descriptions of interesting phenomena. For building
complex systems, we use a variety of different kinds of models (e.g.,
programs, tests, transformations, architectural descriptions) written in
different languages. We need to be able to manage these models in
sophisticated, automated ways.
The Epsilon model management platform
(http://www.eclipse.org/gmt/epsilon) provides tools and domain-specific
languages for model management. It comprises a number of integrated
model management languages (such as transformation, merging, and
validation languages) that are based upon common and shared
infrastructure. Its design promotes reuse when building new languages
and tools. We report on recent advances in the development and
application of Epsilon, in particular its support for native objects,
model transactions, context-independent user interaction, and profiling.
We also describe support in Epsilon for the unit-testing of model
management operations.
Joint work with Dimitrios Kolovos, Fiona Polack, and Louis Rose.
--------------------------------
Date and time: Thursday 29th May 2008 at 16:15
Location: UG40, School of Computer Science
Title: An informal discussion of how best to achieve long range
objectives of the EU's Cognitive Systems initiative
Speaker: Colette Maloney
Institution: Head of EC Unit, INFSO E5 Cognitive Systems and Robotics
Initiative (FP6/FP7)
Host: Aaron Sloman
Abstract:
This talk will follow shortly after her more formal presentation in
Bristol the same day at the IET Seminar on Directions and Funding of
Robotics Research in the UK
http://www.theiet.org/events/2008/robotics-r-and-d.cfm
In the discussion here, she will attempt to highlight strengths and
limitations of what is already happening and some of the ideas that have
been proposed for overcoming the limitations.
See also the euCognition research roadmap initiative
http://www.eucognition.org/wiki/index.php?title=Research_Roadmap
http://www.eucognition.org/wiki/index.php?title=Roadmap_Kick-off_Meeting
And the FP7 documents on Challenge 2: Cognitive Systems, Interaction,
Robotics
http://cordis.europa.eu/fp7/ict/programme/challenge2_en.html
ftp://ftp.cordis.europa.eu/pub/ist/docs/cognition/background-doc-for-call-3_en.pdf
(Challenge 2: Cognitive Systems, Interaction, Robotics Technical
Background Notes for Proposers)
--------------------------------
Date and time: Thursday 5th June 2008 at 16:00
Location: UG40, School of Computer Science
Title: Calculating Probabilistic Anonymity
Speaker: Tom Chothia
(http://www.cs.bham.ac.uk/~tpc)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Anonymity is distinctly different from other, well-understood security
properties, such as secrecy or authenticity, but no commonly accepted
definition exists. In this talk I will describe the use of
Information Theory to define anonymity and in particular how "channel
capacity" can be used to say how much an observer can learn about the
users of a system, no matter how they behave.
The data required to perform this analysis can be obtained accurately
from a formal model or estimated from a number of observations of the
system. In the latter case, we use the Central Limit Theorem to
estimate the reliability of our data; then in both cases we use the
Blahut-Arimoto algorithm to calculate capacity, i.e., the anonymity of
the system. We have automated this analysis and I will demonstrate
this on an implementation of the Dining Cryptographers protocol that
is subtly broken due to the effect the scheduler may have on the
output. I will not assume any previous knowledge whatsoever on the
part of the audience.
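For readers unfamiliar with it, the Blahut-Arimoto iteration can be
sketched in a few lines of numpy; the binary symmetric channel below
is a standard textbook example, not the Dining Cryptographers model
analysed in the talk:

    import numpy as np

    def kl_rows(W, q):
        # d[x] = D( W(.|x) || q ), the per-input KL divergence in bits
        return np.array([np.sum(r[r > 0] * np.log2(r[r > 0] / q[r > 0]))
                         for r in W])

    def blahut_arimoto(W, iters=500):
        # W[x, y] = P(y | x); returns the channel capacity in bits.
        p = np.full(W.shape[0], 1.0 / W.shape[0])
        for _ in range(iters):
            d = kl_rows(W, p @ W)
            p = p * np.exp2(d)            # multiplicative update
            p /= p.sum()
        return float(p @ kl_rows(W, p @ W))

    W = np.array([[0.9, 0.1],             # binary symmetric channel,
                  [0.1, 0.9]])            # flip probability 0.1
    print(blahut_arimoto(W))              # ~ 1 - H(0.1) = 0.531 bits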
--------------------------------
Date and time: Thursday 19th June 2008 at 10:00
Location: UG40, School of Computer Science
Title: Adaptive Business Intelligence
Speaker: Zbigniew Michalewicz
(http://www.cs.adelaide.edu.au/~zbyszek/)
Institution: School of Computer Science, The University of Adelaide
(http://www.cs.adelaide.edu.au/)
Host: Shan He
Abstract:
In the modern information era, managers must recognize the competitive
opportunities represented by decision support tools. A new family of
systems, called Adaptive Business Intelligence systems, combines
prediction and optimization techniques to assist decision makers in
complex, rapidly changing environments. These systems address the
fundamental questions: what is likely to happen in the future, and
what is the best course of action? Adaptive Business Intelligence
includes elements of data mining, predictive modelling, forecasting,
optimization, and adaptability. The talk introduces the concepts
behind Adaptive Business Intelligence, which aims at providing
significant cost savings and revenue increases for businesses. A few
real-world examples will be shown and discussed.
--------------------------------
Date and time: Tuesday 5th August 2008 at 16:00
Location: UG40, School of Computer Science
Title: A New Vision of Language, or There and Back Again
Speaker: Shimon Edelman
(http://kybele.psych.cornell.edu/~edelman/)
Institution: Department of Psychology, Cornell University
(http://www.psych.cornell.edu/)
Host: Aaron Sloman
Abstract:
One of the greatest challenges facing the cognitive sciences is to
explain what it means to know a language, and how the knowledge of
language is acquired. For decades, the dominant approach to this
challenge within linguistics has been to seek an efficient
characterization of the wealth of documented structural properties of
language in terms of a compact generative grammar --- ideally, the
minimal necessary set of innate, universal, exception-less, highly
abstract, syntactic rules that jointly generate all and only the
well-formed structures.
I shall offer a sketch of an alternative view, whose roots can be traced
to linguistic insights advanced, among others, by Zellig Harris and
Ronald Langacker, as well as to some fairly standard notions from the
study of human vision. According to the newly emerging synthesis,
language is generated and interpreted by a large, open set of
constructions of varying degrees of abstraction, complexity and
entrenchment, which integrate form and meaning and are acquired through
embodied, socially situated experience, by probabilistic learning
algorithms that resemble those at work in other cognitive modalities,
notably vision.
In support of this new conception of language, I shall review behavioral
and computational evidence suggesting that (i) hierarchical, highly
productive syntactic structures can be learned from experience by
unsupervised probabilistic algorithms, (ii) supra-sentential structural
properties of child-directed speech facilitate such learning, (iii) the
acquired constructions are differentially entrenched depending on their
usage, and (iv) their processing during comprehension is affected by the
human body plan.
Papers describing these results can be found at
http://kybele.psych.cornell.edu/~edelman/archive.html
Recent and present collaborators in this project include Jonathan
Berant, Catherine Caldwell-Harris, David Horn, Catalina Iricinschi, Luca
Onnis, Eytan Ruppin, Ben Sandbank, Zach Solan, and Heidi Waterfall.
Professor Edelman is the author of "Representation and Recognition in
Vision" (MIT Press 1999), see
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=3958 and
"Computing the Mind" (OUP to be published), see
http://kybele.psych.cornell.edu/~edelman/Edelman-book-ToC.pdf .
--------------------------------
Date and time: Thursday 2nd October 2008 at 16:00
Location: UG40, School of Computer Science
Title: Multidisciplinary Design Optimization Research at
UNSW@ADFA and Progress on Surrogate Assisted
Optimization for Computationally Expensive Problems
Speaker: Tapabrata Ray
(http://www.unsw.adfa.edu.au/acme/staffpages/MDO-Web/index.html)
Institution: School of Aerospace, Civil and Mechanical Engineering,
University of New South Wales, Australia
(http://www.unsw.adfa.edu.au)
Host: Xin Yao
Abstract:
The first part of the presentation will provide a brief overview of
recent research being conducted by the Multidisciplinary Design
Optimization Group at the University of New South Wales, Australian
Defence Force Academy Australia. The second part of the presentation
will focus on challenges involved in dealing with computationally
expensive optimization problems and our progress in that direction.
Background of the Speaker: Dr Tapabrata Ray is from the School of
Aerospace, Civil and Mechanical Engineering, University of New South
Wales Australia where he is currently Senior Lecturer and the Leader of
the Multidisciplinary Design Optimization Group. He holds all three
degrees (Bachelor's, Master's and PhD) from the Indian Institute of
Technology, Kharagpur, India. Since his PhD, he has worked in various
capacities with three major research institutes in Singapore, namely
the Information Technology Institute, the Institute of High
Performance Computing and Temasek Labs at the National University of
Singapore. Currently he is visiting CERCIA and will be here until
mid-February 2009, working on multiobjective and dynamic optimization
for CARP problems. His interests are in optimization methods suitable
for multiobjective constrained optimization.
Details of his research are available at
http://www.unsw.adfa.edu.au/acme/staffpages/MDO-Web/index.html
--------------------------------
Date and time: Thursday 9th October 2008 at 16:00
Location: UG40, School of Computer Science
Title: Deductive Temporal Reasoning with Constraints
Speaker: Clare Dixon
(http://www.csc.liv.ac.uk/~clare/)
Institution: Department of Computer Science, University of Liverpool
(http://www.csc.liv.ac.uk/)
Host: Manfred Kerber
Abstract:
When modelling systems, we often need to express physical constraints
on the resources available. For example, we might say that at most 'n'
processes can access a particular resource at any moment, or that
exactly 'm' participants are needed for an agreement. Such situations
are concisely modelled by constraining propositions so that at most
'n', or exactly 'm', can hold at any moment in time. This talk
describes both the logical basis and a verification method for
propositional linear-time temporal logics which allow such constraints
as input. The complexity of this procedure is discussed and case
studies are examined. The logic itself combines standard temporal
logic with classical constraints restricting the number of
propositions that can be satisfied at any moment in time. We discuss
restrictions to the general case where only 'exactly one' constraints
are allowed, and extensions to first-order temporal logic.
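The talk treats such constraints natively within the logic; for
comparison only, the classical propositional encoding of an 'at most
n' constraint at a single moment is sketched below (proposition names
are hypothetical):

    from itertools import combinations

    def at_most(n, props):
        # One clause per (n+1)-subset: at least one member is false.
        return [["not " + p for p in c]
                for c in combinations(props, n + 1)]

    # "At most 2 of 4 processes access the resource at any moment":
    for clause in at_most(2, ["p1", "p2", "p3", "p4"]):
        print(" or ".join(clause))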
--------------------------------
Date and time: Thursday 16th October 2008 at 16:00
Location: UG40, School of Computer Science
Title: Why virtual machines really matter -- for several
disciplines
Speaker: Aaron Sloman
(http://www.cs.bham.ac.uk/~axs)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
One of the most important ideas (for engineering, biology, neuroscience,
psychology, social sciences and philosophy) to emerge from the
development of computing has gone largely unnoticed, even by many
computer scientists, namely the idea of a _running_ virtual machine that
acquires, manipulates, stores and uses information to make things
happen.
The idea of a virtual machine as a mathematical abstraction is widely
discussed, e.g. a Turing machine, the Java virtual machine, the Pentium
virtual machine, the von Neumann virtual machine. These are abstract
specifications whose relationships can be discussed in terms of mappings
between them. E.g. a von Neumann virtual machine can be implemented on a
Universal Turing Machine. An abstract virtual machine can be analysed
and talked about, but, like a mathematical proof, or a large number, it
does not _do_ anything. The processes discussed in relation to abstract
virtual machines do not occur in time: they are mathematical
descriptions of processes that can be mapped onto descriptions of other
processes. In contrast a physical machine can consume, transform,
transmit, and apply energy, and can produce changes in matter. It can
make things happen. Physical machines also have abstract mathematical
specifications that can be analysed, discussed, and used to make
predictions, but which, like all mathematical objects cannot do
anything.
But just as _instances_ of designs for physical machines can do things
(e.g. the engine in your car does things), so can instances of designs
for virtual machines do things: several interacting virtual machine
instances do things when you read or send email, browse the internet,
type text into a word processor, use a spreadsheet, etc. But those
running virtual machines, the active instances of abstract virtual
machines, cannot be observed by opening up and peering into or measuring
the physical mechanisms in your computer.
My claim is that long before humans discovered the importance of active
virtual machines (AVMs), long before humans even existed, biological
evolution produced many types of AVM, and thereby solved many hard
design problems, and that understanding this is important (a) for
understanding how many biological organisms work and how they develop
and evolve, (b) for understanding relationships between mind and brain,
(c) for understanding the sources and solutions of several old
philosophical problems, (d) for major advances in neuroscience, (e) for
a full understanding of the variety of social, political and economic
phenomena, and (f) for the design of intelligent machines of the future.
In particular, we need to understand that the word "virtual" does not
imply that AVMs are unreal or that they lack causal powers, as some
philosophers have assumed. Poverty, religious intolerance and economic
recessions can occur in socio-economic virtual machines and can clearly
cause things to happen, good and bad. The virtual machines running on
brains, computers and computer networks also have causal powers. Some
virtual machines even have desires, preferences, values, plans and
intentions, that result in behaviours. Some of them get philosophically
confused when trying to understand themselves, for reasons that will be
explained. Most attempts to get intelligence into machines ignore these
issues.
Some of the ideas are presented in this forthcoming Journal paper:
http://www.cs.bham.ac.uk/research/projects/cosy/papers/#tr0807
--------------------------------
Date and time: Thursday 23rd October 2008 at 16:00
Location: UG40, School of Computer Science
Title: Attack and fix for the Trusted Platform Module
Speaker: Mark Ryan
(http://www.cs.bham.ac.uk/~mdr)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
The Trusted Platform Module (TPM) is a hardware chip designed to
enable a level of security beyond that which can be provided by
software alone. TPMs are currently fitted in high-end laptops, and are
destined to feature in all devices within a few years. There are 100
million TPMs currently in existence. Application software such as
Microsoft's BitLocker and HP's ProtectTools uses the TPM in order
to guarantee security properties.
I'll describe an attack on the TPM that I discovered while I was on
Royal Academy of Engineering "industrial secondment" at HP. I'll also
mention the method we proposed to fix it, and some ideas about
verifying that the fix is correct. I'll also discuss the ideas and
controversies about trusted computing, and its possible future.
The work is joint with Liqun Chen, HP Labs, Bristol.
--------------------------------
Date and time: Thursday 30th October 2008 at 16:00
Location: UG40, School of Computer Science
Title: Aspects of Topology
Speaker: Steve Vickers
(http://www.cs.bham.ac.uk/~sjv)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Topology as a mathematical subject arose out of questions such as the
classification of surfaces, but has extended its influence far beyond
that. Some unexpected connections (many going back to Dana Scott)
between topology and logic, and between continuity and computability,
have established topology as an area of high interaction between
mathematics and computer science.
I shall start by sketching the history of this, and I shall focus on the
notion of "bundle" - a continuous map f: Y -> X, from the point of view
that it is a "variable space" - the fibre Y_x (the inverse image
f^{-1}({x})) varies as x varies in X. "Parametrizing mathematics by x"
leads to a non-classical mathematics (sheaves over X) in which
topological spaces are replaced by bundles over X - provided one adopts
a "point-free" approach (locale theory) to topology.
My own work is on such logical aspects of topology and especially on the
"powerlocales", which derive from the powerdomains used in the semantics
of non-deterministic computation.
To conclude I shall outline some exciting new links with quantum theory.
Isham and Doering (at Imperial) and Landsman, Spitters and Heunen (at
Nijmegen) have been using the mathematics of sheaves as a logical trick
to make quantum theory appear more like classical physics. I shall
sketch the relationship of this with the bundle ideas of "variable
spaces".
--------------------------------
Date and time: Thursday 6th November 2008 at 16:00
Location: LG34, Learning Centre
Title: Pairing-free Identity-Based Encryption
Speaker: Kenny Paterson
(http://www.isg.rhul.ac.uk/~kp/)
Institution: Information Security Group, Royal Holloway
(http://www.isg.rhul.ac.uk/)
Host: Guilin Wang
Abstract:
Identity-Based Encryption (IBE) is an alternative to traditional public
key cryptography that has the potential to simplify key management and
the security infrastructure needed to support it. The subject of IBE
has undergone an extraordinarily rapid development since the discovery in
2001 of efficient and secure IBE schemes based on pairings on elliptic
curves. In this talk, I will discuss some recent developments in the
field of IBE, focussing on schemes that avoid the use of pairings.
--------------------------------
Date and time: Thursday 13th November 2008 at 16:00
Location: UG40, School of Computer Science
Title: The 150-year-old science of active virtual machines
Speaker: David Booth
(http://psychology-people.bham.ac.uk/people-pages/detail.php?identity=boothda)
Institution: School of Psychology, University of Birmingham
(http://www.psychology.bham.ac.uk/)
Host: Aaron Sloman
Abstract:
In the mid-nineteenth century, E.H. Weber discovered one of the first
basic principles of Experimental Psychology: equal ratios of the
quantity of stimulation to the senses were rated as equally different
in intensity, when the levels of input were moderate. This semilog
linear range of an input/output function for physical or chemical
stimuli is simply linear when the stimuli are symbolic, such as
quantitative descriptions. This discriminative sensitivity of an
output can therefore be used as a scaling unit for quantities of any
input. Furthermore, inputs that are treated as the same by an output
will summate in discrimination units from the level to which the
person or animal has learnt: that is, an information-transmitting
channel through an adapted intelligent system constitutes a mental
dimension. If two transforms operate over different channels, then
their interaction is orthogonal. Hence the simplest account of a mind
is as a Euclidean hyperspace of distinct causal processes. When two
outputs are observed from one input, two distinct ways of processing
the input may be distinguished. With sufficiently independent multiple
inputs tested on specific outputs as well as on an overall output of
interest, the set of possible processes and their interactions can be
tested against each other on the individual's multiple discrimination
performance in acting on variants of a specific situation (Booth &
Freeman, 1993; data-analytic program `Co-Pro', 2006). Several examples
of such cognitive diagnosis will be given. An argument offered for
discussion -- made in a response to MoC 2003 in an MS now on
epapers.bham.ac.uk -- is that the development of intelligent robots
needs to include a science of artificial performance, analogous to
this psychological science of natural performance -- i.e., `POEMS',
psychology of emerging machine souls / sentients / symbolisers /
subjectivities!
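For reference, the Weber ratio principle and its semilog (Fechner)
form alluded to above are conventionally written as follows (the
symbols are the standard textbook ones, not taken from the abstract):

    \frac{\Delta I}{I} = k
    \qquad \text{(Weber: a just-noticeable change is a fixed ratio)}

    S = c \, \log \frac{I}{I_0}
    \qquad \text{(Fechner: rated sensation is linear in log intensity)}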
--------------------------------
Date and time: Thursday 20th November 2008 at 16:00
Location: LG34, Learning Centre
Title: Modeling Security Concerns During Early Development
Lifecycle Stages
Speaker: Jon Whittle
(http://www.comp.lancs.ac.uk/~whittljn/)
Institution: Department of Computing, Lancaster University
(http://www.comp.lancs.ac.uk/)
Host: Behzad Bordbar
Abstract:
Secure programming techniques are now relatively well understood and a
variety of tools and checkers exist to assist developers in finding
code-level security bugs. However, a significant proportion of
security problems are actually due to higher level design flaws -- up
to 50% by some estimates. As a result, there is commercial interest
in trying to use requirements and design documentation to assist in
security assessments. A big problem with this is that such
documentation, for example expressed as UML models, is often
incomplete, ambiguous and not executable. To tackle this problem,
we have developed an executable modeling technique for scenario-based
requirements that allows modelers to automatically execute candidate
attack patterns on a model. The supporting tool allows modelers to
validate early lifecycle models by executing possible attacks, in a
way that is similar to regression testing. This talk will describe the
technique as well as its application to a number of applications,
including electronic voting systems, train control systems and
software defined radio. The latter application was conducted in
collaboration with the UK's National Technical Authority on
Information Assurance.
--------------------------------
Date and time: Thursday 27th November 2008 at 16:00
Location: UG40, School of Computer Science
Title: Function Interface Models for Hardware Compilation:
Types, Signatures, Protocols
Speaker: Dan Ghica
(http://www.cs.bham.ac.uk/~drg)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
The problem of synthesis of gate-level descriptions of digital
circuits from behavioural specifications written in higher-level
programming languages (hardware compilation) has been studied for a
long time yet a definitive solution has not been forthcoming. This
talk will be bringing a new methodological perspective that is
informed by programming-language theory. I argue that one of the major
obstacles in the way of hardware compilation becoming a useful and
mature technology is the lack of a well defined function interface
model (FIM), i.e. a canonical way in which functions communicate with
arguments. We will discuss the consequences of this problem and
propose a solution based on new developments in programming language
theory. We will conclude by presenting an implementation and examples.
--------------------------------
Date and time: Thursday 4th December 2008 at 16:00
Location: UG40, School of Computer Science
Title: Targets and SETI: Shared motivations, life signatures,
and asymmetric SETI
Speaker: William Edmondson
(http://www.cs.bham.ac.uk/~whe)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
In this talk I propose that conventional assumptions which underpin SETI
can be revised in ways which permit a more nuanced approach to the
enterprise. It is suggested that sensible assumptions based on
adventurous science include the notion that we can conjecture helpfully
about what we can know about SETI, and that probably the ETIs for which
we are looking are sending signals to us because they know they are not
alone, and are interested in helping us learn that we are not alone.
Additionally, existing work using Pulsars as Beacons for SETI (see
http://www.cs.bham.ac.uk/~whe/SETIPaper.pdf ) is reviewed in the context
of what we can now call Asymmetric SETI, the term coined to reflect that
we are merely seeking to determine what ETI already knows.
--------------------------------
Date and time: Thursday 11th December 2008 at 16:00
Location: UG04, Learning Centre
Title: Computing beyond a Million Processors - bio-inspired
massively-parallel architectures
Speaker: Steve Furber
(http://intranet.cs.man.ac.uk/apt/people/sfurber/)
Institution: School of Computer Science, The University of Manchester
(http://www.cs.manchester.ac.uk/)
Host: Dan Ghica
Abstract:
Moore's Law continues to deliver ever more transistors on an
integrated circuit, but discontinuities in the progress of technology
mean that the future isn't simply an extrapolation of the past. For
example, design cost and complexity constraints have recently caused
the microprocessor industry to switch to multi-core architectures,
even though these parallel machines present programming challenges
that are far from solved. Moore's Law now translates into ever more
processors on a multi-core, and soon many-core, chip. The software
challenge is compounded by the need for increasing fault-tolerance as
near-atomic-scale variability and robustness problems bite harder.
We look beyond this transitional phase to a future where the
availability of processor resource is effectively unlimited and
computations must be optimised for energy usage rather than load
balancing, and we look to biology for examples of how such systems
might work. Conventional concerns such as synchronisation and
determinism are abandoned in favour of real-time operation and
adapting around component failure with minimal loss of system
efficacy.
--------------------------------
Date and time: Tuesday 16th December 2008 at 16:00
Location: UG40, School of Computer Science
Title: An Agent-Based Generic Framework for Symbiotic Simulation
and its Application to Manufacturing
Speaker: Stephen John Turner
(http://www3.ntu.edu.sg/home/ASSJTurner/)
Institution: School of Computer Engineering, Nanyang Technological
University, Singapore
(http://www3.ntu.edu.sg/SCE/)
Abstract:
Simulation-based decision support is an important tool in many areas of
science, engineering and business. Although traditional simulation
analysis can be used to generate and test out possible plans, it suffers
from a long cycle-time for model update, analysis and verification. It
is thus very difficult to carry out prompt what-if analysis to respond
to abrupt changes in the physical systems being modeled. Symbiotic
simulation has been proposed as a way of solving this problem by having
the simulation system and the physical system interact in a mutually
beneficial manner. The simulation system benefits from real-time input
data which is used to adapt the model and the physical system benefits
from the optimized performance that is obtained from the analysis of
simulation results.
This talk will present a classification of symbiotic simulation systems
based on existing applications described in the literature. An analysis
of these applications reveals some common aspects and issues which are
important for symbiotic simulation systems. From this analysis, we have
specified an agent-based generic framework for symbiotic simulation. We
show that it is possible to identify a few basic functionalities that
can be provided by corresponding agents in our framework. These can
then be composed together by a specific workflow to form a particular
symbiotic simulation system. A prototype framework has been developed as
a proof of concept and its application to semiconductor manufacturing
will be described. This work is part of a larger collaborative project,
funded by the Singapore A*STAR Integrated Manufacturing and Service
Systems programme.
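As a heavily simplified sketch of the mutually beneficial loop just
described (all classes, parameters and numbers below are hypothetical,
not taken from the framework presented in the talk):

    import random

    class PhysicalSystem:
        """Stand-in plant whose true processing rate drifts."""
        def __init__(self): self.rate = 8.0
        def step(self): self.rate += random.gauss(0.0, 0.2)
        def measure(self): return self.rate + random.gauss(0.0, 0.5)

    class Simulation:
        """Stand-in model, adapted from real-time measurements."""
        def __init__(self): self.rate = 0.0
        def adapt(self, m): self.rate += 0.3 * (m - self.rate)
        def what_if(self, buffer_size):
            # Toy analysis: penalise under- and over-provisioning.
            return -abs(buffer_size - 2.0 * self.rate)

    plant, sim = PhysicalSystem(), Simulation()
    for _ in range(50):
        plant.step()
        sim.adapt(plant.measure())               # plant -> simulation
        best = max(range(40), key=sim.what_if)   # prompt what-if analysis
    print(round(sim.rate, 1), best)              # decision fed back to plant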
--------------------------------
Date and time: Thursday 15th January 2009 at 16:00
Location: UG40, School of Computer Science
Title: The development of a distributed clinical decision
support-system's functional elements and classifiers for
the non-invasive characterisation of childhood brain
tumours using magnetic resonance spectroscopy
Speaker: Theodoros N. Arvanitis
(http://www.eee.bham.ac.uk/arvanitt/)
Institution: Electronic, Electrical & Computer Engineering,
University of Birmingham
(http://www.eee.bham.ac.uk)
Host: Manfred Kerber
Abstract:
Over the past decade, there have been substantial advances in the field
of computer-aided decision support for the early detection of cancer.
At the same time, advanced biological characterisation and innovative
imaging modalities have provided novel approaches to determining the
diagnosis and prognosis of brain tumours. Early efforts in the
implementation of interactive decision-support systems for brain tumour
diagnosis have identified the need to combine biomedical automated
pattern recognition techniques and data from Magnetic Resonance Imaging
(MRI) and Spectroscopy (MRS). These studies have concentrated on adult
cases and there is an unmet need for the application of such approaches
to children and young adults, a group where brain tumours are the most
common solid tumours. The HealthAgents project, an EU-funded effort,
has been developing a distributed decision-support system (DSS), based on
software agent technologies, in order to provide a set of automated
classifiers for the diagnosis and prognosis of brain tumours. In this
presentation, we will discuss the context of developing interactive
software-based elements for the HealthAgents DSS, which facilitates the
classification of childhood brain tumours, for diagnostic purposes. We
provide the argument for the clinical need for such a system and the
constraints which should be imposed upon the building of classifiers
for childhood brain tumours. The constraints are based on tumour type,
patient age and tumour location. To illustrate the strategy and
demonstrate its potential, classification results are presented from a
small cohort of children with cerebellar tumours.
Dr Theodoros N. Arvanitis is a Reader in Biomedical Informatics,
Signals and Systems, affiliated with the School of Electronic,
Electrical and Computer Engineering, College of Engineering and
Physical Sciences, University of Birmingham, and with Birmingham
Children's Hospital NHS Foundation Trust, Birmingham.
--------------------------------
Date and time: Thursday 22nd January 2009 at 16:00
Location: UG40, School of Computer Science
Title: An introduction to Model Driven Engineering and its
application
Speaker: Behzad Bordbar
(http://www.cs.bham.ac.uk/~bxb)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Unlike conventional Engineering disciplines, Software
Engineering has paid very little attention to /modelling/.
Traditionally, models in Software Engineering are mostly used for
documentation purposes. However, in the past decade two major steps
towards promoting the role of modelling have been taken. The first step
is the mass adoption of standard languages such as the Unified
Modelling Language (UML) and languages used in Service-oriented
Architectures (SoA). The second step is Model Driven Engineering
(MDE), which aims at promoting the role of models and their use in
conjunction with other automated Software Engineering methods.
In this seminar, I will present a gentle introduction to MDE and its
application to model analysis and fault monitoring in SoA. Examples of
applying MDE in our ongoing collaborations with IBM and BT will also be
discussed.
--------------------------------
Date and time: Thursday 29th January 2009 at 16:00
Location: UG40, School of Computer Science
Title: The Evolution of Evolutionary Computation
Speaker: Xin Yao
(http://www.cs.bham.ac.uk/~xin)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Evolutionary computation has enjoyed a tremendous growth for two
decades in both its theoretical foundations and industrial
applications. Its scope has gone far beyond binary string optimisation
using a simple genetic algorithm. Many research topics in evolutionary
computation nowadays are not necessarily ``genetic'' or
``evolutionary'' in any biological sense. This talk will describe some
recent research efforts in evolutionary optimisation, evolutionary
learning, co-evolution and fundamental theories of evolutionary
computation. Applications in material modelling, astrophysics, neural
network ensembles, game-playing strategy learning, etc., will be
touched upon. The talk will be rather introductory (i.e., shallow).
--------------------------------
Date and time: Thursday 5th February 2009 at 16:00
Location: UG40, School of Computer Science
Title: Large-scale Document Digitisation: Challenges and
Opportunities
Speaker: Apostolos Antonacopoulos
(http://www.primaresearch.org/people/aa)
Institution: Pattern Recognition and Image Analysis (PRImA) group,
University of Salford
(http://www.primaresearch.org)
Host: Volker Sorge
Abstract:
The seminar will cover the background issues, challenges and
opportunities in the analysis of historical documents for large-scale
digitisation and full-text conversion. The seminar starts by examining
the different factors that influence technical decisions in document
digitisation. The types of documents typically encountered are discussed
next with the challenges and possibilities they offer for digitisation
and full-text conversion. Focussing on the needs of major libraries, the
different stages in full-text conversion (scanning, image enhancement,
segmentation, OCR and post-processing) are examined along with the
corresponding challenges and possibilities for improvement. Major past
and current initiatives are also mentioned for the processing, analysis
and recognition of historical documents.
--------------------------------
Date and time: Thursday 12th February 2009 at 16:00
Location: UG40, School of Computer Science
Title: Irony and the Artful Disguise of Negative Sentiment: A
Computational Analysis of Ironic Comparisons
Speaker: Tony Veale
(http://www.csi.ucd.ie/users/tony-veale)
Institution: School of Computer Science & Informatics, University
College Dublin
(http://www.csi.ucd.ie/)
Host: Alan Wallington
Abstract:
Humorous descriptions are often couched in the form of a simile, whose
flexible frame allows an author to yoke a topic to a perspective that
is at once both incongruously different yet appropriately similar.
Humorous similes exhibit all the commonly accepted hallmarks of verbal
humour, from linguistic ambiguity to expectation violation and
appropriate incongruity. But so too do non-humorous poetic similes,
which exhibit an equal tendency for the ingenious and the
incongruous. What then separates humorous similes from the broader
class of creative similes, and can their signature characteristics, if
any, be expressed via the presence or absence of specific formal,
structural or semantic features? To address these questions, we
describe the construction of a very large database of creative
similes, and present the results of an initial empirical analysis upon
this data-set. Our results are two-fold: while no formal or structural
feature is either necessary or sufficient for a humorous simile, such
similes frequently carry an explicit linguistic marker of their
humorous intent; furthermore, similes that carry this marker are shown
to exhibit an identifiable affective signature, to the extent that the
humorous intent of the simile is often telegraphed to its intended
audience.
We go on to describe how our findings can be used as the basis of a
computational mechanism for correctly recognizing irony in similes.
--------------------------------
Date and time: Thursday 19th February 2009 at 16:00
Location: LG34, Learning Centre
Title: Normalizations for Testing Heuristics in Propositional
Logic
Speaker: Manfred Kerber
(http://www.cs.bham.ac.uk/~mmk)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Typically it is considered a strength of a language that the same
situation can be described in different ways. However, when a human or
a program is to check whether two representations are essentially the
same it is much easier to deal with normal forms. For instance,
(infinitely many) different sets of formulae may normalize to the same
clause set. In the case of propositional logic formulae with a fixed
number of boolean variables, the class of all clause sets is
finite. However, since the number of clause sets grows doubly
exponentially, it is not feasible to construct the complete class even
for only four boolean variables. Hence further normalizations are
necessary when the full class is to be studied. Such normalizations
make it possible to test heuristics systematically on all problems for
small numbers of propositional logic variables, and to answer the
question of whether a free-lunch theorem for heuristics holds on the
whole class of problems.
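To make the doubly exponential growth mentioned above concrete,
assuming the usual count in which each of the n variables occurs in a
clause positively, negatively, or not at all:

    \#\text{clauses} = 3^n, \qquad \#\text{clause sets} = 2^{3^n},
    \qquad n = 4:\ 2^{3^4} = 2^{81} \approx 2.4 \times 10^{24}.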
--------------------------------
Date and time: Thursday 26th February 2009 at 16:00
Location: UG40, School of Computer Science
Title: Computational Evolutionary Biology: Introduction and
Issues
Speaker: Peter Coxhead
(http://www.cs.bham.ac.uk/~pxc)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Progress towards unravelling the 'tree of life' (given that this is a
coherent concept) has depended on simultaneous advances in the fields
of biology and computing, generating the research field of
computational evolutionary biology. In this seminar, I will give an
introduction suitable for non-specialists, and outline some current
issues, particularly those which could be illuminated by researchers
in computer science.
--------------------------------
Date and time: Thursday 5th March 2009 at 16:00
Location: UG40, School of Computer Science
Title: The grand challenge in verified software
Speaker: Jim Woodcock
(http://www-users.cs.york.ac.uk/~jim/)
Institution: Department of Computer Science, University of York
(http://www.cs.york.ac.uk/)
Host: Behzad Bordbar
Abstract:
We describe the current status of the grand challenge in verified
software. After giving a summary of the current state of the art in
software verification, we describe some of the pilot projects now
underway. These include work on operating system kernels and on a
biometric-based security system. The objective is to develop
benchmarks to challenge tool developers to make more advances in
automatic verification.
--------------------------------
Date and time: Thursday 12th March 2009 at 16:00
Location: UG40, School of Computer Science
Title: Why CHR programmers don't use concurrency -- Or Parallels
between Concurrent Logic Programming and Constraint
Handling Rules
Speaker: Peter Hancox
(http://www.cs.bham.ac.uk/~pjh)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Concurrent Logic Programming (CLP) languages have many features
that distinguish them from sequential logic programming languages such
as Prolog. Non-determinism through backtracking is discarded in
favour of committed choice with one-way unification of goals and
clause heads and the satisfaction of guard tests. The addition of
concurrency provides a family of languages suitable for reactive
rather than transformational systems.
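Purely for illustration, the one-way unification (matching) just
mentioned can be sketched in Python: variables in a clause head may be
bound, but variables in the goal may not, so such a goal suspends (all
names here are hypothetical):

    class Var:
        def __init__(self, name): self.name = name

    def match(head, goal, binding=None):
        # Bind Vars occurring in `head` to subterms of `goal`, but
        # never bind Vars occurring in `goal` (matching fails where
        # full unification would bind them, so the goal suspends).
        binding = dict(binding or {})
        if isinstance(head, Var):
            if head.name in binding:
                return binding if binding[head.name] == goal else None
            binding[head.name] = goal
            return binding
        if isinstance(goal, Var):
            return None                # would bind a goal variable
        if isinstance(head, tuple) and isinstance(goal, tuple) and \
                len(head) == len(goal):
            for h, g in zip(head, goal):
                binding = match(h, g, binding)
                if binding is None:
                    return None
            return binding
        return binding if head == goal else None

    print(match(("leq", Var("X"), Var("Y")), ("leq", 1, 2)))
    # -> {'X': 1, 'Y': 2}
    print(match(("leq", 1, 2), ("leq", Var("Z"), 2)))
    # -> None: the goal variable Z may not be bound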
Concurrent Constraint Logic Programming (CCLP) languages seem to offer
the power of constraint processing to the CLP paradigm. However,
Constraint Handling Rules (CHR) - the most successful of the CCLP
languages - is largely used to implement rule-rewriting
(transformational) systems without taking advantage of its
concurrency. It is conjectured that this is partly because the
largest group of CHR programmers are also Prolog programmers who do
not extend their programming style and partly because CHR is usually
implemented using co-routining extensions to Prolog that are tied to
Prolog's scheduling policy.
These conjectures are explored through an implementation of FCP(|), a
concurrent Prolog with flat guards. The "Sequential Abstract Machine"
model has previously been implemented in Prolog, involving significant
programming of one-way unification, guard tests, goal suspension and
scheduling. In this implementation, CHR provides one-way unification
and guard checking and its concurrent constraint processing models
FCP's concurrent processing of goals. The scheduling of goals is
explicitly implemented.
In support of these conjectures, it will be shown that the use of
Prolog's attributed variable (co-routining) package forces CHR into
`stable clause selection' and thus a Prolog-like scheduling policy.
To make fuller use of concurrency, CHR programmers have to use shared
variable-based techniques from CLP which ensure synchronization and
fairness, although mutual exclusion and deadlock are harder to avoid
or may even be a feature of CLP programming.
--------------------------------
Date and time: Thursday 19th March 2009 at 16:00
Location: UG40, School of Computer Science
Title: Multi-level Model Transformation via tracing
Speaker: Seyyed Shah
(http://www.cs.bham.ac.uk/~szs/)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
The aim of Model Driven Development is to promote the role of models
as a form of software abstraction, along with the manipulation of
models via Model Transformations. Model Transformations manipulate
and convert models; they are defined at the meta-representation level
and executed on models. This talk presents the concept and utility of
Multi-layered transformations, in the context of Model Driven
Development. Such transformations require a pair of Model
Transformations to be conducted, stacked one on top of the other. In
Multi-layered transformations, the source and destination models of
the upper transformation are the meta-representations for the
respective models in the lower transformation.
A key challenge of defining such transformations is to ensure the
consistency between the upper and lower transformations. This talk
presents our work in creating the lower-level transformation
automatically from the upper-layer transformation's trace, ensuring
the consistency between the two transformations. The method is
presented with the aid of two examples, a repository of instances and
UML2Alloy.
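A heavily simplified sketch of the trace idea follows (the mapping
rule and all names are hypothetical stand-ins, not the UML2Alloy
implementation): the upper transformation records which target element
each meta-element maps to, and that trace induces the lower
transformation on instances.

    def upper_transform(metamodel):
        # Map each meta-element and record the mapping in a trace.
        trace = {elem: elem.upper() for elem in metamodel}
        return list(trace.values()), trace

    def lower_transform(instances, trace):
        # Induce the lower transformation from the upper trace: each
        # instance's type is rewritten exactly as its meta-element
        # was, keeping the two levels consistent by construction.
        return [(name, trace[ty]) for name, ty in instances]

    _, trace = upper_transform(["class", "attribute"])
    print(lower_transform([("Person", "class"), ("age", "attribute")],
                          trace))
    # -> [('Person', 'CLASS'), ('age', 'ATTRIBUTE')]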
--------------------------------
Date and time: Thursday 26th March 2009 at 16:00
Location: UG40, School of Computer Science
Title: Computer Support for Human Mathematics: Dealing with
"..." in Matrices
Speaker: Alan Sexton
(http://www.cs.bham.ac.uk/~aps)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
In mathematical texts an ellipsis is a series of dots that indicates
the omission of some part of a text which the reader should be able to
reconstruct from its context. Ellipses occur in discussions of
sequences, series, polynomials, sets, systems of equations and other
situations where there is a collection of mathematical objects
described by a pattern rather than an explicit enumeration or a closed
form. The most complex and sophisticated use of ellipses occurs in
matrix expressions, where they are used to describe whole classes of
matrices, encompassing a range of dimensions.
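A standard example of such an abstract matrix (illustrative, not taken
from the talk) is the n x n identity written with ellipses, which
denotes a whole class of matrices, one for each dimension n:

    A_n = \begin{pmatrix}
    1      & 0      & \cdots & 0      \\
    0      & 1      & \ddots & \vdots \\
    \vdots & \ddots & \ddots & 0      \\
    0      & \cdots & 0      & 1
    \end{pmatrix}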
In this talk I shall describe work on analysing and manipulating
abstract matrix expressions involving ellipses and symbolic dimensions
in the context of computer algebra systems. For such a useful
representation of a common mathematical structure, two dimensional
matrix expressions possess some surprisingly subtle complexities that
require careful analysis and correspondingly involved algorithms to
tease out their true meaning. I shall present a parsing procedure for
abstract matrices, which involves graph analysis, constraint
maintenance, 2-d region finding, anti-unification and surface
interpolation. This procedure makes textbook matrices accessible to
mathematical software thus making them available for further
computational processing.
In the second part of my talk I shall present our progress on
developing a computational algebraic theory for abstract matrices. It
currently allows us to carry out arithmetic operations on classes of
matrices with an explicit representation of structural
properties. This provides a tool to establish results on the
interactions of matrix regions under addition and multiplication as
well as a foundation to show arithmetic closure properties for classes
of matrices.
This is joint work with Volker Sorge and, in parts, with Stephen Watt.
--------------------------------
Date and time: Thursday 7th May 2009 at 16:00
Location: UG40, School of Computer Science
Title: What are causal laws good for?
Speaker: Nancy Cartwright
(http://www.lse.ac.uk/collections/philosophyLogicAndScientificMethod/WhosWho/staffhomepages/Cartwright.htm)
Institution: The London School of Economics and Political Science
(http://www.lse.ac.uk/)
Host: Manfred Kerber
Abstract:
Causal laws are not all they are cracked up to be. They are supposed
to have a special relationship with strategy, but it turns out that
this is true only for very special kinds of causal laws. The standard
fix for this problem is invariance. But, this talk will argue, if we
have invariance we don't need causal laws to begin with. Moreover,
current emphasis is on measuring invariance rather than understanding
where it comes from. I propose instead that we pay far more attention
to the underlying structures that give rise to relations we can trust
for strategy, and to how these structures work.
--------------------------------
Date and time: Thursday 14th May 2009 at 16:00
Location: UG40, School of Computer Science
Title: Intelligent Support to Augment User Knowledge Capture
Speaker: Vania Dimitrova
(http://www.comp.leeds.ac.uk/vania/)
Institution: School of Computing, University of Leeds
(http://www.engineering.leeds.ac.uk/comp/)
Host: John Barnden
Abstract:
Knowledge-intensive technologies are based on some understanding of
the world which is usually encoded in appropriate formal
models. Building such models requires capturing the domain knowledge
of people who in most cases lack knowledge engineering skills. I will
show how intelligent techniques can be used to provide intuitive ways
to augment the knowledge capturing process. User knowledge capture has
been one of the prime research topics in the area of Personalisation
and User-adaptive Systems. I will discuss the connection between
personalisation techniques and knowledge capture for the Semantic
Web. Building on an earlier work which utilised dialogues for
capturing users' knowledge in financial markets, I will present our
current work on capturing knowledge of domain experts in a Geography
domain, which is conducted in collaboration with Ordnance Survey under
the Confluence project (http://www.comp.leeds.ac.uk/confluence/). I will
demonstrate a controlled natural language tool for developing
ontologies in OWL (ROO) and will discuss our current work in the
direction of intelligent support for multi-perspective knowledge
capture.
--------------------------------
Date and time: Thursday 21st May 2009 at 16:00
Location: UG40, School of Computer Science
Title: Spectrally resolved bioluminescence tomography in small
animal imaging: Why is it unique?
Speaker: Hamid Dehghani
(http://www.cs.bham.ac.uk/~dehghanh/)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Diffuse optical tomography has been emerging as a method to image
fluorescence, absorption and scatter in soft tissue, and the use of
molecular markers such as firefly luciferase can also be applied to
bioluminescence imaging. Absorption based imaging provides
quantitative information about the total haemoglobin and oxygen
saturation in tissue, and spectrally resolved Bioluminescence
Tomography (sBLT) can be used to quantify specific cellular
activity. The combination of these two methods provides a novel
quantitative 3D imaging tool, for example, to characterize tumour
growth and change due to treatment in experimental animal models. A
brief outline of the principles of optical tomography will be
presented, together with the uniqueness problem in bioluminescence
imaging. Novel computational algorithms will be outlined, whereby not
only the problem's uniqueness can be reduced, but also computation
speed of the `inverse problem' can be greatly improved through the
application of 'reciprocity principle'. The latest developments in
imaging system will be presented together with a proposal for a new
system that will provide state-of-the-art imaging platform which can
be utilized in a commercially producible small animal imaging system,
which is of benefit to, for example, pharmaceutical companies to
accelerate drug discovery processes.
--------------------------------
Date and time: Thursday 27th August 2009 at 14:00
Location: Haworth Building, Room 203
Title: Mini Robotics/Cognition Symposium
Speaker: Veronica Esther Arriola Rios, Frank Guerin, Chandana Paul
Host: Aaron Sloman
Abstract:
Seminar 1.
Learning to Predict Behaviour of Deformable Objects through and
for Robotic Interaction (Thesis proposal)
Speaker:
Veronica Esther Arriola Rios (Vero)
http://www.cs.bham.ac.uk/~vxa855/
PhD student School of Computer Science
Date and Time: Thursday 27th August 14:00
Location: Haworth Building, Room 203
Host: Aaron Sloman
Part of Mini Symposium: Robotics/Cognition/Development
Abstract (Proposed PhD research):
The objective of this research is the study of the process of
modelling, prediction and evaluation of the predictive capabilities
of a model, applied to the concrete scenario of robotic manipulation
of deformable objects. The robot is expected to identify the
presence of impenetrable regions of the world whose shape and
behaviour are susceptible to being modelled, with the basic learning
algorithms it will possess from the beginning. These algorithms have
been selected according to a set of requirements that come from the
problem itself and the necessary delimitation of the research. The
models will be generated from experiments that the robot will do to
obtain relevant data, like pushing actions or application of
impulses. Later, the robot will try to use those models to predict
behaviours in previously unseen scenarios, like forces being applied
in different positions and with different magnitudes, new
configurations of the same materials, or new materials. The robot
will have to evaluate the domain where it can consider the model to
still be sufficiently accurate to be used for particular tasks, or
where a new model has to be created. Two main mechanisms for model
generation are considered: parameter estimation for basic models,
and composition of known basic models for interpolation between
known distinct behaviours. A series of experiments where the robot
will go through the phases of learning, application and evaluation
of the models will be presented.
==========================
Seminar 2:
The "Baby Learning" Approach to Artificial Intelligence
Dr. Frank Guerin,
Lecturer, Department of Computing Science,
University of Aberdeen.
http://www.csd.abdn.ac.uk/~fguerin/
He is interested in (among other things) "Developmental AI" --
trying to build a system which will develop knowledge and skills
on its own by experimenting in an environment and learning, in
particular, trying to model the type of development that occurs in
human infants.
Date and Time: Thursday 27th August 15:00
Location: Haworth Building, Room 203
Host: Aaron Sloman
Part of Mini Symposium: Robotics/Cognition/Development
Abstract
One of the major stumbling blocks for Artificial Intelligence
remains the commonsense knowledge problem. It is not clear how we
could go about building a program which has all the commonsense
knowledge of the average human adult. This has led to growing
interest in the "developmental" approach, which takes its
inspiration from nature (especially the human infant) and attempts
to build a program which could develop its own knowledge and
abilities through interaction with the world. The challenge here is
to find a learning program which can continuously build on what it
knows, to reach increasingly sophisticated levels of knowledge.
Unfortunately our current knowledge of how humans build their
knowledge still has major gaps; Psychology has made some advances,
but current knowledge is still fragmentary. Given this deficit,
Artificial Intelligence researchers face a dilemma over how to make
use of sketchy psychological theories, and how much they need to
make up from scratch themselves. This talk will look at approaches
that have been attempted thus far, drawing out major themes and
problems, and will outline a roadmap of possible future directions,
discussing their relative merits.
==========================
Seminar 3:
The Formation of Ontology
Dr. Chandana Paul
Visiting us from 22 Aug to about 22 Sept
She studied at MIT with Rodney Brooks, and did her PhD in Zurich
with Rolf Pfeifer, then did post-doctoral work with Hod Lipson
and Ephraim Garcia at Cornell. She has worked with us in the
past, as a member of the CoSy robotics team (based in Stockholm).
There is information about her work on "morphological
computation" here:
http://en.wikipedia.org/wiki/Morphological_computation_(robotics)
Date and Time: Thursday 27th August 16:30
Location: Haworth Building, Room 203
Host: Aaron Sloman
Part of Mini Symposium: Robotics/Cognition/Development
Abstract
The ontology of an agent, which consists of the kinds of entities
the agent considers to exist in the environment, is an important
aspect of intelligence, as it forms the basis for intelligent
reasoning and action. In recent decades in AI, the study of
ontology has involved the construction of conceptual structures by
automated or manual methods based on textual or perceptual data. The
goal has been to generate fixed knowledge structures which serve as
the basis for an agent's intelligent actions. However, in nature,
habitats and environmental conditions rapidly change, and it is
unlikely that agents with a fixed ontology can effectively face the
pressures of survival. They must have mechanisms which allow them to
acquire ontological entities based on their environment. The talk
will address this issue of ontological acquisition and raise new
questions about the underlying mechanisms. It will consider various
mechanisms by which the ontology can be shaped, including bottom up,
top down and emergent mechanisms. It will also consider the
potential differences in the mechanism between various species, and
bring to light the dimensions along which the process of ontological
acquisition can vary.
--------------------------------
Date and time: Thursday 3rd September 2009 at 16:00
Location: Hills 120, School of Psychology
Title: A Hierarchical Computational Model of Statistical
Learning of Two-Dimensional Visual Shapes
Speaker: Ales Leonardis
(http://vicos.fri.uni-lj.si/alesl)
Institution: University of Ljubljana
(http://vicos.fri.uni-lj.si)
Host: Jeremy Wyatt
Abstract:
SEMINAR OF THE CENTRE FOR NEUROSCIENCE AND ROBOTICS AND DEPARTMENTAL
SEMINAR OF THE SCHOOL OF COMPUTER SCIENCE
Visual categorization of objects has been an area of active research in
the computational vision as well as the neuroscience community. While
both communities agree on a hierarchically organized object-processing
pathway, many specific mechanisms of this visual information processing
remain unknown. In this talk, I will focus on a computational approach
where selectivity of the units to two-dimensional visual shapes emerges
as a result of statistical learning at multiple hierarchical stages. The
approach takes simple contour fragments and learns their frequent
spatial configurations. These are recursively combined into increasingly
more complex and class-specific shape compositions, each exhibiting a
high degree of shape variability. At the top level of the hierarchy, the
compositions are sufficiently large and complex to represent the whole
shapes of the objects. We learn the hierarchical representation layer
after layer, by gradually increasing the size of the window of analysis
and the spatial resolution at which the shape configurations are
learned. Applied to a large collection of natural images, the units in
the model become selective to contour fragments at multiple levels of
specificity and complexity. The learned units in the first four layers
respond to shapes such as corners, T-, L-, Y-junctions and arcs of
various curvatures, whereas the units in the higher layers are selective
to increasingly more complex and class specific contours. I will also
present some experimental results which show that the learned
multi-class object representation scales logarithmically with the number
of object classes.
Some more details can be found at
http://vicos.fri.uni-lj.si/alesl/research/
This is joint work with Sanja Fidler and Marko Boben.
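A toy sketch of the statistical idea only, counting frequent quantised
spatial configurations of lower-layer parts and promoting them to units
of the next layer (this is an illustration in Python, not the authors'
algorithm; the part names and data are invented):
    from collections import Counter
    from itertools import combinations
    def learn_compositions(images, grid=5, min_count=2):
        """images: list of detections [(part_type, x, y), ...] per image.
        Returns frequent pairwise configurations (t1, t2, dx_bin, dy_bin)."""
        counts = Counter()
        for dets in images:
            for (t1, x1, y1), (t2, x2, y2) in combinations(dets, 2):
                # Quantise relative offsets so similar layouts pool together.
                dx, dy = round((x2 - x1) / grid), round((y2 - y1) / grid)
                counts[(t1, t2, dx, dy)] += 1
        return {c: n for c, n in counts.items() if n >= min_count}
    # Hypothetical layer-1 detections ('L' = L-junction, 'arc' = curved arc).
    imgs = [[('L', 0, 0), ('arc', 10, 0)], [('L', 5, 5), ('arc', 15, 5)]]
    print(learn_compositions(imgs))   # the ('L', 'arc', 2, 0) pair recurs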
--------------------------------
Date and time: Thursday 10th September 2009 at 16:00
Location: UG40, School of Computer Science
Title: Mobile Multimedia and Handheld Digital TV: Is It for
Real?
Speaker: Chang Wen Chen
(http://www.cse.buffalo.edu/faculty/chencw/)
Institution: University of Buffalo, Computer Science and Engineering
(http://www.cse.buffalo.edu/)
Host: Xin Yao
Abstract:
This talk will first review recent technology trends in mobile
multimedia and digital TV, especially the changing landscape and the
paradigm shift in digital video that may impact consumers worldwide,
at home and on the road. Then, the talk will examine what the
challenging characteristics of mobile digital video mean for
technology advancement, and the potential implications for emerging
applications in contemporary mobile lifestyles. As a prime example of
mobile multimedia applications, mobile IPTV (Internet Protocol TV)
will be examined in more detail. In particular, DVB-H, a European
standard for mobile IPTV, will be analyzed, and the major enhancement
components of DVB-H over DVB-T will be discussed. This
European-originated standard has made its way beyond Europe and is
expected to have a significant influence on the consumer electronics
industry worldwide. Technical challenges and research opportunities
for IPTV and mobile IPTV will then be identified.
--------------------------------
Date and time: Thursday 24th September 2009 at 16:00
Location: UG40, School of Computer Science
Title: The Turing Game challenge for machine learning
Speaker: Andras Lorincz
(http://people.inf.elte.hu/lorincz/)
Institution: Eötvös University, Budapest, Hungary
(http://www.inf.elte.hu/Lapok/kezdolap.aspx)
Host: Peter Tino
Abstract:
Social interactions will include interaction with robots in the
future. It is crucial to develop tools and methods where this novel
type of interaction can be practised without causing any harm. The
problem for machine learning is in the partially observed world, where
the emotions and the intentions of the partner are relevant, hidden
and uncertain. We have been tackling this issue both from the
theoretical and the experimental point of view. On the theoretical
side, we have been working on polynomial-time goal-oriented learning
algorithms that can deal with a number of variables simultaneously.
Experimentally, we have been developing the Turing Game, where the
players can express their emotions and this information can be used
by their robot and human partners in the game. The robot or human
nature of the partners in this multi-player game is hidden and we are
about to study the asymmetries of the emerging social network, i.e.,
who collaborates with whom.
--------------------------------
Date and time: Thursday 1st October 2009 at 16:00
Location: UG40, School of Computer Science
Title: Can computation cope with cellular complexity?
Speaker: Rod Smallwood
(http://www.dcs.shef.ac.uk/~rod)
Institution: Computer Science, The University of Sheffield
(http://www.shef.ac.uk/dcs/)
Host: Hamid Dehghani
Abstract:
Biology is immensely complex - the number of biological functions that
could be encoded by the human genome is astronomically large; the number
of different proteins is immense; the range of length scales covers at
least nine orders of magnitude; and the range of timescales covers
perhaps fifteen orders of magnitude. It is widely believed that we
will not understand biological function without the assistance of
mathematical and computational tools. Is this a reasonable belief,
given the complexity? I will discuss how we can approach the problems
of cellular behaviour, with examples taken from modelling the
behaviour of epithelial tissues.
--------------------------------
Date and time: Thursday 8th October 2009 at 16:00
Location: UG40, School of Computer Science
Title: Partial Orders with Conditionals: Analysis, Synthesis and
Applications in Electronic System Design
Speaker: Alex Yakovlev
(http://www.staff.ncl.ac.uk/alex.yakovlev/)
Institution: School of EECE, Newcastle University
(http://async.org.uk)
Host: Behzad Bordbar
Abstract:
Imagine the following situation. An orchestra plays without a
conductor. Musicians look at the score of a piece and see when they
have to enter depending on the actions of other musicians. The overall
behaviour is a partial order of events following this particular
score. Suppose we want such an orchestra to play hundreds of different
pieces, i.e. hundreds of different partial orders would be
"programmed" as separate scores. Now, instead of giving the musicians
hundreds of separate "conventional" score sheets, we would like to
give them only one "unconventional" score which is specially annotated
with additional control keys that determine which dependencies between
musicians are enabled or disabled. This "unconventional" score
technique may be useful (produce less paperwork) if the individual
scores have many shared parts and sections that are interlaced in a
complex way. Similarly, in modern processor design, especially for
processors without the global clock (like "orchestras without
conductors"), large sets of instruction scenarios, which can be
captured as partial orders of actions in functional blocks, often
exist. Each such scenario can have a high degree of concurrency
between individual logic blocks. The whole portfolio of such
instruction scenarios (cf. micro-programs) forms a specification for a
microcontroller. For many years finite state machines have dominated
the world of design automation for microcontrollers. State
machines can of course deal well with choice, i.e. capturing large
sets of instructions in fully sequential or globally clocked
architectures, but they really struggle with the representation of
concurrency. As data processing is gradually becoming more concurrent
at the microchip level, models that can elegantly handle the mixture
of concurrency and choice are required.
In this talk we present Conditional Partial Order Graphs (CPOGs), which
we believe fulfil the above ambition. The talk will try to
introduce them in a not-so-formal way, using examples as much as
possible. Ideas of how they can be analysed, synthesised and applied
to the design of meaningful asynchronous circuits will be presented.
Most of the research towards this goal has been carried out by the
speaker's PhD student Andrey Mokhov, a talented musician, who likes to
play the piano part in groups without a conductor!
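A minimal sketch of the idea in Python (the encoding below, with
dependency edges guarded by predicates over control variables, is an
assumption made for illustration, not the authors' formalism):
    # One "unconventional score": fixing the control variables projects
    # out the partial order for a single instruction scenario.
    def project(guarded_edges, controls):
        """Return the dependency edges enabled under a control assignment."""
        return [(a, b) for a, b, guard in guarded_edges if guard(controls)]
    edges = [
        ("fetch", "decode", lambda c: True),              # always present
        ("decode", "alu",   lambda c: c["op"] == "add"),  # only in ADD
        ("decode", "shift", lambda c: c["op"] == "shl"),  # only in SHIFT
        ("alu", "writeback",   lambda c: c["op"] == "add"),
        ("shift", "writeback", lambda c: c["op"] == "shl"),
    ]
    print(project(edges, {"op": "add"}))   # the ADD partial order
    print(project(edges, {"op": "shl"}))   # the SHIFT partial order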
--------------------------------
Date and time: Thursday 15th October 2009 at 16:00
Location: UG40, School of Computer Science
Title: Adaptive Method for the Digitization of Mathematical
Journals
Speaker: Masakazu Suzuki
(http://www.inftyproject.org/suzukilabo/suzuki.html)
Institution: Faculty of Mathematics and Graduate School of
Mathematics, Kyushu University
(http://www.kyushu-u.ac.jp/english/)
Host: Volker Sorge
Abstract:
InftyReader is software developed at Kyushu University to recognize
mathematical documents. It converts scanned images of printed paper
documents, including the various formulae found in pure and applied
science texts, into formats such as LaTeX and XHTML with MathML. We are
using it in the retro-digitization project of mathematical journals in
Japan.
In the lecture, after a brief sketch of the methods used in InftyReader,
I will talk about our current approach to improving the recognition
rate in large-scale digitization of mathematical documents.
--------------------------------
Date and time: Thursday 22nd October 2009 at 16:00
Location: UG40, School of Computer Science
Title: Cloud Computing: the next software revolution?
Speaker: Andy Evans
Institution: Xactium
Host: Behzad Bordbar
Abstract:
Everyone is talking about cloud computing. IBM, Microsoft, Google,
Amazon and many more are investing billions in cloud computing
platforms that they believe will transform the software landscape.
In this talk, I will discuss what is meant by cloud computing, and
other associated terms, such as software as a service and platform as
a service. I will talk about its benefits and risks and go into detail
on one of the more mature cloud computing development platforms:
Force.com, which is changing the way that developers think about
software development and delivery.
Andy Evans is an experienced business executive, consultant and
manager, with over 20 years' experience of software design and
development technologies. He is MD of Xactium Limited, which is using
cloud technology to deliver business solutions to large enterprises.
--------------------------------
Date and time: Thursday 29th October 2009 at 16:00
Location: UG40, School of Computer Science
Title: Performance Evaluation and Localization on Mobile
Wireless Networks - Toward an Affluent and Highly Reliable
Ubiquitous Society
Speaker: Teruo Higashino
(http://www-higashi.ist.osaka-u.ac.jp/~higashino/)
Institution: Osaka University
(http://www.osaka-u.ac.jp/en)
Host: Behzad Bordbar
Abstract:
In this seminar, I will present our recent work on performance
evaluation and localization in mobile wireless networks. In the future
ubiquitous society, many sensors and RFID tags will be deployed in
urban areas, and pedestrians and vehicles may also carry ubiquitous
devices. The performance of such mobile wireless networks is strongly
affected by node mobility. I will enumerate several types of mobility
and explain how those mobility models affect the performance of mobile
wireless networks. I will also explain how we can construct realistic
mobility models of pedestrians and vehicles. We have developed a
wireless network simulator called Mobireal (http://www.mobireal.net/)
for formally modeling urban pedestrian flows and evaluating the
performance and reliability of MANET applications in realistic
environments.
I will also introduce two types of localization techniques based on
mobile wireless communication, range-based and range-free techniques,
and explain our techniques for estimating the trajectories of several
mobile nodes in urban areas.
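To give a concrete flavour of the range-based family, here is standard
linearised trilateration in Python (a generic textbook sketch, not the
speaker's specific technique; the anchor positions and ranges are
invented):
    import numpy as np
    def trilaterate(anchors, ranges):
        """Estimate a 2-D position from anchor coordinates and ranges."""
        a, r = np.asarray(anchors, float), np.asarray(ranges, float)
        # Subtracting the first range equation removes the quadratic term:
        # 2 (a_i - a_0) . p = r_0^2 - r_i^2 + |a_i|^2 - |a_0|^2
        A = 2.0 * (a[1:] - a[0])
        b = r[0]**2 - r[1:]**2 + np.sum(a[1:]**2, axis=1) - np.sum(a[0]**2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos
    anchors = [(0, 0), (10, 0), (0, 10)]             # hypothetical fixed nodes
    print(trilaterate(anchors, [7.07, 7.07, 7.07]))  # ranges from ~(5, 5)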
--------------------------------
Date and time: Thursday 5th November 2009 at 16:00
Location: UG40, School of Computer Science
Title: Naturally occurring data as research instrument
Speaker: Errol Thompson
(http://www.cs.bham.ac.uk/~thompsew/)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Errol Thompson has been involved in two Computing Education research
projects. The BRACELet project utilised naturally occurring data and
action research cycles to develop an understanding of the novice
programmer. As this project has grown, it has spread from its base in
Australasia to the US, Canada, and now Europe. The project has involved
the use of different styles of exam questions and analysis based on
Educational taxonomies. In this presentation, Errol will endeavour to
outline the philosophy behind the research and some of the key findings
from the work conducted over a five year period.
Collaborators in this research include: Jacqueline Whalley, Tony
Clear, Phil Robbins (Auckland Institute of Technology), Raymond Lister
(ex-University of Technology Sydney/now at University of British
Columbia), Beth Simon (San Diego), Angela Carbone, and Judy Sheard
(Monash).
--------------------------------
Date and time: Thursday 19th November 2009 at 16:00
Location: UG40, School of Computer Science
Title: Machine Learning for Sensorimotor Control
Speaker: Sethu Vijayakumar
(http://homepages.inf.ed.ac.uk/svijayak/)
Institution: School of Informatics, University of Edinburgh
(http://www.inf.ed.ac.uk)
Host: Jeremy Wyatt
Abstract:
Humans and other biological systems are very adept at performing fast,
complicated control tasks in spite of large sensorimotor delays while
being fairly robust to noise and perturbations. For example, one is able
to react accurately and quickly to catch a speeding ball while at the
same time being flexible enough to 'give in' when obstructed during
the execution of a task.
There are various components involved in achieving such levels of
robustness, accuracy and safety in anthropomorphic robotic systems.
Broadly speaking, challenges lie in the domain of robust sensing,
flexible planning, appropriate representation and learning dynamics
under various contexts. Statistical Machine Learning provides ideal
tools to deal with these challenges, especially in tackling issues like
partial observability, noise, redundancy resolution, high dimensionality
and the ability to perform and adapt in real time.
In my talk, I will discuss (a) novel techniques we have developed for
real time acquisition of non-linear dynamics in a data driven manner,
(b) techniques for automatic low-dimensional (latent space)
representation of complex movement policies and trajectories and (c)
planning methods capable of dealing with redundancy (e.g. variable
impedance) and adaptation in the Optimal Feedback Control framework.
Some of the techniques developed, in turn, provide novel insights into
modeling human motor control behavior.
Videos of learning in high dimensional movement systems like
anthropomorphic limbs (KUKA robot arm, SARCOS dexterous arm, iLIMB etc.)
and humanoid robots (HONDA ASIMO, DB) will serve to validate the
effectiveness of these machine learning techniques in real world
applications.
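To give a flavour of data-driven acquisition of non-linear dynamics,
here is a generic locally weighted regression step in Python (our
illustration, in the spirit of such methods, not the speaker's actual
algorithms; the data are synthetic):
    import numpy as np
    def lwr_predict(X, y, x_query, bandwidth=0.5):
        """Predict y at x_query by weighted least squares near the query."""
        d2 = np.sum((X - x_query) ** 2, axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth**2))      # Gaussian locality weights
        Xb = np.hstack([X, np.ones((len(X), 1))])   # affine local model
        W = np.diag(w)
        beta = np.linalg.solve(Xb.T @ W @ Xb, Xb.T @ W @ y)
        return np.append(x_query, 1.0) @ beta
    # Synthetic one-step dynamics data: state -> observed acceleration.
    X = np.random.uniform(-1, 1, (200, 2))
    y = np.sin(X[:, 0]) - 0.3 * X[:, 1]             # unknown true dynamics
    print(lwr_predict(X, y, np.array([0.2, -0.5]))) # ~ sin(0.2) + 0.15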
Sethu Vijayakumar is the Director of the Institute for Perception,
Action and Behavior (IPAB) in the School of Informatics at the
University of Edinburgh. Since August 2007 he has held a Senior
Research Fellowship of the Royal Academy of Engineering in Learning
Robotics, co-funded by Microsoft Research. He also holds additional
appointments as
an Adjunct Faculty of the University of Southern California (USC), Los
Angeles and as a Visiting Research Scientist at the RIKEN Brain Science
Institute, Japan. His research interest spans a broad interdisciplinary
curriculum involving basic research in the fields of statistical machine
learning, robotics, human motor control, Bayesian inference techniques
and computational neuroscience.
--------------------------------
Date and time: Thursday 26th November 2009 at 16:00
Location: UG40, School of Computer Science
Title: Body and mind of a humanoid robot: where technology meets
physiology
Speaker: Giorgio Metta
(http://pasa.lira.dist.unige.it/)
Institution: Italian Institute of Technology, Department of Robotics,
Brain and Cognitive Sciences
(http://www.dist.unige.it/dist/index_en.html)
Host: Jeremy Wyatt
Abstract:
Simulating and getting inspiration from biology is certainly not a new
endeavour in robotics (Atkeson et al., 2000; Sandini, 1997; Metta et
al., 1999). However, the use of humanoid robots as tools to study
human cognitive skills is a relatively new area of research, one which
fully acknowledges the importance of embodiment and of interaction
with the environment for the emergence of motor skills, perception,
sensorimotor coordination, and cognition (Lungarella, Metta, Pfeifer,
& Sandini, 2003).
The guiding philosophy - and main motivation - is that cognition
cannot be hand-coded but it has to be the result of a developmental
process through which the system becomes progressively more skilled
and acquires the ability to understand events, contexts, and actions,
initially dealing with immediate situations and increasingly acquiring
a predictive capability (Vernon, Metta, & Sandini, 2007).
To pursue this research, a humanoid robot (iCub) has been developed as
a result of the collaborative project RobotCub (www.robotcub.org)
supported by the EU through the "Cognitive Systems and Robotics" Unit
of IST. The robotic platform has been designed with the goal of
studying human cognition and therefore embeds a sophisticated set of
sensors providing vision, touch, proprioception, audition as well as a
large number of actuators (53) providing dexterous motor
abilities. The project is "open", in the sense of open-source, to
build a critical mass of research groups contributing with their ideas
and algorithms to advance knowledge on human cognition (N. Nosengo
2009).
The aims of the talk will be: i) to present the approach and
motivation, ii) to illustrate the technological choices made, and iii)
to present some initial results obtained.
References
Atkeson, C. G., Hale, J. G., Pollick, F., Riley, M., Kotosaka, S.,
Schaal, S., et al. (2000). Using Humanoid Robots to Study Human
Behavior. IEEE Intelligent Systems, 46-56.
Sandini, G. (1997, April). Artificial Systems and Neuroscience. Paper
presented at the Otto and Martha Fischbeck Seminar on Active Vision,
Berlin, Germany.
Sandini, G., G. Metta, and J. Konczak. Human Sensori-motor Development
and Artificial Systems. In International Symposium on Artificial
Intelligence, Robotics and Intellectual Human Activity Support
(AIR&IHAS '97), 1997, RIKEN, Japan.
D. Vernon, G. Metta, and G. Sandini. "A Survey of Artificial Cognitive
Systems: Implications for the Autonomous Development of Mental
Capabilities in Computational Agents," IEEE Transactions on
Evolutionary Computation, vol. 11, no. 2, pp. 151-180, 2007
N. Nosengo. "Robotics: The bot that plays ball" Nature Vol 460,
1076-1078 (2009) | doi:10.1038/4601076a
--------------------------------
Date and time: Thursday 3rd December 2009 at 16:00
Location: UG40, School of Computer Science
Title: Dynamic Evolutionary Optimisation: An Analysis of
Frequency and Magnitude of Change
Speaker: Per Kristian Lehre, Philipp Rohlfshagen
(http://www.cs.bham.ac.uk/~pkl/)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Despite successful applications of evolutionary algorithms in numerous
domains, the theoretical understanding of why and how these algorithms
work is still incomplete. This is particularly true for the rapidly
growing field of evolutionary dynamic optimisation, where only a few
theoretical results have been obtained to date. In this talk, we give
an example of rigorous runtime analysis of evolutionary algorithms in
a dynamic optimisation scenario. We focus on the impact of the
magnitude and frequency of change on the performance of a simple
algorithm called (1+1)-EA on a set of artificially designed
pseudo-Boolean functions, given a simple but well-defined dynamic
framework. We demonstrate some counter-intuitive scenarios that allow
us to gain a better understanding of how the dynamics of a function
may affect the runtime of an algorithm. In particular, we present the
function Magnitude, where the time it takes for the (1+1)-EA to
relocate the global optimum is less than n^2 log n (i.e., efficient)
with overwhelming probability if the magnitude of change is large. For
small changes of magnitude, on the other hand, the expected time to
relocate the global optimum is e^{Omega(n)} (i.e., highly
inefficient). Similarly, the expected runtime of the (1+1)-EA on the
function Balance is O(n^2) (efficient) for high frequencies of change
and n^Omega(n^0.5) (highly inefficient) for low frequencies of
change. These results contribute towards a better understanding of
dynamic optimisation problems in general and show how traditional
analytical methods may be applied in the dynamic case.
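For readers unfamiliar with the algorithm analysed, the (1+1)-EA is
easily stated; the sketch below runs it on OneMax for illustration (the
functions Magnitude and Balance are the paper's own constructions and
are not reproduced here):
    import random
    def one_plus_one_ea(fitness, n, max_iters=100000):
        x = [random.randint(0, 1) for _ in range(n)]
        fx = fitness(x)
        for t in range(max_iters):
            # Flip each bit independently with probability 1/n.
            y = [b ^ (random.random() < 1.0 / n) for b in x]
            fy = fitness(y)
            if fy >= fx:              # accept the offspring if not worse
                x, fx = y, fy
            if fx == n:               # global optimum of OneMax reached
                return t + 1
        return None
    random.seed(1)
    print(one_plus_one_ea(sum, n=50))  # iterations used; expected O(n log n)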
--------------------------------
Date and time: Thursday 10th December 2009 at 16:00
Location: UG40, School of Computer Science
Title: A Traceability Attack Against e-Passports
Speaker: Tom Chothia
(http://www.cs.bham.ac.uk/~tpc/)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Since 2004 many nations have started issuing ``e-passports''
containing an RFID tag that broadcasts information. It is claimed that
this will make passports more secure and that our data will be
protected from any possible unauthorised attempts to read it. In this
talk we show that there is a flaw in one of the passport's protocols
that makes it possible to trace the movements of a particular
passport, without having to break the passport's cryptographic key.
All an attacker has to do is to record one session between the
passport and a legitimate reader, then by replaying a particular
message, the attacker can distinguish that passport from any other. We
have implemented our attack and tested it successfully against
passports issued by a range of nations.
This was joint work with Vitaliy Smirnov.
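The distinguishing step can be modelled abstractly as follows (a toy
Python model of the attack idea only: the real protocol messages and
error behaviour are in the paper, and all names here are hypothetical):
    import hmac, hashlib, os
    def passport(session_key):
        """Model a tag that checks a message MAC before checking freshness."""
        def respond(msg, mac):
            if hmac.new(session_key, msg, hashlib.sha256).digest() != mac:
                return "MAC_ERROR"     # wrong passport: MAC check fails
            return "REPLAY_ERROR"      # right passport: fails a later check
        return respond
    # The attacker records one (message, MAC) pair from a genuine session.
    k_target, k_other = os.urandom(16), os.urandom(16)
    msg = b"get_challenge"
    mac = hmac.new(k_target, msg, hashlib.sha256).digest()
    # Replaying it later distinguishes the target passport from any other.
    print(passport(k_target)(msg, mac))  # REPLAY_ERROR -> same passport
    print(passport(k_other)(msg, mac))   # MAC_ERROR    -> different passport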
--------------------------------
Date and time: Monday 21st December 2009 at 16:00
Location: UG40, School of Computer Science
Title: The evolution of multicellular computing; parallels with
the evolution of multicellular life
Speaker: Steve Burbeck
(http://evolutionofcomputing.org/HistoryAndInfo.html)
Institution: Evolution of Computing
(http://evolutionofcomputing.org)
Host: Aaron Sloman
Abstract:
The evolution of computing is similar to the evolution of other
complex systems -- biological, social, ecological, and economic
systems. In each of these domains, the elements become increasingly
more specialized and sophisticated and they interact with each other
in ever more complex ways. The parallels between biology and
computing are not coincidental. The organizing principles of
multicellular biological systems suggest architectural principles
that multicellular computing can mimic to tame the spiraling
problems of complexity and out-of-control interactions in the
Internet.
Bio:
His achievements include research in mathematical sociology,
mathematical psychology, and molecular and chemical biology/proteomics
at the Linus Pauling Institute of Science and Medicine;
object-oriented programming tools and techniques (including the
commercialisation of Smalltalk); helping to lead IBM into the Open
Source world; Service Oriented Architectures (SOA), Web Services and
Peer-to-Peer (P2P) software; Theory of Mind (in collaboration with
Sam Adams, IBM); the interface between biology and computing;
archiving digital records; and recently developing a web site on
multicellular computing: http://evolutionofcomputing.org/.
--------------------------------
Date and time: Thursday 14th January 2010 at 16:00
Location: UG40, School of Computer Science
Title: Making AI and Robotics Shake Hands: The Challenges of
Grasping Intelligence
Speaker: Helge Ritter
(http://ni.www.techfak.uni-bielefeld.de/people/helge/)
Institution: Excellence Cluster Cognitive Interaction Technology
(CITEC) and Institute of Cognition and Robotics
(CoR-Lab), Bielefeld University
(http://www.cit-ec.de/home)
Host: Jeremy Wyatt
Abstract:
Robotics and entirely disembodied AI are often seen as opposite
extremes to approach the challenge of understanding and building
intelligent systems. We argue that grasping and manual actions
can offer a rich and interdisciplinary `middle ground', thoroughly
rooted in physical interaction on the one side, and yet connected
to many aspects of `high level' intelligence, such as tool use,
language and even emotion. We discuss some issues and challenges
in this upcoming field of `Manual Intelligence', report on some
examples from our own research on controlling grasping movements
of anthropomorphic robot hands, and discuss how this and similar
research fits into the `larger picture' of understanding
cognition.
--------------------------------
Date and time: Thursday 21st January 2010 at 16:00
Location: UG40, School of Computer Science
Title: A Unifying Analytical Framework for Discrete Linear Time
Speaker: Ben Moszkowski
(http://www.cse.dmu.ac.uk/~benm)
Institution: De Montfort University, Leicester
(http://www.cse.dmu.ac.uk/STRL/)
Host: Paul Levy
Abstract:
(Joint Theory/Departmental Seminar)
Discrete linear state sequences provide a compellingly natural and
flexible way to model many dynamic computational processes involving
hardware or software. Over 50 years ago, the distinguished logicians
Church and Tarski initiated the study of a fundamental and powerful
class of decidable calculi for rigorously describing and analysing
various basic aspects of discrete linear-time behaviour. The number of
such formalisms has significantly grown to include many temporal logics,
some of which are employed in industry, and even found in IEEE
standards. They are intimately connected with regular languages,
analogues for infinite words called omega-regular languages and the
associated finite-state automata.
We describe a promising hierarchical approach for systematically
analysing and relating these logics and establishing axiomatic
completeness. Our framework is based on Interval Temporal Logic (ITL),
a well-established, stable formalism first studied over 25 years ago.
Previous proofs of axiomatic completeness developed over approximately
40 years for the hardest logics contained deductions involving explicit
embeddings of nontrivial techniques such as the complementation of
nondeterministic finite-state automata which recognise infinite words.
Our greatly simplified approach avoids the need to encode these automata
and techniques in logic. Instead, it just applies some standard results
from the 60s and 70s which can be understood without any knowledge of
automata for infinite words! In addition, it suggests new improved
axioms and inference rules for some of the logics.
Our work also offers intriguing evidence that Propositional ITL (PITL)
might play a central role in the overall proof theory of the class of
decidable notations for discrete linear time, even for less succinct
logics with lower computational complexity. Therefore PITL could
eventually be seen as the canonical propositional logic for this model
of time. Furthermore, PITL provides a starting point for less explored
decidable calculi which formalise memory, framing and multiple time
granularities as well as for a calculus of sequential and parallel
composition based on nestable Hoare triples having assertions expressed
in temporal logic. Potential applications include the Rely-Guarantee
paradigm and some kinds of Separation Logic. This all suggests that ITL
could serve as the basis for a `logical physics' of discrete linear
time. Consequently, ITL might come to be regarded as a key analytical
formalism for investigating programming semantics and calculi based on
this model of time.
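For readers who have not met ITL: its characteristic connective is
`chop' (sequential composition of interval formulas). A standard
statement of its discrete-time semantics over a finite interval sigma =
s_0 s_1 ... s_n is, in LaTeX notation:
    \sigma \models f \,;\, g \iff \exists k \le n .\;
      (s_0 \ldots s_k \models f) \wedge (s_k \ldots s_n \models g)
Note that the two subintervals share the state s_k (fusion), which is
what lets chop model sequential composition.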
--------------------------------
Date and time: Thursday 28th January 2010 at 16:00
Location: UG40, School of Computer Science
Title: Data-Parallel Programming for Heterogeneous Systems
Speaker: Satnam Singh
(http://research.microsoft.com/en-us/people/satnams)
Institution: Microsoft Research
(http://research.microsoft.com)
Host: Dan Ghica
Abstract:
This presentation introduces an embedded domain specific language
(DSL) for data-parallel programming which can target GPUs, SIMD
instructions on x64 multicore processors and FPGA circuits. This
system is implemented as a library of data-parallel arrays and
data-parallel operations with implementations in C++ and for .NET
languages like C#, VB.NET and F#. We show how a carefully selected set
of constraints allow us to generate efficient code or circuits for
very different kinds of targets. Finally, we compare our approach,
which is based on JIT-ing, with other techniques, e.g. CUDA, which is
an off-line approach, as well as with stencil computations. The
ability to compile the same data-parallel description at an
appropriate level of abstraction to different computational elements
brings us one step closer to finding models of computation for
heterogeneous multicore systems.
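The flavour of such an embedded DSL can be sketched in a few lines of
Python (an illustration of the deferred-expression style only; this is
not the actual API of the system described):
    import numpy as np
    class DPArray:
        def __init__(self, op, args): self.op, self.args = op, args
        def __add__(self, other): return DPArray("add", [self, other])
        def __mul__(self, other): return DPArray("mul", [self, other])
    def const(values): return DPArray("const", [np.asarray(values)])
    def evaluate(node):
        """A trivial CPU backend; a GPU or FPGA backend would walk the
        same expression tree and emit code for its target instead."""
        if node.op == "const": return node.args[0]
        a, b = (evaluate(x) for x in node.args)
        return a + b if node.op == "add" else a * b
    expr = const([1, 2, 3]) * const([4, 5, 6]) + const([1, 1, 1])
    print(evaluate(expr))   # [ 5 11 19]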
--------------------------------
Date and time: Thursday 4th February 2010 at 16:00
Location: UG40, School of Computer Science
Title: Critical Aspects of Object-Oriented Programming
Speaker: Errol Thompson
(http://www.cs.bham.ac.uk/~thompsew/)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
In his recently completed PhD research, Errol sought to discover the
different ways that practitioners expressed their awareness of
object-oriented programming in order to identify critical aspects for
teaching novice programmers. Errol's work identified some critical
aspects with respect to the nature of an object-oriented program and the
design characteristics of a program that form the basis for further
research and for the planning of teaching. In this presentation, Errol
will overview the approach taken in the research and the future
direction being planned for this research. He will place emphasis on
some of the implications that he sees for teaching introductory
programming.
The original research was supervised by Professor Kinshuk (Athabasca
University, Canada) and Emeritus Associate Professor Janet Davies
(Massey University, NZ).
--------------------------------
Date and time: Thursday 11th February 2010 at 16:00
Location: UG40, School of Computer Science
Title: A Linear Grammar Approach to Mathematical Formula
Recognition from PDF
Speaker: Josef Baker
(http://www.cs.bham.ac.uk/~jbb/)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Many approaches have been proposed over the years for the recognition of
mathematical formulae from scanned documents. More recently a need has
arisen to recognise formulae from PDF documents. Here we can avoid
ambiguities introduced by traditional OCR approaches and instead extract
perfect knowledge of the characters used in formulae directly from the
document. This can be exploited by formula recognition techniques to
achieve correct results and high performance.
In this talk I will revisit an old grammatical approach to formula
recognition, that of Anderson from 1968, and assess its applicability
with respect to data extracted from PDF documents. We identify some
problems of the original method when applied to common mathematical
expressions and show how they can be overcome.
--------------------------------
Date and time: Thursday 18th February 2010 at 16:00
Location: UG40, School of Computer Science
Title: Self-understanding and self-extension in robots
Speaker: Jeremy Wyatt
(http://www.cs.bham.ac.uk/~jlw)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
In this talk I'm going to give a tour of work being carried out within
the CogX project. In this we are trying to devise a principled systems
approach to robots that can act purposively under uncertainty and
incompleteness. This requires some judicious use of different kinds of
representations of uncertainty, and lack of knowledge. The challenges
are how to link up different representations across several
modalities, and how to then reason with them efficiently. I will
describe results from the first year that take a qualitative approach
to the problem. I will focus on Dora: a mobile robot that fills gaps
in its maps of an office environment. I'll then describe the work we
are currently pursuing on probabilistic representations for domains
such as mapping and manipulation.
--------------------------------
Date and time: Thursday 25th February 2010 at 16:00
Location: UG40, School of Computer Science
Title: Stable sets in pillage games
Speaker: Colin Rowat
(http://www.socscistaff.bham.ac.uk/rowat/)
Institution: Department of Economics, University of Birmingham
(http://www.economics.bham.ac.uk/)
Host: Manfred Kerber
Abstract:
Pillage games are a class of cooperative games, and are therefore
defined by a dominance operator rather than a game form. Pillage
games allow more powerful coalitions of agents to seize resources from
less powerful coalitions. As such seizure is costless, it may model
involuntary transfers made in democracies, including taxation and the
exercise of eminent domain. Stable sets, von Neumann and
Morgenstern's original solution concept, satisfy internal and external
stability conditions: no allocation in such a set can dominate
another; every allocation outside such a set must be dominated by a
member allocation. We first provide a graph theoretic interpretation
of stable sets in pillage games, bounding their cardinality by a
Ramsey number. Restricting analysis to three agents, and imposing a
few regularity conditions, we then present an algorithm for deciding
the existence of stable sets, and characterising them.
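The two stability conditions are easy to state operationally; the toy
Python check below tests them over a finite set of allocations (real
pillage games have a continuum of allocations, so this is illustration
only, with an invented dominance relation):
    def is_stable(candidate, allocations, dominates):
        internal = not any(dominates(x, y)
                           for x in candidate for y in candidate)
        external = all(any(dominates(x, y) for x in candidate)
                       for y in allocations if y not in candidate)
        return internal and external
    # Hypothetical 3-cycle of dominance: a beats b beats c beats a.
    allocations = ["a", "b", "c"]
    beats = {("a", "b"), ("b", "c"), ("c", "a")}
    dominates = lambda x, y: (x, y) in beats
    print([S for S in [{"a"}, {"a", "b"}, {"a", "c"}]
           if is_stable(S, allocations, dominates)])  # [] - none is stable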
--------------------------------
Date and time: Thursday 4th March 2010 at 16:00
Location: UG40, School of Computer Science
Title: Domain Driven Design using Naked Objects
Speaker: Dan Haywood
Institution: Haywood Associates Ltd
Host: Behzad Bordbar
Abstract:
Domain driven design is a technique for building enterprise apps by
focusing on the bit that matters: the domain model. Naked Objects
meanwhile is a full-stack, open source Java framework with the same
goal: building enterprise apps. Why put the two together? Well, the
twist is that with Naked Objects all you need to do is to write the
domain objects that sit in the domain model; Naked Objects takes care of
the UI and persistence layers for you.
In this talk we'll see what a Naked Objects application looks like
(pojos, basically), and we'll see how, by taking care of most of the
plumbing, Naked Objects allows us to rapidly capture the subtleties and
complexity of a domain model. Along the way we'll also talk about
extensibility, customization, testing, prototyping vs deployment, and
whatever else comes up. And in a blatant attempt to ensure the session
is interactive, there'll be a free copy of Dan's book to the person
asking the most (relevant!) questions.
Bio: Dan Haywood (http://danhaywood.com) is a freelance consultant,
writer, trainer, mentor, specializing in domain-driven design, agile
development and enterprise architecture on the Java and .NET
platforms. He's a well-known advocate of Naked Objects, and was
instrumental in the success of the first large-scale Naked Objects
system, which now administers state benefits for citizens in
Ireland. He's also the author of "Domain-Driven Design using Naked
Objects" (http://pragprog.com/titles/dhnako), a committer to the Naked
Objects framework and the lead of a number of sister open source
projects.
--------------------------------
Date and time: Thursday 11th March 2010 at 12:00
Location: Lecture Theatre 1, Sports and Exercise Science
Title: Toward `Organic Compositionality': Neuro-Dynamical
Systems Accounts for Cognitive Behaviors
Speaker: Jun Tani
(http://www.bdc.brain.riken.go.jp/~tani/)
Institution: RIKEN Brain Science Inst.
(http://www.bdc.brain.riken.go.jp)
Host: Jeremy Wyatt
Abstract:
My studies have examined how compositionality can develop as a
consequence of self-organization in neuro-dynamic systems via
repetitive learning of sensory-motor experiences. We introduce a basic
model accounting for parietal-premotor-prefrontal interactions to
represent generative models for cognitive behaviors. The basic model
has been implemented in a set of humanoid robotics experiments
including imitation learning of others, developmental and interactive
learning of object manipulation and associative learning between
proto-language and actions. The experimental results showed that the
compositional structures can be attained as ``organic'' ones with
hierarchy by achieving generalization in learning, by capturing
contextual nature in cognitive behaviors and by affording flexibility
in generating creative images.
--------------------------------
Date and time: Thursday 11th March 2010 at 16:00
Location: UG40, School of Computer Science
Title: Adaptive Infrastructures for Distributed Simulated
Worlds
Speaker: Georgios Theodoropoulos
(http://www.cs.bham.ac.uk/~gkt)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
Very large distributed data structures are used more than ever before
in the deployment of applications such as distributed simulations,
distributed virtual environments, massively multiplayer online games,
sensor networks, interactive media, and collaborative manufacturing and
engineering environments. As these applications become larger, more
data-intensive and latency-sensitive, scalability becomes a crucial
element for their successful deployment, presenting engineering
challenges for the design of the underlying infrastructure. The talk
will present work we have been doing in Birmingham for the last few
years on adaptive algorithms which aim to achieve scalability by
adapting to ever-changing application demands.
--------------------------------
Date and time: Tuesday 16th March 2010 at 10:00
Location: Room 124, School of Computer Science
Title: Motor Skills Learning for Robotics
Speaker: Jan Peters
(http://www.kyb.mpg.de/~jrpeters)
Institution: Max Planck Institute for Biological Cybernetics
(http://www.kyb.mpg.de)
Host: Jeremy Wyatt
Abstract:
Autonomous robots that can assist humans in situations of daily life
have been a long standing vision of robotics, artificial intelligence,
and cognitive sciences. A first step towards this goal is to create
robots that can learn tasks triggered by environmental context or
higher level instruction. However, learning techniques have yet to
live up to this promise, as only a few methods manage to scale to
high-dimensional manipulator or humanoid robots. In this talk, we
investigate a general framework suitable for learning motor skills in
robotics which is based on the principles behind many analytical
robotics approaches. It involves generating a representation of motor
skills by parameterized motor primitive policies acting as building
blocks of movement generation, and a learned task execution module
that transforms these movements into motor commands. We discuss
learning on three different levels of abstraction: learning of
accurate control, needed to execute movements; learning of motor
primitives, needed to acquire simple movements; and learning of the
task-dependent "hyperparameters" of these motor primitives, which
allows learning complex tasks. We discuss task-appropriate learning
approaches for
imitation learning, model learning and reinforcement learning for
robots with many degrees of freedom. Empirical evaluations on several
robot systems illustrate the effectiveness and applicability of the
approach to learning control on an anthropomorphic robot arm.
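One widely used way to parameterize motor primitive policies is the
dynamic movement primitive; here is a minimal one-dimensional sketch in
Python (our illustration of the general idea, not code from the talk;
the gains and basis functions are arbitrary choices):
    import numpy as np
    def rollout(g, x0, weights, tau=1.0, dt=0.01, alpha=25.0):
        """Integrate a 1-D dynamic movement primitive toward goal g."""
        beta, a_s = alpha / 4.0, 3.0
        x, v, s, path = x0, 0.0, 1.0, []
        centers = np.linspace(0, 1, len(weights))
        for _ in range(int(1.0 / dt)):
            psi = np.exp(-50.0 * (s - centers) ** 2)    # basis functions
            f = s * (g - x0) * (psi @ weights) / (psi.sum() + 1e-10)
            v += dt / tau * (alpha * (beta * (g - x) - v) + f)
            x += dt / tau * v                           # transformed system
            s += dt / tau * (-a_s * s)                  # phase decays to 0
            path.append(x)
        return path
    path = rollout(g=1.0, x0=0.0, weights=np.array([0.0, 2.0, -1.0, 0.0]))
    print(round(path[-1], 3))   # the weights shape the path; x ends near g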
Bio:
Jan Peters is a senior research scientist and heads the Robot Learning
Lab (RoLL) at the Max Planck Institute for Biological Cybernetics
(MPI) in Tuebingen, Germany. He graduated from University of Southern
California (USC) with a Ph.D. in Computer Science. He holds two German
M.S. degrees in Informatics and in Electrical Engineering (from Hagen
University and Munich University of Technology) and two M.S. degrees
in Computer Science and Mechanical Engineering from USC. Jan Peters
has been a visiting researcher at the Department of Robotics at the
German Aerospace Research Center (DLR) in Oberpfaffenhofen, Germany,
at Siemens Advanced Engineering (SAE) in Singapore, at the National
University of Singapore (NUS), and at the Department of Humanoid
Robotics and Computational Neuroscience at the Advanced
Telecommunication Research (ATR) Center in Kyoto, Japan. His research
interests include robotics, nonlinear control, machine learning,
reinforcement learning, and motor skill learning.
--------------------------------
Date and time: Thursday 18th March 2010 at 16:00
Location: UG40, School of Computer Science
Title: Why do philosophers worry about mathematical knowledge?
Speaker: Mary Leng
(http://pcwww.liv.ac.uk/~mcleng)
Institution: Department of Philosophy, University of Liverpool
(http://www.liv.ac.uk/philosophy)
Host: Aaron Sloman
Abstract:
We all know lots of mathematical truths: that 2 + 2 = 4; that there
are infinitely many prime numbers; that pi is irrational ... .
Indeed, unlike most interesting truths about empirical matters of
fact, most of these mathematical truths are known, or at least are
knowable, with certainty. That, at least, is what most of us would
assume. Why is it, then, that some philosophers persist in worrying
not just that our mathematical knowledge may not be certain, but that
we may have no mathematical knowledge at all? This talk will consider
the reasons why philosophers do, and indeed should, worry about
mathematical knowledge (while reassuring mathematicians that these
worries need not stop them from doing what they do best - proving
theorems).
--------------------------------
Date and time: Thursday 25th March 2010 at 12:00
Location: The Law Moot Room 219, Law Building
Title: Object Detection, Recognition and Tracking in Open-Ended
Learning Scenarios
Speaker: Michael Zillich
(http://users.acin.tuwien.ac.at/mzillich)
Institution: ACIN Institute of Automation and Control, Vienna
University of Technology
(http://www.acin.tuwien.ac.at)
Host: Aaron Sloman
Abstract:
Perceiving objects, i.e. segmenting them from a background scene,
tracking them over short to medium time spans, and remembering and
recognising them over longer periods of time, is a recurring task in
many robotics scenarios and one that has spawned many solutions.
We view the task of perceiving objects in the context of a cognitive
robotics framework. By cognitive we mean a system that is able to
reflect upon its own knowledge and gaps therein and which is able to
plan information gathering actions accordingly. So we have an open-ended
learning scenario, where the robot learns with varying amounts of tutor
interaction. In the following we consider a scenario where a tutor shows
a new object to the robot within a learning setup that is intended to
make things easy initially, i.e. the tutor basically puts the object on
a table and says something like ``This is a tea box.'' The system
detects this new object and starts tracking it. The tutor then picks up
the object and shows it from different sides and the system learns the
different object views while tracking. The system is now able to
recognise the learned object and re-initialise the tracker in more
general scenes, with all the background clutter and varying lighting
that are typical of robotic scenarios. To this end we employ a
combination of edge-based detection of basic geometric shapes, fast
edge-based particle filter tracking and SIFT-based object recognition.
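As a generic sketch of the particle-filter step in such a tracker
(illustrative Python with a Gaussian likelihood; the tracker described
in the talk uses edge-based likelihoods instead):
    import numpy as np
    rng = np.random.default_rng(0)
    def particle_filter_step(particles, weights, observation, noise=0.1):
        # Predict: propagate each pose hypothesis with process noise.
        particles = particles + rng.normal(0.0, noise, particles.shape)
        # Update: reweight by the likelihood of the new observation.
        d2 = np.sum((particles - observation) ** 2, axis=1)
        weights = weights * np.exp(-0.5 * d2 / noise**2)
        weights /= weights.sum()
        # Resample to concentrate particles on likely object poses.
        idx = rng.choice(len(particles), len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))
    particles = rng.uniform(-1, 1, (500, 2))    # object (x, y) hypotheses
    weights = np.full(500, 1.0 / 500)
    particles, weights = particle_filter_step(
        particles, weights, np.array([0.3, 0.4]))
    print(particles.mean(axis=0))               # estimate near (0.3, 0.4)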
--------------------------------
Date and time: Thursday 25th March 2010 at 16:00
Location: UG40, School of Computer Science
Title: Microsystems and data processing
Speaker: Mike Ward
(http://www.eng.bham.ac.uk/mechanical/about/people_ward.shtml)
Institution: School of Mechanical Engineering, University of
Birmingham
(http://www.eng.bham.ac.uk/mechanical/)
Host: Peter Tino
Abstract:
In this talk I will describe some of the micro and nanotechnology
sensor projects that we have been involved with over the last five
years. Many of the challenges presented in these projects have been
based around dealing with data from an array of sensors that are noisy
and may best be described as a random array. I will also describe some
of the multi physics FEA software that we can use to model both our
sensors and the environment that the sensors are monitoring. Finally, I
will present our work on GE optimisation of sensor design and show
how we are trying to use GE to help interpret the data generated by
our sensors.
--------------------------------
Date and time: Thursday 29th April 2010 at 16:00
Location: UG40, School of Computer Science
Title: Aims, Objectives and Guidelines for PhD students
Speaker: Tim Kovacs
(http://www.cs.bris.ac.uk/~kovacs/)
Institution: Department of Computer Science, University of Bristol
(http://www.cs.bris.ac.uk/)
Host: Manfred Kerber
Abstract:
I suspect there are common misconceptions about how and even why to do a
PhD. This talk sets out my ideas on the subject, lists objectives a PhD
student should work toward and suggests guidelines for activities PhD
students should undertake in order to get the most out of their studies.
See http://www.cs.bris.ac.uk/Teaching/learning/phd-guidelines.html
--------------------------------
Date and time: Thursday 6th May 2010 at 16:00
Location: UG40, School of Computer Science
Title: Privacy Challenge and Achievement in Trusted Computing
Speaker: Liqun Chen
(http://www.hpl.hp.com/personal/Liqun_Chen/)
Institution: Hewlett-Packard Laboratories, Bristol
(http://www.hpl.hp.com/bristol/)
Host: Guilin Wang
Abstract:
Let us consider a typical internet scenario: A user Alice using her
computer accesses two online services run by Bob and Charlie
respectively. In order to protect their systems from being abused by
malicious users, both Bob and Charlie want some assurance that Alice's
computer can be trusted, such that it contains a Trusted Platform
Module (TPM) which reports platform configuration in a
tamper-resistant manner. Alice is happy to let Bob and Charlie
authenticate her TPM, but she does not want them to know which TPM she
is using or to find out that they are talking to the same
TPM. Furthermore, in agreement with Bob, Alice allows Bob to link
multiple messages from her TPM, but she doesn't give Charlie this
privilege. This scenario requires reconciling the seemingly
contradictory goals of security and privacy, of authentication and
anonymity, and of freedom from system abuse versus user-controllable
information release.
In this talk we are going to introduce a special digital signature
scheme, namely Direct Anonymous Attestation (DAA), which can
simultaneously achieve these goals. The talk will cover the DAA
development from its original scheme in the existing TPM, installed in
more than 100 million enterprise-class PCs, to the most recent one
proposed for the next generation of TPM.
--------------------------------
Date and time: Thursday 3rd June 2010 at 16:00
Location: UG40, School of Computer Science
Title: Neural dynamic motor primitives for learning of Rich
Motor Skills
Speaker: Jochen J. Steil
(http://www.cor-lab.de/corlab/cms/user/9)
Institution: Research Institute for Cognition and Robotics - CoR-Lab,
Universitaet Bielefeld, Germany
(http://www.cor-lab.de/)
Host: Jeremy Wyatt
Abstract:
Compared to animals and humans, the motor skills of today's robots
still must be qualified as poor. Their behavioral repertoire is
typically limited to a narrow set of carefully engineered motor
patterns that operate a rigid mechanics. We feature the new AMARSi
Integrated Project that aims at a qualitative jump toward biological
richness of robotic motor skills Specifically, we introduce neural
dynamic motion primitives realized as recurrent neural attractor
networks with a high degree of similarity to the neural architecture
of the cerebellum. In our framework, kinematic mappings of robots
including high-DOF redundant humanoids are efficiently learned from
sample trajectories gained i.e. from imitation or kinesthetic
teaching. The resulting dynamic network then performs attractor based
motion generation without utilizing any explicit representation of the
geometric form of the sample trajectories. Examples show learning and
trajectory generation for different platforms including Honda's
humanoid research robot and the child-like robot iCub. In conclusion
we discuss implications our the motor learning framework on the
information structure of general motor learning problems and
consequently on human motor learning.
--------------------------------
Date and time: Thursday 10th June 2010 at 16:00
Location: UG05 - Learning Centre
Title: Compressed Fisher Linear Discriminant Analysis:
Classification of Randomly Projected Data
Speaker: Robert Durrant
(http://www.cs.bham.ac.uk/~durranrj/)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk)
Abstract:
I will be talking about my work with Ata Kaban on classification in
randomly projected domains, specifically our analysis of Fisher's Linear
Discriminant (FLD) classifier in randomly projected data spaces. Unlike
previous analyses of other classifiers in this setting, we avoid the
unnatural effects that arise when one insists that all pairwise
distances are approximately preserved under projection. We impose no
sparsity or underlying low-dimensional structure constraints on the
data; we instead take advantage of the class structure inherent in the
problem. We obtain a reasonably tight upper bound on the estimated
misclassification error on average over the random choice of the
projection, which, in contrast to early distance preserving approaches,
tightens in a natural way as the number of training examples increases.
It follows that, for good generalization of FLD, the required projection
dimension grows logarithmically with the number of classes. We also show
that the error contribution of a covariance misspecification is always
no worse in the low-dimensional space than in the initial
high-dimensional space. We contrast our findings to previous related
work, and discuss our insights.
This work is to be published in two papers accepted for presentation at
KDD and ICPR later this year.
If time permits I will also discuss our recent work on finite sample
effects in this setting, in particular the (exact) probability that the
random projection of the data sends the true and sample means of a class
to opposite sides of the decision boundary. This work is summarised in a
poster to be presented at AIStats, and a technical report detailing the
proof is currently in preparation.
The papers and poster can all be found on my web page at
www.cs.bham.ac.uk/~durranrj
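The setting analysed is simple to reproduce in outline: project the
data with a random Gaussian matrix, then run FLD in the compressed
space. A minimal Python sketch (the dimensions and Gaussian classes are
assumptions for illustration; the paper's bounds are not reproduced):
    import numpy as np
    rng = np.random.default_rng(0)
    d, k, n = 1000, 20, 200                     # ambient dim, projected dim
    X0 = rng.normal(0.0, 1.0, (n, d))           # class 0 sample
    X1 = rng.normal(0.5, 1.0, (n, d))           # class 1, shifted mean
    R = rng.normal(0.0, 1.0 / np.sqrt(k), (d, k))  # random projection
    Y0, Y1 = X0 @ R, X1 @ R                     # compressed data
    # FLD in the projected space: w = Sigma^{-1} (mu1 - mu0).
    mu0, mu1 = Y0.mean(axis=0), Y1.mean(axis=0)
    Sigma = np.cov(np.vstack([Y0 - mu0, Y1 - mu1]).T)
    w = np.linalg.solve(Sigma, mu1 - mu0)
    threshold = w @ (mu0 + mu1) / 2.0
    test = rng.normal(0.5, 1.0, (100, d)) @ R   # fresh class-1 points
    print(np.mean(test @ w > threshold))        # fraction correctly labelled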
--------------------------------
Date and time: Wednesday 16th June 2010 at 17:00
Location: UG40 - School of Computer Science
Title: Mechanical Impedance in humans and robots, the key to
understanding machine intelligence?
Speaker: William Harwin
(http://www.personal.reading.ac.uk/~shshawin/)
Institution: Cybernetics, School of Systems Engineering, University of
Reading
(http://www.reading.ac.uk/cybernetics/)
Host: Jeremy Wyatt
Abstract:
In the last four decades of research, robots have made very little
progress and are still largely confined to industrial manufacture and
cute toys, yet in the same period computing has followed Moore's Law,
with processing capacity doubling roughly every two years. So why is
there no Moore's Law for robots? Two areas stand out as worthy of
research to speed up progress. The first is a greater understanding of
how human and animal brains control movement; the second is to build a
new generation of robots that have a greater sense of touch (haptics)
and can adapt to the environment as it is encountered.
Humans are able to control the force of contact remarkably well, and
in particular can adjust their impedance to meet the demands of the
task. This is despite a slow processing system in which a reaction
time of 150ms is considered fast. A better understanding of these
processes of interaction has allowed us to build haptic interfaces
able to mimic these interactions, and under the right circumstances
these are remarkably convincing. However, there are still technology
limitations, so we often require the user to suspend disbelief about
the realism of the virtual world, and yet they do so with remarkably
little difficulty. Ideas from the sciences of haptic interactions and
human cognitive-motor systems may also help to influence the design of
new generations of rehabilitation robots for the treatment of
neurological conditions such as stroke, where the treatment and the
assessment can be done in parallel.
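Mechanical impedance here refers to the relation a controller imposes
between motion error and contact force. A standard spring-damper
rendering of the concept (our illustration, not a model from the talk)
is, in LaTeX notation:
    F = K\,(x_d - x) + B\,(\dot{x}_d - \dot{x})
where K is the rendered stiffness and B the damping; adjusting K and B
adjusts the impedance that the device or limb presents to its
environment, which is the quantity humans appear to modulate.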
--------------------------------
Date and time: Tuesday 22nd June 2010 at 11:00
Location: LG33, Teaching and Learning Centre
Title: Fly-by-Agent: Controlling a Pool of UAVs via a
Multi-Agent System
Speaker: Jeremy Baxter
(http://www.qinetiq.com/home/defence/defence_solutions/aerospace/unmanned_air_systems/autonomy.html)
Institution: QinetiQ
(http://www.QinetiQ.com)
Host: Aaron Sloman
Abstract:
The talk will describe how a variety of Artificial Intelligence
techniques have been combined to provide a system that allows a single
operator to control a team of unmanned air vehicles (UAVs). It will
describe the multi-agent system that interprets the operator's commands
and continually adapts the plans of the vehicles to carry out complex
and interdependent tasks. The agents perform team and individual task
planning, co-ordinate the execution of multiple agents and respond
rapidly to changes in the environment. A series of test flights will be
described during which a pilot controlled both his own aircraft and a
team of UAVs. The challenges which had to be overcome to get an
artificial intelligence system out of the laboratory and onto an
aircraft will be discussed.
About the speaker:
Dr Jeremy Baxter is a Lead Researcher in the UAVs and Autonomous
Systems Group, part of the Aerospace Business Unit at QinetiQ. Jeremy
has a first class honours degree in Engineering and a Ph.D. in Fuzzy
Logic Control of Automated Vehicles. Jeremy joined QinetiQ (DERA) in
1994 and his initial work focussed on the application of Artificial
Intelligence (AI) techniques to battlefield simulation and the
development of Multi Agent systems. This included the development of a
robust planning and execution framework for groups of vehicles,
capable of re-organising in the face of failures and losses. From 2001
to 2003 he was responsible for providing the autonomous navigation
component of the Unmanned Ground Vehicle Demonstrator program. Since
2002 he has led a team developing cooperative behaviours for groups
of Unmanned Combat Air Vehicles. This included numerous high fidelity
synthetic environment trials, test flights on the QinetiQ Surrogate
UAV in 2006/2007 and being the lead designer for the Reasoning layer
of the MOD UCAV demonstrator system, Taranis. Jeremy is the principal
author of several scientific papers on agent based decision-making and
is a Chartered Engineer and Fellow of the Institute of Engineering and
Technology. His research interests are primarily in multi-agent
systems, plan execution architectures, path planning algorithms and
robust, real-time planning and decision making under uncertainty. He
was involved in a close collaboration with the AI group in the School
of Computer Science for several years from 1994.
--------------------------------
Date and time: Thursday 23rd September 2010 at 16:00
Location: UG 06, Learning Centre
Title: Hidden abstract structures of elementary mathematics
Speaker: Alexandre Borovik
(http://www.maths.manchester.ac.uk/~avb/)
Institution: School of Mathematics, The University of Manchester
(http://www.maths.manchester.ac.uk)
Host: Aaron Sloman
Abstract:
My talk can be classified as a talk on the psychology of mathematical
abilities, but given from a mathematician's point of view. I will
discuss some hidden structures of elementary school mathematics
(frequently quite sophisticated and non-elementary) and conjectural
cognitive mechanisms which allow some children to feel the presence
of these structures.
--------------------------------
Date and time: Thursday 14th October 2010 at 16:00
Location: UG05, Learning Centre
Title: Computational Methods for Aerospace Applications
Speaker: Prof. Nigel Weatherill FREng, DSc, Head of College
(http://www.mgmtgroup.bham.ac.uk/college_heads/weatherill.shtml)
Institution: College of Engineering and Physical Sciences
(http://www.birmingham.ac.uk/university/colleges/eps/index.aspx)
Host: Iain Styles
--------------------------------
Date and time: Thursday 28th October 2010 at 16:00
Location: UG06, Learning Centre
Title: The CamCube project: Rethinking the Data Center
Speaker: Ant Rowstron
(http://research.microsoft.com/en-us/um/people/antr/)
Institution: Microsoft Research Cambridge
(http://research.microsoft.com/en-us/labs/cambridge/default.aspx)
Host: George Theodoropoulos
Abstract:
Why do we build data centers in the way that we do? In this
talk I will provide a high-level overview of current data center
architectures used by companies like Microsoft, Google, Yahoo, Amazon
and so forth. I will then describe some of the work we are currently
doing in the CamCube project, which aims to build, from the ground up, a
new data center cluster architecture to support workloads and
applications that are run in data centers. CamCube liberally borrows
ideas from High Performance Computing, Distributed Systems and
Networking and represents a very different design point that blatantly
violates many accepted norms of data center cluster design. The
talk will motivate the design, and then show a number of example
applications that perform significantly better using the CamCube
architecture, including a MapReduce-like application. The talk will be
aimed at a general CS audience.
--------------------------------
Date and time: Thursday 4th November 2010 at 16:00
Location: UG05, Learning Centre
Title: Taming the Malicious Web: Avoiding and Detecting
Web-based Attacks
Speaker: Marco Cova
(http://www.cs.bham.ac.uk/~covam/)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk/)
Abstract:
The world wide web is an essential part of our infrastructure and a
predominant means for people to interact, do business, and participate
in democratic processes. Unfortunately, in recent years, the web has
also become a more dangerous place. In fact, web-based attacks are now
a prevalent and serious threat. These attacks target both web
applications, which store sensitive data (such as financial and
personal records) and are trusted by large user bases, and web clients,
which, after a compromise, can be mined for private data or used as
drones of a botnet.
In this talk, we will present an overview of our techniques to detect,
analyze, and mitigate malicious activity on the web. In particular, I
will present a system, called Wepawet, which targets the problem of
detecting web pages that launch drive-by-download attacks against their
visitors. Wepawet visits web pages with an instrumented browser and
records events that occur during the interpretation of their HTML and
JavaScript code. This observed activity is analyzed using anomaly
detection techniques to classify web pages as benign or malicious. We
made our tool available as an online service, which is currently used
by several thousand users every month.
We will also discuss techniques to automatically detect vulnerabilities
and attacks against web applications. In particular, we will focus on
static analysis techniques to identify ineffective sanitization
routines, which found tens of vulnerabilities in several real-world
applications.
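The abstract does not spell out Wepawet's features or models, so the
following is only a minimal sketch of the anomaly-detection idea, with
hypothetical event counts: learn the typical profile of benign pages,
then flag pages that stray far from it:

    import numpy as np

    def fit(benign):
        """Per-feature mean and spread of event counts seen on benign pages."""
        benign = np.asarray(benign, float)
        return benign.mean(0), benign.std(0) + 1e-9

    def anomaly_score(page, mean, std):
        """Largest z-score across features: distance from the benign profile."""
        return np.max(np.abs((np.asarray(page, float) - mean) / std))

    # Hypothetical features: [eval() calls, document.write calls, iframes, redirects]
    benign_pages = [[2, 1, 0, 1], [3, 0, 1, 0], [1, 2, 0, 1], [2, 1, 1, 0]]
    mean, std = fit(benign_pages)
    suspect = [40, 12, 6, 3]                # heavy eval/iframe use: drive-by pattern
    print("malicious" if anomaly_score(suspect, mean, std) > 5.0 else "benign")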
--------------------------------
Date and time: Thursday 11th November 2010 at 16:00
Location: UG06, Learning Centre
Title: From Solitary Individuals to Self-Organising Aggregation
Speaker: Shan He
(http://www.cs.bham.ac.uk/~szh/)
Institution: School of Computer Science, University of Birmingham
(http://www.cs.bham.ac.uk/)
Host: TBA
Abstract:
Animal aggregations such as bird flocks, fish schools and ant swarms
are a fascinating phenomenon in nature. But how do these animals
aggregate and coordinate? Why do they aggregate? How did solitary
animals evolve aggregation through natural selection? In this talk I
will introduce my work attempting to address these questions using
agent-based modelling and evolutionary artificial neural networks
(EANNs). Using the model, I will show how solitary individuals evolved
aggregation under the selection pressure of predation at the individual
level. I shall also present a novel social interaction rule evolved by
our EANNs that can generate visually realistic animal aggregation
patterns not previously reported. Finally, the implications of this
study for engineering self-organising systems will be discussed.
--------------------------------
Date and time: Tuesday 23rd November 2010 at 16:00
Location: UG05, Learning Centre
Title: Language, Cognition and Musical Emotions
Speaker: Leonid I. Perlovsky
(http://www.leonid-perlovsky.com/)
Institution: Air Force Research Laboratory, Hanscom AFB, USA, and
Harvard University
Host: Aaron Sloman
Abstract:
What are the mechanisms of interaction between language and cognition?
Are we thinking with words, or do we use language to communicate ready
thoughts? Why do kids learn language by 5, but cannot act like adults?
What motivates us to combine language and cognition, and what emotions
are involved in these motivations? The talk presents mathematically and
cognitively grounded answers. First I briefly review past mathematical
difficulties of modeling the mind and a new mathematical technique of
dynamic logic (DL), which overcomes these difficulties. Mind mechanisms
of concepts, emotions and instincts are described; they are inseparable
from perception and cognition. Engineering applications illustrate
orders-of-magnitude improvements in pattern recognition, data mining,
fusion and financial predictions. Brain imaging experiments demonstrate
that DL models actual perception mechanisms in the human brain. DL is
related to the perceptual symbol system (PSS), and the DL processes are
related to simulators in PSS. DL is extended to joint operations of
language and cognition. It turns out that these human abilities could
only have evolved jointly.
The second part of the talk moves to future research directions: the
roles of beauty, music and sublimity in the mind, cognition and
evolution. Arguments are presented that an unusual type of emotion,
related to cognitive dissonance, motivates combining language and
cognition. A hypothesis is suggested that these emotions are related to
musical emotions. Future psychological and computer-science experiments
to verify this hypothesis are discussed.
--------------------------------
Date and time: Thursday 2nd December 2010 at 16:00
Location: UG05, Learning Centre
Title: Routing Protocols for Untrusted Networks
Speaker: Michael Rogers
(http://www.cs.ucl.ac.uk/people/M.Rogers.html)
Institution: Dept of Computer Science, University College London
(http://www.cs.ucl.ac.uk/)
Host: Tom Chothia
Abstract:
In open distributed systems such as peer-to-peer overlays and mobile ad
hoc networks, messages may need to be routed across an unknown and
changing topology where it is not possible to establish the identities
or trustworthiness of all the nodes involved in routing. In this talk I
will describe two address-free routing protocols that use feedback in
the form of unforgeable acknowledgements to discover dependable routes
without any node needing to know the structure or membership of the
network beyond its immediate neighbours. The protocols are designed
to survive faulty or misbehaving nodes and to reveal minimal information
about the communicating parties, making them suitable for use in
censorship-resistant communication. One of the protocols additionally
creates an incentive for selfish users to cooperate in routing.
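The abstract does not say how the acknowledgements are made
unforgeable; one standard construction, sketched here as an assumption,
is a keyed MAC under a secret shared by source and destination, which
intermediate relays cannot compute:

    import hmac, hashlib, os

    key = os.urandom(32)                    # secret shared by source and destination

    def make_ack(key, message_id):
        """Destination proves receipt by MACing the message id."""
        return hmac.new(key, message_id, hashlib.sha256).digest()

    def verify_ack(key, message_id, ack):
        """Source checks the acknowledgement; relays cannot forge it."""
        expected = hmac.new(key, message_id, hashlib.sha256).digest()
        return hmac.compare_digest(expected, ack)

    msg_id = os.urandom(16)
    ack = make_ack(key, msg_id)                       # produced by the destination
    assert verify_ack(key, msg_id, ack)               # accepted by the source
    assert not verify_ack(key, os.urandom(16), ack)   # useless for any other message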
--------------------------------
Date and time: Thursday 9th December 2010 at 16:00
Location: UG06, Learning Centre
Title: Computational Approaches to Understanding Complex
Biological Systems
Speaker: Francesco Falciani
(http://biosciences-people.bham.ac.uk/About/staff_profiles_contact.asp?ID=76)
Institution: School of Biosciences, University of Birmingham
Host: Jon Rowe
Abstract:
The advent of functional genomics technologies has revolutionised the
way we investigate biological systems. These technologies provide
quantitative measurements of tens of thousands of cellular molecular
components in single experiments and at a reasonable cost. This
unprecedented amount of data has stimulated the development of
computational methodologies for identifying the relevant
genes/proteins/metabolites and for inferring their relationships in the
context of a mechanism. This presentation will show some of the
approaches that we have developed in the last few years and present
some of their applications, with specific reference to clinical
problems.
--------------------------------
Date and time: Thursday 3rd February 2011 at 16:00
Location: UG40, School of Computer Science
Title: Efficient Computation of the Shapley Value for Centrality
in Networks
Speaker: Ravindran Balaraman
Institution: IIT Madras
Host: Jeremy Wyatt
Abstract:
The Shapley Value is arguably the most important normative
solution concept in coalitional games. One of its applications is in
the domain of networks, where the Shapley Value is used to measure the
relative importance of individual nodes. This measure, which is called
node centrality, is of paramount significance in many real-world
application domains including social and organisational networks,
biological networks, communication networks and the internet. Whereas
computational aspects of the Shapley Value have been analyzed in the
context of conventional coalitional games, this work presents the
first such study of the Shapley Value for network centrality. Our
results demonstrate that this particular application of the Shapley
Value presents unique opportunities for efficiency gains. In
particular, we develop exact analytical formulas for computing Shapley
Value based centralities in both weighted and unweighted networks.
These formulas not only provide an efficient (polynomial time) and
error-free way of computing node centralities, but their surprisingly
simple closed form expressions also offer intuition into why certain
nodes are relatively more important to a network.
[Joint work with Karthik V. Aadithya.]
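For the simplest game in this line of work (the value of a coalition is
the number of nodes inside or adjacent to it, on an unweighted graph),
the published closed form reduces to a sum over each node's closed
neighbourhood; a sketch under that assumption:

    from collections import defaultdict

    def shapley_degree_centrality(edges):
        """Exact O(V + E) Shapley centrality for the 'coverage' game:
        SV(v) = sum over u in N(v) united with {v} of 1 / (1 + deg(u)),
        versus an exponential-time naive Shapley computation."""
        adj = defaultdict(set)
        for a, b in edges:
            adj[a].add(b)
            adj[b].add(a)
        return {v: sum(1.0 / (1 + len(adj[u])) for u in adj[v] | {v})
                for v in adj}

    # A hub touching many low-degree nodes scores highest.
    edges = [("hub", x) for x in "abcd"] + [("a", "e"), ("e", "f")]
    print(shapley_degree_centrality(edges))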
--------------------------------
Date and time: Thursday 10th February 2011 at 16:00
Location: UG40, School of Computer Science
Title: Biomimetic Robotics with a Light Touch
Speaker: Tony Prescott
(http://www.shef.ac.uk/psychology/staff/academic/tony-prescott.html)
Institution: Active Touch Lab, Dept of Psychology, The University of
Sheffield
(http://www.sheffield.ac.uk/psychology/research/groups/atlas)
Host: Jeremy Wyatt
Abstract:
When animals, including humans, sense the world they usually do so in a
purposive and information-seeking way that is often referred to as
"active sensing". We aim to understand biological active sensing
systems in the domain of touch and to apply the insights gained to
develop better sensory systems for robots. We have focused on the
vibrissal (whisker) system of rodents, which we are investigating
through a combination of (i) ethological studies of behaving animals, (ii)
computational neuroscience models of the neural circuits involved in
vibrissal processing, and (iii) biomimetic robots embodying many of the
characteristics of whiskered animals in their design and control. This
talk will present converging lines of evidence, from these different
research strands, for the importance of active control in tactile
sensing. In particular, it will show that vibrissal sensing in animals
takes advantage of control strategies that allow the exploration of
surfaces using a light touch. Experiments with robots indicate that
such strategies promote the extraction of surface invariants whilst
limiting the dynamic range of touch signals in a manner that can boost
sensitivity. These results will be used as an example to illustrate how
experimental and robotic approaches can operate together to advance our
understanding of complex behaving systems.
--------------------------------
Date and time: Thursday 17th February 2011 at 16:00
Location: UG40, School of Computer Science
Title: Semantic Category Theory and the Automated Resolution of
Ambiguity
Speaker: John St. Quinton
Institution: Zetetic Systems Ltd. & School of Electronic, Electrical
and Computer Engineering
Host: Thorsten Schnier
Abstract:
A major hurdle in the path of developing an 'intelligent' machine is
human language. And a major hurdle in human language is ambiguity.
John will describe his discovery of the four distinct data types, and
combination modes, used by humans to construct and communicate
their thoughts. One combination mode in particular is a source of
extremely sophisticated, often almost transparent, and thereby
beguilingly persuasive, ambiguity.
These discoveries are based on observation, and the associated
scientific theory John created, 'Semantic Category Theory', can be
refuted by observation. The data that Semantic Category Theory
accounts for are sentences, written or spoken. There is no shortage of
data with which to test the theory.
John will begin his talk by briefly outlining his research and
development work in Cybernetics; from the late 1960's in Airborne
Flight Control at Elliott Bros., his development of the world's
first onboard maritime collision avoidance system for Decca Radar
Research Laboratories (to prevent further 'Torrey Canyon' and suchlike
disasters), his 1970's PhD research into Automated Heuristic
Acquisition in the Department of Cybernetics at Reading
University, and in the 1980's, ADCIS, the European
Air Defense C3I System developed at Farnborough.
In his spare time, John continued his own specialist interest in
Machine Intelligence and since the late 1990's has applied
Semantic Category Theory, and its associated algorithmic analytical
technique, 'Semantic Category Analysis' to a wide range of
application domains, including Philosophy and Pure Mathematics, and
pursued his desire to implement the Semantic Category Analysis
algorithm as a language generation and interpretation system for
Machine Intelligence.
John will illustrate how Semantic Category Analysis can be used, not
only to solve a wide range of perplexing problems - from pinpointing
the precise source of an erroneous argument - to resolving
conundrums and paradoxes, but also, how the 'semantic descriptions'
generated by the Semantic Category Analysis algorithm can equally be
used to create entirely original 'problems'.
The talk will conclude by demonstrating how Semantic Category
Analysis can be used by anyone to become their own 'Zen Master'.
--------------------------------
Date and time: Thursday 3rd March 2011 at 14:00
Location: LG32, Learning Centre
Title: Sensorimotor Systems in Insects and Robots
Speaker: Barbara Webb
Institution: Informatics, University of Edinburgh
Host: Aaron Sloman
Abstract:
Despite their relatively small brains, the sensorimotor tasks faced
and solved by insects are comparable in a number of ways to those
faced by vertebrates and humans. For example they need to smoothly
combine or switch between different responses depending on context,
and they need to distinguish re-afferent input (i.e. stimuli caused
by their own movement) from external disturbances, which may involve
predictive processes. There has also been much recent interest in the
learning capabilities of insects and what neural architecture is
needed to support their behavioural flexibility. Our approach to these
problems combines behavioural experiments with modelling approaches
that utilise realistic input and output constraints by implementing
the hypothesised neural control circuits on robots.
--------------------------------
Date and time: Thursday 3rd March 2011 at 16:00
Location: UG40, School of Computer Science
Title: Routing Protocols for Untrusted Networks
Speaker: Michael Rogers
Institution: Dept of Computer Science, University College London
Host: Tom Chothia
Abstract:
In open distributed systems such as peer-to-peer overlays and mobile ad
hoc networks, messages may need to be routed across an unknown and
changing topology where it is not possible to establish the identities
or trustworthiness of all the nodes involved in routing. In this talk I
will describe two address-free routing protocols that use feedback in
the form of unforgeable acknowledgements to discover dependable routes
without any node needing to know the structure or membership of the
network beyond its immediate neighbours. The protocols are designed
to survive faulty or misbehaving nodes and to reveal minimal
information about the communicating parties, making them suitable for
use in
censorship-resistant communication. One of the protocols additionally
creates an incentive for selfish users to cooperate in routing.
--------------------------------
Date and time: Thursday 10th March 2011 at 16:00
Location: UG40, School of Computer Science
Title: ICT Research and Ethics
Speaker: Simon Rogerson
Institution: Centre for Computing and Social Responsibility, De
Montfort University
Host: Nick Blundell
Abstract:
Increasingly, Information and Communication Technology (ICT) impacts on
the lives of more and more people. Those involved in providing the ICT
products and services have obligations and responsibilities towards a
range of stakeholders regarding the acceptability of such products and
services. It is often overlooked that ICT researchers have such
obligations and responsibilities. The substantial work done in the field
of ICT Ethics over the past 30 years can be used to help ICT researchers
understand these obligations and responsibilities and help them use the
ethical dimension of research in a proactive manner. This talk will draw
upon experiences from EU FP7 research projects to discuss what the
ethical issues might be within ICT research, how these might be
addressed and reported.
--------------------------------
Date and time: Thursday 7th April 2011 at 16:00
Location: UG40, School of Computer Science
Title: Multithreaded Reconfigurable Hardware - Programming and
OS Integration
Speaker: Enno Luebbers
Institution: EADS Innovation Works, Munich
Host: Peter Lewis
Abstract:
Modern platform FPGAs integrate programmable logic with dedicated
microprocessors and present powerful implementation platforms for
complete reconfigurable systems-on-chip. However, traditional design
techniques that view specialized hardware circuits as passive
coprocessors are ill-suited for programming these reconfigurable
computers. Moreover, the promising feature of partial reconfiguration
has yet to be embraced by a pervasive programming paradigm.
This talk covers recent work in the new area of multithreaded
programming of reconfigurable logic devices. After introducing the
concept of reconfigurable Systems-on-Chip (rSoC) in general, it
presents an execution environment called ReconOS that is based on
existing embedded operating systems (such as Linux and eCos) and
extends the multithreaded programming model--already established and
highly successful in the software domain--to reconfigurable hardware.
Using
threads and common synchronization and communication services as an
abstraction layer, ReconOS allows for the creation of portable and
flexible multithreaded HW/SW applications for CPU/FPGA systems.
--------------------------------
Date and time: Thursday 21st April 2011 at 14:00
Location: UG40, School of Computer Science
Title: Support Vector Machines with Hash Kernels in Dependency
Parsing and Text
Speaker: Bernd Bohnet
(http://www.ims.uni-stuttgart.de/~bohnetbd/)
Institution: University of Stuttgart
Host: John Barnden
Abstract:
In recent years, data-driven dependency parsing has become popular to
analyse natural language text. The most important properties of a
dependency parser are high accuracy and short parsing times. For many
applications of parsers, such as dialog systems and machine
translation, parsing times play a crucial role since users are not
willing to wait long for a response from a computer. However, parsing
and training still take quite a long time since parsers are mainly
optimized towards accuracy. We show in this talk that accuracy and fast
parsing times are not a contradiction. We extend a linear support
vector machine (MIRA) with a Hash Kernel, which substantially improves
parsing times. During parsing, a parser creates millions of features
from negative examples, which are usually filtered out due to their
huge number. With the Hash Kernel, we can take these additional
features into account and improve the accuracy too. Data-driven Natural
Language Generation (NLG)
is a second application that can benefit from a Hash Kernel. Surface
Realization is the subtask of NLG that is concerned with the mapping of
semantic input graphs to sentences. We conclude the talk by describing
the transfer of parsing techniques and the Hash Kernel to this
application.
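A minimal sketch of the hash-kernel idea (a plain perceptron update
stands in for MIRA, and the feature templates are invented): every
feature string is hashed directly into a fixed-size weight vector, so
the millions of dynamically generated features need neither a feature
dictionary nor filtering:

    import hashlib

    DIM = 2 ** 20                           # fixed weight vector, however many features occur

    def hash_index(feature):
        """The hash kernel: map any feature string to a bounded index."""
        return int.from_bytes(hashlib.md5(feature.encode()).digest()[:8], "big") % DIM

    weights = [0.0] * DIM

    def score(features):
        return sum(weights[hash_index(f)] for f in features)

    def update(features, direction, lr=0.1):
        """Perceptron-style stand-in for MIRA: push good structures up, bad down."""
        for f in features:
            weights[hash_index(f)] += lr * direction

    # Features of one candidate dependency edge; no dictionary is ever built.
    feats = ["head=saw", "dep=dog", "head_pos=VBD+dep_pos=NN", "dist=2"]
    update(feats, +1)
    print(score(feats))                     # 0.4, barring hash collisions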
--------------------------------
Date and time: Thursday 5th May 2011 at 14:00
Location: Learning Centre UG04
Title: Turning On the Light to See How the Darkness Looks
Speaker: Susan Blackmore
(http://www.susanblackmore.co.uk/)
Host: Aaron Sloman
Abstract:
Given a curious property of introspection, some common assumptions made
about the nature of consciousness may be false.
Inquiring into one's own conscious experience "now" produces different
answers from inquiring into the immediate past. "Now" consciousness
seems to be unified with one conscious self experiencing the contents
of a stream of consciousness. This implies a mysterious or magic
difference between the contents of the stream and the rest of the
brain's unconscious processing.
By contrast, looking back into the immediate past reveals no such
unity, no contents of consciousness or coherent stream, but multiple
backwards threads of different lengths, continuing without reference to
each other or to a unified self. From this perspective there is no
mystery and no magic difference.
I suggest that the difference between conscious and unconscious events
is an illusion created by introspection into the present moment. So is
the persisting self who seems to be looking. Most people do not
introspect this way much of the time, if ever. Yet whenever they do,
the mystery appears. Looking into those times when we are not deluded
is like opening the fridge door to see whether the light is always on
or, as William James put it, turning on the light to see how the
darkness looks.
This seems to be impossible but there may be ways around the problem.
--------------------------------
Date and time: Thursday 5th May 2011 at 16:00
Location: UG40, School of Computer Science
Title: Energy as Syntax
Speaker: Vincent Danos
(http://homepages.inf.ed.ac.uk/vdanos/home_page.html)
Institution: School of Informatics, University of Edinburgh
Host: TBD (NB: the seminar is joint with Biosciences)
Abstract:
To model and analyze decentralized dynamics of high complexity and
connectedness, one has to go beyond basic descriptive tools such as
Markov chains and differential equations. Even representational
challenges can be insurmountable without structured syntaxes. For
example, we
have developed the concepts of rule-based modeling of bio-molecular
networks and the accompanying kappa language. In this talk, I will
discuss how one can program the dynamics of such stochastic graph
rewriting systems by means of local energy functionals. The idea is
that the dynamics is now inferred from the statics (as in MCMC
methods). This leads to less parameter-hungry modeling and meshes well
with statistical mechanical techniques.
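The idea of inferring dynamics from statics can be illustrated in a
much simpler setting than kappa graph rewriting (a toy chain of binary
sites, an assumption made only for this sketch): one writes down just
an energy functional, and Metropolis acceptance turns it into a
stochastic dynamics:

    import math, random

    random.seed(0)

    def energy(s):
        """Local energy functional: neighbouring sites prefer to agree."""
        return -sum(s[i] * s[i + 1] for i in range(len(s) - 1))

    def metropolis_step(s, beta=1.0):
        """Propose a local rewrite (flip one site) and accept with
        probability min(1, exp(-beta * dE)): the dynamics is read off
        the statics, as in MCMC methods."""
        i = random.randrange(len(s))
        t = s.copy()
        t[i] = -t[i]
        dE = energy(t) - energy(s)
        return t if dE <= 0 or random.random() < math.exp(-beta * dE) else s

    state = [random.choice([-1, 1]) for _ in range(50)]
    for _ in range(10000):
        state = metropolis_step(state)
    print("final energy:", energy(state))   # low-energy configurations dominate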
--------------------------------
Date and time: Friday 20th May 2011 at 15:00
Location: UG40, School of Computer Science
Title: From Networks to Markets
Speaker: Dr. Peter Key
(http://research.microsoft.com/en-us/people/peterkey/)
Institution: Microsoft Research
(http://research.microsoft.com/en-us/)
Host: Vivek Nallur
Abstract:
The introduction of new network services or architectures cannot ignore
questions related to economics or incentives. This is manifest in
current protocol tussles in the Internet community and reflected in the
"Net Neutrality" debate. Cloud based services and the rapid growth of
social networks make such questions even more important. We argue that
any resource allocation problem needs to consider incentives as well as
algorithm design. We illustrate this by looking at questions of
multipath routing, congestion control and network pricing using both
Stochastic Modelling and Game Theory.
In fact, network resource allocation problems have an intriguing
connection with sponsored-search auctions (such as those used by
Microsoft Live or Google for ad-sponsored search). We describe this
connection and then switch gear to look at some specific questions
related to auctions, giving examples from two large data sets: snapshots
of Adcenter data, and Forza data. First we show how a simple stochastic
model can give a new way of looking at repeated auctions. Then we
describe a virtual economy based on Forza auctions where users bid with
points for items. We present some preliminary findings and unsolved
problems in this exciting area.
Speaker's Bio
--------------
Peter Key joined Microsoft Research's European Research Centre in
Cambridge, U.K., in 1998, where he is a Principal Researcher. He leads
a newly formed Networks, Economics, and Algorithms team. His current
research interests focus on Networks and Economics, looking at
Ad-auctions, Pricing, Multipath Routing and Routing Games. He was
previously at BT Labs, which he joined in 1982, working in the field of
Teletraffic Engineering and Performance Evaluation, where he was
involved with the development and introduction of DAR (Dynamic
Alternative Routing) into BT's trunk network. At BT he led a
mathematical services group, in 1992 moved into ATM to lead a
performance group, in 1995 led a Performance Engineering team and then
managed the Network Transport area.
Peter Key went to St John's College, Oxford, receiving a BA in
Mathematics in 1978, and took an MSc (at UCL) and a PhD from London
University in 1979 and 1985 respectively, both in Statistics. From
1979 to 1982 he was a Research Assistant in the Statistics and Computer
Science department of Royal Holloway College, London University. He is
a Visiting Fellow at the Statistical Laboratory, Cambridge, and a
Fellow of the Institution of Engineering and Technology (FIET). In 1999
he was Technical co-chair of the 16th International Teletraffic
Congress (ITC), Program co-chair for Sigmetrics 2006, and is TPC chair
for CoNext 2011.
--------------------------------
Date and time: Thursday 26th May 2011 at 16:00
Location: UG40, School of Computer Science
Title: Verification of Multi-Agent Systems
Speaker: Alessio Lomuscio
Institution: Dept of Computing, Imperial College
Host: Mark Ryan
Abstract:
Multi-agent systems are distributed autonomous systems in which the
components, or agents, act autonomously or collectively in order to
reach private or common goals. Logic-based specifications for MAS
typically involve not only their temporal evolution but also other
intensional states, including their knowledge, beliefs, intentions and
their strategic abilities.
This talk will survey recent work carried out on model checking
MAS. Specifically, serial and parallel algorithms for symbolic model
checking for temporal-epistemic logic as well as bounded-model
checking procedures will be discussed. MCMAS, an open-source model
checker, developed at Imperial College London, will be briefly
demonstrated. Applications of the methodology to the automatic
verification of security protocols, web services, and fault-tolerance
will be surveyed.
--------------------------------
Date and time: Thursday 2nd June 2011 at 16:00
Location: UG40, School of Computer Science
Title: Learning for Perceptual Decisions in the Human Brain
Speaker: Zoe Kourtzi
Institution: School of Psychology
Host: John Barnden
Abstract:
Successful actions and interactions in the complex environments we
inhabit entail making fast and optimal decisions. Extracting the key
features from our sensory experiences and deciding how to interpret them
is a computationally challenging task that is far from understood.
Accumulating evidence suggests that the brain may solve this challenge
by combining sensory information and previous knowledge about the
environment acquired through evolution, development, and everyday
experience. We combine behavioural and brain imaging measurements with
computational approaches to investigate the role of visual learning and
experience-dependent plasticity in optimizing perceptual decisions. We
demonstrate that learning translates sensory experiences to decisions by
shaping decision criteria in fronto-parietal circuits and neural
sensitivity to object categories in higher occipito-temporal circuits.
Our findings suggest that long-term experience and short-term training
interact to shape the optimization of visual recognition processes in
the human brain.
--------------------------------
Date and time: Friday 3rd June 2011 at 14:00
Location: UG40, School of Computer Science
Title: Better Learning Algorithms for Neural Networks
Speaker: Geoffrey Hinton
(http://www.cs.toronto.edu/~hinton/)
Institution: Department of Computer Science, University of Toronto
Host: Aaron Sloman / Jeremy Wyatt
Abstract:
Neural networks that contain many layers of non-linear processing
units are extremely powerful computational devices, but they are also
very difficult to train. In the 1980's there was a lot of excitement
about a new way of training them that involved back-propagating error
derivatives through the layers, but this learning algorithm never
worked very well for deep networks that have many layers between the
input and the output. I will describe a way of using unsupervised
learning to create multiple layers of feature detectors and I will
show that this allows back-propagation to beat the current state of the
art for recognizing shapes and phonemes. I will then describe a new
way of training recurrent neural nets and show that it beats the best
other single method at modeling strings of characters.
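Hinton's actual method stacks restricted Boltzmann machines; the sketch
below conveys the same greedy layer-wise idea with a simpler
tied-weight autoencoder per layer (an assumption made here, not his
algorithm). Each layer learns to reconstruct the output of the previous
one, and the learned weights then initialise a deep network for
back-propagation:

    import numpy as np

    rng = np.random.default_rng(0)

    def train_autoencoder(X, n_hidden, lr=0.1, epochs=200):
        """One tied-weight autoencoder layer trained by gradient descent."""
        n_in = X.shape[1]
        W = rng.normal(0, 0.1, (n_in, n_hidden))
        b, c = np.zeros(n_hidden), np.zeros(n_in)
        for _ in range(epochs):
            H = np.tanh(X @ W + b)          # encode
            R = H @ W.T + c                 # decode with the same (tied) weights
            err = R - X                     # reconstruction error
            dH = (err @ W) * (1 - H ** 2)   # back-propagate through tanh
            W -= lr * (X.T @ dH + err.T @ H) / len(X)
            b -= lr * dH.sum(0) / len(X)
            c -= lr * err.sum(0) / len(X)
        return W, b

    def pretrain(X, layer_sizes):
        """Greedy layer-wise pretraining: each layer models the one below."""
        weights, H = [], X
        for n_hidden in layer_sizes:
            W, b = train_autoencoder(H, n_hidden)
            weights.append((W, b))
            H = np.tanh(H @ W + b)
        return weights                      # initialisation for fine-tuning

    stack = pretrain(rng.random((100, 32)), [16, 8])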
--------------------------------
Date and time: Thursday 18th August 2011 at 16:00
Location: UG40, School of Computer Science
Title: Randomised Algorithms for Discrete Load Balancing
Speaker: Thomas Sauerwald
Institution: Max-Planck-Institut für Informatik, Saarbrücken
Abstract:
Load Balancing is an important requisite for the efficient utilisation
of parallel computers. Here, we consider the problem of balancing
discrete load items on networks. In our model, in each time-step certain
nodes are paired and they are allowed to average their load as close as
possible. Previous algorithms assumed that the excess token (if any) is
kept by the node with the larger load. In this talk, we investigate
algorithms that direct the excess token in a random manner and show that
they achieve a much smoother load distribution.
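A small simulation of the two token rules (pairing uniformly random
nodes rather than a matching on a network topology, a simplification of
the model in the abstract):

    import random

    random.seed(0)

    def balance(loads, steps=10000, random_token=True):
        """Repeatedly pair two nodes and average their integer loads as
        closely as possible; when their total is odd, the surplus unit
        goes to a random endpoint or, as in earlier algorithms, to the
        previously more loaded endpoint."""
        loads = loads[:]
        n = len(loads)
        for _ in range(steps):
            i, j = random.sample(range(n), 2)
            total = loads[i] + loads[j]
            low, high = total // 2, total - total // 2
            if random_token:
                loads[i], loads[j] = (high, low) if random.random() < 0.5 else (low, high)
            else:
                loads[i], loads[j] = (high, low) if loads[i] >= loads[j] else (low, high)
        return max(loads) - min(loads)      # final discrepancy

    start = [random.randrange(100) for _ in range(64)]
    print("discrepancy, deterministic token:", balance(start, random_token=False))
    print("discrepancy, random token:      ", balance(start, random_token=True))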
--------------------------------
Date and time: Thursday 25th August 2011 at 18:00
Location: Haworth Lecture Theatre, University of Birmingham
Title: Copyright vs Community in the Age of Computer Networks
Speaker: Richard Stallman
(http://www.fsf.org/author/rms)
Institution: Free Software Foundation
(http://www.fsf.org)
Host: Bob Hendley
Abstract:
Copyright developed in the age of the printing press, and was designed
to fit with the system of centralized copying imposed by the printing
press. But the copyright system does not fit well with computer
networks, and only draconian punishments can enforce it. The global
corporations that profit from copyright are lobbying for draconian
punishments, and to increase their copyright powers, while suppressing
public access to technology. But if we seriously hope to serve the only
legitimate purpose of copyright, to promote progress for the benefit
of the public, then we must make changes in the other direction.
About the Speaker
Richard Stallman launched the free software movement in 1983 and started
the development of the GNU operating system [http://www.gnu.org] in
1984. GNU is free software: everyone has the freedom to copy it and
redistribute it, as well as to make changes either large or small. The
GNU/Linux system, basically the GNU operating system with Linux added,
is used on tens of millions of computers today. Stallman has received
the ACM Grace Hopper Award, a MacArthur Foundation fellowship, the
Electronic Frontier Foundation's Pioneer Award, and the Takeda Award for
Social/Economic Betterment, as well as several honorary doctorates.
Venue
The Haworth Lecture Theatre is located in Building Y2 on the main campus
[http://www.birmingham.ac.uk/Documents/university/edgbaston-map.pdf] .
Enquiries to Bob Hendley.
--------------------------------
Date and time: Thursday 15th September 2011 at 16:00
Location: UG40, School of Computer Science
Title: Digital Investigations
Speaker: Myfanwy Johns (Artist in Residence)
(www.myfanwyjohns.com)
Institution: School of Computer Science
Host: Thorsten Schnier
Abstract:
This talk will introduce Myfanwy to the school.
In September 2011 Dr Myfanwy Johns will start a ten-month Artist in
Residence post funded by the Leverhulme Trust. The residency is hosted
by
CERCIA, The Centre of Excellence for Research in Computational
Intelligence and Applications (Dr Thorsten Schnier), at the School of
Computer Science, University of Birmingham.
Myfanwy is a visual artist investigating the function of ornament and its
transformative quality on architectural space. An ongoing interest of
her practice is to investigate interior and exterior architectural
structures that initiate a public interface with ornamentation in the
built environment. Particular interests include the expression of
materials and integrating pattern into essential substrates. Her
explorations of surface design include working with historic pattern;
pattern enables her work to connect with previous and future
generations. Her research interests include the application of computer
image transfer technology to new and traditional materials to integrate
decoration into the structure.
The aim of the Residency is to research the boundaries between creative
engagement, advanced engineering methods and computer science to create
unique outcomes that have the potential to advance thinking towards use
of ornamentation. Myfanwy's PhD research led to the investigation of
digital image transfer techniques and its significance to the function
of ornamentation in architecture. The individuality of outcome is of
interest: computers and digital manufacturing enable mass customisation
as opposed to mass production. A common interest with the computer
science department is the creation, variation and use of patterns in a
range of contexts. The host has in the past explored the computational
formulation and parameterization of patterns, and Myfanwy has explored
repetition of forms and their decorative applications. The mathematical
formulation of patterns found in nature, historic artifacts,
environmental data, and other sources, and their use in decorative art,
will therefore form one focus of the collaborative work. Patterns also
play an important role in human computer interaction (HCI) and software
design; they will look at exploiting some of the implicit knowledge
embodied in design practice.
Myfanwy plans to work closely with the Visual and Spatial Technology
Centre (VISTA) and the Interdisciplinary Research Centre (IRC) in
Metallurgy and Materials Processing. Visual experimentation is likely to
use the potential for 3D scanning and direct laser fabrication
technologies to create unique surface ornamentation and 3-D sculptural
objects. The direct laser fabrication technique can produce 3-D
components directly from CAD files using a laser beam, which moves
following the paths defined by the CAD file whilst metal powder is being
injected into its focal point.
--------------------------------
Date and time: Thursday 6th October 2011 at 16:00
Location: UG40, School of Computer Science
Title: Pillage Games and Formal Proofs - Past Work, Future Plans
Speaker: Manfred Kerber and Colin Rowat
Institution: School of Computer Science, Department of Economics
Abstract:
Theoretical economics makes use of strict mathematical methods. For
instance, games as introduced by von Neumann and Morgenstern allow for
formal mathematical proofs for certain axiomatized economic situations.
Such proofs can also be carried through in formal systems such as
Isabelle and Theorema.
The structure of this presentation is three-fold. First we describe
work we did in exploring particular cooperative games, so-called
three-player pillage games. Second we present experiments we carried
out in collaboration with Wolfgang Windsteiger using the Theorema
system to prove these theorems formally. Of particular interest is some
pseudo-code which summarizes the results previously shown. Since the
computation involves infinite sets, the pseudo-code is in several ways
non-computational. However, in the presence of appropriate lemmas, the
pseudo-code has sufficient computational content that Theorema can
compute stable sets (which are always finite). Third we discuss plans
for the future in a related project.
--------------------------------
Date and time: Thursday 13th October 2011 at 16:00
Location: UG40, School of Computer Science
Title: Non-verbal behaviour analysis for affect sensitive and
socially perceptive machines
Speaker: Ginevra Castellano
(http://www.eecs.qmul.ac.uk/~ginevra/)
Institution: School of Electronic, Electrical & Computer Engineering,
University of Birmingham
(http://www.birmingham.ac.uk/schools/eece/index.aspx)
Abstract:
Machines capable of displaying social, affective behaviour are becoming
increasingly essential for systems involved in direct interaction with
human users across a variety of application domains.
For example, affect sensitivity and social perception are of the utmost
importance for robot companions to be able to display socially
intelligent
behaviour, a key requirement for sustaining long-term interactions with
humans.
The first part of this talk will explore some of the issues arising
from the design of an affect recognition and social perception
framework for artificial companions investigated in the EU FP7 LIREC
(LIving with Robots and intEractive Companions) project. An example of
design in a real-world scenario is provided and a robotic companion
capable of inferring the user's affective state and generating empathic
behaviour is presented.
The second part of the talk will focus on the role of human movement
expressivity as a source of affective information for the recognition
and communication of emotion in human-computer interaction. Results
from the EU FP6 HUMAINE (Human-Machine Interaction Network on Emotion)
project will be presented.
--------------------------------
Date and time: Thursday 20th October 2011 at 16:00
Location: UG40, School of Computer Science
Title: Analysing Random Processes: From Search Heuristics to
Problem-Specific Algorithms (and back again)
Speaker: Christine Zarges, University of Warwick
Abstract:
In this talk we will consider the analysis of different random
processes. In the first part, artificial immune systems (AIS) are
introduced. The field of AIS is an emerging new area of research that
comprises two main branches: on one hand immune modelling, which is
closely related to immunology and aims at understanding the natural
immune system; on the other hand engineering and optimisation, which is
concerned with problem solving by immune-inspired methods. We give an
overview of both aspects and point out interesting directions for
future research.
In the second part of the talk we concentrate on balls-into-bins games,
where m balls are inserted into n bins by means of some randomised
procedure. We discuss connections to the analysis of artificial immune
systems and randomised search heuristics in general. Afterwards we
point out different interesting scenarios in this model. We close by
highlighting mutual benefits of the research in both parts and defining
the main objectives of the proposed research.
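The flavour of such games is easy to reproduce empirically; this sketch
compares the classical one-choice game with the two-choice variant,
whose maximum load drops from roughly log n / log log n to roughly
log log n:

    import random

    random.seed(0)

    def max_load(n, m, choices=1):
        """Throw m balls into n bins; each ball inspects `choices` random
        bins and joins the least loaded one."""
        bins = [0] * n
        for _ in range(m):
            best = min(random.sample(range(n), choices), key=lambda i: bins[i])
            bins[best] += 1
        return max(bins)

    n = m = 10000
    print("one choice :", max_load(n, m, 1))   # grows like log n / log log n
    print("two choices:", max_load(n, m, 2))   # the 'power of two choices'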
--------------------------------
Date and time: Thursday 27th October 2011 at 16:00
Location: UG40, School of Computer Science
Title: Acting on the world: understanding how animals use
information to guide their action
Speaker: Jackie Chappell
(http://www.ornithology.bham.ac.uk/staff/academicstaff/jackiechappell.shtml)
Institution: Centre for Ornithology, University of Birmingham
(http://www.ornithology.bham.ac.uk/)
Host: Nick Hawes
Abstract:
How do animals work out which parts of their environment are the most
important or interesting to them, and gather information on those parts
to guide their action later? In this talk, I will briefly outline what
we already know about how animals gather and represent information about
the world. I will then discuss a few of the unsolved problems relating
to how animals collect information, before suggesting some approaches
which might be useful in unravelling these problems.
--------------------------------
Date and time: Thursday 3rd November 2011 at 16:00
Location: UG40, School of Computer Science
Title: Feature Selection: Rough and Fuzzy-Rough Approaches
Speaker: Qiang Shen
(http://users.aber.ac.uk/qqs/)
Institution: Department of Computer Science, Aberystwyth University
(http://www.aber.ac.uk/en/cs/)
Host: Xin Yao
Abstract:
Feature selection (FS) addresses the problem of selecting those system
descriptors that are most predictive of a given outcome. Unlike other
dimensionality reduction methods, with FS the original meaning of the
features is preserved. This has found application in tasks that involve
datasets containing very large numbers of features that might otherwise
be impractical to process (e.g., large-scale image analysis, text
processing and Web content classification).
FS mechanisms developed on the basis of rough and fuzzy-rough theories
provide a means by which data can be effectively reduced without the
need for user-supplied information. In particular, fuzzy-rough feature
selection (FRFS) works with discrete and real-valued noisy data (or a
mixture of both), and can be applied to continuous or nominal decision
attributes. As such, it is suitable for regression as well as
classification. The only additional information required is in the form
of fuzzy partitions for each feature that can be automatically derived
from the data. FRFS has been shown to be a powerful technique for data
dimensionality reduction. In introducing the general background of FS,
this talk will cover the rough-set-based approach, before focusing on
FRFS and its application to real-world problems. The talk will conclude
with an outline of opportunities for further development.
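The rough-set route to FS can be made concrete with the classical
greedy QUICKREDUCT scheme over nominal data (the crisp rough-set
variant only, not fuzzy-rough FRFS; the toy dataset is invented):

    from collections import defaultdict

    def dependency(rows, decisions, subset):
        """Rough-set dependency degree gamma: the fraction of objects whose
        values on `subset` determine the decision unambiguously."""
        groups = defaultdict(set)
        for row, d in zip(rows, decisions):
            groups[tuple(row[a] for a in subset)].add(d)
        consistent = sum(1 for row, d in zip(rows, decisions)
                         if len(groups[tuple(row[a] for a in subset)]) == 1)
        return consistent / len(rows)

    def quickreduct(rows, decisions):
        """Greedily add the feature that raises gamma most, until the subset
        predicts decisions as well as the full feature set does."""
        all_feats = range(len(rows[0]))
        target = dependency(rows, decisions, list(all_feats))
        reduct = []
        while dependency(rows, decisions, reduct) < target:
            best = max((f for f in all_feats if f not in reduct),
                       key=lambda f: dependency(rows, decisions, reduct + [f]))
            reduct.append(best)
        return reduct

    # Feature 1 alone decides the outcome; features 0 and 2 are noise.
    rows = [(0, "a", 5), (1, "a", 7), (0, "b", 5), (1, "b", 7), (1, "b", 5)]
    print(quickreduct(rows, ["yes", "yes", "no", "no", "no"]))   # -> [1]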
Professor Qiang Shen holds the Established Chair of Computer Science at
Aberystwyth University. He is Head of the Department of Computer
Science, and a member of UK REF 2014 Subpanel 11: Computer Science &
Informatics. Qiang is a long-serving associate editor of two IEEE
flagship Journals (Systems, Man and Cybernetics - Part B, and Fuzzy
Systems) and also, an editorial board member for several other leading
international periodicals. He has chaired and given keynote lectures at
many prestigious international conferences.
Qiang's current research interests include: computational
intelligence, fuzzy and qualitative modelling, reasoning under
uncertainty, pattern recognition, data mining, and their real-world
applications for intelligent decision support (e.g. crime detection,
consumer profiling, systems monitoring, and medical diagnosis). He has
authored 2 research monographs and approximately 300 peer-reviewed
papers, including an award-winning IEEE Outstanding Transactions paper.
--------------------------------
Date and time: Thursday 10th November 2011 at 16:00
Location: UG40, School of Computer Science
Title: Passwords: Insecure Nuisance or Misunderstood Protector?
Speaker: Mike Just
(http://justmikejust.wordpress.com/)
Institution: Glasgow Caledonian University
(http://www.gcu.ac.uk/ebe//)
Host: Mina Vasalou
Abstract:
Password-based authentication is commonly criticized by users and
security practitioners alike. Users lament having to recall multiple
passwords and follow seemingly arcane rules for password selection.
Security practitioners highlight numerous vulnerabilities to password
authentication and suggest their imminent demise. Yet passwords remain a
mainstay of security, and despite prophetic suggestions to the contrary,
they will likely remain with us for many years to come. This talk will
explore the history of passwords and examine recent results that are
causing a re-think of our approach to authentication. Using a risk-based
approach, I will discuss several password myths and suggest what the
future might hold for passwords and other forms of authentication.
--------------------------------
Date and time: Thursday 17th November 2011 at 16:00
Location: UG40, School of Computer Science
Title: Modeling Control of Object Manipulation in Cephalopods:
Big Brains, Soft Bodies and the Hyper-Redundant
Path-Not-Taken by Vertebrates
Speaker: Frank W. Grasso
(http://www.brooklyn.cuny.edu/pub/Faculty_Details5.jsp?faculty=18)
Institution: BioMimetic & Cognitive Robotics Laboratory, Dept. of
Psychology, Brooklyn College CUNY
(http://academic.brooklyn.cuny.edu/userhome/psych/fgrasso/index.htm)
Host: Aaron Sloman
Abstract:
Modern cephalopods are an evolutionary success story based on brain and
body architectures that are fundamentally different from those of
vertebrates like mammals, birds and even fish. Large-brained, with soft
bodies and sophisticated learning, sensory and motor capabilities,
their modern forms, the coleoids, are descended from behaviorally
sophisticated ancestors that precede the most primitive vertebrates in
the fossil record and precede the bony fishes by hundreds of millions
of years. Those eons of competition with and predation on the diverse
forms of marine life have led to cumulative specializations of
morphology, neural circuitry and behavior that offer a plethora of
existence proofs for the feasibility of soft, hyper-redundant robotic
systems. This talk will discuss both in vivo studies and studies with
artificial models of two such highly derived cephalopod adaptations: the
octopus sucker and the squid tentacle. These studies aim to advance our
understanding of the coordination and control of dexterous soft limbs
and appendages. The sucker, acting in coordination with the arm enables
fine and forceful manipulation of objects by the octopus. The tentacle
enables a high-speed, accurate and ballistic grasp of relatively distant
objects by the squid. This talk will introduce some of the
under-appreciated aspects of the biomechanics and neural architecture
that support these abilities and will also describe studies using the
Artificial and Biological Soft Actuator Manipulator Simulator (ABSAMS),
a physically and physiologically constrained computer simulation
environment employed to study 3d models of soft systems and their
control. Results from simulations of the squid tentacle strike and
octopus sucker attachment as modeled in ABSAMS and the insights those
simulations offer into controlling soft, hyper-redundant appendages will
be discussed and compared with results from in vivo studies. Finally, I
will discuss implications these studies present for the development of
flexible object manipulation devices with cephalopod-like properties in
man-made technologies.
--------------------------------
Date and time: Thursday 24th November 2011 at 16:00
Location: UG40, School of Computer Science
Title: Robotics at life's edges: adaptive robotic assistants for
children and the elderly
Speaker: Yiannis Demiris
(http://www.iis.ee.ic.ac.uk/yiannis/webcontent/HomePage.html)
Institution: Imperial College London
(http://www3.imperial.ac.uk/electricalengineering)
Host: Jeremy Wyatt
Abstract:
Robots are increasingly establishing their credibility as useful
assistants outside traditional industrial environments, with new
challenges emerging for intelligent robotics research. To personalise
the interaction with human users, robots need to develop life-long user
models that can be used to recognise human actions, predict human
intentions and assist intelligently, while constantly adapting to
changing human profiles. In this talk, I will draw inspiration from
biological systems and describe our latest advances in embodied social
cognition mechanisms for humanoid robots, and describe their application
towards adaptive robotic assistants for children and adults with
disabilities.
--------------------------------
Date and time: Thursday 1st December 2011 at 16:00
Location: UG40, School of Computer Science
Title: Sensing, Understanding and Modelling Dynamic Networked
Systems
Speaker: Mirco Musolesi
(www.cs.bham.ac.uk/~musolesm)
Institution: University of Birmingham
Abstract:
The goal of this talk is to present my recent and current research work
in the areas of sensing and analysis of complex dynamic networked
systems and discuss possible collaborations at School and University
level. This work is highly interdisciplinary since the same tools,
models and analytical techniques can usually be applied to a very large
number of systems, including social, technological, biological and
economic ones.
First of all, I will discuss some of the research projects in the area
of social sensing I have been involved in, including the design and
implementation of the CenceMe platform, a system that allows the
inference of activities and other presence information of individuals
using off-the-shelf sensor-enabled phones, and EmotionSense, a system
designed for supporting social psychology research. I will then present
another key aspect of my research work, the analysis of dynamic and
time-varying networked systems. I will discuss examples and applications
to various systems including social and biological ones.
--------------------------------
Date and time: Thursday 8th December 2011 at 16:00
Location: UG40, School of Computer Science
Title: Being Human when Computers are Everywhere
Speaker: Yvonne Rogers
(http://www.ucl.ac.uk/uclic/people/y_rogers)
Institution: UCL
(http://www.ucl.ac.uk/)
Host: Russell Beale
Abstract:
The world we live in has become suffused with computers, some visible,
others hidden; from smartphones that enable you to track your friends
and family, to digital billboards that can sense the make-up of the
crowd walking by, and target ads specifically at them. Huge changes are
afoot in how we access and interact with information, and in how we
learn, socialize and work. So much so that our lifestyles are radically
changing, raising the question of what it means to be human when
everything we do is supported or augmented by technology. In my talk, I
will give an overview of the field, contrasting the highly influential
vision of Ubiquitous Computing set in the 90s by Weiser and the stark
realities and challenges we face today.
Yvonne Rogers is a Professor of Interaction Design and director of UCLIC
at University College London. She is also a visiting professor at the
Open University, Indiana University and Sussex University. She has spent
sabbaticals at Stanford, Apple, Queensland University, and UCSD. Her
research focuses on augmenting and extending everyday learning and work
activities with a diversity of novel technologies. She was one of the
principal investigators on the UK Equator Project (2000-2007) where she
pioneered ubiquitous learning. She has published widely, beginning with
her PhD work on graphical interfaces to her recent work on public
visualizations and behavioral change. The third edition of her textbook,
Interaction Design Beyond Human-Computer Interaction, co-authored with
Helen Sharp and Jenny Preece has just been published. She has also been
awarded a prestigious EPSRC dream fellowship in the UK where she will
rethink the relationship between ageing, computing and creativity.
--------------------------------
Date and time: Tuesday 13th December 2011 at 15:00
Location: UG40, School of Computer Science
Title: Empathy in Virtual Agents and Robots
Speaker: Ana Paiva
(http://gaips.inesc-id.pt/~apaiva/Home.html)
Institution: INESC-ID
Host: Mina Vasalou
Abstract:
Empathy is often seen as the capacity to perceive, understand and
experience others' emotions. This notion is seen as one of the major
elements in social interactions between humans. As such, when creating
virtual agents that are believable and able to engage with users in
social interactions, empathy needs to be addressed. Indeed, for the past
few years, many researchers have been looking at this problem, not only
in trying to find ways to perceive the user's emotions, but also to
adapt to them, and react in an empathic way. This talk will provide an
overview of this new challenging area of research, by analyzing empathy
in the social relations established between humans and virtual agents or
social robots. To illustrate these notions, we will provide a
concrete model for the creation of empathic agents, with some examples of
both virtual agents and social robots.
--------------------------------
Date and time: Thursday 19th January 2012 at 17:00
Location: UG40, School of Computer Science
Title: Contactless Smart Cards in Buildings and Public
Transportation: The Case of MiFare Classic
Speaker: Nicolas Courtois
Institution: University College London
Abstract:
In this talk we are going to study the security of the MiFare Classic
contactless smart cards (200 million cards in circulation, more than 1
billion sold), used massively worldwide in public transportation and in
many buildings in central London and elsewhere. In 2008-2009, German and
Dutch hackers and researchers reverse-engineered this chip, which had
remained proprietary for some 15 years. The Dutch researchers from
Nijmegen then developed and published some 6+ different attacks on this
product, and had to face the manufacturer's lawyers in court. The real
challenge is to break the card offline, without any access to a
legitimate reader; this is called a card-only attack. At the end of
2009, Courtois published one additional attack which requires roughly
ten times less data than the best Nijmegen attack without a very costly
pre-computation, and which can be executed by anyone at any moment.
Given the very peculiar way in which this system leaks information to
the cryptanalyst through a covert channel, this attack is unlike
anything we know in stream cipher cryptanalysis: the combination of
this cipher and this channel calls for a very special type of attack
which does not occur elsewhere in cryptanalysis, a differential attack
with multiple differentials which hold simultaneously and lead to a
spectacularly low complexity. Several open source implementations of
this attack exist on the Internet, and it is commonly called the
'Courtois Dark-Side' attack. When combined with the so-called 'Nested
Authentication Attack' from Nijmegen, it is possible to extract keys
and data from cards even faster. As a result of all these discoveries,
in December 2009 Transport for London stopped using the MiFare Classic
cards. However, millions of older Oyster cards remain in circulation
and are still accepted, and the situation is much worse in buildings.
None of the buildings in London or elsewhere that we are aware of has
stopped using these cards, and as of 2012 they are still issued to new
students/employees, with additional very serious problems of bad key
management.
--------------------------------
Date and time: Friday 27th January 2012 at 12:30
Location: LG33 Learning Centre
Title: A Secret Computer
Speaker: Peter Breuer
Institution: University of Birmingham
Host: Andrew Howes
--------------------------------
Date and time: Thursday 2nd February 2012 at 16:00
Location: UG40, School of Computer Science
Title: Predicting human behavior by solving bounded utility
maximization problems
Speaker: Andrew Howes
Institution: University of Birmingham
Abstract:
In this talk I will demonstrate the value of predicting human behavior
by calculating optimal solutions to bounded utility maximization
problems, where bounds are theories of human information processing.
Further, I will argue that solving such problems provides an opportunity
to address the identifiability problem; this is the problem that arises
for scientists who attempt to infer the invariant mechanisms of
cognition from behaviour when the consequences of the former for the latter
are mediated by strategic adaptation. The talk will use as an example
the Psychological Refractory Period (PRP) task which was invented in
order to investigate hypotheses concerning response selection
limitations.
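To make the idea concrete, here is a toy sketch (with invented numbers,
not the speaker's model) of predicting behaviour in a PRP-like dual-task
setting: candidate strategies are orderings of the two responses, the
information-processing bound is a single response-selection bottleneck,
and the predicted behaviour is the utility-maximising strategy.
    import itertools

    # Toy sketch: predict behaviour as the optimal strategy under a bound.
    # All numbers are invented; the bound is a single response-selection
    # bottleneck of the kind discussed in the PRP literature.
    SELECTION_TIME = 0.15                         # bottleneck cost per response (s)
    PERCEPT_TIME = {"tone": 0.10, "light": 0.12}  # perception time per stimulus (s)

    def completion_time(order):
        # Responses pass one at a time through the selection bottleneck.
        t_free, total = 0.0, 0.0
        for stim in order:
            t_free = max(t_free, PERCEPT_TIME[stim]) + SELECTION_TIME
            total += t_free
        return total

    # The predicted behaviour is the utility-maximising (fastest) strategy.
    best = min(itertools.permutations(PERCEPT_TIME), key=completion_time)
    print(best, completion_time(best))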
--------------------------------
Date and time: Thursday 9th February 2012 at 16:00
Location: UG40, School of Computer Science
Title: How do we understand other human beings? The person model
theory
Speaker: Albert Newen
Institution: Ruhr University Bochum
Host: Mihaela Popa
Abstract:
For decades there has been an intense debate between Theory-Theory and
Simulation-Theory. The most important progress during the last few years
has been made by Goldman's recent detailed presentation of his
Simulation Theory (Goldman 2006) and by Gallagher (2008), who argues for
a revival of the phenomenological thesis that we directly perceive the
mental states of others. The aim of the presentation is to criticize
both proposals and to develop and defend a new theoretical approach: the
person model theory.
An important advance of Goldman's simulation theory is that he
distinguishes two levels of understanding other human beings: low-level
and high-level mindreading. According to Goldman, third-person
attribution of a decision (high-level mindreading) consists of (i)
creating pretend propositional attitudes, (ii) using a (the same)
decision-making mechanism (as in the first-person case) and (iii)
projecting the product of this decision-making process onto another
person, while quarantining those mental phenomena that are specific only
to me and not to the other person.
Simulation-Theory (ST) can be distinguished negatively in contrast to
Theory-Theory (TT) by its rejection of the belief in a psychological
law, but it can also be positively characterized by positing this
two-stage process of mindreading, namely the simulation stage and the
projection stage (Goldman 2006, 40). In the talk I develop a criticism
of both accounts, as well as of the recent development of the
interaction account (Gallagher/Hutto). I argue that all these accounts
have severe deficits. We need a new theory that accounts for the
difference between low-level and high-level mindreading and does not run
into the problems of either TT or ST.
I argue that the person model theory can do the job. I suggest that we
develop 'person models' of ourselves, of other individuals and of groups
of persons. These person models are the basis for the registration and
evaluation of persons as having mental as well as physical properties.
Since there are two ways of understanding other minds (non-conceptual
and conceptual mindreading), we propose that there are two kinds of
person models. Very early in life we develop non-conceptual person
schemata: a person schema is a system of sensory-motor abilities and
basic mental dispositions related to one human being (or a group of
humans), where the schema functions without awareness and is realized by
(relatively) modular information processes. Step by step we also develop
person images: a person image is a system of consciously registered
mental and physical dispositions as well as situational experiences
(like perceptions, emotions, attitudes, etc.) related to one human being
(or a group). We have clear evidence of implicit communication in humans
which can best be understood as a non-conceptual understanding of other
minds by unconsciously registering someone's emotions and attitudes.
On the basis of such non-conceptual person schemata, young children
learn to develop conceptual person images, which in the case of groups
are stereotypes of managers, students or homeless people. We also
develop detailed person images of individuals we often deal with.
--------------------------------
Date and time: Thursday 16th February 2012 at 16:00
Location: UG40, School of Computer Science
Title: A Unifying Framework for Mutual Information Based Feature
Selection
Speaker: Gavin Brown
(http://www.cs.man.ac.uk/~gbrown/)
Institution: University of Manchester
Host: Chris Bowers
Abstract:
Feature Selection is a ubiquitous problem in pattern recognition and
machine learning. Methods based on mutual information measurements have
become tremendously popular, with dozens of 'novel' algorithms and hundreds of
applications published in domains like Computer Vision and
Bioinformatics as well as mainstream machine learning outlets. In this
work, we asked the question 'what are the implicit underlying
statistical assumptions of feature selection criteria based on mutual
information?'
The main result I will present is a unifying probabilistic framework for
information theoretic feature selection, bringing almost two decades of
research on heuristic methods under a single theoretical interpretation.
This allows for both solid empirical analysis of existing methods and
their relationships, and a clear foundation on which to build new
methods.
This is work based on:
Conditional Likelihood Maximisation: A Unifying Framework for Mutual
Information Feature Selection. Journal of Machine Learning Research,
2012
(in press)
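For readers unfamiliar with this family of methods, the sketch below
shows the greedy relevancy-minus-redundancy selection (an mRMR-style
criterion) typical of the heuristics the framework unifies; it is a
generic illustration, not the paper's framework, and the plug-in mutual
information estimator assumes discrete features.
    import numpy as np

    def mutual_info(x, y):
        # Plug-in estimate of I(X;Y) for discrete 1-D arrays.
        mi = 0.0
        for a in np.unique(x):
            for b in np.unique(y):
                pxy = np.mean((x == a) & (y == b))
                if pxy > 0:
                    mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
        return mi

    def greedy_select(X, y, k):
        # Score each candidate by relevancy I(Xi;Y) minus its mean
        # redundancy with the features already selected (mRMR-style).
        selected, remaining = [], list(range(X.shape[1]))
        for _ in range(k):
            def score(i):
                rel = mutual_info(X[:, i], y)
                red = np.mean([mutual_info(X[:, i], X[:, j])
                               for j in selected]) if selected else 0.0
                return rel - red
            best = max(remaining, key=score)
            selected.append(best)
            remaining.remove(best)
        return selected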
--------------------------------
Date and time: Thursday 23rd February 2012 at 16:00
Location: UG40, School of Computer Science
Title: Probabilistic projection for binary, ordinal and real
data
Speaker: Simon Rogers
(http://www.dcs.gla.ac.uk/~srogers/)
Institution: University of Glasgow
Host: Mirco Musolesi
Abstract:
In this talk, I'll describe algorithms for projection (dimensionality
reduction) for binary and/or ordinal data, motivating this by the
problem of producing visualisations of the voting behaviour of
politicians in the House of Commons. The models make use of the probit
likelihood and the auxiliary variable trick, allowing inference to be
performed via Gibbs Sampling or Variational Bayes. I will show how
these methods lead to better-calibrated predictions of variance than a
popular alternative.
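As background, the auxiliary variable trick for the probit likelihood is
the classic Albert-Chib construction; a minimal sketch for plain probit
regression (assuming a standard normal prior on the weights; the
projection models in the talk generalise this setting) looks as follows.
    import numpy as np
    from scipy.stats import truncnorm

    def probit_gibbs(X, y, n_iter=500):
        # Albert-Chib sketch: y_i = 1 iff latent z_i = x_i.w + noise > 0.
        n, d = X.shape
        w = np.zeros(d)
        S = np.linalg.inv(np.eye(d) + X.T @ X)   # posterior covariance of w
        for _ in range(n_iter):
            # 1. Sample z_i ~ N(x_i.w, 1), truncated so sign(z_i) matches y_i.
            mu = X @ w
            lo = np.where(y == 1, -mu, -np.inf)  # standardised bounds
            hi = np.where(y == 1, np.inf, -mu)
            z = mu + truncnorm.rvs(lo, hi)
            # 2. Given z, the weights have a conjugate Gaussian conditional.
            w = np.random.multivariate_normal(S @ (X.T @ z), S)
        return w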
--------------------------------
Date and time: Thursday 1st March 2012 at 16:00
Location: UG40, School of Computer Science
Title: Spoken language processing: where do we go from here?
Speaker: Roger Moore
(http://www.dcs.shef.ac.uk/~roger/)
Institution: University of Sheffield
(http://www.shef.ac.uk/dcs)
Host: Nick Hawes
Abstract:
Recent years have seen steady improvements in the quality and
performance of speech-based human-machine interaction, driven by a
significant convergence in the methods and techniques employed. Spoken
language processing has finally emerged from the research laboratory
into the real world, and members of the general public now regularly
encounter talking and listening machines in their daily lives. However,
whilst several niche markets have been established, there is a general
consensus that spoken language technology is still insufficiently
robust for a range of valuable applications, and that the capabilities
of contemporary spoken language systems continue to fall short of what
users expect and the market needs.
Of particular concern is that the quantity of training data required to
improve state-of-the-art systems seems to be growing exponentially. Yet
performance appears to be reaching an asymptote that is not only well
short of human performance, but which may also be inadequate for many
real-world applications. This suggests that there may be a fundamental
flaw in the underlying architecture of contemporary systems, and the
future direction for research into spoken language processing is
currently uncertain.
This talk attempts to address these issues by stepping outside the
usual domains of speech science and technology, and instead drawing
inspiration from recent findings in the neurobiology of living systems.
It will be shown how these results point towards a novel architecture
for speech-based human-machine interaction - PREdictive SENsorimotor
Control and Emulation (PRESENCE) - that blurs the distinction between
the core components of a traditional spoken language dialogue system;
an architecture in which cooperative and communicative behaviour
emerges as a by-product of a model of interaction where the system has
in mind the needs and intentions of a user, and a user has in mind the
needs and intentions of the system. The talk will conclude with
examples of current research that support the PRESENCE hypothesis.
--------------------------------
Date and time: Thursday 8th March 2012 at 16:00
Location: UG40, School of Computer Science
Title: Biologically-Inspired Massively-Parallel Computing
Speaker: Steve Furber
(http://apt.cs.man.ac.uk/people/sfurber/)
Institution: University of Manchester
Host: Xin Yao
Abstract:
The SpiNNaker (Spiking Neural Network Architecture) project aims to
deliver a massively-parallel computing platform for modelling
large-scale systems of spiking neurons in biological real time. The
architecture is based around a Multi-Processor System-on-Chip that
incorporates 18 ARM processor subsystems and is packaged with a
128Mbyte SDRAM to form the basic computing node. An
application-specific packet-switched communications fabric carries
neural "spike" packets between processors on the same or different
packages to allow the system to be extended up to a million processors,
at which scale the machine has the capacity to model in the region of
1% of the human brain.
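A back-of-envelope check of the closing figure (the per-core capacity
and brain size below are commonly cited assumptions, not numbers from
the abstract):
    cores = 1_000_000        # full-scale machine, from the abstract
    neurons_per_core = 1_000 # assumed real-time capacity per ARM core
    brain_neurons = 86e9     # commonly cited human brain estimate

    print(f"{cores * neurons_per_core / brain_neurons:.1%}")  # ~1.2%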
--------------------------------
Date and time: Thursday 15th March 2012 at 16:00
Location: UG40, School of Computer Science
Title: Architecture-neutral Parallelism
Speaker: Pete Calvert
(http://www.cl.cam.ac.uk/~prc33/)
Institution: University of Cambridge
Host: Peter Hancox
Abstract:
The shift towards parallel hardware is well known, but it is currently
much less clear what the corresponding changes in programming languages
and compilers should be. In particular, as GPUs and other heterogeneous
architectures become more popular, how can we continue to offer
developers performance portability? That is, how can we enable a single
program to achieve good (if not optimal) performance on any system?
In this talk, I discuss how this was achieved for sequential systems,
why parallelism offers new challenges, and how these might be overcome.
Rather than proposing yet another programming language, our solution
proposes a common architecture-neutral abstract machine that can be
used as a compiler intermediate representation. Our ongoing research
concerns compiler techniques that map this representation to hardware
effectively.
--------------------------------
Date and time: Thursday 5th April 2012 at 16:00
Location: UG40, School of Computer Science
Title: NO SEMINAR
Speaker: -- - --
--------------------------------
Date and time: Thursday 17th May 2012 at 16:00
Location: UG40, School of Computer Science
Title: Creating a Computerized Sports Expert for Live-Action
Sports
Speaker: Patrick Lucey
(http://www.patricklucey.com/Site/Home.html)
Institution: Disney Research Pittsburgh
Host: Michael Mistry
Abstract:
In February 2011, "Watson" (an IBM-created AI computer system capable
of answering questions) competed on Jeopardy! and comprehensively beat
the best human players in the history of the quiz show. The technology
of "Watson" evolved from IBM's "Deep Thought" and "Deep Blue" projects
in the 80s and 90s, where they created a computer system which could
beat the top human chess players. In sports, AI computer systems have
been developed to automatically generate text summaries using match
statistics (e.g. statsheet.com), although the reporting lacks tactical
insight. Video games (e.g. EA Sports) have virtual commentators which
can describe and analyze what is going on in the match. Following this
trend, we ask "why can't a computer system do similar things for real
live-action sport?"
--------------------------------
Date and time: Thursday 24th May 2012 at 16:00
Location: UG40, School of Computer Science
Title: Dynamic interpretation and integration of mismatched data
Speaker: Fiona McNeill
(http://homepages.inf.ed.ac.uk/fmcneill/)
Institution: University of Edinburgh
(http://dream.inf.ed.ac.uk/)
Host: Nick Hawes
Abstract:
We exist in a world of large data: most organisations have large data
stores, and many (such as governments) have vast ones. Accessing and
utilising this data quickly and effectively is essential for many
real-world tasks. One of the great difficulties of such automated
knowledge sharing is that each participant will have developed and
evolved its knowledge sources independently and there will be
significant variation in how they have done this. These differences may
be to do with different words being used for the same thing, or vice
versa, but may also be to do with the structure of the data. In this
talk I will discuss our work on failure-driven diagnosis of ontological
mismatch, and its application to dynamic integration of mismatched data
from potentially large sources. The fact that these techniques are only
invoked when some sort of failure occurs (for example, failure to
interpret an automated query), and are based on reasoning about the
causes of the failure, means that the majority of data in a large data
source can be ignored, thereby providing a tractable solution to the
problem.
--------------------------------
Date and time: Wednesday 6th June 2012 at 16:00
Location: UG40, School of Computer Science
Title: Advances in Time-of-Flight 3D imaging
Speaker: Adrian Dorrington
(http://sci.waikato.ac.nz/about-us/people/adrian)
Institution: University of Waikato
Host: Hamid Dehghani
Abstract:
Time-of-Flight range imaging is a technology that captures
three-dimensional information in a scene, allowing computers to perceive
the world in a way we humans take for granted. It works by emitting
modulated or coded light and measuring how long it takes for that light
to bounce off objects and return to the camera, generating photograph-
or video-like digital output in which each pixel contains distance
information as well as brightness. This distance information means that
the size and shape of objects can be measured. Time-of-Flight
technology is still very new, and current camera systems suffer from a
number of drawbacks. The Chronoptics research group at the University
of Waikato develops technologies to improve the quality of these cameras
and overcome the current drawbacks, opening the door for new commercial
and industrial applications. This talk will introduce time-of-flight 3D
imaging technology and its limitations, and discuss the latest
technologies being developed at the University of Waikato.
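As a concrete illustration of the measurement principle, the sketch
below implements the standard four-bucket phase estimate used by
continuous-wave time-of-flight pixels (a common textbook scheme, not
necessarily the exact pipeline of the Waikato cameras):
    import numpy as np

    C = 299_792_458.0   # speed of light (m/s)

    def cw_tof_distance(a0, a1, a2, a3, f_mod):
        # a0..a3: correlation samples at 0, 90, 180 and 270 degrees of
        # the modulation cycle; the phase delay of the returned light
        # encodes the round-trip time, hence the distance.
        phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
        return C * phase / (4 * np.pi * f_mod)

    # A 20 MHz modulation gives an unambiguous range of c / (2 f) = 7.5 m.
    print(cw_tof_distance(0.4, 1.0, 0.6, 0.0, 20e6))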
--------------------------------
Date and time: Friday 6th July 2012 at 08:30
Location: Leonard Deacon Lecture Theatre, Wolfson Centre, The
Medical School
Title: Constructing the Foundations of Commonsense Knowledge
Speaker: Benjamin Kuipers
Institution: University of Michigan
Host: Aaron Sloman
Abstract:
--Please note that the seminar is taking place at the Leonard Deacon
Lecture Theatre--
An embodied agent experiences the physical world through low-level
sensory and motor interfaces (the "pixel level"). However, in order to
function intelligently, it must be able to describe its world in terms
of higher-level concepts such as places, paths, objects, actions,
goals, plans, and so on (the "object level"). How can higher-level
concepts such as these, that make up the foundation of commonsense
knowledge, be learned from unguided experience at the pixel level? I
will describe progress on providing a positive answer to this question.
This question is important in practical terms: as robots are developed
with increasingly complex sensory and motor systems, and are expected
to function over extended periods of time, it becomes impractical for
human engineers to implement their high-level concepts and define how
those concepts are grounded in sensorimotor interaction. The same
question is also important in theory: must the knowledge of an AI
system necessarily be programmed in by a human being, or can the
concepts at the foundation of commonsense knowledge be learned from
unguided experience?
--------------------------------
Date and time: Thursday 30th August 2012 at 16:00
Location: G33, Aston Webb
Title: An analysis of the ant swarm behavior for quorum sensing:
a new direction for bio-inspired computing in
optimization
Speaker: Hide Sasaki
Host: Xin Yao and Shan He
Abstract:
Ant traffic flow increases with growing density. This characteristic
phenomenon is different from any other system of traffic flow. In this
talk, I will describe a computational model for density-independent
traffic flow in ant colonies relocating to new nests. Ants have two
types of swarm behavior: emigration and foraging. Earlier models in
computational ecology focused on foraging trails. However, ants move on
a much larger scale during emigration. They gauge nest density by
frequent close approaches among themselves and time the transport of
the colony accordingly. This density assessment behavior, known as
quorum sensing, has not been discussed in the context of traffic flow
theory. Based on this behavior, we model ant traffic flow that is
organized without the influence of changes in the population density of
colonies. The proposed model predicts that density-independent ant
traffic flow depends only on the frequency of mutual close approaches.
I will show how to verify this prediction of our model against robust
empirical data that ant experts have obtained from field research. I
will indicate how to organize a study of computational ecology, and in
which direction one may expect technical contributions using the
proposed model.
Professor Sasaki works in computational ecology. Before moving to this
area, Dr. Sasaki worked in decision science, soft computing and
database systems. His current interest is modeling ant swarm behavior
and exploring traffic flow models derived from it. He modeled the swarm
behavior known as quorum sensing in a computational formulation and
compared its simulation results to data obtained from field research.
The presentation discusses the recent development of his research.
He is a member of the IEEE CIS ETTC and serves as a task force chair on
Bio-Inspired Self-Organizing Collective Systems. Professor Sasaki is the
founding and current Editor-in-Chief of the International Journal of
Organizational and Collective Intelligence (IJOCI) that is published
from IGI Global, NJ, USA.
Dr. Sasaki is tenured as an associate professor of computer science at
Ritsumeikan University in Kyoto, Japan. He is a visiting professor at
VSB Technical University of Ostrava, Czech Republic and an honorary
research associate at The Chinese University of Hong Kong.
--------------------------------
Date and time: Thursday 6th September 2012 at 16:00
Location: G33, Aston Webb
Title: Modeling gaze during driving
Speaker: Dana Ballard
Host: Jeremy Wyatt
Abstract:
The use of gaze in acquiring information has long been established, but
the exact metric used in choosing gaze targets is still not
satisfactorily understood. Early work explored the saliency of image
features, but more recent research has focused on the role of a
subject's ongoing task suite. Nonetheless, there is still an open need
for a theory that would explain how the specific information acquired
by gaze is chosen over alternative choices. We use the context of
vehicle driving to develop a reward-based gaze theory that asserts that
gaze reduces reward uncertainty. The principal premise of the theory is
that the main determiner of visuo-motor resources is a dynamically
allocated small set of underlying cognitive programs, or modules, that
manage sub-tasks of the behavior that the human subject is engaged in.
In effect, gaze deployment during driving can be seen as a form of
multi-tasking. Driving requires the simultaneous control of heading,
lane position, and avoidance of obstacles. Tests with human subjects in
a virtual reality simulated driving venue show the model can account for
gaze deployment statistics in servicing such tasks.
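One way to read the premise computationally (a toy sketch with invented
numbers, not the speaker's model): each sub-task module tracks the
uncertainty of its state estimate, uncertainty grows while a module is
unattended, and gaze goes to the module whose uncertainty currently
puts the most reward at risk.
    # Toy scheduler: gaze is given to the sub-task module whose state
    # uncertainty (variance) threatens the largest expected reward loss.
    variances = {"heading": 0.2, "lane": 0.5, "obstacle": 0.1}
    reward_weight = {"heading": 1.0, "lane": 0.8, "obstacle": 3.0}

    def deploy_gaze(growth=0.05):
        risk = {m: v * reward_weight[m] for m, v in variances.items()}
        target = max(risk, key=risk.get)    # look where risk is highest
        for m in variances:                 # attended module is re-observed,
            variances[m] = 0.0 if m == target else variances[m] + growth
        return target

    print([deploy_gaze() for _ in range(5)])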
--------------------------------
Date and time: Thursday 27th September 2012 at 16:00
Location: UG05, Learning Centre
Title: On Computing: The Fourth Great Scientific Domain
Speaker: Paul Rosenbloom
Host: Andrew Howes
Abstract:
This talk introduces two broad themes about computing: (1) that it
amounts to what can be termed a great scientific domain, on a par with
the physical, life and social sciences; and (2) that much about its
structure, content, richness and potential can be understood in terms of
its multidisciplinary relationships with these other great domains (and
itself). The intent is to advance a new way of thinking about computing
and its nature as a scientific discipline, while broadening our
perspectives on what computing is and what it can become.
--------------------------------
Date and time: Thursday 4th October 2012 at 16:00
Location: UG07, Learning Centre
Title: Cyber-Physical Society
Speaker: Hai Zhuge
(http://www.knowledgegrid.net/~h.zhuge/)
Institution: Institute of Computing Technology of Chinese Academy of
Sciences
Abstract:
Natural physical space provides the material basis for the generation and
evolution of human beings and civilization. The progress of human
society has created the cyber space. With the rapid development of
information technology, the cyber space is connecting the physical
space, social space and mental space to form a new space - the
Cyber-Physical Society.
Beyond the scope of the Cyber-Physical Systems and Web of Things, the
Cyber-Physical Society concerns not only the cyber space and the
physical space but also humans, knowledge, society and culture. It is a
new environment that connects nature, cyber space and society under
certain rules.
The cyber-physical society is a multi-dimensional complex space that
generates and evolves diverse subspaces to contain different types of
individuals interacting with, reflecting or influencing each other
directly or through the cyber, physical, socio and mental subspaces.
Versatile individuals and socio roles coexist harmoniously yet evolve,
provide appropriate on-demand information, knowledge and services for
each other, transform from one form into another, interact with each
other through various links, and self-organize according to socio value
chains. It ensures a healthy and meaningful life for individuals, and
maintains a reasonable rate of expansion of individuals in light of
overall capacity and the material, knowledge, and service flow cycles.
Human beings will live and develop in the Cyber-Physical Society in the
near future. Exploring the Cyber-Physical Society concerns multiple
disciplines and will go beyond Bush's and Turing's ideals, since
traditional machines and the cyber space are limited in their ability to
implement it. The research objects and conditions of many disciplines
will be changed. Methodologies in the respective disciplines are not
suitable for researching and developing this environment.
Multi-disciplinary study will lead to breakthroughs in science,
technology, engineering and philosophy.
This lecture introduces the architecture, distinguished characteristics,
scientific issues, principles, super-links, semantics, computing model,
and closed loops of the Cyber-Physical Society. The relevant
philosophical issues will be discussed.
--------------------------------
Date and time: Thursday 11th October 2012 at 16:00
Location: UG07, Learning Centre
Title: Procedural isomorphism and restricted beta-reduction:
John loves his wife, and so does Peter
Speaker: Bjørn Jespersen
Institution: Czech Academy of Sciences, Dept. Logic; Technical
University of Ostrava, Dept. Computer Science, Czech
Republic
Host: Mihaela Popa
Abstract:
This paper solves, in a logically rigorous manner, a problem discussed
in a 2004 paper by Stephen Neale that was originally put forward as a
challenge to Chomsky's program.
The example is this. John loves his wife, and so does Peter. Hence John
and Peter share a property. But which one? (1) Loving John's wife:
then John and Peter love the same woman. (2) Loving one's own wife:
then, unless they are married to the same woman, John loves one woman
and Peter loves another woman. Since 'John loves his wife' is
ambiguous between attributing (1) or (2) to John, 'So does Peter' is
also ambiguous between attributing (1) or (2) to Peter.
On the so-called strict interpretation, John loves John's wife,
therefore Peter loves John's wife. On the sloppy interpretation, John
loves his own wife, therefore Peter loves his own wife. (Whether
Peter's wife is the same woman as John's wife is semantically and
logically immaterial.) The original problem for Chomsky is that he can
accommodate only the strict interpretation, thus both failing to capture
the ambiguity of the sample sentence and picking the less obvious of the
two readings.
The critical part of the sample sentence is the anaphoric expression
'his', which is ambiguous between 'his own' and 'John's'
in this context. The ambiguity is visited upon the anaphoric expression
'so does': the property predicated of Peter in the second clause is
a function of the property predicated of John in the first clause.
The logical problem is that the respective redexes of the sloppy and the
strict reading reduce to the same contractum, which corresponds to the
strict reading. The unpleasant consequences are that the anaphoric
character of 'his wife' is lost in conversion and that two
properties - loving John's wife and loving one's own wife - are
predicted, wrongly, to be equivalent. This erroneous prediction would
detract from the value of the lambda-calculus as a means of transparent
logical analysis of anaphora.
This paper introduces a restricted form of beta-reduction to block the
unwarranted equivalence. The paper also details how to apply this
restricted rule of beta-conversion to contexts containing anaphora such
as 'his' and 'so does'. The logical contribution of the paper is
a generally valid form of beta-reduction 'by value'. This mechanism
is a declarative variant of the imperative solution proposed by Van
Eijck and Francez (1995). See also Loukanova (2009).
The technical portions of the paper will be presented within the
framework of Tichy's Transparent Intensional Logic (see Duzi et al.
2010). The resulting restriction of beta-reduction is one element - the
other two being alpha-conversion and eta-conversion - of the notion of
procedural isomorphism, which is the notion of hyperintensional
individuation of linguistic meaning we are advocating. The basic idea
is that hyperintensional individuation is procedural individuation.
The dialectic of the talk is that an issue originally bearing on
linguistics is used to make a point about beta-conversion, which in turn
is used to make a point about hyperintensional individuation.
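To see the collapse concretely (an ad hoc illustration in notation of my
own, not the paper's TIL machinery): the sloppy redex
(lambda x. loves(x, wife(x))) john and the strict redex
(lambda x. loves(x, wife(john))) john beta-reduce to one and the same
contractum, so the reduced term no longer records which property was
predicated.
    # Ad hoc demonstration: unrestricted beta-reduction sends the sloppy
    # and strict redexes to the same contractum. Terms are plain strings;
    # substitution is deliberately naive.
    def beta_reduce(var, body, arg):
        return body.replace(var, arg)

    sloppy = ("x", "loves(x, wife(x))")      # lambda x. loves(x, wife(x))
    strict = ("x", "loves(x, wife(john))")   # lambda x. loves(x, wife(john))

    for name, (var, body) in (("sloppy", sloppy), ("strict", strict)):
        print(name, "->", beta_reduce(var, body, "john"))
    # Both lines print loves(john, wife(john)): the anaphoric reading is
    # lost, which is what the restricted rule is designed to prevent.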
--------------------------------
Date and time: Thursday 18th October 2012 at 16:00
Location: UG07, Learning Centre
Title: Compressive Sensing for Cancer Imaging
Speaker: Iain Styles
(http://www.cs.bham.ac.uk/~ibs/)
Institution: University of Birmingham
Abstract:
The reconstruction of a signal from a set of discrete samples is
normally governed by the Nyquist-Shannon sampling theorem. However under
certain conditions, it is possible to reconstruct a signal using a far
lower sampling frequency than the Nyquist-Shannon condition specifies.
The key feature of the signal that can be exploited in order to break
the Nyquist limit is that it must be sparse in some representation, in
which case it can be reconstructed using a number of samples that is of
the order of the number of non-zero components in the sparse
representation. This
allows high-resolution signals (images, in our case) to be reconstructed
from low-resolution samples. By a happy coincidence, many types of
images that are interesting to us can be represented sparsely, and hence
compressive sensing methods can be applied. I will review the concepts
of compressive sensing, and show a simple practical example of how they
can be applied to construct a "single pixel camera" (and why you would
want to do this). I will then describe some of our recent work on
applying the principles of compressive sensing to the reconstruction of
bioluminescence tomography images that are a potentially powerful tool
in preclinical cancer studies. Time permitting, I will also give a
whistle-stop tour of some of the other current topics of interest to the
imaging group.
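To make the idea tangible, here is a self-contained sketch of sparse
recovery by iterative soft-thresholding (ISTA) on the usual
l1-regularised least-squares objective; it is a generic demonstration of
compressive sensing, not the group's tomography code.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k = 200, 60, 5          # signal length, measurements, sparsity
    x_true = np.zeros(n)
    x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = A @ x_true                                 # m << n measurements

    # ISTA on  min_x 0.5*||Ax - y||^2 + lam*||x||_1
    lam, L = 0.01, np.linalg.norm(A, 2) ** 2   # step size from Lipschitz bound
    x = np.zeros(n)
    for _ in range(2000):
        g = x - A.T @ (A @ x - y) / L                           # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    print("recovery error:", np.linalg.norm(x - x_true))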
--------------------------------
Date and time: Thursday 25th October 2012 at 16:00
Location: UG07, Learning Centre
Title: The Logical Axiomatisation of Socio-Economic Principles
for Self-Organising Electronic Institutions
Speaker: Jeremy Pitt
(http://www.iis.ee.ic.ac.uk/~j.pitt/Home.html)
Institution: Imperial College London
Host: Mina Vasalou
Abstract:
Open computing systems, from sensor networks to SmartGrids, face the
same challenge: a set of autonomous, heterogeneous agents needing to
collectivise and distribute resources without a centralised
decision-making authority. This challenge is further complicated in an
economy of scarcity, when there are fewer resources available than are
required in total. We address this challenge through the axiomatisation
in computational logic of Elinor Ostrom's socio-economic principles of
enduring institutions for common-pool resource management and Nicholas
Rescher's canons of distributive justice for resource allocation. We
discuss experimental results with self-organising electronic
institutions showing that Ostrom's principles complemented by Rescher's
canons are necessary and sufficient conditions for both endurance and
fairness. We conclude with some remarks on the implications of these
results for computational sustainability.
Jeremy Pitt is Reader in Intelligent Systems in the Department of
Electrical & Electronic Engineering at Imperial College London, where he
is also Deputy Head of the Intelligent Systems & Networks Group and an
Associate Director of Institute for Security Science and Technology. His
research interests focus on the foundations and applications of
computational logic in multi-agent systems, in particular agent
societies, agent communication languages, and self-organising electronic
institutions. He has been an investigator on more than 30 national and
European research projects and has published more than 150 articles in
journals and conferences. He is a Senior Member of the ACM, a Fellow of
the BCS, and a Fellow of the IET, and is an Associate Editor of ACM
Transactions on Autonomous and Adaptive Systems.
--------------------------------
Date and time: Thursday 1st November 2012 at 16:00
Location: UG07, Learning Centre
Title: The design of image recognition CAPTCHAs: challenges and
solutions
Speaker: Jeff Yan
(http://homepages.cs.ncl.ac.uk/jeff.yan/)
Institution: Newcastle University
Host: Shishir Nagaraja
Abstract:
CAPTCHA has already become a standard security mechanism, but it is
hard to get its design right. Most text CAPTCHAs have been broken, and
there is a growing interest in the research community in exploring an
alternative: image recognition CAPTCHAs (IRCs), which require
interdisciplinary expertise including computer vision and image
processing, HCI, machine learning and security. In this talk, I discuss
the design of IRCs for large-scale real-life applications such as Gmail
and Hotmail, where millions of CAPTCHAs are required on a daily basis.
Some design challenges will be highlighted, e.g. security, usability and
scalability. I will show how representative IRCs were broken by
novel attacks, and define a simple but novel framework for guiding the
design of robust IRCs. Then, I will present Cortcha, a novel design that
relies on Context-based Object Recognition to Tell Computers and Humans
Apart. Cortcha meets all the key design criteria, and arguably
represents the state of the art in IRC design. This is joint work with
Dr Bin Zhu's team at Microsoft Research Asia, Beijing.
Bio: Jeff Yan is a lecturer in the School of Computing Science at
Newcastle University, UK, where he's a founding research director of the
Center for Cybercrime and Computer Security. He started researching
human behavior in security in the '90s, and ever since his work has
focused on human and systems aspects of security. He has a PhD in
computer security from Cambridge University, served on the PC for the
first SOUPS (CMU, 2005), and serves on the editorial boards of
Springer's International Journal of Information Security and IEEE
Transactions on Information Forensics and Security.
--------------------------------
Date and time: Thursday 8th November 2012 at 16:00
Location: UG09, Learning Centre
Title: The role of redundancy in human and robot motion
Speaker: Michael Mistry
(http://michaelmistry.com/)
Institution: University of Birmingham
Abstract:
Why does the human body have so many joints and muscles? There are
considerably more than required for most of our fundamental tasks.
Nikolai Bernstein famously referred to redundancy in our motor system as
the "degrees-of-freedom problem": how can the nervous system cope with
the indeterminant mapping from goals to actions? Neurophysiologists of
his day hypothesized that the brain may constrain certain degrees of
freedom in order to reduce the problem's complexity. Recently, however
redundancy has not been viewed as problematic, but rather as beneficial
for goal achievement. For example, Todorov and Jordan have shown how
redundancy can act as a "noise buffer," pushing the noise inherent in
motion execution into a task-irrelevant space. In this talk, I will
focus on the relationship between task-relevant motion and redundancy
through the framework of operational space control. Rather than treating
redundancy as merely a passive buffer for handling noise or
disturbances, I will discuss how redundancy can be actively controlled
to assist in task achievement, as well as to realize certain
optimization criteria. For example, I will discuss how redundant motion
is useful for generating forces at passive degrees of freedom, as often
demonstrated by gymnasts. I will show how the redundancy and external
forces created by environmental contact can be exploited to reduce
actuation effort. If time permits, I will also introduce our new
European project on whole-body humanoid motion execution while
exploiting external contact.
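For concreteness, the operational space decomposition mentioned above
can be sketched as follows (a generic textbook formulation, not the
speaker's own controllers): task motion comes from the Jacobian
pseudoinverse, and any secondary objective is projected into the task's
null space so that it cannot disturb the task.
    import numpy as np

    def redundant_velocities(J, xdot, q, q_posture, k=1.0):
        # qdot = J+ xdot  +  (I - J+ J) qdot_0
        # The second term lives in the null space of J: it moves the
        # joints (here, towards a preferred posture) without affecting
        # the task-space velocity.
        J_pinv = np.linalg.pinv(J)
        N = np.eye(J.shape[1]) - J_pinv @ J
        return J_pinv @ xdot + N @ (k * (q_posture - q))

    # Planar example: 3 joints, 2-D task, so one redundant degree of freedom.
    J = np.array([[1.0, 0.5, 0.2],
                  [0.0, 1.0, 0.7]])
    print(redundant_velocities(J, np.array([0.1, 0.0]),
                               np.zeros(3), np.array([0.0, 0.5, -0.5])))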
--------------------------------
Date and time: Thursday 15th November 2012 at 16:00
Location: UG07, Learning Centre
Title: The impact of modern communications and media on society
Speaker: Jeff Patmore, Tanya Goldhaber and Anna Mieczakowski
Host: Aaron Sloman
Abstract:
This talk looks at a project undertaken in 2010/2011 examining how the
Internet, mobile communication and social networks have changed how we
communicate and interact with others at work and at home. The research
project compared families in the UK with those in the US, Australia and
China, and discovered some surprising results. The research was covered
in more than 200 articles in the press and media, including BBC
Breakfast: http://www.bbc.co.uk/news/technology-14038864
--------------------------------
Date and time: Thursday 22nd November 2012 at 16:00
Location: UG07, Learning Centre
Title: Dynamic shifts in the transcriptional network regulate
cell fate decisions in normal and malignant cells.
Speaker: Constanze Bonifer
(http://www.birmingham.ac.uk/staff/profiles/cancer/bonifer-constanze.aspx)
Institution: University of Birmingham
Abstract:
One of the great challenges for future biological and medical research
will be to understand in a system-wide fashion how cell fate decisions
are regulated during development. Great progress has been made with
respect to identifying individual components of the cell fate decision
machinery, such as transcription factors, chromatin components and
signalling components. However, while recent genome-wide studies allow
a first glimpse into the complexities of transcription factor-DNA
interactions in specific cell types, we know very little about
hierarchical relationships between different network states or how
metastable states are established and eventually altered. We also do
not know how specific chromatin structures influence the dynamics of
transcription factor accessibility - in other words, how the
ordered interplay of transcription factors and specific chromatin
states eventually leads to the stable expression of lineage-specific
genetic programs. To this end, we use haemopoiesis as a model to
identify the molecular mechanisms and dynamics of cell differentiation
in a system-wide fashion.
In this talk I will present examples of how single transcription
factors regulate dynamic shifts in the transcriptional network, both
in the normal, but also in a malignant context. I will discuss how
such system-wide studies may help us to decipher the genomic
regulatory blueprint for development of a mammalian organ system and
develop targeted therapies for haemopoietic malignancies. Last, but
not least, I will outline opportunities for collaboration between
computer scientists and laboratory scientists.
--------------------------------
Date and time: Thursday 29th November 2012 at 16:00
Location: UG07, Learning Centre
Title: Activity Analysis: Finding Explanations for Sets of
Events
Speaker: Dima Damen
(http://www.cs.bris.ac.uk/~damen/)
Institution: University of Bristol
Host: Ales Leonardis
Abstract:
Automatic activity recognition is the computational process of analysing
visual input and reasoning about detections to understand the performed
events. In all but the simplest scenarios, an activity involves multiple
interleaved events, some related and others independent. The activity in
a car park or at a playground would typically include many events. Given
the possible events and any constraints between the events, analysing
the activity should thus recognise a complete and consistent set of
events; this is referred to as a global explanation of the activity. By
seeking a global explanation that satisfies the activity's constraints,
infeasible interpretations can be avoided, and ambiguous observations
may be resolved. An activity's events and any natural constraints are
defined using a grammar formalism. Attribute Multiset Grammars (AMG)
allow defining hierarchies, as well as attribute rules and constraints.
Parsing the set of detections by the AMG provides a global explanation.
To find the best parse tree given a set of detections, a Bayesian
network models the probability distribution over the space of possible
parse trees. Heuristic and exhaustive search techniques are compared to
find the maximum a posteriori global explanation.
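The flavour of "global explanation" search can be conveyed with a toy
sketch (invented events, scores and constraint; the work itself parses
detections with an AMG and a Bayesian network): enumerate consistent
subsets of candidate events and keep the most probable one.
    from itertools import chain, combinations

    events = {"drop_A": 0.7, "pick_A": 0.6, "pass_by": 0.4}  # detection scores

    def consistent(expl):
        # Invented constraint: an object can only be picked up once dropped.
        return "pick_A" not in expl or "drop_A" in expl

    def posterior(expl):
        p = 1.0
        for e, pe in events.items():
            p *= pe if e in expl else (1.0 - pe)
        return p

    subsets = chain.from_iterable(combinations(events, r)
                                  for r in range(len(events) + 1))
    best = max((set(s) for s in subsets if consistent(set(s))), key=posterior)
    print(best, posterior(best))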
The presentation will discuss two surveillance applications: bicycle
theft detection, and abandoned luggage in hidden areas. When a
surveillance camera overlooks a bicycle rack, people are observed
locking their bicycles onto the racks and picking them up later. The
best global explanation for all detections gathered during the day
resolves local ambiguities from occlusion or clutter. Intensive testing
on 5 full days showed that global analysis achieves higher recognition rates.
The second case study tracks people and any objects they are carrying as
they enter and exit a building entrance. A complete sequence of the
person entering and exiting multiple times is recovered by the global
explanation.
The presentation will conclude with a live demo of our latest scalable
Texture-less Object Detector - Best Poster Paper at BMVC 2012
Damen, Dima and Hogg, David (2012). Explaining Activities as Consistent
Groups of Events - A Bayesian Framework using Attribute Multiset
Grammars. International Journal of Computer Vision (IJCV) vol 98 (1) pp
83-102
Damen, Dima and Hogg, David (2012). Detecting Carried Objects from
Sequences of Walking Pedestrians. IEEE Transactions on Pattern Analysis
and Machine Intelligence (TPAMI) vol 34 (6) pp 1056-1067
Damen, Dima and Bunnun, Pished and Calway, Andrew and Mayol-Cuevas,
Walterio (2012). Real-time Learning and Detection of 3D Texture-less
Objects: A Scalable Approach. British Machine Vision Conference (BMVC)
--------------------------------
Date and time: Thursday 6th December 2012 at 16:00
Location: UG09, Learning Centre
Title: Unikernels: Functional Library Operating Systems for the
Cloud
Speaker: Anil Madhavapeddy
Institution: University of Cambridge
Host: Dan Ghica
Abstract:
Public compute clouds provide a flexible platform to host applications
as a set of appliances, e.g., web servers or databases. Each appliance
usually contains an OS kernel and userspace processes, within which
applications access resources via APIs such as POSIX. The flexible
architecture of the cloud comes at a cost: the addition of another
layer in the already complex software stack. This reduces performance
and increases the size of the trusted computing base.
Our new Mirage operating system proposes a radically different way of
building these appliances. Mirage supports the progressive
specialisation of functional language (OCaml) application source code,
and gradually replaces traditional OS components with type-safe
libraries. This ultimately results in "unikernels": sealed,
fixed-purpose images that run directly on the hypervisor without an
intervening guest OS such as Linux.
Developers no longer need to become sysadmins, expert in the
configuration of all manner of system components, to use cloud
resources. At the same time, they can develop their code using their
usual tools, only making the final push to the cloud once they are
satisfied their code works. As they explicitly link in components that
would normally be provided by the host OS, the resulting unikernels are
also highly compact: facilities that are not used are simply not
included in the resulting unikernel. For example, the self-hosting
Mirage web server image is less than a megabyte in size!
I'll describe the architecture of Mirage in the talk, show some code
examples, and interesting benchmark results that compare the
performance of our unikernels to traditional applications such as
Apache and BIND.
--------------------------------
Date and time: Thursday 13th December 2012 at 16:00
Location: UG09, Learning Centre
Title: From OOD to Moral Subjectivity: What does it take to
build "real" AI?
Speaker: Joanna Bryson
Host: Andrew Howes
Abstract:
You can do quite a lot with an ordinary computer and an object-oriented
language. Why and when do we need goals, drives, competences, plans,
perception, emotions, consciousness, culture, moral agency and moral
patiency? And why when we build all of these things into a machine do
people still say we haven't produced "real" AI?
Believe it or not, this talk will be mostly about real code and software
tools we deliver over our web pages to help people build more humanoid
AI, though I will also mention work by other companies and laboratories
that goes beyond what we've done at Bath, at least in some areas.
Joanna J. Bryson is an academic specialised in two areas: advancing
systems artificial intelligence (AI), and exploiting AI simulations to
understand natural intelligence, including human culture. She holds
degrees in behavioural science, psychology and artificial intelligence
from Chicago (BA), Edinburgh (MSc and MPhil), and MIT (PhD). She joined
the University of Bath in 2002, where she was made a Reader in 2010.
Between 2007-2009 she held the Hans Przibram Fellowship for EvoDevo at
the Konrad Lorenz Institute for Evolution and Cognition Research in
Altenberg, Austria. In 2010 she was a visiting research fellow in the
University of Oxford's Department of Anthropology, and since 2011 she
has been a visiting research fellow at the Mannheimer Zentrum für
Europäische Sozialforschung, but since you are in a Computer Science
department you'll care more that she developed AI for LEGO in 1994-1995
and 1998. At Bath she leads the Intelligent Systems research group, one
of four in the Department of Computer Science. She also heads
Artificial Models of Natural Intelligence, where she and her colleagues
publish in cognitive science, philosophy, anthropology, behavioural
ecology and systems AI.
--------------------------------
Date and time: Thursday 10th January 2013 at 16:00
Location: Mechanical Engineering, B22
Title: Responsible Research and Innovation in ICT: or a Question
of Ethics
Speaker: Marina Jirotka
(https://www.cs.ox.ac.uk/people/marina.jirotka/)
Institution: University of Oxford
(http://www.ox.ac.uk)
Host: Mina Vasalou
Abstract:
The context of publicly funded research has changed over the last decade
with the recognition of a need for government-funded research to
contribute directly to social benefit. This has entailed consideration
at an early stage of the eventual use to which fundamental research may
be put. EPSRC has funded a project on Responsible Research and
Innovation in ICT in order to undertake a base-lining of current opinion
with regard to the current ethical and social implications of research
within its communities, and to advise on the development of a framework
within which such ethical and social implications might be considered.
This talk will outline the objectives of this project, the work to date
and some early stage emerging issues.
Marina Jirotka is Reader in Requirements Engineering in the Computing
Department, University of Oxford, Associate Director of the Oxford
e-Research Centre and Associate Researcher at the Oxford Internet
Institute. She is also Deputy Director of ESRC's UK Strategy for
Digital Social Research. Marina has led a number of research projects
relating to ethical, legal and social issues including: research on the
importance of intellectual property rights in collaborative medical
databases (ESRC Copyright Ownership of Medical Data in Collaborative
Computing Environments); Ethical, Legal and Institutional Responses to
Emerging e-Research Infrastructure, Policies and Practices (ESRC); a
consideration of the economic, social, legal and regulatory issues that
emerge in the next generation of the internet (EPSRC Opportunities and
Challenges in the Digital Economy - an Agenda for the Next-generation
Internet); and an investigation into the emergent practices and
capabilities of social networking systems, exploring how we can develop
understandings of services, exchange and interaction to benefit the UK
economy (EPSRC Innovative Media for the Digital Economy (IMDE)). She is
most recently leading the EPSRC Framework for Responsible Research and
Innovation in ICT project. Marina is a Chartered IT Professional of the
BCS and sits on the ICT Ethics Specialist Group committee and the
Digital Economy Ethics Advisory Panel. She has published widely in
international journals and conferences in e-Science, HCI, CSCW and
Requirements Engineering.
--------------------------------
Date and time: Thursday 14th February 2013 at 16:00
Location: G28, Mech Eng
Title: Self-Enforcing Electronic Voting
Speaker: Feng Hao
(http://homepages.cs.ncl.ac.uk/feng.hao/home.php)
Institution: Newcastle University
(http://www.ncl.ac.uk/computing/)
Host: Shishir Nagaraja
Abstract:
This talk presents a new e-voting design called 'self-enforcing
electronic voting'. A self-enforcing e-voting protocol provides
End-to-End (E2E) verifiability, but in contrast to all other E2E
verifiable voting schemes, it does not require any involvement of
tallying authorities. In other words, the election is self-tallying. We
show how to realize 'self-enforcing e-voting' through a
pre-computation strategy and novel encryption methods. Furthermore, we
show that, by removing tallying authorities, the resultant e-voting
system becomes much simpler and more manageable, has better efficiency,
provides better usability, and incurs lower hardware cost - all of
these are achieved without degrading security. Finally, if time permits,
I will present a live demo of a classroom e-voting prototype which we
have developed at Newcastle University and used in actual classroom
teaching. Our project has recently received an ERC starting grant for
further investigation, and we welcome any form of collaboration from
interested audiences.
--------------------------------
Date and time: Wednesday 20th February 2013 at 14:00
Location: UG06, Learning Centre
Title: Component Analysis for Human Sensing
Speaker: Fernando De la Torre
Institution: Carnegie Mellon University
Host: Ales Leonardis
Abstract:
Enabling computers to understand human behavior has the potential to
revolutionize many areas that benefit society such as clinical
diagnosis, human computer interaction, and social robotics. A critical
element in the design of any behavioral sensing system is to find a good
representation of the data for encoding, segmenting, classifying and
predicting subtle human behavior. In this talk I will propose several
extensions of Component Analysis (CA) techniques (e.g., kernel principal
component analysis, support vector machines, spectral clustering) that
are able to learn spatio-temporal representations or components useful
in many human sensing tasks. In particular, I will show how several
extensions of CA methods outperform state-of-the-art algorithms in
problems such as facial feature detection and tracking, temporal
clustering of human behavior, early detection of activities, non-rigid
matching, visual labeling, and robust classification. The talk will be
adaptive, and I will discuss the topics of major interest to the
audience.
Biography:
Fernando De la Torre received his B.Sc. degree in Telecommunications
(1994), M.Sc. (1996), and Ph. D. (2002) degrees in Electronic
Engineering from La Salle School of Engineering in Ramon Llull
University, Barcelona, Spain. In 2003 he joined the Robotics Institute
at Carnegie Mellon University , and since 2010 he has been a Research
Associate Professor. Dr. De la Torre's research interests include
computer vision and machine learning, in particular face analysis,
optimization and component analysis methods, and their applications to
human sensing. He is Associate Editor at IEEE PAMI and leads the
Component Analysis Laboratory (http://ca.cs.cmu.edu) and the Human
Sensing Laboratory (http://humansensing.cs.cmu.edu).
--------------------------------
Date and time: Thursday 28th February 2013 at 16:00
Location: G28, Mech Eng
Title: A Smörgåsbord of Computational Creativity Topics
Speaker: Simon Colton and Alison Pease
(http://www.doc.ic.ac.uk/~sgc/)
Institution: Imperial College London
(http://www.doc.ic.ac.uk/)
Host: Aaron Sloman
Abstract:
Computational Creativity has recently been (re)defined as being: "The
philosophy, science and engineering of computational systems which, by
taking on particular responsibilities, exhibit behaviours that unbiased
observers would deem to be creative". In our group at Imperial
(ccg.doc.ic.ac.uk), we have studied the notion of software being
autonomously creative from various perspectives, which we will present
in the talk with reference to recent projects we've carried out.
The first perspective is practical, and we will describe the software
we have developed and tested for creating poems, paintings,
mathematical concepts and video games. In particular, we'll cover The
Painting Fool project, where the aim is to produce software which is
one day taken seriously as a creative artist in its own right.
The second perspective is formal, and we will describe our efforts in
bringing much needed formalism to the question of addressing progress
towards autonomous creativity in software. In particular, we will give
some details of the Computational Creativity Theory framework of
descriptive models, the first of which covers the types of creative
acts that software can undertake and the second of which covers the
ways in which those creative acts could have impact.
The third perspective is philosophical, and we will describe our
efforts to take a holistic view of the difficulties encountered in
handing over creative responsibility to software, bringing in concepts
such as the creativity tripod, issues with Turing-style tests in
Computational Creativity, and the latent-heat issue: where giving
software more creative responsibility can lead to a decrease in the
value of its outputs.
The final perspective is social, and we will present our findings from
studies of social creativity in mathematics and interviews with
creative artists, identifying several areas which are relevant to
computational readings of creativity. We focus on three areas: (i)
explanation in mathematical collaboration and our work empirically
testing philosophical theories of explanation; (ii) framing information
that artists give, including qualitative and quantitative methods to
investigate what sorts of things artists say, how they say them, and
what they don't say; and (iii) the role of serendipity in creativity,
using sociological work and examples to identify components of
serendipitous discoveries, and presenting computational analogues for
each component.
Simon Colton (Imperial College):
Reader in Computational Creativity and EPSRC Leadership Fellow
Computational Creativity Group, Department of Computing, Imperial
College, London
www.doc.ic.ac.uk/~sgc
http://ccg.doc.ic.ac.uk/wiki/doku.php?id=simoncolton
Alison Pease
Research Associate working in the Computational Creativity
Group at Imperial College London, and the Theory group at Queen
Mary, University of London.
Also a visiting researcher at the University of Edinburgh.
http://homepages.inf.ed.ac.uk/apease/research/
http://ccg.doc.ic.ac.uk/wiki/doku.php?id=alisonpease
--------------------------------
Date and time: Thursday 7th March 2013 at 16:00
Location: G29, Mech Eng
Title: A New Extensible Framework for Multi-Agent System
Verification
Speaker: Franco Raimondi
(http://www.mdx.ac.uk/aboutus/staffdirectory/Franco_Raimondi.aspx)
Institution: Middlesex University London
(http://www.mdx.ac.uk/)
Host: Mirco Musolesi
Abstract:
Recently, there has been a proliferation of tools and
languages for modeling multi-agent systems (MAS). Verification tools,
correspondingly, have been developed to check properties of these
systems. Most MAS verification tools, however, have their own input
language and often specialize in one verification technology, or only
support checking a specific type of property.
In this talk I present an extensible framework that leverages
mainstream verification tools to successfully reason about various
types of
properties. The Brahms agent modeling language is used to demonstrate
the effectiveness of the approach (Brahms is used to model real
instances of interactions between pilots, air-traffic controllers, and
automated systems such as the NASA autopilot). The framework takes as
input a Brahms model along with a Java implementation of its semantics
and explores all possible behaviors of the model. The model is then
verified using mainstream model checkers, including PRISM, SPIN, and
NuSMV.
(This is work in collaboration with Neha Rungta @NASA Ames and
Richard Stocker @Liverpool, based on a paper accepted at AAMAS 2013)
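By way of a hedged illustration (the framework itself works against a
Java implementation of the Brahms semantics; this toy Python sketch
only shows the exploration step in miniature), all behaviours of a
model can be enumerated by a breadth-first search over a successor
function before the resulting state graph is handed to a model checker:

from collections import deque

def explore(initial, successors):
    # Breadth-first enumeration of the reachable state space.
    seen = {initial}
    transitions = []
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in successors(state):
            transitions.append((state, nxt))
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen, transitions

# Toy two-agent example: each agent toggles between 'idle' and 'busy'.
def successors(state):
    a, b = state
    flip = {"idle": "busy", "busy": "idle"}
    return [(flip[a], b), (a, flip[b])]

states, trans = explore(("idle", "idle"), successors)
print(len(states), "states,", len(trans), "transitions")

The collected states and transitions could then be emitted in the input
syntax of a checker such as SPIN, PRISM, or NuSMV.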
--------------------------------
Date and time: Thursday 14th March 2013 at 16:00
Location: UG09, Learning Centre
Title: Applying computational approaches for the representation
of word meaning
Speaker: Joe Levy
(http://www.roehampton.ac.uk/staff/Joe-Levy/)
Institution: University of Roehampton
(http://www.roehampton.ac.uk/staff/Joe-Levy/)
Host: John Bullinaria
Abstract:
There has been a great deal of research about methods for representing
word or concept meaning, both in linguistic/technological terms and in
psychological/neuroscientific ones. A promising method in computational
linguistics has been to operationalise the intuition that a word's
meaning is a function of the other words it co-occurs with, by counting
those words as they occur in large text corpora. I will present results
from work
with John Bullinaria that demonstrate just how well a simple
co-occurrence method can perform on various evaluation tasks and how it
can improve a model of fMRI activation associated with word and concept
meaning.
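To make the co-occurrence idea concrete (an illustrative sketch only,
not the authors' pipeline or parameter settings), one can count the
words that appear within a small window of each target word and compare
the resulting count vectors by cosine similarity:

from collections import Counter, defaultdict
import math

corpus = "the cat sat on the mat the dog sat on the rug".split()
window = 2  # invented window size, for illustration

# Build a co-occurrence count vector for every word in the corpus.
vectors = defaultdict(Counter)
for i, w in enumerate(corpus):
    for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
        if j != i:
            vectors[w][corpus[j]] += 1

def cosine(u, v):
    # Cosine similarity between two sparse count vectors.
    dot = sum(u[k] * v[k] for k in set(u) & set(v))
    norm = lambda c: math.sqrt(sum(x * x for x in c.values()))
    return dot / (norm(u) * norm(v))

# 'cat' and 'dog' occur in similar contexts, so similarity is high.
print(cosine(vectors["cat"], vectors["dog"]))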
Joe Levy is a cognitive scientist whose research interests include
computational modelling of language phenomena and the cognitive
neuroscience of human social cognition and language. After a degree in
Natural Sciences at the University of Cambridge, he completed a PhD and
postdoctoral research in cognitive science at the University of
Edinburgh. Since then he has worked at Birkbeck, University of London,
and the University of Greenwich, and is currently a Principal Lecturer
in the Department of Psychology at the University of Roehampton.
--------------------------------
Date and time: Thursday 21st March 2013 at 16:00
Location: G29, Mech Eng
Title: Digital Visualisation and Multi-scale Simulation of
Complex Particulate Processes
Speaker: Richard Williams
(http://www.birmingham.ac.uk/staff/profiles/university/richard-williams.aspx)
Institution: University of Birmingham
Host: Hamid Dehghani
Abstract:
There has been a transformation in the use of advanced sensing methods
to recreate fully three-dimensional visualisations of complex materials.
This has enabled the development and validation of digital modelling and
simulation approaches using multi-scale computation platforms. The
seminar illustrates the application of a digital modelling method that
can take account of three-dimensional shape (and inherent physical and
chemical properties) of particulate components, providing a useful tool
in various engineering processes.
For example, this is useful in predicting the best ways of handling
high-, medium- and low-level radioactive waste, which is so critical in
decommissioning and dismantling legacy nuclear installations and
nuclear medical and military hardware. The processes involve making
decisions on where to 'cut' existing plant components and then how to
pack these components into boxes, which are then cemented and kept in
long-term storage as the level of radioactivity declines with time. The
seminar
will illustrate the utility of the method and its ability to take data
at plant scale (m-scale) and then deduce behaviours at sub millimetre
scale in the packed containers. A variety of modelling approaches are
used as a part of this approach including cutting algorithms, geometric
and dynamic (distinct element) force models, and lattice Boltzmann
methods.
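As an illustration only (the speaker's actual force models are not
given here), the normal contact force in a distinct-element model of
the kind just mentioned is often computed from particle overlap, for
instance with a linear spring-dashpot law; the stiffness and damping
values below are invented:

import numpy as np

def normal_contact_force(x1, x2, v1, v2, r1, r2, k=1e4, c=5.0):
    # Linear spring-dashpot normal force on particle 1
    # (zero when the spheres do not touch).
    d = x2 - x1
    dist = np.linalg.norm(d)
    overlap = (r1 + r2) - dist
    if overlap <= 0.0:
        return np.zeros(3)
    n = d / dist                  # unit normal from particle 1 to 2
    rel_vn = np.dot(v1 - v2, n)   # approach speed along the normal
    # Spring pushes the particles apart; dashpot damps the approach.
    return -(k * overlap + c * rel_vn) * n

# Two overlapping 1 mm particles, the first moving towards the second.
f = normal_contact_force(np.zeros(3), np.array([1.8e-3, 0.0, 0.0]),
                         np.array([1e-2, 0.0, 0.0]), np.zeros(3),
                         1e-3, 1e-3)
print(f)  # force on particle 1, directed away from particle 2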
These modelling methods are applicable to other complex particulate
systems, including the simulation of waste and building recycling, the
disintegration of pharmaceutical tablets, and heap leaching and related
mineral separation processes. The talk introduces the basic concepts of
this multi-scale and multi-model approach.
Work is ongoing to combine these tomographic-type measurements with
multi-scale simulation on hybrid CPU-GPU platforms based on the
NVIDIA-GPU Mole-8.5 system (2 petaflops peak performance in single
precision), which can handle real-time simulation of millions of
particles. Current work seeks to exploit the inherent parallelism of
the simulation methods on this architecture in order to scale out the
simulations.
Richard Williams is Professor of Energy and Mineral Resources
Engineering at the University of Birmingham and Visiting Professor at
the Chinese Academy of Sciences (Institute of Process Engineering,
Beijing).
--------------------------------
Date and time: Thursday 2nd May 2013 at 16:00
Location: G29, Mech Eng
Title: Back to school for computing? New lessons for computer
science, lessons for a new computer science
Speaker: Meurig Beynon
(http://www2.warwick.ac.uk/fac/sci/dcs/people/meurig_beynon/)
Institution: University of Warwick
Host: Achim Jung
Abstract:
Recent developments associated with the 'computing at schools' agenda
have highlighted problematic issues for computer science education. In
tackling these issues, the main emphasis has been on developing better
educational resources for imparting the core ideas of computer science
as an established academic subject. New initiatives for schools take
their inspiration from a mantra that reflects the way in which elite
university computer science departments perceive and promote their
discipline. Pupils are bored and disenchanted with the pragmatic
introduction to computing as "information and communication technology
(ICT)"; they should instead be introduced to the challenging fascinating
hard science of computing. Pragmatic aspects of computing (business and
engineering applications, serious games, social media etc) are valuable
as the means by which to engage pupils' interest in real computer
science with its focus on the theory of algorithms as the core
ingredient in the broader framework of computational thinking. Students
must be taught the clear distinction between fundamental abstract
principles and the ephemeral ways in which these are embodied in new
languages and technologies.
This talk advances a complementary proposal: that the problems of
computer science education should be attributed at least as much to the
immature status of computer science as an academic subject as to poor
educational practice. Beyond question, the potential of the
computational thinking paradigm is immense and far from fully explored.
But whether it should be regarded as the single most significant
conceptual framework for computing is quite another matter. The
difficulty of reconciling theoretical and pragmatic approaches in areas
such as software development, database technology and artificial
intelligence belies this assumption. It also helps to explain why - for
many would-be students - traditional computer science fails to connect
with their experience of computing-in-the-wild: as Ben-Ari and his
co-researchers have shown in empirical studies, though an imaginative
initiative such as 'CS Unplugged' may help students to appreciate core
computer science concepts, it does not typically inspire enthusiastic
interest in computational thinking.
An appropriate science of computing is one in which the pragmatic use of
computers - such as is represented in best practice in ICT - is neither
deemed to be subsumed by computational thinking nor regarded as of
subordinate, peripheral and/or ephemeral interest. Classifying a
phenomenon as computational in character is a matter of construal, not a
matter of fact. The activities that enable us to make such a
classification are quite as integral to computing and as intellectually
significant as computational thinking. What is more, the potential
impact of computers and related technologies on practice in relation to
construal - though of its essence pragmatic in nature - is as
far-reaching as that of computational thinking. Giving an account of
computing in which computational thinking and construal are intimately
integrated can bring new life to our teaching of traditional computer
science and lay the foundation for a new and broader science of
computing.