Room: UG33
I am prepared to supervise projects in the areas of neural networks (and machine learning in general), data visualisation, statistical pattern analysis and evolutionary computation. I am open to project ideas suggested by students. I am prepared to devote considerable time to supervising my project students, but in return I expect steady and solid work on the project right from the beginning. Please e-mail me or pop into my office.
Data visualisation
Ever larger amounts of data are becoming available for analysis. However, the principal trends and relationships in the data are often hidden from us by the sheer volume of the data and the potentially incomprehensible nature of individual data items (high-dimensional vectors of measurements, genetic sequences, molecules of potential drug compounds, etc.). Data visualisation is the first step in data mining and intelligent data analysis. It relies on 1-, 2- or 3-dimensional representations of data items that enable our powerful pattern recognition system (human vision) to detect principal trends and interesting relationships in the data. Many methods are currently in use for data visualisation, but there is huge scope for improving the existing techniques and for developing new ones specifically tailored to a particular application area.
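To give a flavour, the sketch below (Python, assuming only NumPy; not a method prescribed by the project) shows the simplest such technique, a linear PCA projection of high-dimensional points onto two dimensions that can then be scatter-plotted and inspected by eye.

import numpy as np

def project_to_2d(X):
    """Return 2-D coordinates of the rows of X along its two leading principal directions."""
    Xc = X - X.mean(axis=0)                  # centre the data
    cov = np.cov(Xc, rowvar=False)           # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenpairs in ascending order
    top2 = eigvecs[:, -2:][:, ::-1]          # two leading principal directions
    return Xc @ top2                         # coordinates for a 2-D scatter plot

# Example: 500 ten-dimensional points reduced to a plottable 2-D layout.
X = np.random.randn(500, 10)
coords = project_to_2d(X)                    # shape (500, 2)

A project would go well beyond this linear baseline, for example towards non-linear or application-specific projections.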
Recurrent neural networks
Recurrent neural networks contain feedback connections between neurons and can therefore deal with data that have a strong temporal structure. Examples include financial time series, DNA sequences, symbolic sequences from a natural language, and trajectories produced by chaotic systems. However, even though such networks can in theory represent a very rich variety of dynamical regimes, it is usually quite difficult and time-consuming to train them beyond simple finite-memory machines/predictors. There are many options for (at least partially) dealing with this problem (improved learning techniques, more advanced architectures, etc.), and we can explore some of these in a project.
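As a deliberately minimal illustration of the feedback idea, the Python/NumPy sketch below runs a simple Elman-style recurrent net forward over a sequence; all weight names and sizes are illustrative, and training (the hard part discussed above) is not shown.

import numpy as np

def rnn_forward(sequence, W_in, W_rec, W_out):
    """Run a one-hidden-layer recurrent net over a sequence of input vectors."""
    h = np.zeros(W_rec.shape[0])             # hidden state acts as memory
    outputs = []
    for x in sequence:
        # the feedback connections feed the previous hidden state back in
        h = np.tanh(W_in @ x + W_rec @ h)
        outputs.append(W_out @ h)
    return outputs

# Example: a length-20 sequence of 3-dimensional inputs, 5 hidden units, scalar output.
rng = np.random.default_rng(0)
W_in, W_rec, W_out = rng.normal(size=(5, 3)), rng.normal(size=(5, 5)), rng.normal(size=(1, 5))
preds = rnn_forward(rng.normal(size=(20, 3)), W_in, W_rec, W_out)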
Recursive neural networks
Recursive neural networks are generalizations of recurrent neural networks capable of processing tree-like structures. Successful applications have been reported in prediction tasks involving pharmaceutical compounds, logo recognition, etc. However, recursive networks are even more difficult to train than recurrent networks, and more work needs to be done both to understand the problem theoretically and to formulate more capable learning algorithms for such networks.
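The sketch below (Python/NumPy, purely illustrative) shows the core mechanism: a shared weight matrix composes the codes of a node's children bottom-up, so an arbitrary binary tree is squeezed into one fixed-size vector that a predictor can then use.

import numpy as np

def encode(node, W, leaf_dim, code_dim):
    """Return a fixed-size code for a (possibly nested) binary tree node."""
    if isinstance(node, np.ndarray):          # leaf: embed its feature vector
        v = np.zeros(code_dim)
        v[:leaf_dim] = node
        return v
    left, right = node                        # internal node: two subtrees
    children = np.concatenate([encode(left, W, leaf_dim, code_dim),
                               encode(right, W, leaf_dim, code_dim)])
    return np.tanh(W @ children)              # shared composition weights

# Example: a small tree over 2-dimensional leaves, encoded as a 4-dimensional vector.
W = np.random.randn(4, 8)
tree = (np.array([1.0, 0.0]), (np.array([0.5, 0.5]), np.array([0.0, 1.0])))
code = encode(tree, W, leaf_dim=2, code_dim=4)

Training such compositions end-to-end is exactly where the difficulties mentioned above arise.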
Probabilistic models for sparse temporal data
Many applications in the area of web data mining suffer from extremely sparse temporal data. Usually we have to deal with relatively short symbolic sequences over very large alphabets. It is both highly non-trivial and challenging to devise strategies that find a good trade-off between the need to reduce the alphabet size and the need for more capable memory models that capture important temporal aspects of the data. We will explore several very different approaches to reducing the alphabet size, as well as possibilities for building efficient finite-memory models on relatively short data segments.
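The toy Python sketch below illustrates the trade-off in its crudest form: raw symbols are coarsened through a grouping function and a short finite-memory (order-2 Markov) model is then fitted to the reduced sequences; the grouping rule shown is a placeholder, and choosing it well is the interesting part of the project.

from collections import defaultdict

def reduce_alphabet(sequence, group_of):
    """Map each raw symbol to a coarser group label."""
    return [group_of(s) for s in sequence]

def fit_markov(sequences, order=2):
    """Count next-symbol frequencies given the previous `order` symbols."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for i in range(order, len(seq)):
            context = tuple(seq[i - order:i])
            counts[context][seq[i]] += 1
    return counts

# Example (illustrative only): group page URLs by their top-level directory.
group_of = lambda url: url.split('/')[1] if '/' in url else url
sessions = [["/news/a", "/news/b", "/sport/x"], ["/sport/x", "/news/a", "/news/b"]]
model = fit_markov([reduce_alphabet(s, group_of) for s in sessions])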
Hierarchical visualization of spatio-temporal data
We have developed a hierarchical visualization system for understanding the spatial layout of potentially high-dimensional static data points. The system is based on a probabilistic tool for non-linear dimensionality reduction in high-dimensional spaces. An interesting generalization is to extend the system to hierarchies in both the spatial and temporal domains. Going down the hierarchy may correspond to an increasingly detailed spatial representation of the data, to an increasingly detailed account of temporal relationships in the data, or to both. Although there has been some work on visualization of temporal data, a general system in such a unified framework has not yet been explored.
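The actual system is based on a probabilistic non-linear projection, but the following toy Python/NumPy fragment conveys the hierarchy idea: one coarse view of all the data at the top, with separate, more detailed views of subgroups one level down; the linear projection and the manual split used here are stand-ins only.

import numpy as np

def pca_2d(X):
    """Project the rows of X onto their two leading principal directions."""
    Xc = X - X.mean(axis=0)
    _, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    return Xc @ eigvecs[:, -2:]

X = np.random.randn(400, 10)
root_view = pca_2d(X)                              # top level: all points in one plot
left, right = X[root_view[:, 0] < 0], X[root_view[:, 0] >= 0]
child_views = [pca_2d(left), pca_2d(right)]        # next level: finer plots per subgroup

Extending such a hierarchy along the temporal axis as well is the open question of this project.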
Evolutionary art
There have been many appealing evolutionary approaches to helping artists create interesting and unorthodox pictures. Many extensions are possible, for example modifying the vocabulary of basic transforms to allow for hierarchies of self-similar, fractal-like objects, defining continuous mappings on such fractal-generating transformations, etc. Other possibilities lie in music composition, e.g. helping a composer to create interesting new tunes or, given an existing melody, to create an appealing counter-melody.
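As a flavour of the mechanics (not of any particular system mentioned above), the Python sketch below evolves images interactively: a picture is an expression tree over pixel coordinates, the user keeps the parent they like best, and the next generation is produced by mutation; the function set and parameters are arbitrary illustrative choices.

import math, random

FUNCS = [("sin", lambda a: math.sin(math.pi * a)),
         ("cos", lambda a: math.cos(math.pi * a)),
         ("avg", None)]                            # binary node: average of two children

def random_tree(depth):
    """Grow a random expression tree over the pixel coordinates x and y."""
    if depth == 0:
        return random.choice(["x", "y"])
    name, _ = random.choice(FUNCS)
    if name == "avg":
        return ("avg", random_tree(depth - 1), random_tree(depth - 1))
    return (name, random_tree(depth - 1))

def evaluate(tree, x, y):
    """Compute the intensity the tree assigns to pixel coordinates (x, y)."""
    if tree == "x": return x
    if tree == "y": return y
    if tree[0] == "avg":
        return 0.5 * (evaluate(tree[1], x, y) + evaluate(tree[2], x, y))
    return dict(FUNCS)[tree[0]](evaluate(tree[1], x, y))

def mutate(tree, depth=2):
    """Replace a random subtree with a freshly grown one."""
    if isinstance(tree, str) or random.random() < 0.3:
        return random_tree(depth)
    return (tree[0],) + tuple(mutate(t, depth) for t in tree[1:])

# One interactive generation: the user would pick the most appealing offspring as the next parent.
parent = random_tree(depth=3)
offspring = [mutate(parent) for _ in range(8)]
intensity = evaluate(parent, 0.2, -0.7)            # value of the parent image at one pixel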