Home Page of Bob Durrant

Contents

· Responsibilities

· Contact Details and Timetable

· Research Interests

· MSc Projects

· Publications

· Teaching Links

 

Responsibilities

I am a teaching fellow in the University of Birmingham School of Computer Science.

In 2008-09 I am teaching modules on Machine Learning, Machine Learning (Extended), Nature Inspired Design (A), and Nature Inspired Design.

I am also the admissions tutor for our MSc Natural Computation and MSc Intelligent Systems Engineering degrees.

 

Back to top

 

Contact Details and Timetable

Robert J. Durrant, Room 134,

School of Computer Science,

University of Birmingham,

Edgbaston,

B15 2TT

UK

 

e: r.j.durrant [at] cs [dot] bham [dot] ac [dot] uk

w: www.cs.bham.ac.uk/~durranrj

t: +44 (0)121 414 3710

 

Personal Timetable

 

Back to top

 

Research Interests

I have a broad interest in learning and vision, both the organic and machine varieties.

I have a particular interest in computational theories of learning.

 

Back to top

MSc Projects

I am happy to supervise projects in the broad areas of Machine Learning and Nature-inspired methods.

Some projects that I would be interested in are:

Convergence of search operators.

For example: Some search operators (e.g. those used in Simulated Annealing and Xin Yao's Fast EP) seem to work well on a wide range of problems. The intuition is that these approaches avoid premature convergence by striking a good balance between exploration and exploitation of the search space. However, Beyer et al. have demonstrated that, as dimensionality increases, the variance of the fitness of individuals in the population must converge to zero; so why do these approaches do well? When do they not do so well, and why?
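
To make the exploration/exploitation trade-off concrete, here is a minimal Python sketch (the parameters and the test function are illustrative, not taken from any of the papers above) contrasting the light-tailed Gaussian mutation of classical EP with the heavy-tailed Cauchy mutation used in Fast EP:

import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    # Simple unimodal test function: f(x) = sum_i x_i^2, minimum 0 at the origin.
    return float(np.sum(x ** 2))

def evolve(mutate, dim=30, iters=2000, step=0.1):
    # Greedy (1+1) search: keep the mutant only if it improves the fitness.
    x = rng.normal(size=dim)
    fx = sphere(x)
    for _ in range(iters):
        y = x + step * mutate(dim)
        fy = sphere(y)
        if fy < fx:
            x, fx = y, fy
    return fx

gauss = lambda d: rng.normal(size=d)            # light tails: local exploitation
cauchy = lambda d: rng.standard_cauchy(size=d)  # heavy tails: occasional long jumps

print("Gaussian mutation, final fitness:", evolve(gauss))
print("Cauchy mutation, final fitness:  ", evolve(cauchy))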

Compressive Learning.

For example: In a recent paper, Calderbank et al. demonstrated that a linear soft-margin SVM can learn from sparse data whose dimensionality has been reduced by random projection, with training and generalization error only slightly worse than that achieved by learning the decision boundary on the unprojected data. This works because, for such sparse data, the random projection approximately preserves the key geometric properties of the data. One could compare the performance of different types of classifier on sparse data, learning on both the unprojected and the projected data, with and without noise.
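
As a rough illustration of the setting (assuming NumPy and scikit-learn are available; the sparse data below is synthetic, and this is only a sketch, not Calderbank et al.'s experimental protocol):

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n, d, k, s = 500, 1000, 100, 10  # samples, ambient dim, projected dim, non-zeros

# Synthetic s-sparse data with a hidden linear concept.
X = np.zeros((n, d))
for i in range(n):
    idx = rng.choice(d, size=s, replace=False)
    X[i, idx] = rng.normal(size=s)
w = rng.normal(size=d)
y = np.where(X @ w >= 0, 1, -1)

# Random projection R: d -> k, entries N(0, 1/k).
R = rng.normal(size=(d, k)) / np.sqrt(k)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
for name, train, test in [("unprojected", Xtr, Xte), ("projected", Xtr @ R, Xte @ R)]:
    clf = LinearSVC(max_iter=10000).fit(train, ytr)
    print(name, "test accuracy:", clf.score(test, yte))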

Modelling of dynamic systems, e.g. road or pedestrian traffic, financial, auction, or betting markets, call centre traffic, etc.

For example: Given a set of constraints on agent working hours, the number of lines, the total number of agents, agent skills (not all of which every agent has), etc., together with time-varying call data (volume, duration), estimate the optimal staffing pattern for the call centre, i.e. the one which minimises overhead, queuing, and call duration.
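
One classical building block for such a project is the Erlang-C formula, which gives the probability that a caller has to queue in an M/M/c system. The sketch below (with made-up arrival and service figures) finds the smallest number of agents meeting a queueing target:

from math import factorial

def erlang_c(agents, traffic):
    # Probability that an arriving call must queue in an M/M/c system
    # offered `traffic` Erlangs (standard Erlang-C formula).
    if agents <= traffic:
        return 1.0  # unstable: the queue grows without bound
    top = traffic ** agents / factorial(agents) * agents / (agents - traffic)
    bottom = sum(traffic ** i / factorial(i) for i in range(agents)) + top
    return top / bottom

calls_per_hour, mean_call_minutes = 120, 4           # made-up figures
traffic = calls_per_hour * mean_call_minutes / 60.0  # offered load in Erlangs

agents = 1
while erlang_c(agents, traffic) > 0.2:  # target: fewer than 20% of callers queue
    agents += 1
print(agents, "agents needed; P(queue) =", round(erlang_c(agents, traffic), 3))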

Learning aesthetics.

For example: In evolutionary art and evolutionary music, a large overhead (especially in the latter) is the human fitness function that determines the aesthetic appeal of candidate solutions. Using reinforcement learning, one could teach a computer to evaluate the fitness of such individuals so that its judgements match the designer's taste, making this a one-off rather than an ongoing cost to the process. Alternatively, a number of simple measures of aesthetic appeal could be devised and an optimal mixture of such functions evolved.
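
A minimal sketch of the second idea (the two "measures" and the designer ratings below are entirely hypothetical): a (1+1) evolution strategy evolves the weights of a mixture of cheap aesthetic measures to match rated examples:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: each rated artwork is summarised by two cheap measures
# (say, symmetry and colour contrast), and the designer has scored it.
measures = rng.random((50, 2))
ratings = 0.7 * measures[:, 0] + 0.3 * measures[:, 1] + 0.05 * rng.normal(size=50)

def error(w):
    # Mean squared disagreement between the mixture and the designer's scores.
    return float(np.mean((measures @ w - ratings) ** 2))

# (1+1) evolution strategy over the mixture weights.
w = rng.random(2)
for _ in range(2000):
    child = w + 0.05 * rng.normal(size=2)
    if error(child) < error(w):
        w = child
print("evolved weights:", np.round(w, 2), "MSE:", round(error(w), 5))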

Evolutionary cooking(!)

For example: Some combinations of individually appealing flavours do not mix well (e.g. bananas and cheese), whereas some unusual-sounding combinations can turn out to be extremely tasty (e.g. carrot and orange). It might be intriguing to see to what extent a GA can devise a simple recipe, for a soup, say. A solution of the forward problem would be required initially, in order to estimate the fitness of a suitably large number of candidate solutions; this could be tackled, for example, using an ANN.
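
A minimal sketch of the proposed pipeline (the "taste model" here is an untrained random network standing in for a learned ANN forward model): a simple GA searches over ingredient-proportion vectors scored by the surrogate:

import numpy as np

rng = np.random.default_rng(0)
n_ingredients = 8

# Stand-in for the learned forward model: a fixed random one-layer network
# mapping ingredient proportions to a predicted tastiness score.
W1 = rng.normal(size=(n_ingredients, 16))
W2 = rng.normal(size=16)

def tastiness(recipe):
    return float(np.tanh(recipe @ W1) @ W2)

def ga(pop_size=40, gens=100, sigma=0.1):
    pop = rng.random((pop_size, n_ingredients))
    for _ in range(gens):
        fitness = np.array([tastiness(r) for r in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]  # truncation selection
        children = parents + sigma * rng.normal(size=parents.shape)
        pop = np.clip(np.vstack([parents, children]), 0, 1)  # proportions in [0, 1]
    return max(pop, key=tastiness)

print("best recipe (proportions):", np.round(ga(), 2))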

Other ideas.

Alternatively, if you have a project idea of your own that you would like to tackle, I am always happy to discuss it with you.

Publications

Refereed Conference Papers:

A. Kaban and R.J. Durrant. Learning with Lq<1 vs L1-norm regularization with exponentially many irrelevant features. Proc. of the 19th European Conference on Machine Learning (ECML08), 15-19 Sept 2008, Antwerp, Belgium. W. Daelemans et al. (eds.): LNAI 5211, pp. 580-596. Springer. pdf slides code

 

A. Kaban and R.J. Durrant. A norm-concentration argument for non-convex regularization. ICML/UAI/COLT Workshop on Sparse Optimization and Variable Selection, 9 July 2008, Helsinki, Finland. pdf slides

 

Journal Paper:

R.J. Durrant and A. Kaban. When Is ‘Nearest Neighbor’ Meaningful: A Converse Theorem and Implications. Journal of Complexity (accepted). pdf

 

Invited Poster Presentation:

Sparsity in the context of learning from high-dimensional data (with A. Kaban). ICARN International Workshop, 26 Sept 2008, Liverpool. pdf

 

Back to top

 

Teaching Links

These are links to the webpages (handouts, exercises, reading, etc.) for the courses I am teaching this year:

 

Machine Learning & Machine Learning (Ext)

 

Nature Inspired Design

 

Back to top

 

 


Last revised: 25/09/08