Project Topics for 2016/7 Students
Video introduction (MP4)
I offer (final year and MSc) projects across the spectrum of Natural Computation. This area includes Simulated Evolution (genetic algorithms, genetic programming, evolutionary computation), Artificial Life, Swarm Intelligence, and heuristic approaches to Optimization. In these research topics, the idea is to develop novel ideas and algorithms for problem solving, and to test them out experimentally or by mathematical analysis. The research may also be informed by and directed toward "real-world" applications, or its purpose may be to further scientific theories concerning Nature and Evolution.
The projects would obviously suit students who have taken Nature Inspired Search and Optimisation or a similar course.
Keywords (methods): multiobjective optimization, natural computation, computational intelligence, human problem solving, reinforcement learning, design of experiments, dynamic programming, online problems
Keywords (applications): software evolution, drug discovery, taste optimization, "citizen science" (e.g. like SETI@home)
Keywords (other): school teaching of computer science, philosophy of artificial intelligence, existential threat of artificial intelligence
For projects, students can choose to (1) work on topics my group of PhD students and collaborators are currently working on, or (2) develop a new idea. For most of my projects, there will be a strong experimental component where you will need to implement ideas and test them out. At the same time, however, it is good to develop theoretical models, or to think about how the ideas could be applied.
In the following, some project topics are new and some are recycled. Here's a key to that.
Key: GREEN=new topic | BLACK=unused topic from last year | GREY=topic may need updating
1. Topics being pursued by my current PhD students and collaborators
- Improving Game Playability. Top Trumps is a popular game amongst children, based on themed packs of cards. The game proceeds in rounds called tricks, involving the comparison of numerical values printed on the cards. In this project, you will use computer science and psychological theories about what makes a game playable in order to improve Top Trumps cards, in particular considering how the distribution of numerical values can be changed or selected in order to make the game more exciting, fair, reflective of skill, fun for players of different strengths, and unpredictable.
Choose this project if you are interested in game design, testing with human subjects, and/or programming of automatic (AI) game players. All these aspects and more can be considered in this project.
Togelius, J., & Schmidhuber, J. (2008). An experiment in automatic game design. In IEEE Computational Intelligence and Games 2008.
Cardona, A. B., Hansen, A. W., Togelius, J., & Gustafsson, M. (2014). Open trumps, a data game. In Proceedings of the 9th International Conference on the Foundations of Digital Games. Society for the Advancement of the Science of Digital Games.
Hunicke, R., & Chapman, V. (2004). AI for Dynamic Difficulty Adjustment in Games. Challenges in Game Artificial Intelligence, Technical Report WS-04-04, AAAI Workshop.
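To give a flavour of the experimental side, here is a minimal sketch of the kind of simulation such a project might start from. Everything here is invented for illustration (the simplified rules, the random card statistics, and the function names), not an existing implementation: two automatic players play a stripped-down Top Trumps, and we measure how often each seat wins as one crude proxy for fairness.

```python
import random

def make_deck(n_cards, n_stats, rng):
    """A deck of hypothetical cards: each card is a tuple of stat values."""
    return [tuple(rng.randint(1, 100) for _ in range(n_stats)) for _ in range(n_cards)]

def play_game(deck, rng):
    """Simplified Top Trumps: deal the deck to two players; the leader
    names their current card's best stat, and the higher value wins the
    trick. Returns the winner (0 or 1) by tricks won; ties split randomly."""
    cards = deck[:]
    rng.shuffle(cards)
    hands = [cards[::2], cards[1::2]]
    leader, tricks = 0, [0, 0]
    for a, b in zip(hands[0], hands[1]):
        # The leader picks the stat that looks best on their own card.
        stat = max(range(len(a)), key=lambda s: (a if leader == 0 else b)[s])
        if a[stat] == b[stat]:
            winner = rng.randrange(2)
        else:
            winner = 0 if a[stat] > b[stat] else 1
        tricks[winner] += 1
        leader = winner
    if tricks[0] == tricks[1]:
        return rng.randrange(2)
    return 0 if tricks[0] > tricks[1] else 1

def seat_bias(deck, games=200, seed=0):
    """Fraction of games won by player 1: near 0.5 means neither seat
    has a built-in advantage under this deck's value distribution."""
    rng = random.Random(seed)
    wins = sum(play_game(deck, rng) for _ in range(games))
    return wins / games
```

One could then evolve the deck's stat distribution to push measures like `seat_bias` toward a target, which is exactly the kind of loop the project would explore more carefully.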
- The Emergence of Cooperation. In recent work, we have been looking at how cooperation can emerge from a group of agents who are initially self-interested. This is not a unique direction, but builds on a large body of research. Our recent findings have been that complex networks that also fluctuate in time can provide the necessary noise or perturbation to encourage agents to flip into cooperation. We have some ongoing experiments where teams of agents can potentially form alliances, or they can choose to act selfishly. Maybe you have some ideas for how to encourage more alliance formation without changing the essential self-regard of the agents? An interest in learning about game theory, agent-based simulation and cooperation would make you a suitable candidate for this project area.
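As a minimal sketch of the kind of agent-based simulation involved (the ring network, payoff values and update rule are standard textbook choices, not our specific model): self-interested agents play the Prisoner's Dilemma with their neighbours and imitate whichever neighbour scored best.

```python
import random

# One-shot Prisoner's Dilemma payoffs: (my_move, their_move) -> my payoff.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def play_round(strategies, neighbours):
    """Each agent plays every neighbour once; return total payoff per agent."""
    scores = [0] * len(strategies)
    for i, nbrs in enumerate(neighbours):
        for j in nbrs:
            scores[i] += PAYOFF[(strategies[i], strategies[j])]
    return scores

def imitate_best_neighbour(strategies, neighbours, scores):
    """Synchronous update: each agent copies the strategy of its
    highest-scoring neighbour (keeping its own if it scored best)."""
    new = []
    for i, nbrs in enumerate(neighbours):
        best = max([i] + list(nbrs), key=lambda k: scores[k])
        new.append(strategies[best])
    return new

def simulate(n=20, steps=30, seed=0):
    rng = random.Random(seed)
    # Static ring network: each agent interacts with its two neighbours.
    neighbours = [((i - 1) % n, (i + 1) % n) for i in range(n)]
    strategies = [rng.choice("CD") for _ in range(n)]
    for _ in range(steps):
        scores = play_round(strategies, neighbours)
        strategies = imitate_best_neighbour(strategies, neighbours, scores)
    return strategies
```

The research questions start where this sketch stops: replacing the static ring with a complex, time-fluctuating network and observing when the population flips into cooperation.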
- Sequential Experimentation and Decision-Making with Evolutionary Algorithms. When Red Bull Racing is developing the aerodynamics of their F1 racing car, they do it using a physical model of the car in a wind tunnel: a simulator is not good enough. When blending a brand of coffee to taste good, we need to do experiments with real coffee and real tasters. When testing the efficacy of a drug, laboratory experiments are carried out with real biological samples or cultures. Increasingly, these experiments are automated and run in a feedback loop, with the results of experiments informing the next experiments to do. One promising method of automating the choice of experiments is to use Evolutionary Algorithms. It can be argued that evolutionary algorithms are suitable because they require less knowledge of the problem to work well. However, there are still a number of open research issues around how to adapt evolutionary algorithms to suit experimental settings (as opposed to their usual use in optimizing mathematical functions or simulations). Can you help?
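The feedback loop described above can be sketched very simply. In this hypothetical example, `run_experiment` stands in for a slow, noisy physical experiment (here faked with an invented quality function plus Gaussian noise), and a (1+1)-evolutionary algorithm decides which experiment to run next under a fixed budget:

```python
import random

def run_experiment(x, rng):
    """Stand-in for an expensive physical experiment: a noisy measurement
    of an unknown quality function (invented here for illustration)."""
    true_quality = -sum((xi - 0.7) ** 2 for xi in x)
    return true_quality + rng.gauss(0, 0.05)  # measurement noise

def one_plus_one_ea(dim=3, budget=60, step=0.1, seed=1):
    """A (1+1)-EA where every fitness evaluation is one real experiment,
    so the total number of experiments is capped by the budget."""
    rng = random.Random(seed)
    parent = [rng.random() for _ in range(dim)]
    parent_fit = run_experiment(parent, rng)
    for _ in range(budget - 1):
        # Mutate the current best design and test it in the next experiment.
        child = [xi + rng.gauss(0, step) for xi in parent]
        child_fit = run_experiment(child, rng)
        if child_fit >= parent_fit:  # keep the better (noisy!) measurement
            parent, parent_fit = child, child_fit
    return parent, parent_fit
```

Note how the noise interacts with selection: the algorithm may keep a lucky measurement of a poor design, which is one of the open issues (re-evaluation, averaging, budget allocation) a project here would address.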
- Multiple-Objective Machine Learning. My group and collaborators have developed a number of algorithms for machine learning that work by optimizing several objective (or cost, or loss) functions simultaneously. These techniques enable us to deliver classifiers, feature sets, or hyperparameters that can achieve a balance between several conflicting aims, such as accuracy versus parsimony, or true positive rate vs true negative rate. We have a number of ongoing projects in this area. If you have an interest in machine learning and optimization, this could be a great opportunity for you.
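The central notion behind all of these algorithms is Pareto dominance: one solution beats another only if it is no worse on every objective and strictly better on at least one. A minimal sketch (the example "models" and their error/size numbers are invented for illustration):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the nondominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical trade-off between model error and model size (both minimised):
models = [(0.10, 50), (0.12, 20), (0.30, 5), (0.12, 60), (0.09, 80)]
front = pareto_front(models)
# (0.12, 60) is dominated by (0.10, 50): worse error AND larger.
```

Instead of a single "best" classifier, the algorithms deliver the whole front, leaving the final accuracy-versus-parsimony choice to the user.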
- The Evolution of Evolvability. A very seductive idea in evolutionary theory is that Evolution may be able to evolve itself! In other words, as populations of organisms evolve, they also somehow get better at evolving. No one really understands whether this happens or how it does, but many people, including us, are trying to collect evidence of it. In our recent research, we have shown that there are at least some conditions under which it makes sense (teleologically) for evolution to "prefer" (or selectively favour) evolutionary lineages that are evolving faster even if they are not presently "fitter". But how does evolution "know" that is the right thing to do? We don't know, and we don't even know how it could estimate the best lineage if it did know ... but we have some good ideas. This project would interest anyone who wants to see how Computer Science can inform other sciences.
2. Topics that are New Ideas
- Personalization in the Design of Products and Services
Capitalism and the markets suggest that we all like to have a lot of choice - choice when buying a book, considering a school, visiting a hospital. In fact, the retailer, Amazon, recently said "No-one ever wanted less choice". But when companies are designing new products or services, sometimes providing choice is a bit of an add-on, and is not based on a thorough consideration of the customers' varieties of needs and desires. Here we will investigate how to design search (or optimization) algorithms that can simultaneously search for many product or service variants, and account for what we know about the market.
- The HUMIES: The Oscars of Computational Intelligence
The Humies is an annual competition asking contestants to submit evidence of a result, produced by evolutionary computation, that is comparable to the level of performance of a human expert. I would like to hear from any students who have good ideas (or think they do) for an entry in this contest. Note: if doing such a project were not, in itself, reward enough, the Humies also awards cash prizes of up to 5,000 USD.
- Particle filters, sequential Monte Carlo, and evolution
Sequential Monte Carlo algorithms are used to estimate the values of hidden variables in a dynamical system from sequential sampling of other measurable (but noisy) variables. A linear approach to this is the Kalman filter, a very successful method in many computational, statistical and engineering applications. The more general method (for nonlinear systems) is known as a particle filter. In a particle filter, a population of estimates (the particles) undergoes several iterations of selection and "renormalization", a process which is abstractly very similar to evolution and evolutionary algorithms (EAs), especially estimation-of-distribution algorithms (EDAs). In this project, you will investigate whether existing techniques developed to improve EAs and EDAs, such as niching, variable step sizes, and exploiting "flow", can be used to improve particle filters as well.
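The evolutionary analogy is easiest to see in code. Below is a minimal bootstrap particle filter for a 1-D random walk observed in Gaussian noise (the model and all parameter values are a textbook toy example, chosen for illustration); the comments mark where mutation, fitness and selection appear:

```python
import math
import random

def particle_filter(observations, n=500, proc_sd=1.0, obs_sd=1.0, seed=0):
    """Bootstrap particle filter for the toy model
        x_t = x_{t-1} + N(0, proc_sd),   y_t = x_t + N(0, obs_sd).
    Returns the estimated hidden state after each observation."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # "Mutation": propagate each particle through the process model.
        particles = [x + rng.gauss(0.0, proc_sd) for x in particles]
        # "Fitness": likelihood of the observation under each particle.
        weights = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in particles]
        if sum(weights) == 0.0:          # guard against numerical underflow
            weights = [1.0] * n
        # "Selection": resample particles in proportion to their weight.
        particles = rng.choices(particles, weights=weights, k=n)
        estimates.append(sum(particles) / n)
    return estimates
```

The resampling line is where ideas from EAs could plug in: niching to fight particle degeneracy, or adaptive `proc_sd` as a variable step size.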
- Reinforcement learning in humans and machines
[Note: this topic was used last year]
Who learns quicker on a reinforcement learning task, humans (students) or current sophisticated algorithms? And can the methods used by one group (machines or humans) help the other learn better?
In this project, you will use some simple board games that can be played on differently-sized boards. Players (machine or human) will only be told what moves are legal, but not the purpose or rules of the game. Each game gives variable rewards or payoffs. The aim for a player is to collect the most reward in 10 games, inferring how the rewards are given and also generalizing this inference to games of different sizes. Who will win, Birmingham students or the machines?
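To make the machine side concrete, here is a minimal tabular Q-learning sketch; the tiny chain "mystery game", its hidden reward values and all parameters are invented for illustration. Like the human players, the learner is told only which moves are legal (left or right) and must discover where the reward lies:

```python
import random

def q_learning(rewards, episodes=1000, alpha=0.2, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a chain of states 0..len(rewards)-1.
    The reward function is hidden from the player; reaching the last
    state ends an episode. Returns the learned Q-table."""
    rng = random.Random(seed)
    n = len(rewards)
    q = [[0.0, 0.0] for _ in range(n)]  # Q[state][action]; 0=left, 1=right
    for _ in range(episodes):
        s = 0
        while s < n - 1:
            # Epsilon-greedy action choice, breaking ties randomly.
            if rng.random() < eps or q[s][0] == q[s][1]:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = rewards[s2]
            # Standard Q-learning update toward reward plus discounted future.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

After training on a chain whose only reward sits at the far end, the learned Q-values prefer "right" everywhere; the interesting project question is how many games a human needs to reach the same policy.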
- Chess - exploit the opponent by learning their weaknesses
In 2014 Magnus Carlsen, the world chess champion, played a game against Bill Gates on a Norwegian TV show. Knowing Gates was a weak player, Carlsen did not play his usual game but attacked quickly using more risky tactics. He won in 9 moves, prompting Gates (who was a good sport) to exclaim "that quickly?!!".
In this project, you will explore ways a computer could achieve a similar feat - measure a player's apparent skill profile (during play) and seek to exploit this in trying to win quickly. Most chess, poker and other "bots" do not do this, but just play conservatively.
Note: Chess may or may not be involved in the project; several simpler games may be more advisable.