Research Interests of Peter Tino
Broad Areas of Interest

Artificial Intelligence,
Machine Learning,
Neural Networks,
Evolutionary Computation,
Bioinformatics,
Computational Biology

Intelligent Methods for Pattern Recognition,
Financial Forecasting, Molecular Biology

Knowledge Representation in Neural Networks, Integration of Symbolic and
Connectionist Paradigms, Hybrid Systems

Computational Models of Dynamical Systems,
Evolution of Complexity in Self-Organizing Systems

Fractal and Multifractal Analysis
Current Research Interests
- Theoretical underpinning of existing machine learning methodologies
Machine learning methodologies have been successfully and widely used in many areas of life, science and engineering. It is imperative that for these technologies we can answer fundamental questions such as:
  - What are their representational capabilities?
  - How can the induced knowledge be interpreted in a transparent manner?
  - How does data complexity translate into the difficulty of learning and the complexity of the learnt models?
- Interdisciplinary applications of Machine Learning
As more and more data are collected in many different branches of science, engineering, healthcare and beyond, intelligent processing and understanding of such data collections is a necessary prerequisite for progress within each specialisation. Truly interdisciplinary approaches are the key to success, yet much work remains to be done on how to incorporate intricate domain knowledge into machine learning methods, so that both sides of the coin (specific domain-knowledge approaches and purely data-driven learning approaches) are symbiotically blended into superior methods.
- Learning in the model space
Potentially huge and diverse data collections can be naturally processed and analyzed in a novel way: each data item is first represented through a model constructed to capture the data features deemed crucial for further analysis; learning is then performed in the space of models.
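A minimal sketch of the idea in Python (the autoregressive representation, its order and the clustering step are illustrative assumptions, not a method fixed by the text): each series is summarized by the coefficients of a small AR model fitted to it, and clustering is then carried out on those coefficient vectors rather than on the raw data.

    import numpy as np
    from sklearn.cluster import KMeans

    def ar_coefficients(series, order=3):
        # Represent a whole time series by least-squares AR(order) coefficients.
        X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
        y = series[order:]
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return coeffs

    def simulate_ar1(a, n=500, rng=None):
        # Toy AR(1) series x_t = a * x_{t-1} + noise.
        if rng is None:
            rng = np.random.default_rng(0)
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = a * x[t - 1] + rng.normal(scale=0.1)
        return x

    rng = np.random.default_rng(0)
    # Two groups of series governed by different underlying dynamics.
    bank = [simulate_ar1(0.9, rng=rng) for _ in range(10)] + \
           [simulate_ar1(-0.5, rng=rng) for _ in range(10)]

    # Learning happens in the model space: cluster coefficient vectors, not raw series.
    model_space = np.array([ar_coefficients(s) for s in bank])
    print(KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(model_space))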
- Modelling of brain imaging data across multiple spatial and temporal scales
Construction of principled models capable of capturing, in a unified way, the short, intermediate and long time scales of runs, sessions and training courses, respectively, while remaining sufficiently sensitive to spatial scales ranging from localized voxel collections, through Regions of Interest, to the whole brain.
- Learning with privileged information
In many practical applications, additional information is available during the predictive model construction phase, but not when the model is used in practice. A typical example is galaxy classification: full spectra (carrying a lot of useful information) are available for some galaxies during classifier construction, but not when the model is used to classify new galaxies, for which only simple morphological and/or bulk spectral features are available. I am interested in formulating efficient metric learning approaches to deal with this problem.
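One possible sketch of such a metric learning formulation, under assumptions introduced here purely for illustration: a positive semi-definite matrix M is fitted on training pairs so that distances under M between ordinary feature vectors mimic distances computed in the privileged space; at prediction time only M and the ordinary features are needed.

    import numpy as np

    rng = np.random.default_rng(1)
    n, d, d_priv = 200, 5, 20

    # Training data: ordinary features X and (training-only) privileged features P.
    X = rng.normal(size=(n, d))
    P = X @ rng.normal(size=(d, d_priv)) / np.sqrt(d_priv) + 0.05 * rng.normal(size=(n, d_priv))

    # Sample training pairs; target = squared distance in the privileged space.
    i, j = rng.integers(n, size=2000), rng.integers(n, size=2000)
    U = X[i] - X[j]
    target = ((P[i] - P[j]) ** 2).sum(axis=1)

    # u^T M u is linear in the entries of M, so M can be fitted by least squares.
    F = np.einsum('ka,kb->kab', U, U).reshape(len(U), -1)
    m, *_ = np.linalg.lstsq(F, target, rcond=None)
    M = m.reshape(d, d)

    # Project onto the positive semi-definite cone to obtain a valid metric.
    w, V = np.linalg.eigh((M + M.T) / 2)
    M = V @ np.diag(np.clip(w, 0, None)) @ V.T

    def learned_distance(x, y):
        # Distance usable at test time, when privileged features are unavailable.
        u = x - y
        return float(np.sqrt(u @ M @ u))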
- Analysis of population level complex dynamics
Many analytical descriptions of population level models are conveniently performed under the assumption of infinite populations. While insightful characterisations of different behavioural scenarios can be obtained in this way, if the underlying dynamics is complex, it is not clear how such theoretical results translate into the realistic setting of potentially large, but finite populations. In fact, in the framework of coevolutionary learning it can be shown that the infinite population results may not be directly translatable to the finite population case, even if the finite population is arbitrarily large. I am interested in a systematic study of the conditions on complex population level systems under which similar problems occur.
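A toy illustration of the infinite versus finite population gap, with all modelling choices (a Hawk-Dove style payoff matrix, replicator dynamics for the infinite case, Wright-Fisher resampling for the finite case) made here purely for demonstration:

    import numpy as np

    # Hawk-Dove style payoff matrix: rows = focal strategy, columns = opponent.
    A = np.array([[0.0, 3.0],
                  [1.0, 2.0]])

    def fitness(x):
        # Baseline fitness 1 keeps selection probabilities strictly positive.
        return 1.0 + A @ x

    def replicator_step(x, dt=0.1):
        # Infinite-population, deterministic replicator dynamics.
        f = fitness(x)
        return x + dt * x * (f - x @ f)

    def wright_fisher_step(counts, rng):
        # Finite-population update: fitness-proportional multinomial resampling.
        N = counts.sum()
        x = counts / N
        f = fitness(x)
        return rng.multinomial(N, x * f / (x @ f))

    rng = np.random.default_rng(2)
    x = np.array([0.9, 0.1])       # infinite-population frequencies
    counts = np.array([90, 10])    # finite population of size 100

    for _ in range(200):
        x = replicator_step(x)
        counts = wright_fisher_step(counts, rng)

    print("infinite population :", x)
    print("finite population   :", counts / counts.sum())

The deterministic flow settles at the mixed equilibrium, while the finite population keeps fluctuating around it and can drift away from the deterministic prediction.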
- Adaptive state space models
Typically, in order to be able to recursively process sequential data, parametrised learning models are endowed with some form of feedback mechanism. This turns them into (non-autonomous) dynamical systems. Understanding such systems is a challenge. Many researchers have suggested fixing the dynamical state-transition part and adapting only the readout from the state space. A deeper understanding of this approach, and of the consequences it has for learning capability, remains an open question.
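The fixed-dynamics, trained-readout approach is exemplified by reservoir computation models such as echo state networks. A minimal sketch (the reservoir size, spectral-radius scaling and ridge penalty below are illustrative choices):

    import numpy as np

    rng = np.random.default_rng(3)
    n_res, washout, ridge = 100, 50, 1e-6

    # Fixed, random state-transition part: only the readout will be learned.
    W = rng.normal(size=(n_res, n_res))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1
    w_in = rng.normal(scale=0.5, size=n_res)

    def run_reservoir(u):
        # Drive the fixed dynamics with input sequence u; collect the states.
        x, states = np.zeros(n_res), []
        for u_t in u:
            x = np.tanh(W @ x + w_in * u_t)
            states.append(x.copy())
        return np.array(states)

    # Toy task: one-step-ahead prediction of a noisy sine wave.
    u = np.sin(0.2 * np.arange(1000)) + 0.01 * rng.normal(size=1000)
    S, y = run_reservoir(u)[washout:-1], u[washout + 1:]

    # Only the linear readout is adapted, here by ridge regression.
    w_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ y)
    print("train MSE:", np.mean((S @ w_out - y) ** 2))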
Sample of Past Research Topics
- Recurrent neural networks
  - their ability to represent and learn temporal structures in (mainly symbolic) data
  - analysis of the training process
  - interpretation of induced knowledge using automata, dynamical systems and information theory (a small extraction sketch follows this list)
  - relationship to other adaptive and learning paradigms
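As referenced above, one classical route to automata-based interpretation is to quantize the hidden-state space of a trained network and read off a finite-state machine from the transitions between clusters. A rough sketch, with a small random-weight recurrent map standing in for a trained RNN:

    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(4)
    n_hidden, n_states = 8, 4

    # Stand-in recurrent map over a binary alphabet (a trained RNN in practice).
    W_h = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
    W_s = rng.normal(scale=0.5, size=(2, n_hidden))

    def hidden_states(seq):
        h, out = np.zeros(n_hidden), []
        for s in seq:
            h = np.tanh(W_h @ h + W_s[s])
            out.append(h.copy())
        return np.array(out)

    seq = rng.integers(2, size=2000)
    H = hidden_states(seq)

    # Quantize the state space; clusters become states of the extracted automaton.
    labels = KMeans(n_clusters=n_states, n_init=10, random_state=0).fit_predict(H)

    # Transition table: (state, symbol) -> most frequent successor state.
    table = {}
    for t in range(len(seq) - 1):
        table.setdefault((labels[t], seq[t + 1]), []).append(labels[t + 1])
    automaton = {k: max(set(v), key=v.count) for k, v in table.items()}
    print(automaton)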
- Geometric representations of symbolic sequences
  - mapping symbolic structures into vector metric spaces
  - multifractal properties of representations based on iterated function systems (see the sketch after this list)
  - relationship to recurrent neural networks
  - constructing Markov-like predictive models on fractal representations of symbolic streams
  - applications in Bioinformatics and language modeling
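The sketch referenced above: the classical chaos game representation maps DNA sequences into the unit square via an iterated function system (the random sequence here is purely illustrative):

    import numpy as np

    # Each symbol acts as an affine contraction toward a corner of the unit square.
    corners = {'A': np.array([0.0, 0.0]), 'C': np.array([0.0, 1.0]),
               'G': np.array([1.0, 1.0]), 'T': np.array([1.0, 0.0])}

    def chaos_game(sequence, k=0.5):
        # Map a symbolic sequence to points in [0,1]^2 via the iterated function system.
        x, points = np.array([0.5, 0.5]), []
        for s in sequence:
            x = x + k * (corners[s] - x)   # contract halfway toward the symbol's corner
            points.append(x.copy())
        return np.array(points)

    rng = np.random.default_rng(5)
    seq = rng.choice(list('ACGT'), size=5000)
    P = chaos_game(seq)

    # Points whose recent suffixes agree land close together, which is what makes
    # Markov-like predictive models on such fractal representations natural.
    print(P[:5])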
- Time series modeling and prediction through quantization
  - quantization techniques
  - symbolic modeling and prediction tools: Markov models, hidden Markov models, variable memory length Markov models, fractal prediction machines, epsilon machines, formal grammars, ... (a small sketch follows this list)
  - applications in finance
  - comparison with non-symbolic methods
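The sketch referenced above, with all choices (equal-frequency binning, alphabet size, first-order Markov model) made here for illustration only:

    import numpy as np

    rng = np.random.default_rng(6)
    series = np.cumsum(rng.normal(size=2000))          # toy real-valued series
    returns = np.diff(series)

    # Quantize returns into a small symbolic alphabet via quantile bins.
    n_symbols = 4
    edges = np.quantile(returns, np.linspace(0, 1, n_symbols + 1)[1:-1])
    symbols = np.digitize(returns, edges)              # values in {0, ..., 3}

    # Fit a first-order Markov model: counts of symbol-to-symbol transitions.
    counts = np.ones((n_symbols, n_symbols))           # Laplace smoothing
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    P = counts / counts.sum(axis=1, keepdims=True)

    # Predict the most likely next symbol from the last observed one.
    print("P(next | last):", P[symbols[-1]])
    print("predicted next symbol:", P[symbols[-1]].argmax())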
- Modeling and prediction of financial time series
  - modeling hidden dynamics in returns of financial derivatives (DJIA, DAX, FTSE)
  - state space models of conditional mean and variance (a small sketch follows this list)
  - extracting abstract rules from financial time series quantized into symbolic streams
  - building automatic trading strategies for option markets
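The sketch referenced above: a GARCH(1,1)-type recursion is one standard instance of modelling conditional mean and variance; the parameters below are illustrative and not fitted to any market:

    import numpy as np

    rng = np.random.default_rng(7)
    T = 1000

    # GARCH(1,1)-style recursion: sigma2_t = omega + alpha * r_{t-1}^2 + beta * sigma2_{t-1}
    omega, alpha, beta = 1e-5, 0.08, 0.9    # illustrative parameters (alpha + beta < 1)
    mu = 0.0                                # constant conditional mean for simplicity

    r = np.zeros(T)
    sigma2 = np.full(T, omega / (1 - alpha - beta))   # start at the stationary variance
    for t in range(1, T):
        sigma2[t] = omega + alpha * (r[t - 1] - mu) ** 2 + beta * sigma2[t - 1]
        r[t] = mu + np.sqrt(sigma2[t]) * rng.normal()

    # Volatility clustering: large squared returns tend to follow large ones.
    print("mean conditional volatility:", np.sqrt(sigma2).mean())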
- Hierarchical visualization of high-dimensional data
  - principled hierarchical visualization, where each visualization plot is a (local) probabilistic model of the data (a toy sketch follows this list)
  - magnification factors and directional curvatures as a means for understanding properties of projection manifolds
  - applications in bioinformatics and document mining
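The toy sketch referenced above. The actual line of work builds hierarchies of latent variable models (e.g. GTM); as a rough stand-in, a Gaussian mixture serves here as the top-level probabilistic model and each mixture component gets its own local 2-D plot, with PCA swapped in for brevity:

    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(8)
    # Toy high-dimensional data with two latent groups.
    X = np.vstack([rng.normal(0, 1, size=(150, 10)),
                   rng.normal(3, 1, size=(150, 10))])

    # Top level: a probabilistic model (Gaussian mixture) partitions the data softly.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
    resp = gmm.predict_proba(X)                 # responsibilities of each local model

    # Second level: one local 2-D projection per mixture component.
    for k in range(gmm.n_components):
        members = X[resp[:, k] > 0.5]           # points this local model is responsible for
        coords = PCA(n_components=2).fit_transform(members)
        print(f"child plot {k}: {len(members)} points, first point at {coords[0]}")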