Irfan Subakti | 司馬伊凡
2005/2006(2) - CI2421 Sistem Inteligensia | Intelligent System | 智慧系統
References
- Tom M. Mitchell, Machine Learning, International Edition, McGraw-Hill, Singapore, 1997.
- Mitsuo Gen, Runwei Cheng, Genetic Algorithms and Engineering Design, John Wiley & Sons, Inc., New York, USA, 1997.
- Richard O. Duda, Peter E. Hart, David G. Stork, Pattern Classification, Second Edition, John Wiley & Sons, Inc., USA, 2001.
- Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, Second Edition, Prentice Hall - Pearson Education, Inc., New Jersey, USA, 2003.
- IEEE Transactions on Neural Networks, The Institute of Electrical and Electronics Engineers, Inc.
- IEEE Transactions on Pattern Analysis and Machine Intelligence, The Institute of Electrical and Electronics Engineers, Inc.
Module
Source
- Introduction
- Knowledge-based Systems
- Ripple Down Rules (RDR)
- Single Classification RDR
- Multiple Classification RDR
- RDR Application: EMMA (An E-Mail Management Assistant)
- Fuzzy Concepts in Agent Technology: Speedy Agent Car
- Fuzzy Concepts in Expert Systems
- GA-Fuzzy Application in RDBMS
- SA, GA and GSA in Fuzzy Systems
- GSA-Fuzzy Application in RDBMS
- Multiple Null Values in GSA-Fuzzy applied in RDBMS
- A Variable-Centered Intelligent Rules System (VCIRS)
- Proceedings of ICTS 2005, 11th August 2005
Course overview
- Credit
- 4 Credits (50 minutes x 4 = 200 minutes)
- Prerequisites
- None
- Goals
- Students are able to understand machine learning: computer programs that can improve their performance through experience, i.e., a training or learning set
- Students will gain theoretical knowledge of several concepts: inductive bias, the Probably Approximately Correct (PAC) and Mistake-bound learning frameworks, the Minimum Description Length principle, and Occam's Razor.
- Students will study practical applications, i.e., learning algorithms such as: Decision Tree learning, Neural Network learning, Statistical learning methods, Genetic Algorithms, Bayesian learning methods, Explanation-based learning, and Reinforcement learning.
- Contents
- Introduction to machine learning
- Well-posed learning problems, designing a learning system: choosing the training experience, choosing the target function, choosing a representation for the target function, choosing a function approximation algorithm, final design; perspectives and issues in machine learning
- Concept learning
- Concept learning task: notation and the inductive learning hypothesis, concept learning as search through a space of hypotheses, version spaces, inductive bias.
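To illustrate concept learning as search, below is a minimal sketch of Mitchell's Find-S algorithm, which maintains the most specific hypothesis consistent with the positive examples ('?' matches any value). The EnjoySport-style dataset is illustrative, not taken from the course materials.

```python
# Find-S: a minimal sketch of the most-specific-hypothesis search.
# A hypothesis is a tuple of attribute values; '?' matches anything.

def find_s(examples):
    """Return the most specific hypothesis consistent with the positive examples."""
    positives = [x for x, label in examples if label]
    h = list(positives[0])          # start with the first positive example
    for x in positives[1:]:
        for i, value in enumerate(x):
            if h[i] != value:
                h[i] = '?'          # minimally generalize the differing attribute
    return tuple(h)

# Illustrative EnjoySport-style examples: (Sky, AirTemp, Humidity, Wind)
data = [
    (('Sunny', 'Warm', 'Normal', 'Strong'), True),
    (('Sunny', 'Warm', 'High',   'Strong'), True),
    (('Rainy', 'Cold', 'High',   'Strong'), False),  # negatives are ignored by Find-S
]
print(find_s(data))  # ('Sunny', 'Warm', '?', 'Strong')
```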
- Decision Tree learning
- Decision tree representation, the basic decision tree learning algorithm, hypothesis space search in decision tree learning, inductive bias in decision tree learning, issues in decision tree learning: overfitting, incorporating continuous-valued attributes, handling training examples with missing attribute values
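The basic decision tree algorithm selects attributes by information gain, the expected reduction in entropy after a split. A minimal sketch of the computation; the toy dataset and attribute layout are illustrative, not from the course.

```python
# Entropy and information gain, the attribute-selection measure in ID3-style learning.
from math import log2
from collections import Counter

def entropy(labels):
    """Entropy of a label collection: -sum p_i * log2(p_i)."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr_index):
    """Expected entropy reduction from splitting on the given attribute."""
    n = len(labels)
    by_value = {}
    for row, label in zip(rows, labels):
        by_value.setdefault(row[attr_index], []).append(label)
    remainder = sum(len(subset) / n * entropy(subset) for subset in by_value.values())
    return entropy(labels) - remainder

# Four illustrative examples over (Outlook, Windy) with a yes/no label
rows = [('Sunny', False), ('Sunny', True), ('Rain', False), ('Rain', True)]
labels = ['no', 'no', 'yes', 'no']
print(information_gain(rows, labels, 0))  # gain from splitting on Outlook, ~0.311
```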
- Artificial Neural Networks
- Neural network representations, perceptrons, multilayer networks and the Backpropagation algorithm, problems with: convergence and local minima, hidden layer representations, generalization, overfitting, and stopping criterion
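The perceptron training rule, w_i <- w_i + eta * (t - o) * x_i, is the simplest of these update rules. A sketch that learns the linearly separable AND function; the learning rate and epoch count are arbitrary illustrative choices.

```python
# Perceptron training rule on the AND function (linearly separable, so it converges).

def train_perceptron(samples, eta=0.1, epochs=20):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            o = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0   # threshold unit
            w[0] += eta * (target - o) * x[0]                   # w <- w + eta(t - o)x
            w[1] += eta * (target - o) * x[1]
            b += eta * (target - o)
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

print([predict(x) for x, _ in samples])  # [0, 0, 0, 1]
```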
- Bayesian Learning
- Bayes theorem and concept learning: Brute-force Bayes concept learning and MAP hypotheses and consistent learners, Maximum likelihood and Least-squared error hypotheses, Minimum description length principle, Bayes optimal classifier, Naive Bayes classifier, Bayesian belief networks, the Expectation Maximization (EM) algorithm
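The Naive Bayes classifier picks v_NB = argmax_v P(v) * prod_i P(a_i | v), with probabilities estimated from training counts. A minimal sketch using maximum-likelihood estimates without smoothing; the toy dataset is illustrative.

```python
# Naive Bayes: choose the label maximizing P(v) * prod_i P(a_i | v).
from collections import Counter

def naive_bayes(train, query):
    """train: list of (attribute_tuple, label); query: attribute_tuple."""
    labels = [y for _, y in train]
    priors = Counter(labels)
    n = len(train)
    best_label, best_score = None, -1.0
    for v, count in priors.items():
        score = count / n                       # P(v)
        for i, a in enumerate(query):
            match = sum(1 for x, y in train if y == v and x[i] == a)
            score *= match / count              # P(a_i | v), no smoothing
        if score > best_score:
            best_label, best_score = v, score
    return best_label

train = [
    (('Sunny', 'Hot'),  'no'),
    (('Sunny', 'Mild'), 'no'),
    (('Rain',  'Mild'), 'yes'),
    (('Rain',  'Hot'),  'yes'),
    (('Rain',  'Hot'),  'yes'),
]
print(naive_bayes(train, ('Rain', 'Hot')))  # 'yes'
```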
- Genetic Algorithm
- Representing hypotheses, genetic operators, fitness function and selection, hypothesis space search
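The listed pieces fit together as in this minimal generational GA maximizing the ones-count of a bit string: tournament selection, single-point crossover, and bit-flip mutation. All parameters (population size, rates, seed) are arbitrary illustrative choices.

```python
# A minimal generational GA on the "OneMax" fitness function.
import random

random.seed(0)  # fixed seed so the run is reproducible

def fitness(ind):
    return sum(ind)                       # count of 1-bits

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    point = random.randint(1, len(a) - 1)  # single-point crossover
    return a[:point] + b[point:]

def mutate(ind, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

def evolve(length=20, pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(pop_size)]
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # best fitness found; near the optimum of 20
```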
- Reinforcement Learning
- Q learning: the Q function, an algorithm for learning Q, convergence; nondeterministic rewards and actions, temporal difference learning
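A minimal sketch of tabular Q learning on a hypothetical 4-state corridor, using the update Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)). The environment, seed, and parameters are all illustrative assumptions.

```python
# Tabular Q learning on a 4-state corridor: states 0..3, reward 1 on reaching state 3.
import random

random.seed(1)
GOAL, ACTIONS = 3, (-1, +1)               # actions: move left or right
Q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}
alpha, gamma = 0.5, 0.9

for _ in range(200):                      # training episodes
    s = 0
    while s != GOAL:
        a = random.choice(ACTIONS)        # pure exploration, for simplicity
        s2 = min(max(s + a, 0), GOAL)     # deterministic, wall-clipped transition
        r = 1.0 if s2 == GOAL else 0.0
        # temporal difference update toward r + gamma * max_a' Q(s', a')
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# The greedy policy should move right (+1) in every non-goal state:
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)]
print(policy)  # [1, 1, 1]
```

With deterministic transitions the learned values approach gamma^d, where d is the distance to the goal, which is why the greedy policy points right everywhere.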