Introduction to AI - Week 3
Learning
Decision Trees
Reinforcement Learning
* Inductive learning involves making uncertain inferences that go beyond our direct experience. [Anderson95]
* Scientific discovery by proof is impossible, unless one knows the "first" primary premises ... We must
obtain these premises by induction. [Aristotle ~330 BC]
_________________________________________________________________________________________________________
Decision Trees
* A decision tree takes as input an object described by a set of properties and outputs a Boolean value
(yes/no decision). Each internal node in the tree corresponds to a test of one of the properties; branches
are labelled with the possible values of the test.
* Aim: Learn goal concept (goal predicate) from examples
* Learning element: Algorithm that builds up the decision tree.
* Performance element: decision procedure given by the tree
_________________________________________________________________________________________________________
Example
Problem to wait for a table at a restaurant. A decision tree decides whether to wait or not in a given
situation.
Attributes:
1. Alternate: alternative restaurant nearby
2. Bar: bar area to wait in
3. Fri/Sat: true on Fridays and Saturdays
4. Hungry: whether we are hungry
5. Patrons: how many people in the restaurant (None, Some, or Full)
6. Price: price range (£, ££, £££)
7. Raining: raining outside
8. Reservation: whether we made a reservation
9. Type: kind of restaurant (French, Italian, Thai, or Burger)
10. WaitEstimate: estimated wait in minutes (<10, 10-30, 30-60, >60)
_________________________________________________________________________________________________________
Original Decision Tree
[dectree-orig.jpg]
_________________________________________________________________________________________________________
Expressiveness of Decision Trees
* Each path through the tree corresponds to an implication such as
FORALL r Patrons(r,Full) & WaitEstimate(r,0-10)
& Hungry(r,N) -> WillWait(r)
Hence a decision tree corresponds to a conjunction of implications.
* Cannot express tests that refer to two different objects, such as:
EXISTS r[2] Nearby(r[2]) & Price(r,p) & Price(r[2],p[2]) & Cheaper(p[2],p)
* Expressiveness is essentially that of propositional logic (no function symbols, no existential quantifier).
* The number of distinct Boolean functions of n attributes is 2^(2^n), since each function must define an
output for each of the 2^n possible inputs (e.g. for n=6 there are about 2 x 10^19 different functions).
* Functions like the parity function (1 if an even number of inputs are 1, 0 otherwise) or the majority
function (1 if more than half of the inputs are 1) result in large decision trees.
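The counting argument and the two hard functions can be sketched directly (the function names here are ours, for illustration):

```python
# Each Boolean function of n attributes is fixed by choosing an output
# for each of the 2^n possible inputs, hence 2^(2^n) distinct functions.
def num_boolean_functions(n):
    return 2 ** (2 ** n)

def parity(bits):    # 1 if an even number of inputs are 1, else 0
    return 1 if sum(bits) % 2 == 0 else 0

def majority(bits):  # 1 if more than half of the inputs are 1
    return 1 if sum(bits) > len(bits) / 2 else 0

print(num_boolean_functions(6))  # 18446744073709551616, about 2 x 10^19
```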
_________________________________________________________________________________________________________
Some Examples
Ex     Attributes                                                      Goal
       Alt  Bar  Fri  Hun  Pat   Price  Rain  Res  Type     Est       Wait
---------------------------------------------------------------------------
X[1]   Yes  No   No   Yes  Some  £££    No    Yes  French   0-10      Yes
X[2]   Yes  No   No   Yes  Full  £      No    No   Thai     30-60     No
X[3]   No   Yes  No   No   Some  £      No    No   Burger   0-10      Yes
X[4]   Yes  No   Yes  Yes  Full  £      Yes   No   Thai     10-30     Yes
X[5]   Yes  No   Yes  No   Full  £££    No    Yes  French   >60       No
X[6]   No   Yes  No   Yes  Some  ££     Yes   Yes  Italian  0-10      Yes
X[7]   No   Yes  No   No   None  £      Yes   No   Burger   0-10      No
X[8]   No   No   No   Yes  Some  ££     Yes   Yes  Thai     0-10      Yes
X[9]   No   Yes  Yes  No   Full  £      Yes   No   Burger   >60       No
X[10]  Yes  Yes  Yes  Yes  Full  £££    No    Yes  Italian  10-30     No
X[11]  No   No   No   No   None  £      No    No   Thai     0-10      No
X[12]  Yes  Yes  Yes  Yes  Full  £      No    No   Burger   30-60     Yes
_________________________________________________________________________________________________________
Different Solutions
* Trivial solution: construct a decision tree that has one path to a leaf for each example.
Fits the given examples, but generalizes badly to new ones.
* Occam's razor: the most likely hypothesis is the simplest one that is consistent with all observations.
* Finding the smallest decision tree is intractable, hence a heuristic decision: test the most important
attribute first. Most important = makes the most difference to the classification of an example.
This leads to short paths in the tree, and hence small trees.
* Compare splitting the examples by testing on different attributes (cf. Patrons, Type).
_________________________________________________________________________________________________________
Selecting Best Attributes
[dectree-tests.jpg]
_________________________________________________________________________________________________________
Selecting Best Attributes (Cont'd)
[dectree-test-two.jpg]
_________________________________________________________________________________________________________
Recursive Algorithm
* If there are positive and negative examples, choose the "best" attribute to split on (i.e., Patrons in
the example above).
* If all remaining examples are positive (or all negative), then done.
* If no examples are left, there is no information, hence return a default value.
* If no attributes are left, but both positive and negative examples remain, there is a problem: examples
have the same description but different classifications, due to incorrect data (noise), not enough
information, or a nondeterministic domain.
Then take a majority vote.
_________________________________________________________________________________________________________
DecTreeLearning ID3
function DecTreeL(ex, attr, default)  ;;; returns a decision tree
;;; ex: set of examples, attr: set of attributes,
;;; default: default value for the goal predicate
  if ex empty then return default
  elseif all ex have the same classification then return it
  elseif attr empty then return MajVal(ex)
  else ChooseAttribute(attr, ex) -> best1;
       new decision tree with root test best1 -> tree1;
       for each value v[i] of best1 do
         {elements of ex with best1 = v[i]} -> ex[i];
         DecTreeL(ex[i], attr - {best1}, MajVal(ex)) -> subtree1;
         add branch to tree1 with label v[i] and subtree subtree1
       end;
       return tree1
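The pseudocode above can be sketched as runnable Python. The dictionary-based example format and the function names are our assumptions for illustration; ChooseAttribute picks the attribute with the largest information gain, a criterion discussed later in the lecture:

```python
import math
from collections import Counter

def majority_value(examples):
    """Most common classification among the examples."""
    return Counter(e["goal"] for e in examples).most_common(1)[0][0]

def entropy(examples):
    """Information (in bits) of the class distribution of the examples."""
    counts = Counter(e["goal"] for e in examples)
    total = len(examples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def choose_attribute(attrs, examples):
    """Pick the attribute with the largest information gain."""
    def remainder(a):
        groups = Counter(e[a] for e in examples)
        return sum(n / len(examples) *
                   entropy([e for e in examples if e[a] == v])
                   for v, n in groups.items())
    return max(attrs, key=lambda a: entropy(examples) - remainder(a))

def dec_tree_learn(examples, attrs, default):
    """Recursive decision-tree learning; returns a leaf label or a
    nested dict {attribute: {value: subtree}}."""
    if not examples:
        return default
    classes = {e["goal"] for e in examples}
    if len(classes) == 1:
        return classes.pop()            # all examples classified alike
    if not attrs:
        return majority_value(examples)  # noise or missing attributes
    best = choose_attribute(attrs, examples)
    tree = {best: {}}
    for v in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == v]
        tree[best][v] = dec_tree_learn(subset,
                                       [a for a in attrs if a != best],
                                       majority_value(examples))
    return tree
```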
_________________________________________________________________________________________________________
Generated Decision Tree
[dectree-learnt.jpg]
_________________________________________________________________________________________________________
Discussion of the Result
Comparison of original tree and learned tree:
* The trees differ.
* The learned tree is smaller (no tests for Raining and Reservation, since all examples can be classified
without them).
* It detects regularities (waiting for Thai food on weekends).
* It can make mistakes (e.g. the case where the wait is <10, but the restaurant is full and we are not
hungry).
* Question: if the tree is consistent with the examples but differs from the original, how correct is it?
_________________________________________________________________________________________________________
Assessing the Performance
1. Collect a large set of examples.
2. Divide it into two disjoint sets: a training set and a test set.
3. Run the learning algorithm on the training set and generate a hypothesis H.
4. Measure the percentage of examples in the test set that are correctly classified by H.
5. Repeat steps 1 to 4 for different sets.
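The procedure above can be sketched as follows; the majority-class learner used here is only a placeholder for illustration (any learner, e.g. the decision-tree learner, could be plugged in):

```python
import random
from collections import Counter

def holdout_accuracy(examples, learn, predict, frac=0.5, seed=0):
    """Split the examples into disjoint training and test sets, learn a
    hypothesis H on the training set, and return the fraction of test
    examples that H classifies correctly."""
    ex = examples[:]
    random.Random(seed).shuffle(ex)
    cut = int(len(ex) * frac)
    train, test = ex[:cut], ex[cut:]
    h = learn(train)
    return sum(1 for x, y in test if predict(h, x) == y) / len(test)

# Placeholder learner: always predict the majority class of the
# training set, ignoring the input x.
def learn_majority(train):
    return Counter(y for _, y in train).most_common(1)[0][0]

def predict_majority(h, x):
    return h
```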
_________________________________________________________________________________________________________
Assessing the Performance (Cont'd)
The learning curve shows how the quality of the prediction increases as the training set grows.
[learning-curve.jpg]
_________________________________________________________________________________________________________
Applications
* ID3 was used to classify boards in a chess endgame: it had to recognise boards that lead to a loss within
3 moves. Classification of half a million positions out of 1.4 million different possible boards, with a
typical learning curve as the result.
* Building an expert system for designing gas-oil separation systems for oil platforms: the GASOIL expert
system of BP with 2500 rules. Building it by hand would have taken 10 person-years; using decision-tree
learning took 100 person-days.
* Learning to fly in a flight simulator: a decision tree was generated by watching three skilled human
pilots, from 90,000 examples over 20 state variables, each labelled by the action taken. The extracted
decision tree was translated into C code. The resulting program could fly better than its teachers.
_________________________________________________________________________________________________________
Finding Best Attributes
In order to build small decision trees: select the best attributes first (best = most informative).
* Information is measured in bits.
* One bit of information is enough to answer a yes/no question about which one has no prior idea (e.g. the
flip of a fair coin).
* If the possible answers u[i] have probabilities P(u[i]), then the information content is
I(P(u[1]),...,P(u[n])) = SUM[i=1]^n -P(u[i]) * ld(P(u[i]))
* e.g. fair coin: I(½,½) = 1 (1 bit)
* e.g. if we already know the outcome with 99% certainty, learning the real outcome has an expected
information content of I(1/100,99/100) = 0.08 bits.
If we know the outcome with 100% certainty, there is no additional information: I = 0.
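The information formula can be coded directly; `ld` is the base-2 logarithm, and zero-probability terms are skipped, using lim[x->0+] x*ld(x) = 0:

```python
import math

def information(probs):
    """I(P(u_1), ..., P(u_n)) in bits; terms with P = 0 contribute 0."""
    return sum(-p * math.log2(p) for p in probs if p > 0)

print(information([0.5, 0.5]))    # 1.0 bit (fair coin)
print(information([0.01, 0.99]))  # about 0.08 bits
print(information([0.0, 1.0]))    # 0 bits: outcome already known
```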
_________________________________________________________________________________________________________
Logarithm
[fig-log.jpg]
ld(x) (dual logarithm) is defined for every positive real number x such that
2^ld(x) = x
_________________________________________________________________________________________________________
Logarithm (Cont'd)
Some values:
x      1   2   4   8   10    16   1/2   1/4   1/8
ld(x)  0   1   2   3   3.32  4    -1    -2    -3

lim[x->0+] ld(x) = -infty        lim[x->0+] x*ld(x) = 0
Remember:
log[10]x= log[10]2 * ld(x)
_________________________________________________________________________________________________________
Calculations in the Examples
I(½,½)
= SUM[i=1]^2 -P(u[i])* ld(P(u[i]))
= -½*ld(½)-½*ld(½)
= -½* (-1)-½* (-1)
= ½+½
= 1
with P(u[i])=½
_________________________________________________________________________________________________________
Calculations in the Examples (Cont'd)
I(0/100, 100/100)
= SUM[i=1]^2 -P(u[i]) * ld(P(u[i]))
= -0/100 * ld(0/100) - 100/100 * ld(100/100)
(*)= -0 * (-infty) - 1 * 0
(*)= 0 + 0
= 0
with P(u[1]) = 0/100 and P(u[2]) = 100/100
(*) Strictly one has to use here
lim[x->0+] x*ld(x) = 0.
_________________________________________________________________________________________________________
Calculations in the Examples (Cont'd)
I(1/100, 99/100)
= SUM[i=1]^2 -P(u[i]) * ld(P(u[i]))
= -1/100 * ld(1/100) - 99/100 * ld(99/100)
= -1/100 * (-6.64386) - 99/100 * (-0.0145)
= 0.066439 + 0.014355
= 0.080794
with P(u[1]) = 1/100 and P(u[2]) = 99/100,
ld(1/100) = -6.64386 and ld(99/100) = -0.0145
_________________________________________________________________________________________________________
Applied to Attributes
p := number of positive examples
n := number of negative examples
Information contained in a correct answer:
I(p/(p+n), n/(p+n)) = -p/(p+n) * ld(p/(p+n)) - n/(p+n) * ld(n/(p+n))
Restaurant example: p = n = 6, hence we need 1 bit of information. A test of one single attribute A will
usually not give all of it, but only some. A divides the example set E into subsets E[1],...,E[u]. Each
subset E[i] has p[i] positive and n[i] negative examples, so along this branch we need an additional
I(p[i]/(p[i]+n[i]), n[i]/(p[i]+n[i])) bits of information, weighted by
(p[i]+n[i])/(p+n) (the probability that a random example takes this branch).
HENCE the information gain:
Gain(A) = I(p/(p+n), n/(p+n)) - SUM[i=1]^u (p[i]+n[i])/(p+n) * I(p[i]/(p[i]+n[i]), n[i]/(p[i]+n[i]))
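Applied to the restaurant data, the gain of Patrons can be computed directly; the split counts are read off the example table (None: 0+/2-, Some: 4+/0-, Full: 2+/4-):

```python
import math

def info(p, n):
    """I(p/(p+n), n/(p+n)) in bits, with 0 * ld(0) taken as 0."""
    total = p + n
    return sum(-x / total * math.log2(x / total) for x in (p, n) if x > 0)

def gain(p, n, splits):
    """Information gain of an attribute splitting the (p, n) examples
    into subsets with counts (p_i, n_i)."""
    return info(p, n) - sum((pi + ni) / (p + n) * info(pi, ni)
                            for pi, ni in splits)

# Patrons splits the 6+/6- examples into None: 0+/2-, Some: 4+/0-,
# Full: 2+/4-.
print(gain(6, 6, [(0, 2), (4, 0), (2, 4)]))  # about 0.5409
```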
_________________________________________________________________________________________________________
Heuristics
Choose attribute with largest information gain.
In the restaurant example, initially:
Attribute  alternative  bar       friday       hungry    patrons
Gain       0.0          0.0       0.020721     0.195709  0.540852

Attribute  price        rain      reservation  type      estimate
Gain       0.195709     0.0       0.020721     0.0       0.207519
Hence: choose "patrons"
_________________________________________________________________________________________________________
Noise and Overfitting
* Overfitting: the problem of finding meaningless regularity in the data (example: rolls of a die
characterised by attributes like hour, day, and month result in a perfect decision tree whenever no two
examples have identical descriptions).
* One possibility: decision tree pruning, by detecting irrelevant attributes. Irrelevant = no information
gain for an infinitely large sample.
The null hypothesis assumes that there is no underlying pattern; an attribute is only considered if the
deviation from the null hypothesis is statistically significant (e.g. at the 5% level).
* Alternative: cross-validation, i.e. take only part of the data for learning and the rest for testing the
prediction performance. Repeat with different subsets and select the best tree. (Can be combined with
pruning.)
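The cross-validation split can be sketched minimally; the learner itself is left out, and only the partitioning into folds is shown (each fold serves once as the test set):

```python
import random

def k_fold_splits(examples, k, seed=0):
    """Partition the examples into k disjoint folds and yield
    (train, test) pairs, with each fold used once as the test set."""
    rng = random.Random(seed)
    ex = examples[:]
    rng.shuffle(ex)
    folds = [ex[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [e for j, f in enumerate(folds) if j != i for e in f]
        yield train, test
```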
_________________________________________________________________________________________________________
Reinforcement Learning
Assume the following stochastic environment
[reinforcement.jpg]
Each training sequence has the form:
* (1,1)->(1,2)->(1,3)->(2,3) ->(1,3)->(2,3)->(3,3)->(4,3) reward +1
* (1,1)->(2,1)->(1,1)->(2,1) ->(3,1)->(3,2)->(4,2) reward -1
_________________________________________________________________________________________________________
Rewards
The probability of a transition to a neighbouring state is equal among all possibilities, i.e.
[reinforcement-trans.jpg]
Assume the utility function is additive, i.e.
U([s[0],s[1],...,s[n]]) = reward(s[0]) + U([s[1],...,s[n]])
The expected utility of a state is then the expected reward-to-go of that state.
_________________________________________________________________________________________________________
Utility to be Learned
[reinforcement-utility.jpg]
The utility can be learned by the Least Mean Squares approach, LMS for short (also used in adaptive control
theory). It assumes that the observed reward-to-go on a sequence provides direct evidence of the actual
expected reward-to-go.
At the end of each sequence: calculate the reward-to-go for each state and update its utility estimate.
_________________________________________________________________________________________________________
Passive Reinforcement Learning
vars U ;;; table of utility estimates
vars N ;;; table of frequencies for states
vars M ;;; table of transition probabilities from state to state
vars percepts ;;; percept sequence, initially empty
function Passive-RL-Agent(e)  ;;; returns an action
  add e to percepts;
  increment N(State(e));
  LMS-Update(U, e, percepts, M, N) -> U;
  if Terminal?(e) then nil -> percepts;
  return action Observe

function LMS-Update(U, e, percepts, M, N)  ;;; returns updated U
  if Terminal?(e) then
    0 -> reward-to-go;
    for each e[i] in percepts (starting at the end) do
      reward-to-go + Reward(e[i]) -> reward-to-go;
      Running-Average(U(State(e[i])), reward-to-go, N(State(e[i])))
        -> U(State(e[i]))
    end
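The LMS update can be sketched in Python. Here a training sequence is a list of (state, reward) pairs, and the utility estimate of each state is the running average of the observed rewards-to-go; the assumption that the reward is 0 everywhere except at the terminal state is ours, for illustration:

```python
from collections import defaultdict

U = defaultdict(float)  # utility estimates per state
N = defaultdict(int)    # visit counts per state

def lms_update(sequence):
    """At the end of a training sequence, walk backwards, accumulate
    the reward-to-go, and fold it into the running average of each
    visited state."""
    reward_to_go = 0.0
    for state, reward in reversed(sequence):
        reward_to_go += reward
        N[state] += 1
        U[state] += (reward_to_go - U[state]) / N[state]

# First training sequence from above: reward +1 at the terminal state
# (4,3), 0 elsewhere (our assumption about the reward structure).
seq = [((1,1), 0), ((1,2), 0), ((1,3), 0), ((2,3), 0),
       ((1,3), 0), ((2,3), 0), ((3,3), 0), ((4,3), 1)]
lms_update(seq)
print(U[(1,1)])  # 1.0 after this single sequence
```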
_________________________________________________________________________________________________________
Summary - Decision Tree Learning
* Decision tree learning: a very efficient, non-incremental way of searching the space of decision trees.
+ It adds a subtree to the current tree and continues its search.
+ It does not backtrack.
+ It is highly dependent upon the criterion for selecting properties to test.
+ It can be extended to allow more than two values as the result of the classification.
+ It can be extended to deal with noise.
_________________________________________________________________________________________________________
Summary - Reinforcement Learning
* Reinforcement Learning: incremental learning approach.
+ We could only give a glimpse of reinforcement learning.
+ We looked only at the example of a passive agent, which observes the world. Typically you will have
an active agent, which can make decisions based on its partial knowledge of the world.
+ An active agent has to decide whether it should exploit its current knowledge, or explore the world.
_________________________________________________________________________________________________________
Further Reading [books-shelf1.jpg]
* S. Russell, P. Norvig. Artificial Intelligence - A Modern Approach. 2nd Edition, Pearson Education, 2003.
Sections 18.3 & 21.2.
* G. Luger, W. Stubblefield. Artificial Intelligence - Structures and Strategies for Complex Problem
Solving. 2nd Edition, The Benjamin/Cummings Publishing Company, 1993.
* J.R. Quinlan. Induction of Decision Trees. Machine Learning, 1(1):81-106, 1986.
* J.R. Quinlan, The effect of noise on concept learning. In Michalski et al., eds., Machine Learning: An
Artificial Intelligence Approach, Vol. 2. Morgan Kaufmann. 1986.
_________________________________________________________________________________________________________
© Manfred Kerber, 2004, Introduction to AI
24.4.2005
The URL of this page is http://www.cs.bham.ac.uk/~mmk/Teaching/AI/Teaching/AI/l3.html.
URL of module http://www.cs.bham.ac.uk/~mmk/Teaching/AI/