### Introduction to AI - Week 3

Learning:
• Decision Trees
• Reinforcement Learning
• Inductive learning involves making uncertain inferences that go beyond our direct experience. [Anderson95]

• Scientific discovery by proof is impossible, unless one knows the "first" primary premises ... We must obtain these premises by induction. [Aristotle ~330 BC]

# Decision Trees

• A Decision Tree takes as input an object described by a set of properties and outputs a Boolean value (yes/no decision). Each internal node in the tree corresponds to a test of one of the properties; branches are labelled with the possible values of the test.

• Aim: Learn goal concept (goal predicate) from examples

• Learning element: Algorithm that builds up the decision tree.

• Performance element: decision procedure given by the tree

# Example

Problem: whether to wait for a table at a restaurant. A decision tree decides whether or not to wait in a given situation.

Attributes:

1. Alternate: alternative restaurant nearby
2. Bar: bar area to wait
3. Fri/Sat: true on Fridays and Saturdays
4. Hungry: whether we are hungry
5. Patrons: how many people in restaurant (none, some, or full)
6. Price: price range (£, ££, £££)
7. Raining: whether it is raining outside
8. Reservation: whether we made a reservation
9. Type: kind of restaurant (French, Italian, Thai, or Burger)
10. WaitEstimate: estimated wait in minutes (<10, 10-30, 30-60, >60)

# Expressiveness of Decision Trees

• Each path through the tree corresponds to an implication like
FORALL r: Patrons(r,Full) & WaitEstimate(r,0-10) & Hungry(r,N) -> WillWait(r)
Hence a decision tree corresponds to a conjunction of implications.

• Cannot express tests that refer to two different objects, like:
EXISTS r2: Nearby(r2) & Price(r,p) & Price(r2,p2) & Cheaper(p2,p)

• Expressiveness is essentially that of propositional logic (no function symbols, no existential quantifier)

• Complexity: for n attributes there are 2^(2^n) different Boolean functions, since each function must assign a value to each of the 2^n combinations of attribute values (e.g. for n=6 there are about 1.8 x 10^19 different functions)

• Functions like the parity function (1 if an even number of inputs are 1, 0 otherwise) or the majority function (1 if more than half of the inputs are 1) result in exponentially large decision trees.
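The function count above can be checked directly: with n Boolean attributes a truth table has 2^n rows, and each row can be labelled Yes/No independently.

```python
n = 6
rows = 2 ** n           # 64 rows in the truth table for n = 6 attributes
functions = 2 ** rows   # one independent Boolean label per row
print(functions)        # 18446744073709551616, about 1.8e19
```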

# Some Examples

| Ex  | Alt | Bar | Fri | Hun | Pat  | Price | Rain | Res | Type    | Est   | Wait (Goal) |
|-----|-----|-----|-----|-----|------|-------|------|-----|---------|-------|-------------|
| X1  | Yes | No  | No  | Yes | Some | £££   | No   | Yes | French  | 0-10  | Yes |
| X2  | Yes | No  | No  | Yes | Full | £     | No   | No  | Thai    | 30-60 | No  |
| X3  | No  | Yes | No  | No  | Some | £     | No   | No  | Burger  | 0-10  | Yes |
| X4  | Yes | No  | Yes | Yes | Full | £     | Yes  | No  | Thai    | 10-30 | Yes |
| X5  | Yes | No  | Yes | No  | Full | £££   | No   | Yes | French  | >60   | No  |
| X6  | No  | Yes | No  | Yes | Some | ££    | Yes  | Yes | Italian | 0-10  | Yes |
| X7  | No  | Yes | No  | No  | None | £     | Yes  | No  | Burger  | 0-10  | No  |
| X8  | No  | No  | No  | Yes | Some | ££    | Yes  | Yes | Thai    | 0-10  | Yes |
| X9  | No  | Yes | Yes | No  | Full | £     | Yes  | No  | Burger  | >60   | No  |
| X10 | Yes | Yes | Yes | Yes | Full | £££   | No   | Yes | Italian | 10-30 | No  |
| X11 | No  | No  | No  | No  | None | £     | No   | No  | Thai    | 0-10  | No  |
| X12 | Yes | Yes | Yes | Yes | Full | £     | No   | No  | Burger  | 30-60 | Yes |

# Different Solutions

• Trivial solution: construct a decision tree with one path to a leaf for each example. It classifies the given examples correctly, but generalises badly to unseen ones.
• Occam's razor: the most likely hypothesis is the simplest one that is consistent with all observations.
• Finding the smallest decision tree is intractable, hence a heuristic decision: test the most important attribute first. Most important = makes the most difference to the classification of an example. This yields short paths and small trees.
• Compare splitting the examples by testing different attributes (cf. Patrons vs. Type)

# Recursive Algorithm

• If there are positive and negative examples, choose the "best" attribute to split on (i.e., Patrons in the example above)
• If all remaining examples are positive (or all negative), then we are done
• If no examples are left, there is no information, hence return a default value
• If no attributes are left, but both positive and negative examples remain, there is a problem: examples have the same description but different classifications, due to incorrect data (noise), not enough information, or a nondeterministic domain.
Then take a majority vote.

# DecTreeLearning ID3

```
function DecTreeL(ex, attr, default) ;;; returns a decision tree
;;; ex: set of examples, attr: set of attributes,
;;; default: default value for the goal predicate
if ex empty then return default
elseif all ex have the same classification then return it
elseif attr empty then return MajVal(ex)
else ChooseAttribute(attr, ex) -> best
     new decision tree with root test best -> tree
     for each value vi of best do
         {elements of ex with best = vi} -> exi
         DecTreeL(exi, attr - {best}, MajVal(ex)) -> subtree
         add branch to tree with label vi and subtree subtree
     end
     return tree
```
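The pseudocode can be turned into a minimal runnable sketch, e.g. in Python. The dictionary encoding of examples and the tuple encoding of trees are assumptions of this sketch; `choose_attribute` here picks the split that leaves the least remaining information, i.e. the largest information gain.

```python
import math
from collections import Counter

def entropy(examples):
    """Information content of the class distribution, in bits."""
    counts = Counter(e["class"] for e in examples).values()
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts)

def choose_attribute(attributes, examples):
    """Pick the attribute whose split leaves the least remaining information."""
    def remainder(a):
        r = 0.0
        for v in {e[a] for e in examples}:
            subset = [e for e in examples if e[a] == v]
            r += len(subset) / len(examples) * entropy(subset)
        return r
    return min(attributes, key=remainder)

def dec_tree_learn(examples, attributes, default):
    """Returns either a classification value or a tree (attribute, branches)."""
    if not examples:
        return default
    classes = {e["class"] for e in examples}
    if len(classes) == 1:
        return classes.pop()                 # all examples agree
    if not attributes:
        return Counter(e["class"] for e in examples).most_common(1)[0][0]
    best = choose_attribute(attributes, examples)
    majority = Counter(e["class"] for e in examples).most_common(1)[0][0]
    branches = {}
    for v in {e[best] for e in examples}:
        subset = [e for e in examples if e[best] == v]
        branches[v] = dec_tree_learn(subset,
                                     [a for a in attributes if a != best],
                                     majority)
    return (best, branches)

# Tiny illustration (not the full restaurant data):
data = [
    {"Patrons": "Some", "Hungry": "Yes", "class": "Yes"},
    {"Patrons": "Full", "Hungry": "Yes", "class": "No"},
    {"Patrons": "None", "Hungry": "No",  "class": "No"},
]
tree = dec_tree_learn(data, ["Patrons", "Hungry"], "No")
print(tree)   # Patrons is chosen as the root test
```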

# Discussion of the Result

Comparison of original tree and learned tree:

• Trees differ.
• Learned tree is smaller (no test for raining and reservation, since all examples can be classified without them).
• Detects regularities (waiting for Thai food on weekends).
• Can make mistakes (e.g. in a case where the wait estimate is <10 but the restaurant is full and we are not hungry).
• Question: if the tree is consistent with the examples but incorrect, how correct is it?

# Assessing the Performance

1. Collect a large set of examples.
2. Divide it into two disjoint sets: a training set and a test set.
3. Apply the learning algorithm to the training set, generating a hypothesis H.
4. Measure the percentage of examples in the test set that are correctly classified by H.
5. Repeat steps 1 to 4 for different randomly selected sets.
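A sketch of this procedure, assuming hypothetical `learn` and `classify` functions for whatever hypothesis representation is used:

```python
import random

def assess(examples, learn, classify, trials=5, train_fraction=0.5):
    """Repeatedly split the examples into disjoint training and test sets,
    learn a hypothesis on the training set, and measure the fraction of
    test examples it classifies correctly."""
    accuracies = []
    for _ in range(trials):
        shuffled = examples[:]
        random.shuffle(shuffled)
        cut = int(len(shuffled) * train_fraction)
        train, test = shuffled[:cut], shuffled[cut:]
        h = learn(train)
        correct = sum(1 for e in test if classify(h, e) == e["class"])
        accuracies.append(correct / len(test))
    return sum(accuracies) / len(accuracies)
```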

# Assessing the Performance (Cont'd)

The learning curve shows how the quality of the prediction increases as the training set grows.

# Applications

• ID3 was used to classify boards in a chess endgame: it had to recognise boards that lead to a loss within 3 moves. Classification of half a million positions out of 1.4 million different possible boards; the result was a typical learning curve.

• Building an expert system for designing gas-oil separation systems for oil platforms: the GASOIL expert system of BP, with 2500 rules. Building it by hand took about 10 person-years; using decision-tree learning, about 100 person-days.

• Learning to fly in a flight simulator: training data generated by watching three skilled human pilots. 90,000 examples with 20 state variables, each labelled by the action taken. The extracted decision tree was translated into C code; the program could fly better than its teachers.

# Finding Best Attributes

In order to build up small decision trees: select best attributes first (best = most informative)
• measure information in bits
• One bit of information is enough to answer a yes/no question about which one has no prior idea (e.g. the flip of a fair coin)
• If the possible answers ui have probabilities P(ui), then
I(P(u1),...,P(un)) = SUM(i=1..n) -P(ui) * ld(P(ui))
• e.g., fair coin: I(½,½) = 1 (1 bit)
• e.g., if we already know the outcome with 99% certainty, learning the actual outcome yields an expected information of I(1/100, 99/100) = 0.08 bits.
If we know the outcome with 100% certainty, there is no additional information: I = 0.
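The formula can be checked directly in Python, where `math.log2` plays the role of ld:

```python
import math

def information(probs):
    """I(P(u1),...,P(un)) = sum of -P(ui) * ld(P(ui)), in bits.
    Terms with P(ui) = 0 contribute nothing, via lim x->0+ x*ld(x) = 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(information([0.5, 0.5]))      # fair coin: 1.0 bit
print(information([0.01, 0.99]))    # outcome 99% known: ~0.08 bits
print(information([0.0, 1.0]))      # outcome certain: no information
```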

# Logarithm

ld(x) (the dual logarithm, i.e. the logarithm to base 2) is defined for every positive real number x by
2^ld(x) = x

# Logarithm (Cont'd)

Some values:

| x     | 1 | 2 | 4 | 8 | 10   | 16 | 1/2 | 1/4 | 1/8 |
|-------|---|---|---|---|------|----|-----|-----|-----|
| ld(x) | 0 | 1 | 2 | 3 | 3.32 | 4  | -1  | -2  | -3  |

lim(x->0+) ld(x) = -∞          lim(x->0+) x*ld(x) = 0

Remember:

log10(x) = log10(2) * ld(x)
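Both the table values and the base-change identity can be checked with Python's `math.log2`:

```python
import math

print(round(math.log2(10), 2))   # 3.32, as in the table
# Base change: log10(x) = log10(2) * ld(x)
x = 7.0
print(abs(math.log10(x) - math.log10(2) * math.log2(x)) < 1e-12)   # True
```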

# Calculations in the Examples

I(½,½) = SUM(i=1..2) -P(ui)*ld(P(ui))
       = -½*ld(½) - ½*ld(½)
       = -½*(-1) - ½*(-1)
       = ½ + ½ = 1

with P(u1) = P(u2) = ½

# Calculations in the Examples (Cont'd)

I(0/100, 100/100) = SUM(i=1..2) -P(ui)*ld(P(ui))
                  = -0/100*ld(0/100) - 100/100*ld(100/100)
              (*) = -0*(-∞) - 1*0
              (*) = 0 + 0 = 0

with P(u1) = 0/100 and P(u2) = 100/100
(*) Strictly one has to use here lim(x->0+) x*ld(x) = 0.

# Calculations in the Examples (Cont'd)

I(1/100, 99/100) = SUM(i=1..2) -P(ui)*ld(P(ui))
                 = -1/100*ld(1/100) - 99/100*ld(99/100)
                 = -1/100*(-6.64386) - 99/100*(-0.0145)
                 = 0.066439 + 0.014355
                 = 0.080794

with P(u1) = 1/100 and P(u2) = 99/100

ld(1/100) = -6.64386 and ld(99/100) = -0.0145

# Applied to Attributes

p := number of positive examples
n := number of negative examples

I(p/(p+n), n/(p+n)) = -p/(p+n) * ld(p/(p+n)) - n/(p+n) * ld(n/(p+n))

Restaurant example: p = n = 6, hence we need 1 bit of information. A test of one single attribute A will usually not give all of it, but only some. A divides the example set E into subsets E1,...,Eu. Each subset Ei has pi positive and ni negative examples, so in this branch we still need an additional I(pi/(pi+ni), ni/(pi+ni)) bits of information, weighted by
(pi+ni)/(p+n) (the probability that a random example belongs to Ei)

HENCE the information gain:
Gain(A) = I(p/(p+n), n/(p+n)) - SUM(i=1..u) (pi+ni)/(p+n) * I(pi/(pi+ni), ni/(pi+ni))

# Heuristics

Choose attribute with largest information gain.

In the restaurant example, initially:

| Attribute   | alternative | bar | friday   | hungry   | patrons  | price    | rain | reservation | type | estimate |
|-------------|-------------|-----|----------|----------|----------|----------|------|-------------|------|----------|
| Gain (bits) | 0.0         | 0.0 | 0.020721 | 0.195709 | 0.540852 | 0.195709 | 0.0  | 0.020721    | 0.0  | 0.207519 |

Hence: choose "patrons"

# Noise and Overfitting

• Overfitting: the problem of finding meaningless "regularity" in the data (example: rolls of a die characterised by attributes like hour, day, and month yield a perfect decision tree whenever no two examples have an identical description)

• One possibility: decision-tree pruning by detecting irrelevant attributes. Irrelevant = no information gain for an infinitely large sample.
The null hypothesis assumes that there is no underlying pattern; an attribute is only considered if the data deviate significantly (e.g. at the 5% level) from what the null hypothesis predicts.

• Alternative: cross-validation, i.e. use only part of the data for learning and the rest for testing the prediction performance. Repeat with different subsets and select the best tree. (Can be combined with pruning.)

# Reinforcement Learning

Assume a stochastic grid-world environment whose states are coordinate pairs.
Each training sequence has the form:
• (1,1)->(1,2)->(1,3)->(2,3) ->(1,3)->(2,3)->(3,3)->(4,3) reward +1
• (1,1)->(2,1)->(1,1)->(2,1) ->(3,1)->(3,2)->(4,2) reward -1

# Rewards

The probability of a transition to a neighbouring state is equal among all possibilities.

Assume the utility function is additive, i.e.
U([s0,s1,...,sn]) = reward(s0) + U([s1,...,sn])
The expected utility of a state is then the expected reward-to-go of that state.
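Under the additive assumption, the reward-to-go of every state in a training sequence falls out of a single backward pass. A sketch (the reward values are illustrative):

```python
def rewards_to_go(rewards):
    """Given reward(si) for each state along one training sequence,
    return U([si,...,sn]) = reward(si) + U([si+1,...,sn]) for every i."""
    total = 0.0
    result = []
    for r in reversed(rewards):      # accumulate from the end of the sequence
        total += r
        result.append(total)
    return list(reversed(result))

# A sequence whose intermediate states give reward 0 and whose
# terminal state gives reward +1:
print(rewards_to_go([0, 0, 0, 1]))   # [1.0, 1.0, 1.0, 1.0]
```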

# Utility to be Learned

Utilities can be learned by the Least Mean Squares (LMS) approach, also used in adaptive control theory. It assumes that the observed reward-to-go on a sequence provides direct evidence of the actual expected reward-to-go. At the end of each sequence: calculate the reward-to-go for each state and update its utility estimate.

# Passive Reinforcement Learning

```
vars U        ;;; table of utility estimates
vars N        ;;; table of frequencies for states
vars M        ;;; table of transition probabilities from state to state
vars percepts ;;; percept sequence, initially empty

function Passive-RL-Agent(e) ;;; returns an action
add e to percepts
increment N(State(e))
LMS-Update(U, e, percepts, M, N) -> U
if Terminal?(e) then nil -> percepts
return the action Observe
```

```
function LMS-Update(U, e, percepts, M, N) ;;; returns updated U
if Terminal?(e) then
    0 -> reward-to-go
    for each ei in percepts (starting at the end) do
        reward-to-go + Reward(ei) -> reward-to-go
        Running-Average(U(State(ei)), reward-to-go, N(State(ei)))
            -> U(State(ei))
    end
return U
```
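A minimal Python sketch of the LMS update; representing percepts as (state, reward) pairs is an assumption of this sketch.

```python
def lms_update(U, N, percepts):
    """At the end of a sequence, walk the percepts backwards, accumulate the
    reward-to-go, and fold each sample into the state's running-average
    utility estimate."""
    reward_to_go = 0.0
    for state, reward in reversed(percepts):
        reward_to_go += reward
        N[state] = N.get(state, 0) + 1
        old = U.get(state, 0.0)
        # running average: new = old + (sample - old) / count
        U[state] = old + (reward_to_go - old) / N[state]
    return U

U, N = {}, {}
# First sequence reaches the +1 terminal state:
lms_update(U, N, [((1, 1), 0), ((1, 2), 0), ((1, 3), 1)])
print(U[(1, 1)])   # 1.0
# Second sequence reaches a -1 terminal state:
lms_update(U, N, [((1, 1), 0), ((2, 1), -1)])
print(U[(1, 1)])   # 0.0: the average of the samples +1 and -1
```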

# Summary - Decision Tree Learning

• Decision Tree Learning: a very efficient, non-incremental way of searching the space of decision trees.
• It adds a subtree to the current tree and continues its search.
• It does not backtrack.
• It is highly dependent upon the criteria for selecting properties to test.
• It can be extended to allow more than two values as the result of the classification.
• It can be extended to deal with noise.

# Summary - Reinforcement Learning

• Reinforcement Learning: incremental learning approach.
• We could only give a glimpse of reinforcement learning.
• We looked only at the example of a passive agent, which observes the world. Typically you will have an active agent, which can make decisions based on its partial knowledge of the world.
• An active agent has to decide whether it should exploit its current knowledge, or explore the world.