[Figure: example MDP transition diagram with states high and low. Actions: A(high) = {search, wait}; A(low) = {search, wait, recharge}. Arcs are labeled with transition probabilities (α, β, 1) and rewards (R_search, R_wait, −3).]
[Figure: agent–environment loop — the agent observes the state, receives a reward, and chooses an action that is executed in the environment.]

Goal: learn to choose actions that maximize

    r_0 + γ r_1 + γ² r_2 + ...,  where 0 ≤ γ < 1
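A minimal sketch of the discounted-return objective above; the reward sequence and the choice γ = 0.9 are illustrative assumptions, not from the slides.

```python
# Discounted return r_0 + gamma*r_1 + gamma^2*r_2 + ... for a finite sequence.
def discounted_return(rewards, gamma=0.9):
    """Sum of gamma**i * r_i over the given reward sequence."""
    return sum(gamma**i * r for i, r in enumerate(rewards))

print(round(discounted_return([0, 0, 100]), 2))  # 0 + 0 + 0.9**2 * 100 = 81.0
```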
Markov Decision Processes

Assume
● finite set of states S
● set of actions A
● at each discrete time step the agent observes state s_t ∈ S and chooses action a_t ∈ A
● then receives immediate reward r_t, and the state changes to s_{t+1}

Markov assumption: s_{t+1} = δ(s_t, a_t) and r_t = r(s_t, a_t)
● i.e., r_t and s_{t+1} depend only on the current state and action
● functions δ and r may be nondeterministic
● functions δ and r are not necessarily known to the agent
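A sketch of a deterministic MDP written as lookup tables for δ and r; the two states, two actions, and reward numbers below are made-up illustrations.

```python
# A tiny deterministic MDP: delta(s, a) gives the next state, r(s, a) the reward.
STATES  = ["s1", "s2"]
ACTIONS = ["left", "right"]

DELTA  = {("s1", "left"): "s1", ("s1", "right"): "s2",
          ("s2", "left"): "s1", ("s2", "right"): "s2"}
REWARD = {("s1", "left"): 0, ("s1", "right"): 0,
          ("s2", "left"): 100, ("s2", "right"): 0}

def step(s, a):
    """One discrete time step: immediate reward r_t and successor state s_{t+1}."""
    return REWARD[(s, a)], DELTA[(s, a)]

print(step("s2", "left"))  # (100, 's1')
```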
Value Function

To begin, consider deterministic worlds. For each possible policy π the agent might adopt, we can define an evaluation function over states

    V^π(s) ≡ r_t + γ r_{t+1} + γ² r_{t+2} + ...
           ≡ Σ_{i=0}^∞ γ^i r_{t+i}

where r_t, r_{t+1}, ... are generated by following policy π starting at state s.

Restated, the task is to learn the optimal policy π*

    π* ≡ argmax_π V^π(s),  (∀s)
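A sketch of estimating V^π(s) by following a fixed policy and summing discounted rewards over a finite horizon (an approximation of the infinite sum); the toy MDP tables and the policy are assumptions.

```python
# Approximate V^pi(s) by rolling out the (deterministic) policy pi from s.
GAMMA  = 0.9
DELTA  = {("s1", "left"): "s1", ("s1", "right"): "s2",
          ("s2", "left"): "s1", ("s2", "right"): "s2"}
REWARD = {("s1", "left"): 0, ("s1", "right"): 0,
          ("s2", "left"): 100, ("s2", "right"): 0}

def V(policy, s, horizon=200):
    """Discounted return generated by following `policy` starting at state s."""
    total = 0.0
    for i in range(horizon):
        a = policy[s]
        total += GAMMA ** i * REWARD[(s, a)]
        s = DELTA[(s, a)]
    return total

pi = {"s1": "right", "s2": "left"}   # one particular policy
print(round(V(pi, "s1"), 1))         # V^pi(s1) for this toy MDP
```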
What to Learn

We might try to have the agent learn the evaluation function V^{π*} (which we write as V*). It could then do a lookahead search to choose the best action from any state s, because

    π*(s) = argmax_a [r(s, a) + γ V*(δ(s, a))]

A problem:
● This works well if the agent knows δ : S × A → S and r : S × A → ℜ.
● But when it doesn't, it can't choose actions this way.
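A sketch of the lookahead rule, which needs the model (δ and r); the tables and the V* numbers below are assumed purely to show the argmax computation.

```python
# pi*(s) = argmax_a [ r(s, a) + gamma * V*(delta(s, a)) ], usable only with a known model.
GAMMA   = 0.9
ACTIONS = ["left", "right"]
DELTA   = {("s1", "left"): "s1", ("s1", "right"): "s2",
           ("s2", "left"): "s1", ("s2", "right"): "s2"}
REWARD  = {("s1", "left"): 0, ("s1", "right"): 0,
           ("s2", "left"): 100, ("s2", "right"): 0}
V_STAR  = {"s1": 90.0, "s2": 100.0}   # assumed values, not solved exactly

def lookahead_policy(s):
    """Pick the action maximizing r(s, a) + gamma * V*(delta(s, a))."""
    return max(ACTIONS, key=lambda a: REWARD[(s, a)] + GAMMA * V_STAR[DELTA[(s, a)]])

print(lookahead_policy("s1"))  # 'right' under these assumed values
```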
Q Function

Define a new function very similar to V*:

    Q(s, a) ≡ r(s, a) + γ V*(δ(s, a))

If the agent learns Q, it can choose the optimal action even without knowing δ!

    π*(s) = argmax_a [r(s, a) + γ V*(δ(s, a))]
    π*(s) = argmax_a Q(s, a)

Q is the evaluation function the agent will learn.
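A sketch of why this matters: with a learned Q table, action choice is a pure table lookup and needs no knowledge of δ or r. The Q entries below are illustrative placeholders.

```python
# pi*(s) = argmax_a Q(s, a): greedy action selection from a learned Q table.
ACTIONS = ["left", "right"]
Q = {("s1", "left"): 72.9, ("s1", "right"): 81.0,
     ("s2", "left"): 90.0, ("s2", "right"): 81.0}

def pi_star(s):
    """Greedy policy read directly off the Q table."""
    return max(ACTIONS, key=lambda a: Q[(s, a)])

print(pi_star("s1"), pi_star("s2"))  # 'right' 'left' for these entries
```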
Training Rule to Learn Q

Note Q and V* are closely related:

    V*(s) = max_{a'} Q(s, a')

which allows us to write Q recursively as

    Q(s_t, a_t) = r(s_t, a_t) + γ V*(δ(s_t, a_t))
                = r(s_t, a_t) + γ max_{a'} Q(s_{t+1}, a')

Nice! Let Q̂ denote the learner's current approximation to Q. Consider the training rule

    Q̂(s, a) ← r + γ max_{a'} Q̂(s', a')

where s' is the state resulting from applying action a in state s.
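One application of this training rule as a sketch, using an illustrative table; the pre-existing entries at s' are assumed.

```python
# Q_hat(s, a) <- r + gamma * max_a' Q_hat(s', a')  (deterministic-world update)
from collections import defaultdict

GAMMA = 0.9
Q_hat = defaultdict(float)            # unseen entries default to 0
Q_hat[("s2", "left")]  = 100.0        # assumed current estimates at s'
Q_hat[("s2", "right")] = 81.0

def q_update(s, a, r, s_next, actions):
    """Overwrite Q_hat(s, a) with r plus the discounted best estimate at s'."""
    Q_hat[(s, a)] = r + GAMMA * max(Q_hat[(s_next, a2)] for a2 in actions)

q_update("s1", "right", r=0, s_next="s2", actions=["left", "right"])
print(Q_hat[("s1", "right")])  # 0 + 0.9 * max(100, 81) = 90.0
```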
Q Learning for Deterministic Worlds

For each s, a initialize the table entry Q̂(s, a) ← 0
Observe the current state s
Do forever:
● Select an action a and execute it
● Receive immediate reward r
● Observe the new state s'
● Update the table entry for Q̂(s, a) as follows:
      Q̂(s, a) ← r + γ max_{a'} Q̂(s', a')
● s ← s'
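A minimal sketch of this loop in Python. The two-state MDP, the purely random action selection, and the truncation of "do forever" to a fixed number of steps are assumptions made for the example.

```python
# Q-learning in a deterministic world, following the procedure above.
import random
from collections import defaultdict

GAMMA   = 0.9
ACTIONS = ["left", "right"]
DELTA   = {("s1", "left"): "s1", ("s1", "right"): "s2",
           ("s2", "left"): "s1", ("s2", "right"): "s2"}
REWARD  = {("s1", "left"): 0, ("s1", "right"): 0,
           ("s2", "left"): 100, ("s2", "right"): 0}

Q_hat = defaultdict(float)                      # for each s, a: Q_hat(s, a) <- 0
s = "s1"                                        # observe current state s

for _ in range(10_000):                         # "do forever", truncated here
    a = random.choice(ACTIONS)                  # select an action a and execute it
    r, s_next = REWARD[(s, a)], DELTA[(s, a)]   # receive reward r, observe new state s'
    Q_hat[(s, a)] = r + GAMMA * max(Q_hat[(s_next, a2)] for a2 in ACTIONS)
    s = s_next                                  # s <- s'

print({k: round(v, 1) for k, v in sorted(Q_hat.items())})
```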
Updating Q̂

[Figure: grid world before and after one update; initial state s_1, next state s_2; the actions available in s_2 currently have Q̂ values 63, 81, and 100.]

    Q̂(s_1, a_right) ← r + γ max_{a'} Q̂(s_2, a')
                    ← 0 + 0.9 max{63, 81, 100}
                    ← 90

Notice that if rewards are non-negative, then

    (∀ s, a, n)  Q̂_{n+1}(s, a) ≥ Q̂_n(s, a)

and

    (∀ s, a, n)  0 ≤ Q̂_n(s, a) ≤ Q(s, a)
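The same arithmetic as a runnable check; the action labels at s_2 are assumed, only the three Q̂ values come from the worked example.

```python
# Q_hat(s1, a_right) <- r + gamma * max_a' Q_hat(s2, a')
GAMMA = 0.9
r = 0
q_s2 = {"a1": 63, "a2": 81, "a3": 100}      # current Q_hat(s2, .) estimates
q_s1_right = r + GAMMA * max(q_s2.values())
print(q_s1_right)  # 0 + 0.9 * 100 = 90.0
```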
[Figure: grid-world example showing r(s, a) (immediate reward) values, Q(s, a) values, V*(s) values, and one optimal policy.]