
Machine Learning: Lecture Slides (15)


[Figure: state-transition diagram of a recycling-robot MDP with states high and low. A(high) = {search, wait}; A(low) = {search, wait, recharge}. Edges are labeled with transition probabilities (α, 1-α, β, 1-β, 1.0) and rewards (R^search, R^wait, -3).]

[Figure: the agent-environment interaction loop. At each step the agent observes the state s and reward r supplied by the environment and responds with an action a.]

Goal: learn to choose actions that maximize r_0 + γ r_1 + γ² r_2 + ..., where 0 ≤ γ < 1.
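As a small illustration of this objective, the sketch below (my own example, not from the slides) computes the discounted return r_0 + γ r_1 + γ² r_2 + ... for a finite reward sequence.

```python
def discounted_return(rewards, gamma=0.9):
    """Sum the rewards discounted by gamma: r_0 + gamma*r_1 + gamma^2*r_2 + ..."""
    assert 0 <= gamma < 1
    return sum((gamma ** i) * r for i, r in enumerate(rewards))

# Example: three zero-reward steps followed by a reward of 100.
print(discounted_return([0, 0, 0, 100]))  # ~72.9, i.e. 0.9**3 * 100
```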

Markov Decision Processes

Assume:
● finite set of states S
● set of actions A
● at each discrete time step, the agent observes state s_t ∈ S and chooses action a_t ∈ A
● it then receives immediate reward r_t, and the state changes to s_{t+1}

Markov assumption: s_{t+1} = δ(s_t, a_t) and r_t = r(s_t, a_t)
● i.e., r_t and s_{t+1} depend only on the current state and action
● the functions δ and r may be nondeterministic
● the functions δ and r are not necessarily known to the agent
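For concreteness, a small deterministic MDP can be represented with plain lookup tables for δ and r. The states, actions, and rewards below are illustrative assumptions of mine, not taken from the slides.

```python
# A toy deterministic MDP (illustrative only).
S = ["s1", "s2", "goal"]
A = ["right", "stay"]

# Deterministic transition function delta: (state, action) -> next state.
delta = {
    ("s1", "right"): "s2",     ("s1", "stay"): "s1",
    ("s2", "right"): "goal",   ("s2", "stay"): "s2",
    ("goal", "right"): "goal", ("goal", "stay"): "goal",
}

# Deterministic reward function r: 100 for stepping into the goal, 0 otherwise.
r = {(s, a): (100 if delta[(s, a)] == "goal" and s != "goal" else 0)
     for (s, a) in delta}
```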

Value Function

To begin, consider deterministic worlds.

For each possible policy π the agent might adopt, we can define an evaluation function over states:

V^π(s) ≡ r_t + γ r_{t+1} + γ² r_{t+2} + ... = Σ_{i=0}^∞ γ^i r_{t+i}

where r_t, r_{t+1}, ... are generated by following policy π starting at state s.

Restated, the task is to learn the optimal policy π*:

π* ≡ argmax_π V^π(s), (∀s)
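In a deterministic world, V^π(s) can be approximated directly by rolling the policy forward from s and accumulating discounted rewards until further terms are negligible. This is a sketch under assumed delta/r tables of my own; it is not part of the slides.

```python
def policy_value(s, pi, delta, r, gamma=0.9, tol=1e-6):
    """Approximate V^pi(s) in a deterministic world by following pi from s."""
    value, discount = 0.0, 1.0
    while discount > tol:
        a = pi[s]                     # action prescribed by the policy
        value += discount * r[(s, a)]
        s = delta[(s, a)]             # deterministic next state
        discount *= gamma
    return value

# Tiny assumed example: moving right from s1 reaches the goal with reward 100.
delta = {("s1", "right"): "goal", ("goal", "right"): "goal"}
r = {("s1", "right"): 100, ("goal", "right"): 0}
pi = {"s1": "right", "goal": "right"}
print(policy_value("s1", pi, delta, r))  # 100.0
```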

What to Learn

We might try to have the agent learn the evaluation function V^{π*} (which we write as V*).

It could then do a lookahead search to choose the best action from any state s, because

π*(s) = argmax_a [r(s, a) + γ V*(δ(s, a))]

A problem:
● This works well if the agent knows δ : S × A → S and r : S × A → ℝ.
● But when it doesn't, it can't choose actions this way.
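In code, the lookahead search looks like the sketch below: given known δ and r (exactly the assumption the slide questions) and a learned V*, a greedy one-step lookahead recovers π*(s). The tables are hypothetical.

```python
def greedy_action(s, actions, delta, r, V, gamma=0.9):
    """One-step lookahead: argmax_a [ r(s,a) + gamma * V*(delta(s,a)) ].
    Only usable when delta and r are known to the agent."""
    return max(actions, key=lambda a: r[(s, a)] + gamma * V[delta[(s, a)]])

# Hypothetical tables: from s1, "right" enters the goal (reward 100), "stay" does not.
delta = {("s1", "right"): "goal", ("s1", "stay"): "s1"}
r = {("s1", "right"): 100, ("s1", "stay"): 0}
V = {"s1": 90.0, "goal": 0.0}
print(greedy_action("s1", ["right", "stay"], delta, r, V))  # right
```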

Q Function

Define a new function very similar to V*:

Q(s, a) ≡ r(s, a) + γ V*(δ(s, a))

If the agent learns Q, it can choose the optimal action even without knowing δ!

π*(s) = argmax_a [r(s, a) + γ V*(δ(s, a))]
π*(s) = argmax_a Q(s, a)

Q is the evaluation function the agent will learn.
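By contrast, once Q has been learned, action selection needs no model of δ or r at all. A minimal sketch with a hypothetical Q table:

```python
def best_action(Q, s, actions):
    """argmax_a Q(s, a): choose the optimal action from the Q table alone,
    with no knowledge of delta or r."""
    return max(actions, key=lambda a: Q[(s, a)])

# Hypothetical learned values.
Q = {("s1", "right"): 90.0, ("s1", "stay"): 81.0}
print(best_action(Q, "s1", ["right", "stay"]))  # right
```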

Training Rule to Learn Q

Note that Q and V* are closely related:

V*(s) = max_{a'} Q(s, a')

which allows us to write Q recursively as

Q(s_t, a_t) = r(s_t, a_t) + γ V*(δ(s_t, a_t))
            = r(s_t, a_t) + γ max_{a'} Q(s_{t+1}, a')

Nice! Let Q̂ denote the learner's current approximation to Q. Consider the training rule

Q̂(s, a) ← r + γ max_{a'} Q̂(s', a')

where s' is the state resulting from applying action a in state s.
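Written as code, one application of that training rule might look like the sketch below, assuming Q̂ is stored as a dictionary keyed by (state, action) pairs (my representation choice, not the slides').

```python
def q_update(Q, s, a, reward, s_next, actions, gamma=0.9):
    """One application of the rule: Q_hat(s, a) <- r + gamma * max_a' Q_hat(s', a')."""
    Q[(s, a)] = reward + gamma * max(Q[(s_next, a2)] for a2 in actions)
    return Q[(s, a)]
```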

Q Learning for Deterministic Worlds

For each s, a initialize the table entry Q̂(s, a) ← 0.
Observe the current state s.
Do forever:
● Select an action a and execute it
● Receive immediate reward r
● Observe the new state s'
● Update the table entry for Q̂(s, a) as follows:
  Q̂(s, a) ← r + γ max_{a'} Q̂(s', a')
● s ← s'
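Putting the whole loop together, the sketch below runs tabular Q-learning on a tiny hand-made deterministic world. The environment, the uniformly random action selection, and the restart-at-the-start-state convention are my assumptions; the slides only say "select an action a and execute it".

```python
import random

gamma = 0.9

# A tiny deterministic world (assumed): three cells A, B, G in a row.
# Stepping into the rightmost cell G yields reward 100, every other move 0.
states = ["A", "B", "G"]
actions = ["left", "right"]

def delta(s, a):
    """Deterministic transition function."""
    i = states.index(s)
    i = max(i - 1, 0) if a == "left" else min(i + 1, len(states) - 1)
    return states[i]

def reward(s, a):
    """Immediate reward: 100 for stepping into G, otherwise 0."""
    return 100 if s != "G" and delta(s, a) == "G" else 0

# For each s, a initialize the table entry Q_hat(s, a) <- 0.
Q = {(s, a): 0.0 for s in states for a in actions}

s = "A"                               # observe the current state s
for _ in range(10_000):               # "do forever", truncated for the sketch
    a = random.choice(actions)        # select an action a and execute it
    r = reward(s, a)                  # receive immediate reward r
    s_next = delta(s, a)              # observe the new state s'
    # Q_hat(s, a) <- r + gamma * max_a' Q_hat(s', a')
    Q[(s, a)] = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    s = "A" if s_next == "G" else s_next   # restart once the goal is reached

print({k: round(v, 1) for k, v in Q.items()})
# Converges to Q_hat(A, right) = 90, Q_hat(B, right) = 100, Q_hat(B, left) = 81, ...
```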

Updating Q

[Figure: two successive grid-world states; the agent moves right from the initial state s1 to the next state s2, receiving immediate reward r = 0. The current Q̂ values for the actions available in s2 are 63, 81, and 100.]

Q̂(s1, a_right) ← r + γ max_{a'} Q̂(s2, a')
               = 0 + 0.9 max{63, 81, 100}
               = 90

Notice that if rewards are non-negative, then

(∀ s, a, n)  Q̂_{n+1}(s, a) ≥ Q̂_n(s, a)

and

(∀ s, a, n)  0 ≤ Q̂_n(s, a) ≤ Q(s, a)
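The arithmetic of this update can be checked directly (γ = 0.9, as in the example):

```python
gamma = 0.9
q_s2 = [63, 81, 100]                 # current Q-hat values for the actions available in s2
q_s1_right = 0 + gamma * max(q_s2)   # immediate reward is 0
print(q_s1_right)                    # 90.0
```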

[Figure: the example grid world, shown four ways: r(s, a) (immediate reward) values, Q(s, a) values, V*(s) values, and one optimal policy.]

