Belief Tracking

- Estimating p_t(x) is now easy.
- After each action a_t and observation z_t, for every x ∈ X, update:

  p_t(x) = p(z_t \mid x) \sum_{x' \in X} p(x \mid a_t, x')\, p_{t-1}(x')

- This algorithm is quadratic in |X|. (Recall that the Kalman filter is quadratic in the number of state features; a continuous X means an infinite number of states.)

The Three Basic Problems for HMMs

1) Given the history O = a_1, z_1, a_2, z_2, ..., a_T, z_T and a model λ = (A, B, π), how do we efficiently compute P(O | λ), the probability of the history given the model?
2) Given the history O = a_1, z_1, a_2, z_2, ..., a_T, z_T and a model λ, how do we choose a corresponding state sequence X = x_1, x_2, ..., x_T that is optimal in some meaningful sense (i.e., best "explains" the observations)?
3) How do we adjust the model parameters λ = (A, B, π) to maximize P(O | λ)?
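The belief update above can be sketched in NumPy. The two-state transition matrix, observation likelihoods, and prior below are made-up illustrative numbers, not from the slides; the update adds a normalization step that the slide formula leaves implicit.

```python
import numpy as np

def belief_update(belief, T, obs):
    """p_t(x) ∝ p(z_t|x) * sum_{x'} p(x|a_t,x') * p_{t-1}(x').

    belief : p_{t-1}(x') over states, shape (N,)
    T      : T[x', x] = p(x | a_t, x'), shape (N, N)
    obs    : obs[x] = p(z_t | x) for the observation received, shape (N,)
    """
    predicted = T.T @ belief            # prediction: sum over x' (the O(|X|^2) step)
    posterior = obs * predicted         # correction: weight by observation likelihood
    return posterior / posterior.sum()  # normalize to a distribution

# Hypothetical 2-state example.
T = np.array([[0.8, 0.2],    # row x'=0: p(x | a_t, x'=0)
              [0.3, 0.7]])   # row x'=1: p(x | a_t, x'=1)
obs = np.array([0.9, 0.1])   # p(z_t | x) for the z_t actually seen
b0 = np.array([0.5, 0.5])    # uniform prior belief
b1 = belief_update(b0, T, obs)
```

The matrix-vector product makes the quadratic cost in |X| explicit: each of the |X| posterior entries sums over all |X| predecessor states.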
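Problem 1 is classically solved by the forward algorithm, which computes P(O | λ) in O(T·N²) instead of summing over all N^T state sequences. A minimal sketch for a plain HMM is below; the A, B, π values are made-up examples, and for simplicity the sketch ignores the actions that the slides' histories interleave with the observations.

```python
import numpy as np

def forward_likelihood(A, B, pi, obs_seq):
    """P(O | lambda) via the forward recursion.

    A[i, j] = p(x_{t+1}=j | x_t=i), B[i, k] = p(z=k | x=i), pi[i] = p(x_1=i).
    alpha_t(i) = p(z_1..z_t, x_t=i | lambda); the answer is sum_i alpha_T(i).
    """
    alpha = pi * B[:, obs_seq[0]]           # initialization
    for z in obs_seq[1:]:
        alpha = (alpha @ A) * B[:, z]       # induction: O(N^2) per step
    return alpha.sum()                       # termination

# Hypothetical 2-state, 2-symbol model.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],
              [0.2, 0.8]])
pi = np.array([0.6, 0.4])
L = forward_likelihood(A, B, pi, [0, 1, 0])
```

Problems 2 and 3 are solved with the same recursion structure: the Viterbi algorithm replaces the sum over predecessors with a max, and Baum-Welch reuses the forward (and backward) quantities to re-estimate (A, B, π).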