
Cognitive Robotics: Foundations of State Estimation


Foundations of State Estimation

Topics:
- Bayes filters
- Kalman filters
- Hidden Markov models

Additional reading:
- G. Welch and G. Bishop. "An Introduction to the Kalman Filter". University of North Carolina at Chapel Hill, Department of Computer Science. TR 95-041. http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html
- J. J. Leonard and H. F. Durrant-Whyte. "Mobile robot localization by tracking geometric beacons." IEEE Trans. Robotics and Automation, 7(3):376-382, June 1991.
- L. R. Rabiner. "A tutorial on hidden Markov models." Proceedings of the IEEE, vol. 77, pp. 257-286, 1989.

Issues

Statement:
- Given a set of observations of the world, what is the current state of the world?
- Alternative: Given a set of observations of the world, what is the state of the world at time t?

Inputs:
- Model
- Sequence of observations, or perhaps actions and observations

Outputs from different algorithms:
- Most likely current state x_t
- Probability distribution over possible current states p(x_t)
- Most likely sequence of states over time: x_1, x_2, x_3, x_4, ..., x_t
- Sequence of probability distributions over time: p(x_1), p(x_2), p(x_3), ..., p(x_t)

Choices:
- How to compute the estimate x_t or p(x_t)
- How to represent p(x_t)

Graphical Models, Bayes' Rule and the Markov Assumption

[Figure: two-slice graphical model. Hidden states x_1, x_2 evolve under actions a_1 through the transition model T(x_j | a_i, x_i); observable measurements z_1, z_2 are generated through the observation model O(z_j | x_i); beliefs b_1, b_2 summarize the evidence.]

Bayes' rule:

    p(x | y) = p(y | x) p(x) / p(y)

Markov assumption:

    p(x_t | x_{t-1}, a_t, a_0, z_0, a_1, z_1, ..., z_{t-1}) = p(x_t | x_{t-1}, a_t)

The Bayes' Filter

    p_t(x) = p(x | a_0, z_0, a_1, z_1, a_2, z_2, ..., a_t, z_t)

According to Bayes' rule (up to a normalizing constant):

    ∝ p(z_t | x, a_0, z_0, ..., a_t) p(x | a_0, z_0, ..., a_t)

According to the Markov assumption:

    = p(z_t | x) p(x | a_0, z_0, ..., a_t)

Introducing an auxiliary variable x' (the previous state):

    = p(z_t | x) ∫_{x'} p(x | a_t, x') p(x' | a_0, z_0, ..., a_{t-1}, z_{t-1}) dx'

And with recursion:

    = p(z_t | x) ∫_{x'} p(x | a_t, x') p_{t-1}(x') dx'
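On a discrete state space the integral in the recursion above becomes a sum, and the filter is a short function. The following is a minimal Python sketch of one predict-update cycle; the 5-cell corridor world, the motion model, and the sensor likelihood are invented purely for illustration:

```python
# One cycle of the discrete Bayes filter:
#   p_t(x) ∝ p(z_t|x) · Σ_x' p(x|a_t,x') · p_{t-1}(x')
def bayes_filter_step(prior, transition, likelihood):
    n = len(prior)
    # Prediction: marginalize over the previous state x'
    predicted = [sum(transition[xp][x] * prior[xp] for xp in range(n))
                 for x in range(n)]
    # Correction: weight by the observation likelihood, then normalize
    posterior = [likelihood[x] * predicted[x] for x in range(n)]
    total = sum(posterior)
    return [p / total for p in posterior]

# Invented 5-cell corridor: the action "move right" succeeds with prob 0.8.
n = 5
transition = [[0.0] * n for _ in range(n)]
for xp in range(n):
    transition[xp][min(xp + 1, n - 1)] += 0.8   # intended motion
    transition[xp][xp] += 0.2                   # slip: stay put

likelihood = [0.1, 0.1, 0.7, 0.05, 0.05]        # invented sensor: "probably cell 2"
posterior = bayes_filter_step([0.2] * n, transition, likelihood)
```

Starting from a uniform belief, one step of motion followed by the sensing update concentrates the belief on cell 2, exactly as the predict/correct structure of the recursion suggests.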

Bayes Filter

[Figure: three densities over the state space: the initial belief, the posterior belief after an action is taken (shifted and flattened), and the posterior belief after sensing (sharpened).]

The Kalman Filter (Kalman, 1960)

- Linear process and measurement models
- Gaussian noise
- Gaussian state estimate

Process model:      x_t = A x_{t-1} + B u_{t-1} + q_{t-1}
Measurement model:  z_t = H x_t + r_t

[Figure: conditional densities f_{x(t1)|z(t1)}(x | z_1) and f_{x(t2)|z(t1),z(t2)}(x | z_1, z_2), narrowing as measurements arrive. Image adapted from Maybeck.]

Linear Kalman Filter

- Only need first two moments:

    x̂_t = E[x_t]
    C_t = E[(x_t − x̂_t)(x_t − x̂_t)^T]

- Process model (prediction):

    x̂⁻_t = A x̂_{t-1} + B u_{t-1}
    C⁻_t = A C_{t-1} A^T + Q

- Measurement model (update):

    K_t = C⁻_t H^T (H C⁻_t H^T + R)^{-1}
    x̂_t = x̂⁻_t + K_t (z_t − H x̂⁻_t)
    C_t = (I − K_t H) C⁻_t

The Extended Kalman Filter

- Process model is now x_t = f(x_{t-1}, u_{t-1}, q_{t-1})
- Measurement model is z_t = h(x_t, r_t)
- Linearising, we get:

    x_t ≈ f(x̂_{t-1}, u_{t-1}, 0) + A (x_{t-1} − x̂_{t-1}) + W q_{t-1}
    z_t ≈ h(x̃_t, 0) + H (x_t − x̃_t) + V r_t

- where A, H, W and V are the Jacobians:

    A = [∂f_i/∂x_j]  (n×n)      H = [∂h_i/∂x_j]  (m×n)
    W = [∂f_i/∂q_j]  (n×n)      V = [∂h_i/∂v_j]  (m×m)
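For a scalar state the matrices A, B, H, Q, R collapse to numbers, which makes the predict/update equations above easy to check by hand. A minimal sketch; the model parameters and the measurement sequence are invented for illustration:

```python
# Scalar Kalman filter: each line mirrors one of the equations above.
def kf_step(x_hat, C, u, z, A=1.0, B=0.0, H=1.0, Q=0.01, R=1.0):
    # Prediction: x̂⁻ = A x̂ + B u,   C⁻ = A C Aᵀ + Q
    x_pred = A * x_hat + B * u
    C_pred = A * C * A + Q
    # Update:     K = C⁻ Hᵀ (H C⁻ Hᵀ + R)⁻¹
    K = C_pred * H / (H * C_pred * H + R)
    x_new = x_pred + K * (z - H * x_pred)
    C_new = (1.0 - K * H) * C_pred
    return x_new, C_new

# Track a stationary scalar state (true value 5.0) from noisy-looking readings.
x_hat, C = 0.0, 10.0                      # vague initial estimate
for z in [4.8, 5.1, 5.0, 4.9, 5.2]:
    x_hat, C = kf_step(x_hat, C, u=0.0, z=z)
```

After a handful of measurements the estimate converges near the true value and the covariance C shrinks, reflecting that only these two moments are carried forward.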

Updates for the EKF

- Still using only first two moments.
- Process model (prediction):

    x̂⁻_t = f(x̂_{t-1}, u_{t-1}, 0)
    C⁻_t = A_t C_{t-1} A_t^T + W_t Q W_t^T

- Measurement model (update):

    K_t = C⁻_t H_t^T (H_t C⁻_t H_t^T + V_t R V_t^T)^{-1}
    x̂_t = x̂⁻_t + K_t (z_t − h(x̂⁻_t, 0))
    C_t = (I − K_t H_t) C⁻_t

Robot Navigation (Leonard and Durrant-Whyte, 1991)

[Figure: initial belief, action, measurement for a planar robot.]

State ξ = [x, y, θ]^T, action u = [Δt, Δθ]^T, landmark λ = [λ_x, λ_y]^T, measurement z = [r, b]^T (range, bearing):

    f(ξ, u, q) = [ x + Δt cos θ + q_Δt cos θ
                   y + Δt sin θ + q_Δt sin θ
                   θ + Δθ + q_Δθ ]

    h(ξ, v) = [ sqrt((λ_x − x)² + (λ_y − y)²) + v_r          (range)
                tan⁻¹((λ_y − y)/(λ_x − x)) − θ + v_b         (bearing) ]
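The EKF update above can be sketched in the scalar case. The model here is invented purely to exercise the Jacobians: a random-walk process (so A = W = 1) with the nonlinear measurement z = x² + r, giving H = 2x̂⁻ and V = 1:

```python
# Scalar EKF sketch with an invented model: f(x, u, q) = x + q, h(x, r) = x² + r.
# Jacobians: A = ∂f/∂x = 1, W = ∂f/∂q = 1, H = ∂h/∂x = 2x, V = ∂h/∂r = 1.
def ekf_step(x_hat, C, z, Q=0.01, R=0.1):
    # Prediction through the (here trivial) process model
    x_pred = x_hat
    C_pred = C + Q
    # Linearize the measurement h(x) = x² about the prediction
    H = 2.0 * x_pred
    K = C_pred * H / (H * C_pred * H + R)
    x_new = x_pred + K * (z - x_pred ** 2)       # innovation uses h, not H
    C_new = (1.0 - K * H) * C_pred
    return x_new, C_new

x_hat, C = 1.5, 0.5                      # initial guess; true state is 2.0
for z in [4.1, 3.9, 4.0, 4.05, 3.95]:    # noisy-looking readings of x² = 4
    x_hat, C = ekf_step(x_hat, C, z)
```

Note that the innovation z − h(x̂⁻) is computed with the full nonlinear h; the Jacobian H only enters the gain and covariance, exactly as in the update equations above.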

The EKF for Robot Localization

Prediction step:

    f(ξ, u, q) = [ x + Δt cos θ + q_Δt cos θ
                   y + Δt sin θ + q_Δt sin θ
                   θ + Δθ + q_Δθ ]

    A = [ 1  0  −Δt sin θ
          0  1   Δt cos θ
          0  0   1         ]

    W = [ Δt cos θ   Δt sin θ   Δθ ]^T

Measurement step:

    h(ξ, v) = [ sqrt((λ_x − x)² + (λ_y − y)²) + v_r
                tan⁻¹((λ_y − y)/(λ_x − x)) − θ + v_b ]

    H = [ −(λ_x − x)/r    −(λ_y − y)/r     0
           (λ_y − y)/r²   −(λ_x − x)/r²   −1 ]

    V = [ 1  0
          0  1 ]

Problems with Kalman Filters

- Unimodal probability distribution
- Gaussian sensor and motion error models
  - Particularly painful for range data that can have occlusions
- Quadratic in number of state features
- Data association
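The prediction-step Jacobian A of the localization model above can be checked against a finite-difference approximation, a useful sanity test when deriving such matrices by hand. The pose and action values below are illustrative:

```python
import math

def predict(pose, u):
    """Noise-free process model f(ξ, u, 0) for the planar robot above."""
    x, y, theta = pose
    dt, dtheta = u                       # dt = forward translation, dtheta = turn
    return [x + dt * math.cos(theta), y + dt * math.sin(theta), theta + dtheta]

def jacobian_A(pose, u):
    """Analytic Jacobian A = ∂f/∂ξ from the slide."""
    _, _, theta = pose
    dt, _ = u
    return [[1.0, 0.0, -dt * math.sin(theta)],
            [0.0, 1.0,  dt * math.cos(theta)],
            [0.0, 0.0,  1.0]]

# Compare each column of A with a finite difference of f (illustrative numbers).
pose, u, eps = [1.0, 2.0, 0.5], [0.3, 0.1], 1e-6
A = jacobian_A(pose, u)
base = predict(pose, u)
for j in range(3):
    bumped = list(pose)
    bumped[j] += eps
    col = [(a - b) / eps for a, b in zip(predict(bumped, u), base)]
    for i in range(3):
        assert abs(A[i][j] - col[i]) < 1e-4
```

The same column-by-column check works for H, W and V; it catches sign errors like the −Δt sin θ term before they silently corrupt the covariance update.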

Hidden Markov Models

- Discrete states, actions and observations
  - f(·,·,·) and h(·,·) can now be written as tables

[Figure: the same graphical model as before, with hidden states x_i, actions a_i, observations z_j generated by O(z_j | x_i), transitions T(x_j | a_i, x_i), and beliefs b_i.]

(Somewhat) Useful for Localization in Topological Maps

- From state x_1 under action a: p(x_2 | x_1, a) = 0.9, p(x_3 | x_1, a) = 0.05, p(x_4 | x_1, a) = 0.05
- Observations can be features such as corridor features, junction features, etc.

Belief Tracking

- Estimating p_t(x) is now easy.
- After each action a_t and observation z_t, for each x ∈ X, update:

    p_t(x) = p(z_t | x) Σ_{x'∈X} p(x | a_t, x') p_{t-1}(x')

- This algorithm is quadratic in |X|. (Recall that the Kalman filter is quadratic in the number of state features; continuous X means an infinite number of states.)

The Three Basic Problems for HMMs

1) Given the history O = a_1, z_1, a_2, z_2, ..., a_T, z_T and a model λ = (A, B, π), how do we efficiently compute P(O | λ), the probability of the history, given the model?
2) Given the history O = a_1, z_1, a_2, z_2, ..., a_T, z_T and a model λ, how do we choose a corresponding state sequence X = x_1, x_2, ..., x_T which is optimal in some meaningful sense (i.e., best "explains" the observations)?
3) How do we adjust the model parameters λ = (A, B, π) to maximize P(O | λ)?
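The belief-tracking update above, applied to the four-state topological map: the transition probabilities from x_1 come from that slide, while the observation table and the assumption that x_2..x_4 are absorbing are invented for illustration:

```python
# p_t(x) ∝ p(z_t|x) Σ_x' p(x|a_t,x') p_{t-1}(x') over a 4-state topological map.
def update_belief(belief, T, O, z):
    n = len(belief)
    predicted = [sum(T[xp][x] * belief[xp] for xp in range(n)) for x in range(n)]
    posterior = [O[x][z] * predicted[x] for x in range(n)]
    s = sum(posterior)
    return [p / s for p in posterior]

# Transition table for the single action a (rows: x', cols: x).
# Row 0 uses the slide's numbers: .9 to x2, .05 each to x3, x4.
T = [[0.0, 0.9, 0.05, 0.05],
     [0.0, 1.0, 0.0,  0.0 ],   # invented: x2..x4 absorb under this action
     [0.0, 0.0, 1.0,  0.0 ],
     [0.0, 0.0, 0.0,  1.0 ]]
# Invented observation model: z=0 is a "corridor feature", seen mostly from x2.
O = [[0.25, 0.75], [0.8, 0.2], [0.3, 0.7], [0.3, 0.7]]

belief = [1.0, 0.0, 0.0, 0.0]          # start certain we are in x1
belief = update_belief(belief, T, O, z=0)
```

The double loop over x and x' is the |X|² cost noted above; one action plus a corridor observation concentrates the belief on x_2.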

HMM Basic Problem 1

- Probability of history O given λ is a sum over all state sequences Q = x_1, x_2, x_3, ..., x_T:

    P(O | λ) = Σ_{all Q} P(O | Q, λ) P(Q | λ)
             = Σ_{x_1, x_2, ...} π(x_1) p(z_1 | x_1) p(x_2 | x_1, a_1) p(z_2 | x_2) p(x_3 | x_2, a_2) ...

- Summing over all state sequences costs O(2T · |X|^T).
- Instead, build a lattice of states forward in time, computing probabilities of each possible trajectory as the lattice is built.
- The forward algorithm costs O(|X|² T).

The forward algorithm:

1. Initialization:

    α_1(i) = π(x_i) p(z_1 | x_i)

2. Induction (repeat for t = 1 : T−1):

    α_{t+1}(j) = [ Σ_{i=1}^{|X|} α_t(i) p(x_j | x_i, a_t) ] p(z_{t+1} | x_j)

3. Termination:

    P(O | λ) = Σ_{i=1}^{|X|} α_T(i)

The computation of α_{t+1}(j) can be implemented as a lattice over observations t and states j.
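The three steps above translate directly into code. In this sketch the two-state model (π and the transition and observation tables) is invented for illustration, and a single fixed transition table stands in for p(x_j | x_i, a_t):

```python
# Forward algorithm: α_1(i) = π(i) p(z_1|x_i),
# α_{t+1}(j) = [Σ_i α_t(i) p(x_j|x_i,a_t)] p(z_{t+1}|x_j),  P(O|λ) = Σ_i α_T(i).
def forward(pi, T, O, obs):
    n = len(pi)
    alpha = [pi[i] * O[i][obs[0]] for i in range(n)]           # 1) initialization
    for z in obs[1:]:                                          # 2) induction
        alpha = [sum(alpha[i] * T[i][j] for i in range(n)) * O[j][z]
                 for j in range(n)]
    return sum(alpha)                                          # 3) termination

pi = [0.6, 0.4]                         # invented 2-state model
T = [[0.7, 0.3], [0.4, 0.6]]            # p(x_j | x_i) for a fixed action
O = [[0.9, 0.1], [0.2, 0.8]]            # p(z | x_i), two observation symbols
p = forward(pi, T, O, obs=[0, 1, 0])
```

Each pass of the induction loop fills one column of the lattice, so the total work is |X|² per time step rather than |X|^T overall.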

HMM Basic Problem 2

- Viterbi decoding: same principle as the forward algorithm, with an extra term.

1) Initialization:

    δ_1(i) = π(x_i) p(z_1 | x_i)
    ψ_1(i) = 0

2) Induction:

    δ_{t+1}(j) = max_{1≤i≤|X|} [ δ_t(i) p(x_j | x_i, a_t) ] p(z_{t+1} | x_j)
    ψ_{t+1}(j) = argmax_{1≤i≤|X|} [ δ_t(i) p(x_j | x_i, a_t) ]

3) Termination:

    P* = max_{1≤i≤|X|} δ_T(i)
    x*_T = argmax_{1≤i≤|X|} δ_T(i)

4) Backtracking:

    x*_t = ψ_{t+1}(x*_{t+1})

What You Should Know

- Form of the Bayes filter
- Kalman filter representation
- How to take a state estimation problem and represent it as a Kalman (or Extended Kalman) filter
- What terms are needed and how to find them
- What a hidden Markov model is
- The Forward algorithm
- The Viterbi algorithm
