
Cognitive Robotics (English edition), Mapping Topics: Topological Maps


Mapping

● Topics:
  – Topological Maps
  – SLAM
  – HMMs Revisited
● Additional reading:
  – B. J. Kuipers & Y.-T. Byun. 1991. "A robot exploration and mapping strategy based on a semantic hierarchy of spatial representations." Journal of Robotics and Autonomous Systems, 8: 47-63.
  – J. J. Leonard, I. J. Cox, and H. F. Durrant-Whyte. "Dynamic map building for an autonomous mobile robot." Int. J. Robotics Research, 11(4): 286-298, August 1992.

Issues

● Statement:
  – Given a set of observations of the world, what is the world like?
● Inputs:
  – Sequence of observations; or perhaps actions and observations; or perhaps actions, observations, and vehicle motion/sensor models
● Outputs from different algorithms:
  – Graph of connected states
  – Graph of connected states, transition probabilities p(xi | xj, ak) and observation probabilities p(zi | xj)
  – Description of landmark locations
  – Occupancy grid
  – List of points from range sensors
● Choices:
  – How to represent the model
  – "Closing the loop"


Graphical Models

[Figure: graphical model. Hidden states x1, x2 are linked by actions a1 via p(xj | ai, xi); each state emits an observable observation z1, z2 via p(zj | xi); beliefs b1, b2 summarize the hidden state.]

● Want to estimate p(xi | xj, ak) and p(zi | xj) instead of {x1, x2, ..., xT}

Topological Maps

● Idea: build map as a graph of sensor signatures, with control laws connecting the nodes
● Actions are control laws
● Observations are...?
● p(·|·) = {0, 1}
● Layer geometric information on topology

[Figures: extracted topology; metric map layered on top]
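A topological map of this kind can be sketched as a plain graph; the `Place`/`TopoMap` classes and the place names below are hypothetical illustrations, not part of the Kuipers & Byun system. Nodes store a sensor signature, edges store the control law that connects two places, and transitions are deterministic, matching p(·|·) = {0, 1}.

```python
from dataclasses import dataclass, field

@dataclass
class Place:
    signature: tuple                             # sensor signature at this distinctive place
    edges: dict = field(default_factory=dict)    # control-law name -> destination Place

class TopoMap:
    def __init__(self):
        self.places = []

    def add_place(self, signature):
        place = Place(signature)
        self.places.append(place)
        return place

    def connect(self, src, control_law, dst):
        # p(dst | src, control_law) = 1: edges are deterministic
        src.edges[control_law] = dst

    def execute(self, start, control_laws):
        """Follow a sequence of control laws; return the signatures seen."""
        current, seen = start, [start.signature]
        for law in control_laws:
            current = current.edges[law]
            seen.append(current.signature)
        return seen

# Usage: a junction connected to a room by a "follow-midline" control law.
m = TopoMap()
junction = m.add_place(("junction",))
room_a = m.add_place(("room_a",))
m.connect(junction, "follow-midline", room_a)
m.connect(room_a, "follow-midline", junction)
print(m.execute(junction, ["follow-midline", "follow-midline"]))
# -> [('junction',), ('room_a',), ('junction',)]
```

Metric information can then be layered on top by attaching coordinates to each `Place` without changing the graph structure.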


Building the Topology

● Assume a set of distinctiveness measures d1, ..., dn and a set of local control strategies (LCSs)
1. Choose an LCS
2. Follow the LCS until a maximum in one of the di is seen
3. Hill-climb on di until the maximum is reached
4. Move towards open space
5. Repeat

[Figures: a simple environment; the distinctiveness surface; hill-climbing]

Measures and Strategies

● Measures:
  – Range variance
  – Lateral range variance
  – Range symmetry
  – Temporal discontinuity
  – Number of directions of reasonable motion
  – Temporal change in number of directions of motion
● Strategies:
  – Follow the midline
  – Move along object on left
  – Move along object on right
  – Blind step
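Step 3 of the loop above is ordinary hill-climbing on a distinctiveness surface. The sketch below shows it on a toy 1-D surface; the function `d` is a made-up stand-in for the real measures (range variance, symmetry, and so on), which operate on range-sensor data rather than a closed-form curve.

```python
def d(x):
    # Toy 1-D distinctiveness surface with a single peak at x = 10.
    # A real measure would be computed from range data at pose x.
    return -(x - 10) ** 2

def hill_climb(x, step=1):
    """Step 3: move in whichever direction increases d; stop when neither does."""
    while True:
        best = max((x - step, x + step), key=d)
        if d(best) <= d(x):
            return x          # local maximum of d: a distinctive place
        x = best

# Climbing from either side of the peak reaches the same distinctive place.
print(hill_climb(-5))  # -> 10
print(hill_climb(23))  # -> 10
```

In the full loop, the robot would first follow a local control strategy (e.g. follow-the-midline) until one of the di begins to rise, then hand control to `hill_climb`, record the place, and move toward open space to repeat.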


Pros and Cons

● Pro:
  – Human-like maps
  – Exploration strategy built into the mapping process
  – Purely on-line
● Con:
  – Loop closure
  – Will not necessarily map the space completely
  – Absence of decision-theoretic measures means no measure of map quality
  – A large number of free parameters and heuristics
  – Purely on-line

The Extended Kalman Filter for Building Maps

● Process model is now

$$x_t = f(x_{t-1}, u_{t-1}, q_{t-1})$$

● Measurement model is

$$z_t = h(x_t, r_t)$$

● Linearising, we get

$$\tilde{x}_t = f(\hat{x}_{t-1}, u_{t-1}, 0)$$
$$x_t \approx \tilde{x}_t + A\,(x_{t-1} - \hat{x}_{t-1}) + W q_{t-1}$$
$$z_t \approx h(\tilde{x}_t, 0) + H\,(x_t - \tilde{x}_t) + V r_t$$

● where A, H, W and V are the Jacobians:

$$A = \begin{bmatrix} \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial x_1} & \cdots & \frac{\partial f_n}{\partial x_n} \end{bmatrix} \quad
H = \begin{bmatrix} \frac{\partial h_1}{\partial x_1} & \cdots & \frac{\partial h_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial h_m}{\partial x_1} & \cdots & \frac{\partial h_m}{\partial x_n} \end{bmatrix} \quad
W = \begin{bmatrix} \frac{\partial f_1}{\partial q_1} & \cdots & \frac{\partial f_1}{\partial q_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_n}{\partial q_1} & \cdots & \frac{\partial f_n}{\partial q_n} \end{bmatrix} \quad
V = \begin{bmatrix} \frac{\partial h_1}{\partial v_1} & \cdots & \frac{\partial h_1}{\partial v_m} \\ \vdots & \ddots & \vdots \\ \frac{\partial h_m}{\partial v_1} & \cdots & \frac{\partial h_m}{\partial v_m} \end{bmatrix}$$


Updates for the EKF

● Still using only first two moments.
● Process model:

$$\hat{x}_t^- = f(\hat{x}_{t-1}, u_{t-1}, 0), \qquad C_t^- = A C_{t-1} A^T + W Q W^T$$

● Measurement model:

$$K_t = C_t^- H^T (H C_t^- H^T + V R V^T)^{-1}$$
$$\hat{x}_t = \hat{x}_t^- + K_t (z_t - h(\hat{x}_t^-, 0))$$
$$C_t = (I - K_t H) C_t^-$$

Robot Mapping

● Initial belief: the state is the robot pose plus one landmark,

$$\xi = [x, y, \theta, \lambda_x, \lambda_y]^T, \qquad \lambda = [\lambda_x, \lambda_y]^T$$

● Action: $u = [\Delta t, \Delta\theta]^T$, giving

$$f(\xi, u, q) = \begin{bmatrix} x + \Delta t \cos\theta + q \Delta t \cos\theta \\ y + \Delta t \sin\theta + q \Delta t \sin\theta \\ \theta + \Delta\theta + q \Delta\theta \\ \lambda_x \\ \lambda_y \end{bmatrix}$$

● Measurement: $z = [r, b]^T$ (range and bearing), with

$$h(\xi, v) = \begin{bmatrix} \text{range} \\ \text{bearing} \end{bmatrix} = \begin{bmatrix} \sqrt{(\lambda_x - x)^2 + (\lambda_y - y)^2} + v_r \\ \tan^{-1}\!\left(\dfrac{\lambda_y - y}{\lambda_x - x}\right) - \theta + v_\theta \end{bmatrix}$$

Leonard, Cox and Durrant-Whyte, 1992
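The update equations above translate almost line-for-line into NumPy. This is a minimal sketch: the toy scalar model at the bottom (`f`, `h`, and the identity matrices) is hypothetical, chosen only so the predict/update cycle can be run end to end.

```python
import numpy as np

def ekf_predict(x_hat, C, f, A, W, Q, u):
    # x̂ₜ⁻ = f(x̂ₜ₋₁, uₜ₋₁, 0);  Cₜ⁻ = A Cₜ₋₁ Aᵀ + W Q Wᵀ
    x_pred = f(x_hat, u)
    C_pred = A @ C @ A.T + W @ Q @ W.T
    return x_pred, C_pred

def ekf_update(x_pred, C_pred, z, h, H, V, R):
    # Kₜ = Cₜ⁻ Hᵀ (H Cₜ⁻ Hᵀ + V R Vᵀ)⁻¹
    S = H @ C_pred @ H.T + V @ R @ V.T
    K = C_pred @ H.T @ np.linalg.inv(S)
    # x̂ₜ = x̂ₜ⁻ + Kₜ (zₜ − h(x̂ₜ⁻, 0));  Cₜ = (I − Kₜ H) Cₜ⁻
    x_hat = x_pred + K @ (z - h(x_pred))
    C = (np.eye(len(x_hat)) - K @ H) @ C_pred
    return x_hat, C

# Toy 1-D model: the state moves by u, the sensor observes the state directly.
f = lambda x, u: x + u
h = lambda x: x
A = W = H = V = np.eye(1)
Q, R = np.eye(1) * 0.1, np.eye(1) * 0.5

x_hat, C = np.zeros(1), np.eye(1)
x_pred, C_pred = ekf_predict(x_hat, C, f, A, W, Q, np.array([1.0]))
x_hat, C = ekf_update(x_pred, C_pred, np.array([1.2]), h, H, V, R)
print(x_hat, C)  # estimate pulled toward the measurement; covariance shrinks
```

With these numbers the gain is K = 1.1/1.6 = 0.6875, so the estimate lands at 1.1375, between the prediction 1.0 and the measurement 1.2, and the covariance drops from 1.1 to about 0.34.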


The EKF for Robot Localization

● Prediction step: f(ξ, u, q) as in the mapping model, with Jacobians

$$A = \begin{bmatrix} 1 & 0 & -\Delta t \sin\theta & 0 & 0 \\ 0 & 1 & \Delta t \cos\theta & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix} \qquad W = \begin{bmatrix} 1 & 1 & 1 & 0 & 0 \end{bmatrix}^T$$

● Measurement step: h(ξ, v) as in the mapping model, with r the predicted range and

$$H = \begin{bmatrix} -\dfrac{\lambda_x - x}{r} & -\dfrac{\lambda_y - y}{r} & 0 & \dfrac{\lambda_x - x}{r} & \dfrac{\lambda_y - y}{r} \\ \dfrac{\lambda_y - y}{r^2} & -\dfrac{\lambda_x - x}{r^2} & -1 & -\dfrac{\lambda_y - y}{r^2} & \dfrac{\lambda_x - x}{r^2} \end{bmatrix} \qquad V = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

Pros and Cons

● Pro:
  – Very reliable
  – Easy to implement if assumptions hold
  – Gives a very powerful framework for reasoning about the quality of the map
  – Can be run on- or off-line
  – Landmark representation somewhat intuitive to humans
● Con:
  – Very sensitive to data association errors
  – Point features
  – Linear-Gaussian world model
  – Quadratic complexity
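Analytic Jacobians like A and H are easy to get wrong, so a standard sanity check is to compare them against central finite differences of f and h at q = v = 0. The sketch below does this for the models above; `arctan2` stands in for the slide's tan⁻¹ (same partial derivatives, correct quadrant), and the test point `xi` is an arbitrary made-up state.

```python
import numpy as np

def f(xi, u):
    # Motion model at q = 0: state ξ = [x, y, θ, λx, λy], action u = [Δt, Δθ].
    x, y, th, lx, ly = xi
    dt, dth = u
    return np.array([x + dt * np.cos(th), y + dt * np.sin(th), th + dth, lx, ly])

def A_jacobian(xi, u):
    # ∂f/∂ξ from the slide: identity plus the two θ-dependent entries.
    th = xi[2]
    dt = u[0]
    A = np.eye(5)
    A[0, 2] = -dt * np.sin(th)
    A[1, 2] = dt * np.cos(th)
    return A

def h(xi):
    # Range-bearing measurement of the single landmark at v = 0.
    x, y, th, lx, ly = xi
    return np.array([np.hypot(lx - x, ly - y),
                     np.arctan2(ly - y, lx - x) - th])

def H_jacobian(xi):
    x, y, th, lx, ly = xi
    dx, dy = lx - x, ly - y
    r = np.hypot(dx, dy)
    return np.array([
        [-dx / r,    -dy / r,    0.0, dx / r,     dy / r],
        [dy / r**2,  -dx / r**2, -1.0, -dy / r**2, dx / r**2],
    ])

# Central finite differences, column by column.
xi = np.array([1.0, 2.0, 0.3, 5.0, 6.0])
u = np.array([0.5, 0.1])
eps = 1e-6
A_num = np.column_stack([(f(xi + eps * e, u) - f(xi - eps * e, u)) / (2 * eps)
                         for e in np.eye(5)])
H_num = np.column_stack([(h(xi + eps * e) - h(xi - eps * e)) / (2 * eps)
                         for e in np.eye(5)])
assert np.allclose(A_num, A_jacobian(xi, u), atol=1e-6)
assert np.allclose(H_num, H_jacobian(xi), atol=1e-5)
print("Jacobians match finite differences")
```

Note the check covers A and H only; W on the slide is the simplified [1, 1, 1, 0, 0]ᵀ rather than the literal ∂f/∂q of the noise-scaled model.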


HMMs Revisited

3) How do we adjust the model parameters λ = (A, B, π) to maximize P(O|λ)?

● Assume a topology of states X
● Want to learn p(xi | xj, ak) and p(zi | xj) from a sequence O = (z1, a1, z2, a2, ..., zT, aT)
● Approximate idea:
  – Assume we have O = (z1, a1, x1, z2, a2, x2, ..., zT, aT, xT)
  – Use counting to estimate p(xi | xj, ak) and p(zi | xj)
  – Re-estimate x1, x2, ..., xT
  – Repeat
● How to do this efficiently and correctly?

Remember HMM Basic Problem 1

$$\alpha_1(i) = \pi_i \, p(z_1 | x_i)$$
$$\alpha_{t+1}(i) = \left[ \sum_{j=1}^{|X|} \alpha_t(j) \, p(x_i | x_j, a_t) \right] p(z_{t+1} | x_i)$$
$$p(O | \lambda) = \sum_{i=1}^{|X|} \alpha_T(i)$$

Backwards terms:

$$\beta_T(i) = 1$$
$$\beta_t(i) = \sum_{j=1}^{|X|} p(x_j | x_i, a_t) \, p(z_{t+1} | x_j) \, \beta_{t+1}(j)$$
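The forward and backward recursions above can be sketched in a few lines of NumPy. Actions are omitted (as on the later slides), and the two-state model at the bottom uses made-up numbers purely for illustration. A useful correctness check: p(O|λ) computed as Σᵢ α_T(i) must equal Σᵢ αₜ(i) βₜ(i) at every t.

```python
import numpy as np

def forward(pi, A, B, obs):
    # α₁(i) = πᵢ p(z₁|xᵢ);  αₜ₊₁(i) = [Σⱼ αₜ(j) p(xᵢ|xⱼ)] p(zₜ₊₁|xᵢ)
    alpha = np.zeros((len(obs), len(pi)))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def backward(A, B, obs):
    # β_T(i) = 1;  βₜ(i) = Σⱼ p(xⱼ|xᵢ) p(zₜ₊₁|xⱼ) βₜ₊₁(j)
    beta = np.ones((len(obs), A.shape[0]))
    for t in range(len(obs) - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    return beta

# Toy two-state model: A[i, j] = p(x_j | x_i), B[i, z] = p(z | x_i).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
obs = [0, 1, 0]

alpha = forward(pi, A, B, obs)
beta = backward(A, B, obs)
likelihood = alpha[-1].sum()                 # p(O|λ) = Σᵢ α_T(i)
# Sanity check: Σᵢ αₜ(i) βₜ(i) gives the same likelihood at every t.
assert np.allclose([(alpha[t] * beta[t]).sum() for t in range(len(obs))],
                   likelihood)
print(likelihood)
```

For long sequences the α and β values underflow; practical implementations rescale each time slice (or work in log space), which this sketch omits.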


HMMs Revisited

1. Assume a prior model
2. Compute the probability of a transition from xi to xj at time t,

$$\zeta_t(i,j) = \frac{\alpha_t(i) \, p(x_j | x_i) \, p(z_{t+1} | x_j) \, \beta_{t+1}(j)}{p(O | \lambda)} = \frac{\alpha_t(i) \, p(x_j | x_i) \, p(z_{t+1} | x_j) \, \beta_{t+1}(j)}{\sum_{i=1}^{N} \sum_{j=1}^{N} \alpha_t(i) \, p(x_j | x_i) \, p(z_{t+1} | x_j) \, \beta_{t+1}(j)}$$

and the probability of all transitions out of xi at time t,

$$\gamma_t(i) = \sum_{j=1}^{N} \zeta_t(i,j)$$

[Figure: trellis segment from time t to t+1, showing αₜ(i) at state sᵢ, the transition probability aᵢⱼ, the emission bⱼ(oₜ₊₁), and βₜ₊₁(j) at state sⱼ.]

We've left out actions here. Actions imply an extra index on ζ and some extra accounting for time indices.

HMMs Revisited

3. Update parameters using expected counts:

$$p(x_j | x_i) = \frac{\sum_{t=1}^{T-1} \zeta_t(i,j)}{\sum_{t=1}^{T-1} \gamma_t(i)}$$
$$p(z | x_j) = \frac{\sum_{t=1}^{T} \gamma_t(j) \, \delta(z_t = z)}{\sum_{t=1}^{T} \gamma_t(j)}$$

4. Repeat until model stops changing
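One full re-estimation pass (steps 2-3) can be sketched as follows, again with actions omitted and the same made-up two-state model; this is a sketch of the expected-count updates, not a production Baum-Welch implementation (no rescaling, no convergence loop).

```python
import numpy as np

def baum_welch_step(pi, A, B, obs):
    """One re-estimation pass: forward/backward, then ζ and γ, then new A, B."""
    T, N = len(obs), len(pi)
    # Forward and backward passes.
    alpha = np.zeros((T, N))
    beta = np.ones((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    for t in range(T - 2, -1, -1):
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    p_O = alpha[-1].sum()
    # ζₜ(i,j) = αₜ(i) p(xⱼ|xᵢ) p(zₜ₊₁|xⱼ) βₜ₊₁(j) / p(O|λ);  γₜ(i) = Σⱼ ζₜ(i,j)
    zeta = np.array([
        alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :] / p_O
        for t in range(T - 1)
    ])
    gamma = zeta.sum(axis=2)            # γₜ(i) for t = 1..T-1 (A's denominator)
    gamma_full = alpha * beta / p_O     # γₜ(i) for all t (B's denominator)
    # Expected-count updates.
    A_new = zeta.sum(axis=0) / gamma.sum(axis=0)[:, None]
    B_new = np.zeros_like(B)
    for z in range(B.shape[1]):
        B_new[:, z] = gamma_full[np.array(obs) == z].sum(axis=0)
    B_new /= gamma_full.sum(axis=0)[:, None]
    return A_new, B_new

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
A_new, B_new = baum_welch_step(pi, A, B, [0, 1, 0, 0, 1])
# The re-estimated rows are proper distributions.
assert np.allclose(A_new.sum(axis=1), 1)
assert np.allclose(B_new.sum(axis=1), 1)
```

Step 4 would simply call `baum_welch_step` in a loop until A and B stop changing; each pass is guaranteed not to decrease p(O|λ), but only to a local maximum, which is the "subject to local minima" con on the next slide.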


Pros and Cons

● Pro:
  – Most general
  – Gives a very powerful framework for reasoning about many, many different kinds of problems
● Con:
  – Computationally complex (iterative procedure for a single update)
  – Can be data-inefficient
  – Subject to local minima
  – Purely off-line
  – Hard to extract a human-oriented map
