Probabilistic Model
• Estimate p(x_t):
    Bel(x_t) = p(x_t | z_t, a_{t-1}, z_{t-1}, a_{t-2}, ..., z_0)
• Bayes' rule:
    p(x_t | z_t, a_{t-1}, z_{t-1}, ..., z_0) = p(z_t | x_t, a_{t-1}, z_{t-1}, ..., z_0) p(x_t | a_{t-1}, z_{t-1}, ..., z_0) / p(z_t | a_{t-1}, z_{t-1}, ..., z_0)
                                             = α p(z_t | x_t) p(x_t | a_{t-1}, z_{t-1}, ..., z_0)
  (the denominator does not depend on x_t and is absorbed into the normalizer α; the last step drops the history from p(z_t | ·) by the Markov assumption)
Probabilistic Model
• Integrate over all p(x_{t-1}):
    Bel(x_t) = α p(z_t | x_t) p(x_t | a_{t-1}, z_{t-1}, a_{t-2}, ..., z_0)
             = α p(z_t | x_t) ∫ p(x_t | x_{t-1}, a_{t-1}, z_{t-2}, ..., z_0) p(x_{t-1} | a_{t-1}, ..., z_0) dx_{t-1}
             = α p(z_t | x_t) ∫ p(x_t | x_{t-1}, a_{t-1}) p(x_{t-1}) dx_{t-1}
Probabilistic Model
• Bayes' filter gives a recursive, two-step procedure for estimating p(x_t):
    Bel(x_t) = α p(z_t | x_t) ∫ p(x_t | x_{t-1}, a_{t-1}) p(x_{t-1}) dx_{t-1}
                  [measurement]          [prediction]
• How to represent Bel(x_t)?
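In code, the two steps are a prediction over the previous belief followed by a measurement reweighting. A minimal sketch for a finite state space (the callables motion_model and sensor_model are placeholders, not anything defined in the slides: motion_model(x_prev, a) should return the length-n vector p(· | x_prev, a), and sensor_model(z) the length-n vector p(z | ·)):

```python
import numpy as np

def bayes_filter_step(bel, motion_model, sensor_model, action, measurement):
    """One recursive Bayes-filter update of Bel(x_t) over n discrete states."""
    n = len(bel)

    # Prediction: Bel_bar(x_t) = sum over x_{t-1} of p(x_t | x_{t-1}, a_{t-1}) * Bel(x_{t-1})
    bel_bar = np.zeros(n)
    for x_prev in range(n):
        bel_bar += motion_model(x_prev, action) * bel[x_prev]

    # Measurement: Bel(x_t) = alpha * p(z_t | x_t) * Bel_bar(x_t)
    bel_new = sensor_model(measurement) * bel_bar
    return bel_new / bel_new.sum()   # dividing by the sum plays the role of alpha
```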
Kalman Filter (Kalman, 1960)
• Figure: evolution of the belief over the state space, showing the initial belief, the posterior belief after an action is taken, and the posterior belief after sensing.
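A minimal 1-D sketch of the idea, assuming Gaussian motion and sensor noise with made-up variances (not the lecture's implementation): the belief is a single Gaussian (mu, var), shifted and widened by each action, then pulled toward each measurement.

```python
def kf_predict(mu, var, action, motion_noise_var):
    """Prediction: shift the mean by the commanded action, grow the uncertainty."""
    return mu + action, var + motion_noise_var

def kf_update(mu, var, z, sensor_noise_var):
    """Measurement: fuse the Gaussian belief with a Gaussian observation z."""
    k = var / (var + sensor_noise_var)        # Kalman gain
    return mu + k * (z - mu), (1.0 - k) * var

mu, var = 0.0, 1.0                            # initial belief (assumed values)
mu, var = kf_predict(mu, var, action=1.0, motion_noise_var=0.5)
mu, var = kf_update(mu, var, z=1.2, sensor_noise_var=0.3)
```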
Problems
• Gaussian process and sensor noise
  • Often solved by extracting low-dimensional features
  • Data-association problem
• Kalman filters are often hard to implement
• Gaussian posterior estimate
Global Localization
• Figure: belief over the state space, showing the initial belief, the posterior belief after an action, and the posterior belief after sensing.
Markov Localization (Burgard et al., 1996)
• Figure: belief over the state space, showing the initial belief, the posterior belief after an action, and the posterior belief after sensing.
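Markov localization maintains Bel(x_t) as a histogram over a discretized pose grid. A rough 1-D sketch of one update, with an assumed cell count and toy motion/sensor models (none of these values come from the slides):

```python
import numpy as np

n_cells = 100
bel = np.full(n_cells, 1.0 / n_cells)           # uniform initial belief over the grid

def predict(bel, shift_cells, motion_noise=0.1):
    """Shift the histogram by the commanded motion, blurring to model motion noise."""
    shifted = np.roll(bel, shift_cells)
    # simple 3-tap blur as a stand-in for p(x_t | x_{t-1}, a_{t-1})
    return (1 - 2 * motion_noise) * shifted + motion_noise * (np.roll(shifted, 1) + np.roll(shifted, -1))

def update(bel, likelihood):
    """Reweight each cell by p(z_t | x_t) and renormalize."""
    posterior = bel * likelihood
    return posterior / posterior.sum()

likelihood = np.ones(n_cells)
likelihood[40:45] = 10.0                        # toy sensor: robot is likely near cells 40-44
bel = update(predict(bel, shift_cells=3), likelihood)
```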
Problems
• Large memory footprint
  • 50 m x 50 m map
  • 1° increments
  • ≈ 343M
• Fixed cell size limits accuracy
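To see where that footprint comes from, a back-of-the-envelope count of grid states; the slide does not give the translational cell size, so the 5 cm value here is purely an assumption and only lands in the same ballpark as the slide's ≈343M figure:

```python
# Rough count of grid states for a 50 m x 50 m map with 1-degree heading bins.
# The 5 cm cell size is an assumed value; the slides do not state it.
map_side_m = 50.0
cell_size_m = 0.05
headings = 360

cells_per_side = round(map_side_m / cell_size_m)      # 1000
n_states = cells_per_side ** 2 * headings             # 360,000,000 states
print(f"{n_states:,} states")                         # same order of magnitude as the slide's figure
```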
Monte Carlo Localization: The Particle Filter
• Sample particles randomly from distribution
• Carry around particle sets, rather than full distribution
• Figure: sampled particles over the state space.
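A minimal sketch of one MCL iteration (predict, weight, resample) over a 1-D state; the Gaussian motion and sensor models and all numeric values are illustrative assumptions, not the lecture's models:

```python
import numpy as np

rng = np.random.default_rng(0)

n_particles = 1000
particles = rng.uniform(0.0, 50.0, size=n_particles)   # sample the initial belief

def mcl_step(particles, action, z, motion_std=0.2, sensor_std=0.5):
    # Prediction: propagate each particle through a noisy motion model.
    particles = particles + action + rng.normal(0.0, motion_std, size=particles.size)

    # Measurement: weight each particle by p(z | x), here a Gaussian around the particle.
    weights = np.exp(-0.5 * ((z - particles) / sensor_std) ** 2)
    weights /= weights.sum()

    # Resampling: draw a new particle set in proportion to the weights.
    idx = rng.choice(particles.size, size=particles.size, p=weights)
    return particles[idx]

particles = mcl_step(particles, action=1.0, z=12.3)
```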
Using Particle Filters
• Can update distribution by updating particle set directly
• Can (sometimes) compute properties of distribution directly from particles
  • E.g., any moments: mean, variance, etc.
• If necessary, can recover distribution from particles
  • Fit Gaussian by recovering mean, covariance (Kalman filter)
  • Can fit multiple Gaussians using Expectation-Maximization
  • Can bin data and recover discrete multinomial (Markov localization)
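A short sketch of these recovery options on a toy 2-D particle set (uniform weights assumed; the Expectation-Maximization case is omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
particles = rng.normal(loc=[2.0, -1.0], scale=0.3, size=(1000, 2))  # stand-in particle set

# Moments straight from the particles.
mean = particles.mean(axis=0)
cov = np.cov(particles, rowvar=False)   # (mean, cov) is the Gaussian fit a Kalman filter would carry

# Binning recovers a discrete multinomial, Markov-localization style.
hist, _, _ = np.histogram2d(particles[:, 0], particles[:, 1], bins=20)
hist = hist / hist.sum()
```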