CHAPTER 7. STATISTICAL FUNCTIONALS AND THE DELTA METHOD

3 Metrics for Probability Distributions F and P

We have already encountered the total variation and Hellinger metrics in the course of studying Scheffé's lemma, Bayes estimators, and tests of hypotheses. As we will see, as useful as these metrics are, they are too strong: the empirical measure $P_n$ fails to converge to the true P in either the total variation or the Hellinger distance in general. In fact this fails to hold in general for the Prohorov and dual bounded Lipschitz metrics which we introduce below, as has been shown by Dudley (1969), Kersting (1978), and Bretagnolle and Huber-Carol (1977); also see the remarks in Huber (1981), page 39. Nonetheless, it will be helpful to have in mind some useful metrics for probability measures P and df's F, and their properties.
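The following minimal simulation sketch, assuming $X_1, \ldots, X_n$ i.i.d. Uniform(0,1) so that $F(x) = x$ on $[0,1]$, illustrates this point numerically: the Kolmogorov distance $d_K(F_n, F)$ of Definition 3.1 below tends to zero, while $d_{TV}(P_n, P) = H(P_n, P) = 1$ for every n, because the empirical measure is discrete and P is continuous.

import numpy as np

# Minimal sketch: X_1,...,X_n i.i.d. Uniform(0,1), so F(x) = x on [0,1].
rng = np.random.default_rng(0)

for n in (10**2, 10**3, 10**4):
    x = np.sort(rng.uniform(size=n))   # order statistics x_(1) <= ... <= x_(n)
    i = np.arange(1, n + 1)
    # For continuous F, the supremum of |F_n - F| is attained at an order statistic:
    #   d_K(F_n, F) = max_i max{ i/n - F(x_(i)), F(x_(i)) - (i-1)/n }.
    d_K = np.max(np.maximum(i / n - x, x - (i - 1) / n))
    # By contrast, taking A = {X_1,...,X_n} gives P_n(A) = 1 but P(A) = 0, so
    # d_TV(P_n, P) = 1; the same mutual singularity gives H(P_n, P) = 1.
    print(f"n = {n:6d}   d_K(F_n, F) = {d_K:.4f}   d_TV(P_n, P) = 1")

The closed form used for $d_K(F_n, F)$ is the standard expression for the Kolmogorov-Smirnov statistic when F is continuous.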
Definition 3.1 The Kolmogorov or supremum metric between two distribution functions F and G is
\[
d_K(F, G) \equiv \|F - G\|_\infty \equiv \sup_{x \in \mathbb{R}^k} |F(x) - G(x)| .
\]

Definition 3.2 The Lévy metric between two distribution functions F and G is
\[
d_L(F, G) \equiv \inf\{\epsilon > 0 : G(x - \epsilon) - \epsilon \le F(x) \le G(x + \epsilon) + \epsilon \ \text{for all } x \in \mathbb{R}\} .
\]

Definition 3.3 The Prohorov metric between two probability measures P, Q on a metric space (S, d) is
\[
d_{pr}(P, Q) = \inf\{\epsilon > 0 : P(B) \le Q(B^\epsilon) + \epsilon \ \text{for all Borel sets } B\}
\]
where $B^\epsilon \equiv \{x : \inf_{y \in B} d(x, y) \le \epsilon\}$.

To define the next metric for P, Q on a metric space (S, d), for any real-valued function f on S set $\|f\|_L \equiv \sup_{x \ne y} |f(x) - f(y)|/d(x, y)$, and denote the usual supremum norm by $\|f\|_\infty \equiv \sup_x |f(x)|$. Finally, set $\|f\|_{BL} \equiv \|f\|_L + \|f\|_\infty$.

Definition 3.4 The dual bounded Lipschitz metric $d_{BL^*}$ is defined by
\[
d_{BL^*}(P, Q) \equiv \sup\Big\{ \Big| \int f \, dP - \int f \, dQ \Big| : \|f\|_{BL} \le 1 \Big\} .
\]

Definition 3.5 The total variation metric $d_{TV}$ is defined by
\[
d_{TV}(P, Q) \equiv \sup\{ |P(A) - Q(A)| : A \in \mathcal{A} \} = \frac{1}{2} \int |p - q| \, d\mu
\]
where $p \equiv dP/d\mu$, $q = dQ/d\mu$ for some measure $\mu$ dominating both P and Q (e.g. $\mu = P + Q$).

Definition 3.6 The Hellinger metric H is defined by
\[
H^2(P, Q) = \frac{1}{2} \int (\sqrt{p} - \sqrt{q})^2 \, d\mu = 1 - \int \sqrt{pq} \, d\mu \equiv 1 - \rho(P, Q)
\]
where $\mu$ is any measure dominating both P and Q. The quantity $\rho(P, Q) \equiv \int \sqrt{pq} \, d\mu$ is called the affinity between P and Q.

The following basic theorem establishes relationships between these metrics: