160 L. LE CAM

We now have a family {ψ_θ; θ ∈ Θ} such that ψ_θ(v) is continuous in θ for each v. Also

 sup_θ [log ψ_θ(v)]⁺

does not exceed 10|(v − 1)² − v²|. Since this is certainly integrable, the theorem due to Wald (1949) is applicable and θ̂ is consistent.

So throwing away quite a bit of information made the m.l.e. consistent. Here we wasted information by fudging the observations. Another way would be to enlarge the parameter space and introduce irrelevant other measures p_θ.

For this purpose consider our original variables X_j, but record only in which interval (a_k, a_{k−1}] the variable X_j falls. We obtain then discrete variables, say Y_j, such that P_i[Y_j = k] is the integral q_i(k) of p_i(x) on (a_k, a_{k−1}]. Now the set Θ of all possible discrete measures on the integers k = 1, 2, … can be metrized, for instance by the metric

 ‖Q_s − Q_t‖ = Σ_k |q_s(k) − q_t(k)|.

For this metric the space is a complete separable space. Given discrete observations Y_j, j = 1, …, n, we can compute a maximum likelihood estimate, say θ̂*, in this whole space Θ. The value of θ̂* is that element of Θ which assigns to the integer k a probability θ̂*(k) equal to the frequency of k in the sample. Now, if θ is any element whatsoever of Θ, then for every ε > 0, P_θ{‖θ − θ̂*‖ > ε} tends to zero as n → ∞. More precisely, θ̂* → θ almost surely.

The family we are interested in, the q_i, i = 1, 2, …, constructed above form a certain subset, say Θ₀, of Θ. It is a nice closed (even discrete) subset of Θ. Suppose that we do know that θ ∈ Θ₀. Then, certainly, one should not waste that information. However, if we insist on taking a θ̂_n ∈ Θ₀ that maximizes the likelihood there, then θ̂_n will almost never tend to θ. If on the contrary we maximize the likelihood over the entire space Θ of all probability measures on the integers, we get an estimate θ̂* that is consistent.

It is true that this is not the answer to the problem of estimating a θ that lies in Θ₀. Maybe that is too hard a problem? Let us try to select a point θ_n ∈ Θ₀ closest to θ̂*. If there is no such closest point, just take θ_n such that

 ‖θ̂* − θ_n‖ ≤ 2⁻ⁿ + inf{‖θ̂* − θ‖; θ ∈ Θ₀}.

Then P_θ{θ_n = θ for all sufficiently large n} = 1.
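The two-stage recipe above — maximize the likelihood over the set of all discrete measures (which gives the empirical frequencies), then move to a point of Θ₀ within 2⁻ⁿ of the nearest one — can be sketched numerically. This is only an illustration: the two-member family `family` below is a hypothetical stand-in for the binned densities q_i, not the construction in the text.

```python
import random
from collections import Counter

def mle_over_all_measures(sample, support):
    """MLE over the set of ALL discrete measures on the support:
    it assigns to k the frequency of k in the sample."""
    n = len(sample)
    counts = Counter(sample)
    return {k: counts[k] / n for k in support}

def l1_distance(p, q, support):
    """The metric ||Q_s - Q_t|| = sum_k |q_s(k) - q_t(k)|."""
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in support)

def project_onto_family(theta_star, family, support, n):
    """Select theta_n in Theta_0 whose distance to theta* is within
    2^-n of the infimum over Theta_0; return its index."""
    dists = [(l1_distance(theta_star, q, support), i) for i, q in enumerate(family)]
    best = min(d for d, _ in dists)
    for d, i in dists:
        if d <= best + 2.0 ** (-n):
            return i

# Hypothetical Theta_0: two discrete laws on the support {1, 2, 3}
support = [1, 2, 3]
family = [{1: 0.5, 2: 0.3, 3: 0.2}, {1: 0.1, 2: 0.1, 3: 0.8}]

random.seed(0)
true_q = family[0]
sample = random.choices(support, weights=[true_q[k] for k in support], k=2000)

theta_star = mle_over_all_measures(sample, support)      # consistent for true_q
print(project_onto_family(theta_star, family, support, len(sample)))
```

Because θ̂* converges to the true measure in the ‖·‖ metric and Θ₀ is discrete, the projected index eventually equals the true one with probability one, which is the content of P_θ{θ_n = θ for all sufficiently large n} = 1.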
So the problem cannot be too terribly hard. In addition, Doob (1948) says that, if we place on Θ₀ a prior measure that charges every point, the corresponding Bayes estimate will behave in the same manner as our θ_n.

As explained, this example is imitated from one given by Bahadur (1958). Another example, imitated from Bahadur and from the mixture of Example 1, has been given by Ferguson (1982). Ferguson takes Θ = [0, 1] and considers i.i.d. variables taking values in [−1, +1]. The densities, with respect to Lebesgue measure on [−1, +1], are of the form

 f(x, θ) = θ/2 + [(1 − θ)/δ(θ)] [1 − |x − θ|/δ(θ)]⁺
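Ferguson's density is a mixture: with weight θ a uniform law on [−1, +1], and with weight 1 − θ a triangular spike at θ of half-width δ(θ). A minimal sketch follows; the particular choice of δ(θ) is hypothetical (Ferguson's argument only needs δ(θ) → 0 sufficiently fast as θ → 1, so that the likelihood spike at an observation dominates), and is not the rate from his paper.

```python
import math

def delta(theta):
    # Hypothetical shrinking half-width; Ferguson (1982) requires delta(theta) -> 0
    # fast enough as theta -> 1. The tiny additive guard avoids division by zero.
    return (1.0 - theta) * math.exp(-1.0 / max(1.0 - theta, 1e-12) ** 2) + 1e-300

def f(x, theta):
    """Mixture density on [-1, 1]: uniform part (weight theta, density 1/2)
    plus a triangular spike at theta of half-width delta(theta) (weight 1 - theta)."""
    d = delta(theta)
    triangle = max(1.0 - abs(x - theta) / d, 0.0) / d   # peak height 1/d, area 1
    return theta / 2.0 + (1.0 - theta) * triangle
```

Evaluating at an observation x₀ shows the mechanism: f(x₀, x₀) is enormous because the spike height (1 − θ)/δ(θ) blows up, so maximizing the likelihood over Θ = [0, 1] chases these spikes and the m.l.e. fails to converge to the true θ.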