DEFINITION OF REFERENCE PRIORS

Example 4 (Mixture model). Let $x = \{x_1, \ldots, x_n\}$ be a random sample from the mixture $p(x_i \mid \theta) = \tfrac{1}{2} N(x_i \mid \theta, 1) + \tfrac{1}{2} N(x_i \mid 0, 1)$, and consider the uniform prior function $\pi(\theta) = 1$. Since the likelihood function is bounded below by $2^{-n} \prod_{j=1}^{n} N(x_j \mid 0, 1) > 0$, the integrated likelihood $\int_{-\infty}^{\infty} p(x \mid \theta)\,\pi(\theta)\, d\theta = \int_{-\infty}^{\infty} p(x \mid \theta)\, d\theta$ diverges. Hence, the corresponding formal posterior is improper, and therefore the uniform prior is not a permissible prior function for this model. It can be shown that the Jeffreys prior for this mixture model has the shape of an inverted bell, with a minimum value of $1/2$ at $\theta = 0$; hence, it is also bounded away from zero and is, therefore, not a permissible prior for this model either.

Example 4 is noteworthy because it is very rare for the Jeffreys prior to yield an improper posterior in univariate problems. It is also of interest because there is no natural objective prior available for the problem. (There are data-dependent objective priors; see Wasserman [43].)
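The divergence argument in Example 4 can be illustrated numerically. The following is a minimal sketch, not part of the paper (the sample size, random seed and integration grid are arbitrary choices): it integrates the mixture likelihood of a small simulated sample over increasingly wide ranges of $\theta$ and shows that the integral keeps growing, roughly in proportion to the length of the range, as expected when the integrand is bounded below by a positive constant.

```python
# Minimal numerical sketch of Example 4 (illustrative only; sample size, seed and
# grid are arbitrary).  The likelihood of the mixture
#     p(x_i | theta) = 0.5*N(x_i | theta, 1) + 0.5*N(x_i | 0, 1)
# is bounded below by 2^{-n} * prod_j N(x_j | 0, 1) > 0, so its integral over theta
# grows without bound as the integration range widens.
import numpy as np

rng = np.random.default_rng(0)
n = 5
x = rng.normal(0.0, 1.0, size=n)                 # small simulated sample

def norm_pdf(z, mean):
    return np.exp(-0.5 * (z - mean) ** 2) / np.sqrt(2.0 * np.pi)

def likelihood(theta):
    # product over observations of the mixture density; theta may be an array
    return np.prod(0.5 * norm_pdf(x[:, None], theta) + 0.5 * norm_pdf(x[:, None], 0.0),
                   axis=0)

lower_bound = 2.0 ** (-n) * np.prod(norm_pdf(x, 0.0))   # positive lower bound

for half_width in (10, 100, 1000, 10000):
    grid = np.linspace(-half_width, half_width, 20 * half_width + 1)
    integral = np.sum(likelihood(grid)) * (grid[1] - grid[0])   # simple Riemann sum
    print(f"integral over [-{half_width}, {half_width}] = {integral:10.3f}  "
          f"(lower bound * length = {lower_bound * 2 * half_width:10.3f})")
```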
Theorem 2 can easily be modified to apply to models that can be transformed into a location model.

Corollary 1. Consider $\mathcal{M} \equiv \{p(x \mid \theta),\ \theta \in \Theta,\ x \in \mathcal{X}\}$. If there are monotone functions $y = y(x)$ and $\phi = \phi(\theta)$ such that $p(y \mid \phi) = f(y - \phi)$ is a location model and there exists $\varepsilon > 0$ such that $\lim_{|t| \to \infty} |t|^{1+\varepsilon} f(t) = 0$, then $\pi(\theta) = |\phi'(\theta)|$ is a permissible prior function for $\mathcal{M}$.

The most frequent transformation is the log transformation, which converts a scale model into a location model. Indeed, this transformation yields the following direct analogue of Theorem 2.

Corollary 2. Consider $\mathcal{M} = \{p(x \mid \theta) = \theta^{-1} f(|x|/\theta),\ \theta > 0,\ x \in \mathbb{R}\}$, a scale model where $f(s)$, $s > 0$, is a density function. If, for some $\varepsilon > 0$,

(2.4)  $\lim_{|t| \to \infty} |t|^{1+\varepsilon}\, e^{t} f(e^{t}) = 0$,

then $\pi(\theta) = \theta^{-1}$ is a permissible prior function for the scale model $\mathcal{M}$.

Example 5 (Exponential data). If $x$ is an observation from an exponential density, (2.4) becomes $|t|^{1+\varepsilon} e^{t} \exp(-e^{t}) \to 0$ as $|t| \to \infty$, which is true. From Corollary 2, $\pi(\theta) = \theta^{-1}$ is a permissible prior; indeed, $\pi_i(\theta) = (2i)^{-1} \theta^{-1}$, $e^{-i} \le \theta \le e^{i}$, is expected logarithmically convergent to $\pi(\theta)$.

Example 6 (Uniform data). Let $x$ be one observation from the uniform distribution $\mathcal{M} = \{\mathrm{Un}(x \mid 0, \theta) = \theta^{-1},\ x \in [0, \theta],\ \theta > 0\}$. This is a scale density, and equation (2.4) becomes $|t|^{1+\varepsilon} e^{t} \mathbf{1}_{\{0 < e^{t} < 1\}} \to 0$ as $|t| \to \infty$, which is indeed true. Thus, $\pi(\theta) = \theta^{-1}$ is a permissible prior function for $\mathcal{M}$.
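As a small supplementary check, not from the paper ($\varepsilon = 0.5$ is an arbitrary choice), the tail condition (2.4) can be evaluated numerically for the two scale densities in Examples 5 and 6, namely the standard exponential $f(s) = e^{-s}$ and the uniform $f(s) = \mathbf{1}_{\{0 < s < 1\}}$; in both cases the quantity $|t|^{1+\varepsilon} e^{t} f(e^{t})$ is seen to vanish as $|t|$ grows.

```python
# Numerical check of condition (2.4) for Examples 5 and 6 (illustrative only;
# epsilon = 0.5 is an arbitrary choice).
import numpy as np

eps = 0.5

def tail_quantity(t, f):
    """|t|^{1+eps} * e^t * f(e^t), the quantity that must vanish as |t| -> infinity."""
    return np.abs(t) ** (1.0 + eps) * np.exp(t) * f(np.exp(t))

def f_exponential(s):
    return np.exp(-s)                      # Example 5: f(s) = exp(-s), s > 0

def f_uniform(s):
    return np.where(s < 1.0, 1.0, 0.0)     # Example 6: f(s) = 1 on (0, 1), 0 otherwise

for t in (-50.0, -20.0, -5.0, 5.0, 20.0, 50.0):
    print(f"t = {t:6.1f}:  exponential {tail_quantity(t, f_exponential):.3e}"
          f"   uniform {tail_quantity(t, f_uniform):.3e}")
```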
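As a quick arithmetic check, not given in the text, each element of the approximating sequence $\pi_i$ in Example 5 is indeed a proper prior on the compact set $[e^{-i}, e^{i}]$:

$$\int_{e^{-i}}^{e^{i}} \frac{1}{2i}\,\theta^{-1}\, d\theta \;=\; \frac{1}{2i}\bigl[\log\theta\bigr]_{e^{-i}}^{e^{i}} \;=\; \frac{1}{2i}\bigl(i - (-i)\bigr) \;=\; 1 .$$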