distribution satisfying this condition will yield a posterior that, on average over $x$, is a good approximation to the proper posterior that would result from restriction to a large compact subset of the parameter space. To some Bayesians, it might seem odd to worry about averaging the logarithmic discrepancy over the sample space but, as will be seen, reference priors are designed to be "noninformative" for a specified model, the notion being that repeated use of the prior with that model will be successful in practice.

Example 2 (Fraser, Monette and Ng [21] continued). In Example 1, the discrepancies $\kappa\{\pi(\cdot \mid x) \mid \pi_i(\cdot \mid x)\}$ between $\pi(\theta \mid x)$ and the posteriors derived from the sequence of proper priors $\{\pi_i(\theta)\}_{i=1}^{\infty}$ converged to zero. However, Berger and Bernardo [7] show that $\int_{\mathcal{X}} \kappa\{\pi(\cdot \mid x) \mid \pi_i(\cdot \mid x)\}\, p_i(x)\, dx \to \log 3$ as $i \to \infty$, so that the expected logarithmic discrepancy does not go to zero. Thus, the sequence of proper priors $\{\pi_i(\theta) = 1/i,\ \theta \in \{1, \dots, i\}\}_{i=1}^{\infty}$ does not provide a good global approximation to the formal prior $\pi(\theta) = 1$, providing one explanation of the paradox found by Fraser, Monette and Ng [21].

Interestingly, for the improper prior $\pi(\theta) = 1/\theta$, the approximating compact sequence considered above can be shown to yield posterior distributions that expected logarithmically converge to $\pi(\theta \mid x) \propto \theta^{-1} p(x \mid \theta)$, so that this is a good candidate objective prior for the problem. It is also shown in Berger and Bernardo [7] that this prior has posterior confidence intervals with the correct frequentist coverage.
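To make the failure of expected logarithmic convergence concrete, the following minimal sketch computes the expected discrepancy $\int_{\mathcal{X}} \kappa\{\pi(\cdot \mid x) \mid \pi_i(\cdot \mid x)\}\, p_i(x)\, dx$ under truncation priors for a hypothetical toy model chosen for transparency, taking $p(x \mid \theta) = 1/2$ for $x \in \{\theta, 2\theta\}$, $\theta \in \{1, 2, \dots\}$; this is an illustrative stand-in, not the Fraser, Monette and Ng model of Example 1, and all function names are ours. In the toy model the discrepancy at each fixed $x$ is zero once $i \geq 2x$, yet the expectation stays at $\log(2)/4 \approx 0.173$, the same qualitative behavior as the $\log 3$ limit above.

    import math

    # Hypothetical toy model, for illustration only (not the Fraser, Monette
    # and Ng model of Example 1): p(x | theta) = 1/2 for x in {theta, 2*theta},
    # theta in {1, 2, ...}, with formal prior pi(theta) = 1.

    def compatible(x):
        """Parameter values theta with p(x | theta) > 0."""
        return [x] + ([x // 2] if x % 2 == 0 else [])

    def formal_posterior(x):
        """pi(theta | x) under pi(theta) = 1: uniform on the compatible set."""
        c = compatible(x)
        return {t: 1.0 / len(c) for t in c}

    def truncated_posterior(x, i):
        """Posterior under the proper prior pi_i(theta) = 1/i on {1, ..., i}."""
        c = [t for t in compatible(x) if t <= i]
        return {t: 1.0 / len(c) for t in c}

    def kappa(q, p):
        """Logarithmic divergence kappa{q | p} = sum_t p(t) log(p(t) / q(t))
        of the approximation q from p."""
        return sum(p[t] * math.log(p[t] / q[t]) for t in p)

    def expected_discrepancy(i):
        """Integral of kappa{pi(.|x) | pi_i(.|x)} against p_i(x), written as a
        sum over (theta, x) pairs, since p_i(x) = sum_theta pi_i(theta) p(x | theta)."""
        total = 0.0
        for theta in range(1, i + 1):
            for x in (theta, 2 * theta):  # each value carries p(x | theta) = 1/2
                total += (1.0 / i) * 0.5 * kappa(formal_posterior(x),
                                                 truncated_posterior(x, i))
        return total

    for i in (10, 100, 1000):
        # Each fixed x has zero discrepancy once i >= 2x, but the expectation
        # stays near log(2)/4 ~ 0.173: pointwise convergence of the posteriors
        # does not deliver expected logarithmic convergence.
        print(i, round(expected_discrepancy(i), 4))

The mechanism in this toy is that the truncated posterior collapses onto the small compatible value of $\theta$ whenever the large one exceeds the truncation point $i$, and the $p_i$-probability of drawing such an $x$ does not vanish as $i$ grows.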
Two potential generalizations are of interest. Definition 4 requires convergence only with respect to one approximating compact sequence of parameter spaces. It is natural to wonder what happens for other such approximating sequences. We suspect, but have been unable to prove in general, that convergence with respect to one sequence will guarantee convergence with respect to any sequence. If true, this makes expected logarithmic convergence an even more compelling property.

Related to this is the possibility of allowing not just an approximating sequence of priors based on truncation to compact parameter spaces, but instead allowing any approximating sequence of priors. Among the difficulties in dealing with this is the need for a better notion of divergence that is symmetric in its arguments. One possibility is the symmetrized form of the logarithmic divergence in Bernardo and Rueda [12], but the analysis is considerably more difficult.

2.2. Permissible priors. Based on the previous considerations, we restrict consideration of possibly objective priors to those that satisfy the expected logarithmic convergence condition, and formally define them as follows. (Recall that $x$ represents the entire data vector.)