…nice equation
$$\sum_j s_j = \sum_j n_j\,[1 - p(x_j)],$$
which unfortunately is hard to solve. So, somebody decided, let us try the next best, namely the minimum $\chi^2$. This leads to minimizing
$$\sum_j \frac{n_j\,[f_j - p(x_j)]^2}{p_j(1 - p_j)}$$
with $f_j = r_j/n_j$. It is an even worse affair. However, Joe Berkson had the bright idea of taking the log of $(1-p)/p$ and noticing that it is linear in $\alpha$, which leads to the question: why not just apply least squares and minimize
$$\sum_j \Big[\log\frac{1 - f_j}{f_j} - (\alpha + x_j)\Big]^2\,?$$
Well, as Gauss said, that will not do. One should divide the square terms by their variances to get a good estimate. The variance of $\log[(1-f)/f]$? Oops! It is infinite. Too bad, let us approximate. After all, if $\varphi$ is differentiable, then $\varphi(f) - \varphi(p)$ is about $(f-p)\varphi'(p)$, so its variance is almost $(\varphi'(p))^2\operatorname{Var}(f-p)$, give or take a mile and a couple of horses' tails. If $\varphi(f)$ is $\log[(1-f)/f]$, that gives $\varphi'(p) = -[p(1-p)]^{-1}$. Finally, we would want to minimize
$$\sum_j n_j\,p(x_j)[1 - p(x_j)]\Big\{\log\frac{1 - f_j}{f_j} - (\alpha + x_j)\Big\}^2.$$
Not pleasant! All right, let us replace the coefficients $p(x_j)[1 - p(x_j)]$ by estimates $f_j(1 - f_j)$. Now we have to minimize
$$\sum_j n_j\,f_j(1 - f_j)\Big\{\log\frac{1 - f_j}{f_j} - (\alpha + x_j)\Big\}^2,$$
a very easy matter.
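A one-line worked step may help here (mine, not the paper's): since each $f_j$ is a binomial proportion, the approximation above gives
$$\operatorname{Var}(f_j) = \frac{p_j(1-p_j)}{n_j}, \qquad \operatorname{Var}\!\left(\log\frac{1-f_j}{f_j}\right) \approx \big(\varphi'(p_j)\big)^2\,\operatorname{Var}(f_j) = \frac{1}{n_j\,p_j(1-p_j)},$$
so dividing each squared term by its approximate variance is the same as weighting it by $n_j\,p_j(1-p_j)$, which is where the coefficients $n_j\,p(x_j)[1-p(x_j)]$ above come from.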
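To make the recipe concrete, here is a minimal numerical sketch (not from the paper) of the two estimators for the one-parameter model $\log[(1-p(x))/p(x)] = \alpha + x$ implied by the passage above. Reading $s_j$ as the number of non-responders $n_j - r_j$, the likelihood equation becomes $\sum_j r_j = \sum_j n_j p(x_j)$; the $1/(2n_j)$ adjustment for cells with $f_j \in \{0,1\}$, the bracketing interval, and the use of NumPy/SciPy are my own choices.

```python
import numpy as np
from scipy.optimize import brentq

def p(alpha, x):
    """Response probability under the model log[(1 - p)/p] = alpha + x."""
    return 1.0 / (1.0 + np.exp(alpha + x))

def min_logit_estimate(x, n, r):
    """Berkson's minimum logit chi-square estimate of alpha (sketch).

    Minimizes sum_j n_j f_j (1 - f_j) [log((1 - f_j)/f_j) - (alpha + x_j)]^2,
    a weighted least-squares problem in alpha alone, so it has a closed form.
    Cells with f_j in {0, 1} are pulled in by 1/(2 n_j); this is an assumption,
    the passage does not say how degenerate cells are handled.
    """
    f = np.clip(r / n, 1.0 / (2 * n), 1.0 - 1.0 / (2 * n))
    w = n * f * (1.0 - f)              # estimated weights n_j f_j (1 - f_j)
    y = np.log((1.0 - f) / f)          # empirical logits
    return np.sum(w * (y - x)) / np.sum(w)

def mle(x, n, r):
    """Maximum likelihood estimate: root of sum_j r_j = sum_j n_j p(x_j; alpha).

    The left side is fixed and the right side is monotone in alpha, so a
    bracketed root-finder suffices.  (If every animal responds, or none does,
    the m.l.e. is infinite and no root exists in the bracket.)
    """
    def score(a):
        # observed responders minus expected responders at alpha = a
        return np.sum(r) - np.sum(n * p(a, x))
    return brentq(score, -50.0, 50.0)

# Hypothetical example: three log doses, 10 animals each, made-up counts.
x = np.array([-np.log(3.0), 0.0, np.log(3.0)])
n = np.array([10, 10, 10])
r = np.array([8, 5, 2])
print(min_logit_estimate(x, n, r), mle(x, n, r))
```

The closed form is what makes the last step "a very easy matter": with a single unknown $\alpha$, the weighted least-squares solution is just a weighted average of the residual logits $y_j - x_j$.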
After all these approximations, nobody but a true believer would expect that the estimate so obtained would be any good, but there are people ornery enough to try it anyway. Berkson was one of them. He found, to his dismay, that the estimate had, at least at one place, a risk ratio $F = I(\alpha)\,E_\alpha(\hat\alpha - \alpha)^2$ strictly less than unity. Furthermore, that was a point where the estimate was in fact unbiased! So Joe was ready to yell 'down with Cramér-Rao!' when Neyman pointed out that the derivative of the bias was not zero, and that Fréchet, before Cramér and Rao, had written an inequality which involves the derivative of the bias.

To make a long story short, they looked at the estimate. Then Joe Berkson and Joe Hodges Jr. noticed that one could Rao-Blackwellize it. Also these two authors tried to find a minimax estimator. They found one which for most purposes is very close to being minimax. Their work is reported in the 4th Berkeley Symposium, volume IV. Numerical computations show that for certain log doses $x_1 = -\log 3$, $x_2 = 0$, $x_3 = \log 3$, with 10 rats at each dose, the minimum logit estimate is definitely better than the m.l.e.
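A small Monte Carlo sketch of the comparison just described, at the design read off above ($x_1 = -\log 3$, $x_2 = 0$, $x_3 = \log 3$, ten rats per dose). It reuses min_logit_estimate and mle from the previous sketch; the design and the criterion $I(\alpha)\,E_\alpha(\hat\alpha - \alpha)^2$ come from the text, while the simulation details are arbitrary and no claim is made to reproduce the Berkson-Hodges computations.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([-np.log(3.0), 0.0, np.log(3.0)])   # log doses as read above
n = np.array([10, 10, 10])                        # 10 rats at each dose

def prob(alpha):
    """Response probabilities under log[(1 - p)/p] = alpha + x."""
    return 1.0 / (1.0 + np.exp(alpha + x))

def fisher_information(alpha):
    # I(alpha) = sum_j n_j p_j (1 - p_j) for this one-parameter logit model.
    q = prob(alpha)
    return np.sum(n * q * (1.0 - q))

def normalized_risk(estimator, alpha, reps=20000):
    """Monte Carlo estimate of I(alpha) * E_alpha[(alpha_hat - alpha)^2]."""
    q = prob(alpha)
    sq_err = np.empty(reps)
    for i in range(reps):
        r = rng.binomial(n, q)                    # one simulated experiment
        sq_err[i] = (estimator(x, n, r) - alpha) ** 2
    return fisher_information(alpha) * sq_err.mean()

# e.g., with min_logit_estimate and mle defined in the previous sketch:
#   normalized_risk(min_logit_estimate, 0.0), normalized_risk(mle, 0.0)
```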