not expect to duplicate them exactly if you solve the problem a second time. I don't see how this loss of determinism can be stopped. Of course, from a technical point of view, it would be easy to make our machines deterministic by simply leaving out all that intelligence. However, we will not do this, for intelligence is too powerful. In the last fifty years, the great message communicated to scientists and engineers was that it is unreasonable to ask for exactness in numerical computation. In the next fifty, they will learn not to ask for repeatability, either.

5. The importance of floating point arithmetic will be undiminished.

So much will change in fifty years that it is refreshing to predict some continuity. One thing that I believe will last is floating point arithmetic. Of course, the details will change, and in particular, word lengths will continue their progression from 16 to 32 to 64 to 128 bits and beyond, as sequences of computations become longer and require more accuracy to contain accumulation of errors. Conceivably we might even switch to hardware based on a logarithmic representation of numbers. But I believe the two defining features of floating point arithmetic will persist: relative rather than absolute magnitudes, and rounding of all intermediate operations.

Outside the numerical analysis community, some people feel that floating point arithmetic is an anachronism, a 1950s kludge that is destined to be cast aside as machines become more sophisticated. Computers may have been born as number crunchers, the feeling goes, but now that they are fast enough to do arbitrary symbolic manipulations, we must move to a higher plane. In truth, no amount of computer power will change the fact that most numerical problems cannot be solved symbolically. You have to make approximations, and floating point arithmetic is the best general-purpose approximation idea ever devised. It will persist, but get hidden deeper in the machine.
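Hidden or not, the two defining features are easy to observe in any language with IEEE double precision arithmetic. The following minimal Python sketch (an illustration added here, not part of the essay) shows that the spacing between adjacent representable numbers scales with their magnitude, and that every intermediate operation rounds its result:

    import math

    # Relative rather than absolute magnitudes: the gap between adjacent
    # representable doubles grows in proportion to the number's size.
    print(math.ulp(1.0))     # ~2.22e-16, the spacing of doubles near 1
    print(math.ulp(1.0e16))  # 2.0, the spacing of doubles near 10^16

    # Rounding of all intermediate operations: every +, -, *, / rounds
    # its result to the nearest double, so algebraic identities can fail.
    print(0.1 + 0.2 == 0.3)                        # False
    print((0.1 + 0.2) + 0.3 == 0.1 + (0.2 + 0.3))  # False: not associative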
6. Linear systems of equations will be solved in O(N^(2+ε)) flops.

Matrix computations as performed on machines around the world typically require O(N^3) floating point operations ("flops"), where N is the dimension of the problem. This statement applies exactly for computing inverses, determinants, and solutions of systems of equations, and it applies approximately for eigenvalues and singular values. But all of these problems involve only O(N^2) inputs, and as machines get faster, it is increasingly aggravating that O(N^3) operations should be needed to solve them.

Strassen showed in 1968 that the O(N^3) barrier could be breached. He devised a recursive algorithm whose running time was O(N^(log_2 7)), approximately O(N^2.81), and subsequent improvements by Coppersmith, Winograd and others have brought the exponent down to 2.376. However, the algorithms in question involve constants so large that they are impractical, and they have had little effect on scientific computing.
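Strassen's construction is short enough to write down in full. Here is a Python sketch (an illustration, assuming square matrices whose dimension is a power of two, with a hypothetical cutoff below which the ordinary product is cheaper): each split replaces eight block multiplications with seven, which is the source of the log_2 7 in the exponent.

    import numpy as np

    def strassen(A, B, cutoff=64):
        """Strassen's recursion: seven block multiplications per split
        instead of eight, giving O(N^(log_2 7)) ~ O(N^2.81) operations.
        Assumes A and B are square with N a power of two."""
        n = A.shape[0]
        if n <= cutoff:              # small blocks: ordinary O(N^3) product
            return A @ B
        m = n // 2
        A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
        B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
        M1 = strassen(A11 + A22, B11 + B22, cutoff)   # the seven products
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)
        return np.block([[M1 + M4 - M5 + M7, M3 + M5],       # reassemble C
                         [M2 + M4, M1 - M2 + M3 + M6]])

    # Agreement with the conventional product, up to rounding:
    A = np.random.rand(256, 256)
    B = np.random.rand(256, 256)
    print(np.allclose(strassen(A, B), A @ B))   # True

The cutoff is a practical necessity rather than part of the asymptotic story: the recursion only pays for itself once the blocks are large.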
As a result, the problem of speeding up matrix computations is viewed by many numerical analysts as a theoretical distraction. This is a strange attitude to take to the most conspicuous unsolved problem in our field! Of course, it may be that there is some reason why no practical algorithm can ever be found, but we certainly do not know that today. A "fast matrix inverse" may be possible, perhaps one with complexity O(N^2 log N) or