not expect to duplicate them exactly if you solve the problem a second time. I don't see how this loss of determinism can be stopped. Of course, from a technical point of view, it would be easy to make our machines deterministic by simply leaving out all that intelligence. However, we will not do this, for intelligence is too powerful.

In the last fifty years, the great message communicated to scientists and engineers was that it is unreasonable to ask for exactness in numerical computation. In the next fifty, they will learn not to ask for repeatability, either.

The importance of floating point arithmetic will be undiminished.

So much will change in fifty years that it is refreshing to predict some continuity. One thing that I believe will last is floating point arithmetic. Of course, the details will change, and in particular, word lengths will continue their progression from 16 to 32 to 64 to 128 bits and beyond, as sequences of computations become longer and require more accuracy to contain accumulation of errors. Conceivably, we might even switch to hardware based on a logarithmic representation of numbers. But I believe the two defining features of floating point arithmetic will persist: relative rather than absolute magnitudes, and rounding of all intermediate operations.

Outside the numerical analysis community, some people feel that floating point arithmetic is an anachronism, a 1950s kludge that is destined to be cast aside as machines become more sophisticated. Computers may have been born as number crunchers, the feeling goes, but now that they are fast enough to do arbitrary symbolic manipulations, we must move to a higher plane. In truth, no amount of computer power will change the fact that most numerical problems cannot be solved symbolically. You have to make approximations, and floating point arithmetic is the best general-purpose approximation idea ever devised. It will persist, but get hidden deeper in the machine.

Linear systems of equations will be solved in O(N^(2+ε)) flops.

Matrix computations as performed on machines around the world typically require O(N^3) floating point operations ("flops"), where N is the dimension of the problem.
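To make the O(N^3) claim concrete, here is a small sketch (not from the essay; the function name and loop structure are my own) that counts the floating point operations performed by plain Gaussian elimination with back substitution, and compares the count with the classical 2N^3/3 leading term:

```python
# Illustrative flop count for solving an N-by-N linear system by
# Gaussian elimination. The ~2N^3/3 leading term is standard.

def gauss_elim_flops(n):
    """Count mults/divs and adds/subs for forward elimination of an
    n-by-n system (matrix plus right-hand side) and back substitution."""
    flops = 0
    # Forward elimination: for each pivot column k, update rows below.
    for k in range(n - 1):
        for i in range(k + 1, n):
            flops += 1              # multiplier l_ik = a_ik / a_kk
            flops += 2 * (n - k)    # mult+sub on n-k-1 row entries and the rhs
    # Back substitution: one dot product and one division per unknown.
    for i in range(n - 1, -1, -1):
        flops += 2 * (n - 1 - i) + 1
    return flops

for n in (100, 200, 400):
    print(n, gauss_elim_flops(n), gauss_elim_flops(n) / (2 * n**3 / 3))
```

The printed ratio tends to 1 as N grows, and doubling N multiplies the count by roughly eight, which is the cubic growth the text describes.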
This statement applies exactly for computing inverses, determinants, and solutions of systems of equations, and it applies approximately for eigenvalues and singular values. But all of these problems involve only O(N^2) inputs, and as machines get faster, it is increasingly aggravating that O(N^3) operations should be needed to solve them.

Strassen showed in 1968 that the O(N^3) barrier could be breached. He devised a recursive algorithm whose running time was O(N^(log2 7)), approximately O(N^2.81), and subsequent improvements by Coppersmith, Winograd, and others have brought the exponent down to 2.376. However, the algorithms in question involve constants so large that they are impractical, and they have had little effect on scientific computing. As a result, the problem of speeding up matrix computations is viewed by many numerical analysts as a theoretical distraction. This is a strange attitude to take to the most conspicuous unsolved problem in our field! Of course, it may be that there is some reason why no practical algorithm can ever be found, but we certainly do not know that today.
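The recursion behind Strassen's result can be sketched as follows (a minimal illustration, assuming N is a power of two; the cutoff value and names are my own). Forming seven half-size products instead of eight gives the recurrence T(N) = 7 T(N/2) + O(N^2), whose solution is the O(N^(log2 7)) ≈ O(N^2.81) running time mentioned above:

```python
# Sketch of Strassen's recursive matrix multiplication (N a power of two).
# Below the cutoff we fall back to the ordinary O(N^3) product.
import numpy as np

def strassen(A, B, cutoff=32):
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of the naive eight.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    # Reassemble the four blocks of the product.
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))
```

The large hidden constants the text complains about show up here in practice: the extra additions and memory traffic mean the recursion only pays off for quite large N, which is why such algorithms have had little effect on everyday scientific computing.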
A "fast matrix inverse" may be possible, perhaps one with complexity O(N^2 log N) or