O(N^2 log^2 N), and discovering it would change everything. In 1985 I made a bet with Peter Alfeld of the University of Utah that a matrix algorithm with complexity O(N^(2+ε)) for any ε > 0 would be found within ten years. None was, and I gave Alfeld a check for $100. We renewed our bet, however, to 2005, and in that year I will renew it again if necessary. One morning, with luck, the headlines will appear. I think fifty years should be long enough.

7. Multipole methods and their descendants will be ubiquitous.

The conjugate gradient and Lanczos algorithms were invented around 1950, and their story is a curious one. Nowadays we have no doubt as to what these methods are good for: they are matrix iterations, which for certain structured matrices bring those O(N^3) operation counts down to O(N^2) or even better. Though there are constants hidden in the "O", these methods are often much faster than Gaussian elimination and its relatives when N is large.

What is curious is that Hestenes, Stiefel, Lanczos and the rest didn't see this coming. In the 1950s, N was too small for conjugate gradients and Lanczos yet to be competitive, but all the mathematical pieces were in place. These men knew something of the convergence properties of their iterations, enough to have been able to predict that eventually, as machines grew faster, they must beat the competition. Yet they seem not to have made this prediction. A numerical analyst writing an essay like this one in 1960 might not have mentioned conjugate gradients at all.
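As a concrete illustration of the kind of matrix iteration described above, here is a minimal sketch of the conjugate gradient method for a symmetric positive definite system. It is not taken from any particular source discussed in this essay; the test matrix at the end is an arbitrary small example chosen only to exercise the routine.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Solve A x = b for symmetric positive definite A.

    Each iteration costs one matrix-vector product, so the method's
    advantage appears when A is sparse or structured and far fewer
    than N iterations suffice for an acceptable answer.
    """
    n = len(b)
    if max_iter is None:
        max_iter = n
    x = np.zeros(n)
    r = b - A @ x              # initial residual
    p = r.copy()               # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # new A-conjugate direction
        rs_old = rs_new
    return x

# Arbitrary small SPD test system (illustrative only)
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic the iteration terminates in at most N steps, but the practical point of the paragraph above is that for large structured matrices a good approximation arrives long before step N.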
It is with this history in mind that I mention multipole methods, by which I mean methods related to the recent algorithms of Rokhlin and Greengard for N-body problems and integral equations. Times have changed, and we are all asymptotickers. When multipole methods were being invented in the 1980s, they were competitive in 2D but not 3D. Yet Rokhlin and Greengard saw immediately that these techniques reduced operation counts from O(N^2) to O(N), give or take a logarithmic factor, so how could they not win in the long run? And so they will.

The success of multipole methods will exemplify a general trend. As time goes by, large-scale numerical computations rely more on approximate algorithms, even for problems that might in principle be solved exactly in a finite number of steps. Approximate algorithms are more robust than exact ones, and they are also often faster.

8. Breakthroughs will have occurred in matrix preconditioners, spectral methods, and time stepping for partial differential equations.

It is hard not to be optimistic about merely technical hurdles. The business of matrix preconditioners is vitally important, but it is a jungle these days; surely improvements are in store! Spectral methods for PDEs are in a similar state: remarkably powerful, but varying awkwardly from one application to the next. Order is needed here, and it will come. As for time stepping, this is the old problem of stiffness, reasonably well in hand for ODEs but still unsolved in a general way for PDEs. To this day, the CFL restriction constrains our computations all across the range of science and engineering. To get around this constraint, time steps are taken smaller than we would wish, huge matrix