
"Numerical Analysis" course teaching resource (supplementary reading): Lloyd N. Trefethen, "Predictions for Scientific Computing 50 Years from Now", Mathematics Today, 2000



Predictions for Scientific Computing Fifty Years From Now

LLOYD N TREFETHEN, Oxford University Computing Laboratory

(Mathematics Today, April 2000)

This essay is adapted from a talk given June 17, 1998 at the conference "Numerical Analysis and Computers - 50 Years of Progress", held at the University of Manchester in commemoration of the 50th anniversary of the Mark 1 computer.

Fifty years is a long, long time in any technological field. In our own field of scientific computing or numerical analysis, think back to 1950. Around the world, numerical problems in 1950 were solved with slide rules and on paper, or with mechanical calculators that had little in common with today's computers. Some of the algorithms we use today were in existence then, but on the whole, the last fifty years have changed numerical computing beyond recognition. The next fifty will do it again. My remarks consist of twelve predictions. I did not aim for these to orbit around a unifying theme, but that is nevertheless what happened.

1. WE MAY NOT BE HERE

In the 20th century, everything technological seems to be changing exponentially. This raises a problem. Exponentials do not go on for ever; something happens to them. Now in my opinion, many of the exponentials we are sitting on have not yet started to level off. Here at the beginning of the third millennium, biology is just beginning its great explosion, and although electronics got a head start of a few decades, it is hardly slowing down yet. The presence of exponentials all around us overshadows any attempt to predict the future. I feel I must dwell for a moment on one of the shadows, one that has nothing specifically to do with computing. In my opinion, our position on an exponential trajectory is evidence that technological civilisations do not last very long. I do not claim that our civilisation must end within fifty years, or five hundred, but I do believe there is reason to doubt it can survive for, say, ten thousand years.

My reasoning has nothing to do with any particular cataclysm that may befall us, such as environmental catastrophe or exhaustion of resources or asteroid impact or biological or nuclear war. The argument is more abstract, and it goes like this. The industrial explosion on earth began just two or three hundred years ago. Now if technological civilisations can last tens of thousands of years, how do you explain the extraordinary coincidence that you were born in the first few generations of this one? - in the very first century of radio, television, light bulbs, telephones, phonographs, lasers, refrigerators, automobiles, airplanes, spacecraft, computers, nuclear power, nuclear weapons, plastics, antibiotics, and genetic engineering? I believe that the explanation of our special position in history may be that it is not so special after all, because history tends not to last very long. This argument has been called the Copernican Principle by J R Gott of Princeton University.

There is a second line of evidence, sometimes known as Fermi's paradox, that also suggests that technological civilisations are short-lived. The human race is not an outpost of a galactic society; it is a domestic product. How can we explain this if technological civilisations last tens of thousands of years? An ages-old technological civilisation will expand across its galaxy, simply because it can. (Don't ask why, for expanding is what life does. If one species doesn't, another will replace it.)
Yet in 100,000 years of expanding at one hundredth the speed of light, a civilisation can spread one thousand light years, a distance encompassing millions of stars. Is it plausible that technological civilisations are so rare as to arise on only one star among millions? I believe that the explanation of the emptiness out there may be that technological civilisations perish before they start to spread across their galaxy - or that they start spreading, then perish in a cataclysm so great as to take the galaxy with them.

Suddenly the problem of predicting fifty years of scientific computing begins to look easy! Let's get down to it.

2. WE'LL TALK TO COMPUTERS MORE OFTEN THAN TYPE TO THEM, AND THEY'LL RESPOND WITH PICTURES MORE OFTEN THAN NUMBERS

A big change in the last twenty years has been the arrival of graphical interfaces. When I was a graduate student at Stanford around 1980, we played with some Alto machines donated by Xerox, early workstations featuring windows, icons, mice and pointers, but I thought these were party tricks, too gimmicky to catch on. Today the descendants of the Altos have driven other machines to extinction. It takes no special insight to predict that soon, an equally great change will occur as we take to interacting with computers by speech. It has been a long time coming, but this transformation is now around the corner.


It is good fun to imagine what computer graphics will be like in fifty years. I hardly dare, except to note that three-dimensional virtual reality will be as ordinary as Velcro. Curiously, though the development of speech and graphics will make our numerical work ever more human in feel, less obviously numerical, the underlying computations will continue to be based on numbers represented digitally to many digits of precision. The digital idea is what makes everything possible, and it is not going to go away. This is one sense in which the scientists and engineers of the future will be further removed from the details of computing than we are, just as we are further removed than were our parents.

3. NUMERICAL COMPUTING WILL BE ADAPTIVE, ITERATIVE, EXPLORATORY, INTELLIGENT - AND THE COMPUTATIONAL POWER WILL BE BEYOND YOUR WILDEST DREAMS

Adaptive numerical computing is one of the glories of the computer age. Gauss quadrature was invented two centuries ago, but adaptive quadrature didn't arrive until the 1960s. Adaptive ODE solvers came soon after, and turned the solution of most ordinary differential equations into the use of a black box. Partial differential equations are not yet boxed in black, but the trend is in that direction. As time goes by, adaptivity managed by the computer's intelligence becomes more and more widespread. Computers are not as wise as people, but they can explore a forest of possibilities faster than we can. In fifty years, this is how most numerical problems will be solved. We will tell the machine what we want, and the machine, an intelligent control system sitting atop an encyclopaedia of numerical methods, will juggle computational options at incomprehensible speed until it has solved the problem to the accuracy required. Then it will give us the answer; and if we insist, it may even tell us something of how it got there.
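To make the flavour of adaptivity concrete, here is a minimal sketch of recursive adaptive Simpson quadrature in Python - a textbook illustration in the spirit of the 1960s routines mentioned above, not code from the essay. The function names, the tolerance-halving strategy and the factor of 15 in the error test are standard textbook choices, and the peaked integrand at the end is an arbitrary example.

```python
def adaptive_simpson(f, a, b, tol=1e-8):
    """Adaptive Simpson quadrature: bisect wherever the local error
    estimate exceeds the tolerance, so that effort is concentrated
    automatically where the integrand is hard to resolve."""

    def simpson(lo, hi):
        mid = (lo + hi) / 2
        return (hi - lo) / 6 * (f(lo) + 4 * f(mid) + f(hi))

    def recurse(lo, hi, whole, tol):
        mid = (lo + hi) / 2
        left, right = simpson(lo, mid), simpson(mid, hi)
        # Standard error estimate: compare one Simpson panel with two.
        if abs(left + right - whole) < 15 * tol:
            return left + right + (left + right - whole) / 15
        return recurse(lo, mid, left, tol / 2) + recurse(mid, hi, right, tol / 2)

    return recurse(a, b, simpson(a, b), tol)


# A sharply peaked integrand: the recursion refines near x = 0 and
# leaves the flat regions alone, which a fixed grid could not do.
print(adaptive_simpson(lambda x: 1.0 / (1e-4 + x * x), -1.0, 1.0))
```

A production routine would add a recursion-depth limit and reuse function evaluations, but the division of labour is the same: the user states a tolerance, and the machine decides where to spend its effort.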
The power unleashed by this kind of computing will be vast. Large parts of physical reality will be simulated in real time before our eyes, with effects so far beyond what the men of 1950 could envision that the word "computation" may begin to seem old-fashioned and drop out of use. When computations are all intelligent, when everything is embedded in a control loop, the mathematical landscape will change. One distinction that means a great deal to us today is that, broadly speaking, linear problems can be solved in one pass, but nonlinear ones require iteration. In fifty years, when everything is embedded in an iterative loop anyway, this difference will have diminished. For the same reason, today's big distinction between forward and inverse problems will have faded too. My next prediction is a corollary.

4. DETERMINISM IN NUMERICAL COMPUTING WILL BE GONE

Recently our family rented a car for a holiday. One evening we wanted to look at the stars, which meant turning off the dome light. We couldn't figure out how to do it! A decade ago, closing the doors and flipping a switch would have sufficed, but nowadays, cars are more intelligent. In some, the light stays on for a fixed period after you close the doors, and in ours, the situation was even more complicated. There was an interlock with the engine, plus some additional intelligence that we never got to the bottom of. Eventually we got the light off, but we were not quite sure how we had done it, or if we could do it the same way again. Have you noticed how many of our machines behave this way?

Photocopiers used to be deterministic, but nowadays they have complicated arrays of internal states. The first copy may come out in landscape orientation, but the second in portrait, if the machine decides in between that it ought to change modes. Typewriters used to be predictable too: you knew what would happen when you pressed a key. Nowadays, in Word or LaTeX, changing one character of input may alter the whole document in startling ways. Why, at motorway rest stops, even toilets are intelligent devices now whose states of mind we don't fully understand, and when you're finished with the toilet, you have two further negotiations to undertake with the intelligent sink and the intelligent hand drier!

What's true of toilets will be true of numerical computations. In fifty years, though the answers you get will be accurate without fail to the prescribed precision, you will not expect to duplicate them exactly if you solve the problem a second time. I don't see how this loss of determinism can be stopped. Of course, from a technical point of view, it would be easy to make our machines deterministic by simply leaving out all that intelligence. However, we will not do this, for intelligence is too powerful. In the last fifty years, the great message communicated to scientists and engineers was that it is unreasonable to ask for exactness in numerical computation. In the next fifty, they will learn not to ask for repeatability, either.

5. THE IMPORTANCE OF FLOATING POINT ARITHMETIC WILL BE UNDIMINISHED

So much will change in fifty years that it is refreshing to predict some continuity. One thing that I believe will last is floating point arithmetic. Of course, the details will change, and in particular, word lengths will continue their progression from 16 to 32 to 64 to 128 bits and beyond, as sequences of computations become longer and require more accuracy to contain accumulation of errors. Conceivably we might even switch to hardware based on a logarithmic representation of numbers. But I believe the two defining features of floating point arithmetic will endure: relative rather than absolute magnitudes, and rounding of all intermediate operations.

Outside the numerical analysis community, some people feel that floating point arithmetic is an anachronism, a 1950s kludge that is destined to be cast aside as machines become more sophisticated. Computers may have been born as number crunchers, the feeling goes, but now that they are fast enough to do arbitrary symbolic manipulations, we must move to a higher plane. In truth, no amount of computer power will change the fact that most numerical problems cannot be solved symbolically. You have to make approximations, and floating point arithmetic is the best general-purpose approximation idea ever devised. It will endure but get hidden deeper in the machine.
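The two defining features named above - relative magnitudes and rounding of every intermediate operation - are easy to observe directly. Here is a small Python/NumPy illustration of standard IEEE double precision behaviour (my example, not the essay's):

```python
import numpy as np

# Relative, not absolute, precision: the gap between adjacent floating
# point numbers scales with their magnitude.
print(np.spacing(1.0))       # ~2.2e-16, the machine epsilon at 1.0
print(np.spacing(1.0e16))    # ~2.0, the same *relative* spacing at 1e16

# Every intermediate operation is rounded to the nearest representable
# number, so algebraically equivalent expressions can differ slightly.
x = 0.1 + 0.2
print(x == 0.3)              # False
print(abs(x - 0.3) / 0.3)    # relative error on the order of 1e-16
```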

6. LINEAR SYSTEMS OF EQUATIONS WILL BE SOLVED IN O(N^(2+ε)) FLOPS

Dense matrix computations as performed on machines around the world typically require O(N^3) floating point operations - "flops" - where N is the dimension of the problem. This statement applies exactly for computing inverses, determinants, and solutions of systems of equations, and it applies approximately for eigenvalues and singular values. But all of these problems involve only O(N^2) inputs, and as machines get faster, it is increasingly aggravating that O(N^3) operations should be needed to solve them.

Strassen showed in 1968 that the O(N^3) barrier could be breached. He devised a recursive algorithm whose running time was O(N^(log2 7)), approximately O(N^2.81), and subsequent improvements by Coppersmith, Winograd and others have brought the exponent down to 2.376. However, the algorithms in question involve constants so large that they are impractical, and they have had little effect on scientific computing. As a result, the problem of speeding up matrix computations is viewed by many numerical analysts as a theoretical distraction. This is a strange attitude to take to the most conspicuous unsolved problem in our field! Of course, it may be that there is some reason why no practical algorithm can ever be found, but we certainly do not know that today. A "fast matrix inverse" may be possible, perhaps one with complexity O(N^2 log N) or O(N^2 log^2 N), and discovering it would change everything.

In 1985 I made a bet with Peter Alfeld of the University of Utah that a matrix algorithm with complexity O(N^(2+ε)) for any ε > 0 would be found within ten years. None was, and I gave Alfeld a check for $100. We renewed our bet, however, to 2005, and in that year I will renew it again if necessary. One morning, with luck, the headlines will appear. I think fifty years should be long enough.
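For readers who have not seen Strassen's idea in the flesh, here is a compact sketch in Python with NumPy, assuming for simplicity that the matrices are square with power-of-two dimension and falling back to ordinary multiplication below a cutoff (both assumptions are mine, for illustration; this is not code from the essay). The essential point is that seven half-size products replace eight, which is what drives the exponent from 3 down to log2 7 ≈ 2.81.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's recursion: 7 half-size products instead of 8, giving
    O(N^log2(7)) ~ O(N^2.81) arithmetic.  Below `cutoff` we fall back to
    ordinary multiplication, as any practical implementation must."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C

# Sanity check on a 256 x 256 example (size chosen arbitrarily).
A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))
```

The large hidden constants the essay mentions show up in practice as the extra additions and the memory traffic for all those submatrices, which is why the crossover size is far from small.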

7. MULTIPOLE METHODS AND THEIR DESCENDANTS WILL BE UBIQUITOUS

The conjugate gradient and Lanczos algorithms were invented around 1950, and their story is a curious one. Nowadays we have no doubt as to what these methods are good for: they are matrix iterations, which for certain structured matrices bring those O(N^3) operation counts down to O(N^2) or even better. Though there are constants hidden in the "O", these methods are often much faster than Gaussian elimination and its relatives when N is large.

What is curious is that Hestenes, Stiefel, Lanczos and the rest didn't see this coming. In the 1950s, N was too small for conjugate gradients and Lanczos yet to be competitive, but all the mathematical pieces were in place. These men knew something of the convergence properties of their iterations, enough to have been able to predict that eventually, as machines grew faster, they must beat the competition. Yet they seem not to have made this prediction. A numerical analyst writing an essay like this one in 1960 might not have mentioned conjugate gradients at all.

It is with this history in mind that I mention multipole methods, by which I mean methods related to the recent algorithms of Rokhlin and Greengard for N-body problems and integral equations. Times have changed, and we are all asymptotickers. When multipole methods were being invented in the 1980s, they were competitive in 2D but not 3D. Yet Rokhlin and Greengard saw immediately that these techniques reduced operation counts from O(N^2) to O(N), give or take a logarithmic factor, so how could they not win in the long run? And so they will.

The success of multipole methods will exemplify a general trend. As time goes by, large-scale numerical computations rely more and more on approximate algorithms, even for problems that might be solved exactly in a finite number of steps. Approximate algorithms are more robust than exact ones, and they are also often faster.
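As a reminder of how simple these matrix iterations are, here is a bare-bones conjugate gradient solver in Python for a symmetric positive definite system - a textbook sketch, not code from the essay, and the random test matrix at the end is an arbitrary choice. Each step costs one matrix-vector product, O(N^2) for a dense matrix and far less for sparse or structured ones, and for well-conditioned problems far fewer than N steps are needed.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=None):
    """Conjugate gradient iteration for a symmetric positive definite A.
    The dominant cost per step is the single matrix-vector product A @ p."""
    n = len(b)
    maxiter = maxiter or n
    x = np.zeros(n)
    r = b - A @ x            # residual
    p = r.copy()             # search direction
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Illustration on a random symmetric positive definite system.
n = 500
M = np.random.rand(n, n)
A = M @ M.T + n * np.eye(n)
b = np.random.rand(n)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))   # residual norm at the requested level
```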
8. BREAKTHROUGHS WILL HAVE OCCURRED IN MATRIX PRECONDITIONERS, SPECTRAL METHODS AND TIME STEPPING FOR PARTIAL DIFFERENTIAL EQUATIONS

It is hard not to be optimistic about merely technical hurdles. The business of matrix preconditioners is vitally important, but it is a jungle these days - surely improvements are in store! Spectral methods for PDEs are in a similar state - remarkably powerful, but varying awkwardly from one application to the next. Order is needed here, and it will come. As for time-stepping, this is the old problem of stiffness, reasonably well in hand for ODEs but still unsolved in a general way for PDEs. To this day, the CFL restriction constrains our computations all across the range of science and engineering. To get around this constraint, time steps are taken smaller than we would wish, huge matrix problems are solved at great cost, and physically important terms are thrown away just because they are too hard to implement. The CFL condition will not disappear, but new weapons will be devised to help us in the day-to-day struggle against it.
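For readers who have not met it, the CFL restriction has a simple model statement (a standard textbook form, not taken from the essay): for an explicit scheme applied to the one-dimensional advection equation u_t + a u_x = 0, information must not travel more than about one grid cell per time step.

```latex
% CFL restriction for explicit time stepping of  u_t + a u_x = 0
% on a grid with spacing \Delta x and time step \Delta t; the constant
% C depends on the scheme (C = 1 for first-order upwind differencing).
\[
  \frac{|a|\,\Delta t}{\Delta x} \;\le\; C
  \qquad\Longleftrightarrow\qquad
  \Delta t \;\le\; C\,\frac{\Delta x}{|a|} .
\]
```

Refining the spatial grid therefore forces a proportional reduction in the time step, which is exactly why time steps end up smaller than we would wish.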

9. THE DREAM OF SEAMLESS INTEROPERABILITY WILL HAVE BEEN ACHIEVED

Users and onlookers complain year after year, why is so much human intervention needed to get from the whiteboard to the solution? Why does one computer program have to be written for the grid generator, another for the discretisation, and another for the linear algebra, requiring interfaces all along the way with repeated opportunities for human error? Why are symbolic and numerical calculations separate? Why can't our ideas and tools blend together into a seamless interoperable system? Well, of course, they can, and getting there is merely an engineering problem. Fifty years from now, the grids and the solvers will have been coupled - and humans will more and more rarely catch sight of actual numbers in the course of doing science.

Table 1. Some past and future developments in scientific computing. The asterisks mark items summarised by (*).

Before 1940: Newton's method; Gaussian elimination; Gauss quadrature; least-squares fitting; Adams and Runge-Kutta formulas; Richardson extrapolation.

1940-1970: floating point arithmetic; Fortran; finite differences; finite elements; simplex algorithm; Monte Carlo; orthogonal linear algebra; splines; FFT.

1970-2000: quasi-Newton iterations; adaptivity; stiff ODE solvers; software libraries; Matlab; multigrid; sparse and iterative linear algebra; spectral methods; interior point methods; wavelets.

2000-2050: linear algebra in O(N^(2+ε)) flops; multipole methods; breakthroughs in preconditioners, spectral methods, time stepping for PDEs; speech and graphics everywhere; *fully intelligent, adaptive numerics; *loss of determinism; *seamless interoperability; *massively parallel computing made possible by ideas related to the human brain; *new programming methods made possible by ideas related to natural selection.

10. THE PROBLEM OF MASSIVELY PARALLEL COMPUTING WILL HAVE BEEN BLOWN OPEN BY IDEAS RELATED TO THE HUMAN BRAIN

The information revolution is well underway, but the revolution in understanding the human brain has not arrived yet. Some key idea is missing. Another fact of scientific life is that the problem of massively parallel computing is stalled. For decades it has seemed plain that eventually, serial computers must run up against the constraints of the speed of light and the size of atoms, at which point further increases in power must come about through parallelism. Yet parallel computing nowadays is a clumsy business, bogged down in communication problems, nowhere near as advanced as everyone expected a decade ago.

I believe that the dream of parallel computing will be fulfilled. And it is hard to avoid the thought that if parallel computing and the human brain are both on the agenda, the two revolutions in store will somehow be linked. Brain researchers will make discoveries that transform our methods of parallel computing; or computer scientists will make discoveries that unlock the secrets of the brain; or, just as likely, the two fields will change in tandem, perhaps during an astonishing ten years of upheaval. The upheaval could begin tomorrow, or it might take another generation, but it will come before 2050.

Meanwhile, another revolution in biology is already happening: the working out of DNA/RNA genomes and their implications. Every organism from virus to man is specified by a program written in the alphabet of the nucleotides. Since Watson and Crick, we have known this must be true, and in 1995, the first genome of a free-standing organism was sequenced. Since then, dozens more have followed, with the human genome itself now nearly complete, and everything in biology, from development to drug design, is being reinvented as we watch. If I give you the sequence KPSGCGEQNMINFYPNVL in the standard code for the amino acids, this is enough for you to determine in a few seconds that I am speaking of an α-macroglobulin proteinase inhibitor of Octopus vulgaris, and to locate related enzymes in ten other species. Just point your browser to http://www.ncbi.nlm.nih.gov and run blastp. I believe that this drama has implications for computing.

11. OUR METHODS OF PROGRAMMING WILL HAVE BEEN BLOWN OPEN BY IDEAS RELATED TO GENOMES AND NATURAL SELECTION

Genetic programs and computer programs are strangely analogous.
Both are absolutely precise digital codes, and no other codes that we know of have anything like the complexity of these two, with the size of a genome being of roughly the same order of magnitude (3 x 10^9 nucleotides for Homo sapiens) as the size of an operating system (2 x 10^9 bits for Windows 98). As a generation of engineers grows up with genomics, thinking digitally about the evolution of life on earth, our methods of computer programming will change. (Some ideas in this direction are already with us.) Traditionally, computer programs are written in a different way from biological ones. There's a programmer in the loop, an intelligence, which gives computer programs a logical structure that biological programs lack (not to mention comments!). Yet it is notable that nowadays, large-scale software systems are too big to be understood in detail by any individual, let alone mechanically analysed or verified, and indeed, the process of industrial software design already seems as close to evolution by natural selection as to mathematical logic. Software at a place like Microsoft is generated by an unending process of experiment and test, code and correct, a process in which individual human intelligences seem less important than they used to. Software systems evolve from one generation to the next, and they are never perfect, but they work. The process is repugnant to some computer scientists, but it is scalable and unstoppable.

Finally, a prediction that is not really a prediction, just a pious wish.


12. IF WE START THINKING NOW, MAYBE WE CAN COOK UP A GOOD NAME FOR OUR FIELD!

Table 1 lists some highlights from the history of scientific computing. Its attempt to extrapolate to the future summarises some of the thoughts I have expressed in this essay. When I looked at this collection of predictions, I was startled to see that a theme emerges from them. Some are what one might call purely technical. The others, however, those marked by asterisks, suggest a trend:

Human beings will be removed from the loop. (*)

I find I have envisioned an unsettling future, a future in which humans, though still the taskmasters of computers, are no longer much involved in the details of getting the tasks done. Fifty years from now, it is hard to imagine that our machines will still be dim enough to benefit much from our assistance. Sketch your needs to the machine, and then - well, you might as well go have a cup of coffee.

That's my report from 2000, down here on the exponential.

Nick Trefethen is the author of Spectral Methods in MATLAB (SIAM, 2000); recent research publications and other information can be found at http://www.comlab.ox.ac.uk.
