30.2 THE RISE OF CONCURRENCY

Back to square one. We must first review the various forms of concurrency, to understand how the evolution of our field requires most software developers to make concurrency part of their mindset. In addition to the traditional concepts of multiprocessing and multiprogramming, the past few years have introduced two innovative concepts: object request brokers and remote execution through the Net.

Multiprocessing

More and more, we want to use the formidable amount of computing power available around us; less and less, we are willing to wait for the computer (although we have become quite comfortable with the idea that the computer is waiting for us). So if one processing unit would not bring us quickly enough the result that we need, we will want to rely on several units working in parallel. This form of concurrency is known as multiprocessing.
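The text gives no code; as a minimal sketch of the idea, assuming only Python's standard multiprocessing module, here is one computation split across several processing units working in parallel. The function heavy_computation and the input sizes are invented for illustration.

from multiprocessing import Pool

def heavy_computation(n):
    # Stand-in for a computationally intensive task, e.g. one slice of a
    # cryptographic search or one frame of a rendering job.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [5_000_000] * 8          # eight independent work items
    with Pool(processes=4) as pool:   # four worker processes in parallel
        results = pool.map(heavy_computation, inputs)
    print(results)

Each worker is a separate operating-system process, so the tasks can run on distinct processor cores; the same pattern of farming work out and gathering results underlies the Internet-wide efforts described next.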
Spectacular applications of multiprocessing have involved researchers relying on hundreds of computers scattered over the Internet, at times when the computers' (presumably consenting) owners did not need them, to solve computationally intensive problems such as breaking cryptographic algorithms. Such efforts do not just apply to computing research: Hollywood's insatiable demand for realistic computer graphics has played its part in fueling progress in this area; the preparation of the movie Toy Story, one of the first to involve artificial characters only (only the voices are human), relied at some point on a network of more than one hundred high-end workstations (more economical, it seems, than one hundred professional animators).

Multiprocessing is also ubiquitous in high-speed scientific computing, to solve ever larger problems of physics, engineering, meteorology, statistics, investment banking.

More routinely, many computing installations use some form of load balancing: automatically dispatching computations among the various computers available at any particular time on the local network of an organization.

Another form of multiprocessing is the computing architecture known as client-server computing, which assigns various specialized roles to the computers on a network: the biggest and most expensive machines, of which a typical company network will have just one or a few, are "servers" handling shared databases, heavy computations and other strategic central resources; the cheaper machines, ubiquitously located wherever there is an end user, handle decentralizable tasks such as the human interface and simple computations; they forward to the servers any task that exceeds their competence.
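Again as an illustration only (the host, port, and the trivial request protocol below are invented for this sketch), here is a minimal client-server exchange using Python's standard socket library: the client handles its own simple work and forwards a heavier task to the server.

import socket
import threading
import time

HOST, PORT = "127.0.0.1", 5050

def serve_once():
    # Server role: the shared central machine accepts one request,
    # performs the heavy computation, and sends back the result.
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            n = int(conn.recv(64).decode())
            conn.sendall(str(sum(i * i for i in range(n))).encode())

def client_request(n):
    # Client role: a task that exceeds the workstation's competence is
    # forwarded to the server over the network.
    with socket.create_connection((HOST, PORT)) as sock:
        sock.sendall(str(n).encode())
        return int(sock.recv(64).decode())

if __name__ == "__main__":
    threading.Thread(target=serve_once, daemon=True).start()
    time.sleep(0.2)   # crude wait for the server to bind; fine for a sketch
    print(client_request(1_000_000))

In a real installation the server would loop over many connections and the clients would run on other machines; the point here is only the division of roles between cheap local machines and a shared central resource.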
The current popularity of the client-server approach is a swing of the pendulum away from the trend of the preceding decade. Initially (nineteen-sixties and seventies) architectures were centralized, forcing users to compete for resources. The personal computer and workstation revolution of the eighties was largely about empowering users with resources theretofore reserved to the Center (the "glass house" in industry jargon). Then they discovered the obvious: a personal computer cannot do everything, and some resources must be shared. Hence the emergence of client-server architectures in the nineties. The inevitable cynical comment (that we are back to the one-mainframe-many-terminals architecture of our youth, only with more expensive terminals now called "client workstations") is not really justified: the industry is simply searching, through trial and error, for the proper tradeoff between decentralization and sharing.