program and all I/O operations; this is known as a checkpoint.
Should a component on which this program is running fail, the operating system can restart the program from the most recent checkpoint. While the advantage of fault-tolerant systems is obvious, they come at a price. Redundant hardware is expensive, and software capable of recovering from faults runs more slowly. As with many other systems, the price may be more than offset by the advantage of continuous computing.

96.5 Parallel Processing

No matter how fast computers become, it seems they are never fast enough. Manufacturers make faster computers by decreasing the amount of time it takes to do each operation. An alternative is to build a computer that performs several operations simultaneously. A parallel computer, also called a multiprocessor, is one that contains more than one CPU.¹

The advantage of a parallel computer is that it can run more than one program simultaneously. In a general-purpose time-sharing environment, parallel computers can greatly enhance overall system throughput. A program shares a CPU with fewer programs. This approach is similar to having several computers connected on a network but has the advantage that all resources are more easily shared. To take full advantage of a parallel computer will require changes to the operating system [Boykin and Langerman, 1990] and application programs.

Most programs are easily divided into pieces that can each run at the same time. If each of these pieces is a separate thread of control, they could run simultaneously on a parallel computer. By so dividing the application, the program may run in less time than it would on a single-processor (uniprocessor) computer.

Within the application program, each thread runs as if it were the only thread of control. It may call functions, manipulate memory, perform I/O operations, etc. If the threads do not interact with each other, then, to the application programmer, there is little change other than determining how to subdivide the program. However, it would be unusual for these threads not to interact. It is this interaction that makes parallel programming more complex.

In principle, the solution is rather simple. Whenever a thread will manipulate memory or perform an I/O operation, it must ensure that it is the only thread that will modify that memory location or do I/O to that file until it has completed the operation. To do so, the programmer uses a lock. A lock is a mechanism that allows only a single thread to execute a given code segment at a time. Consider an application with several threads of control. Each thread performs an action and writes the result to a file, the same file. Within each thread we might have code that looks as follows:

    thread()
    {
        dowork();
        writeresult();
    }

    writeresult()
    {
        lock();
        write(logfid, result, 512);
        unlock();
    }

In this example the writeresult function calls the lock function before it writes the result and calls unlock afterward. Other threads simultaneously calling writeresult will wait at the call to lock until the thread that currently holds the lock calls the unlock function.

While this approach is simple in principle, in practice it is more difficult. It takes experience to determine how a program may be divided. Even with appropriate experience, it is more difficult to debug a multithreaded

¹Central processing unit, the hardware component that does all arithmetic and logical operations.
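The lock and unlock calls in the fragment above are abstract; the chapter does not name a particular threading interface. As a minimal sketch only, assuming a POSIX threads environment, the same pattern could be written with a pthread mutex as follows. The file name results.log, the fixed 512-byte result buffer, and the dowork stand-in are illustrative assumptions, not part of the original example.

    /* Minimal sketch: one possible realization of the lock()/unlock()
     * pattern using a POSIX mutex. Names such as results.log, thread_body,
     * and the 512-byte buffer are assumptions made for illustration. */
    #include <fcntl.h>
    #include <pthread.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static pthread_mutex_t log_lock = PTHREAD_MUTEX_INITIALIZER;
    static int logfid;                          /* shared log file descriptor */

    static void writeresult(const char *result)
    {
        pthread_mutex_lock(&log_lock);          /* lock(): enter the critical section */
        write(logfid, result, 512);             /* only one thread writes at a time */
        pthread_mutex_unlock(&log_lock);        /* unlock(): let the next thread in */
    }

    static void *thread_body(void *arg)         /* corresponds to thread() in the text */
    {
        char result[512];

        memset(result, 0, sizeof(result));
        snprintf(result, sizeof(result),        /* stand-in for dowork() */
                 "result from thread %ld\n", (long)arg);
        writeresult(result);
        return NULL;
    }

    int main(void)
    {
        pthread_t tids[4];

        logfid = open("results.log", O_WRONLY | O_CREAT | O_APPEND, 0644);
        for (long i = 0; i < 4; i++)
            pthread_create(&tids[i], NULL, thread_body, (void *)i);
        for (int i = 0; i < 4; i++)
            pthread_join(tids[i], NULL);
        close(logfid);
        return 0;
    }

The essential point is the same as in the text: only the thread currently holding the mutex may execute the write to the shared file, so concurrent calls to writeresult wait at pthread_mutex_lock until the holder releases it. On most systems such a program would be built with the compiler's pthread option (for example, cc -pthread).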