1.2. Benefits of Threads

When used properly, threads can reduce development and maintenance costs and improve the performance of complex applications. Threads make it easier to model how humans work and interact, by turning asynchronous workflows into mostly sequential ones. They can also turn otherwise convoluted code into straight-line code that is easier to write, read, and maintain.

Threads are useful in GUI applications for improving the responsiveness of the user interface, and in server applications for improving resource utilization and throughput. They also simplify the implementation of the JVM: the garbage collector usually runs in one or more dedicated threads. Most nontrivial Java applications rely to some degree on threads for their organization.

1.2.1. Exploiting Multiple Processors

Multiprocessor systems used to be expensive and rare, found only in large data centers and scientific computing facilities. Today they are cheap and plentiful; even low-end server and midrange desktop systems often have multiple processors. This trend will only accelerate; as it gets harder to scale up clock rates, processor manufacturers will instead put more processor cores on a single chip. All the major chip manufacturers have begun this transition, and we are already seeing machines with dramatically higher processor counts.

Since the basic unit of scheduling is the thread, a program with only one thread can run on at most one processor at a time. On a two-processor system, a single-threaded program is giving up access to half the available CPU resources; on a 100-processor system, it is giving up access to 99%. On the other hand, programs with multiple active threads can execute simultaneously on multiple processors. When properly designed, multithreaded programs can improve throughput by utilizing available processor resources more effectively.

Using multiple threads can also help achieve better throughput on single-processor systems. If a program is single-threaded, the processor remains idle while it waits for a synchronous I/O operation to complete. In a multithreaded program, another thread can still run while the first thread is waiting for the I/O to complete, allowing the application to still make progress during the blocking I/O. (This is like reading the newspaper while waiting for the water to boil, rather than waiting for the water to boil before starting to read.)
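To make this concrete, here is a minimal sketch (not from the original text; the class and variable names are illustrative) in which a worker thread keeps computing while the main thread is blocked in a synchronous console read:

import java.io.BufferedReader;
import java.io.InputStreamReader;

// Illustrative sketch: overlap CPU-bound work with blocking I/O.
public class OverlapIoWithWork {
    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(() -> {
            long sum = 0;
            for (long i = 0; i < 1_000_000_000L; i++) {
                sum += i;   // this work proceeds while main is blocked below
            }
            System.out.println("background sum = " + sum);
        });
        worker.start();

        BufferedReader in = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Type a line and press Enter:");
        String line = in.readLine();   // main thread blocks here; worker keeps running
        System.out.println("You typed: " + line);

        worker.join();
    }
}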
1.2.2. Simplicity of Modeling

It is often easier to manage your time when you have only one type of task to perform (fix these twelve bugs) than when you have several (fix the bugs, interview replacement candidates for the system administrator, complete your team's performance evaluations, and create the slides for your presentation next week). When you have only one type of task to do, you can start at the top of the pile and keep working until the pile is exhausted (or you are); you don't have to spend any mental energy figuring out what to work on next. On the other hand, managing multiple priorities and deadlines and switching from task to task usually carries some overhead.

The same is true for software: a program that processes one type of task sequentially is simpler to write, less error-prone, and easier to test than one managing multiple different types of tasks at once. Assigning a thread to each type of task or to each element in a simulation affords the illusion of sequentiality and insulates domain logic from the details of scheduling, interleaved operations, asynchronous I/O, and resource waits. A complicated, asynchronous workflow can be decomposed into a number of simpler, synchronous workflows each running in a separate thread, interacting only with each other at specific synchronization points.

This benefit is often exploited by frameworks such as servlets or RMI (Remote Method Invocation). The framework handles the details of request management, thread creation, and load balancing, dispatching portions of the request handling to the appropriate application component at the appropriate point in the workflow. Servlet writers do not need to worry about how many other requests are being processed at the same time or whether the socket input and output streams block; when a servlet's service method is called in response to a web request, it can process the request synchronously as if it were a single-threaded program. This can simplify component development and reduce the learning curve for using such frameworks.
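As a rough illustration of this style (the class name and response text are invented, and it assumes the standard javax.servlet API is on the classpath), a servlet can be written as ordinary straight-line code; the container owns the threads that invoke it, possibly for many requests at once:

import java.io.IOException;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative sketch: the request is handled with plain sequential code.
// The default service method dispatches GET requests to doGet; the
// container decides how many threads call it concurrently.
public class GreetingServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String name = req.getParameter("name");   // read the request synchronously
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello, " + (name != null ? name : "world"));
    }
}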
1.2.3. Simplified Handling of Asynchronous Events

A server application that accepts socket connections from multiple remote clients may be easier to develop when each connection is allocated its own thread and allowed to use synchronous I/O.
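A minimal sketch of this thread-per-connection approach is shown below (the port number, class name, and echo behavior are illustrative, not taken from the text); each handler uses plain blocking I/O because it only ever serves one client:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

// Illustrative sketch: one thread per accepted connection, so each
// handler can read and write synchronously as if it were the only client.
public class ThreadPerConnectionEchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket connection = server.accept();          // blocks until a client connects
                new Thread(() -> handle(connection)).start(); // dedicate a thread to this client
            }
        }
    }

    private static void handle(Socket connection) {
        try (Socket socket = connection;
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()));
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
            String line;
            while ((line = in.readLine()) != null) {          // blocking read: only this client waits
                out.println(line);
            }
        } catch (Exception e) {
            // a failure on one connection does not affect the others
        }
    }
}

Because each connection has its own thread, a slow client delays only its own handler rather than the whole server.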