Java Concurrency In Practice

1.1. A (Very) Brief History of Concurrency

In the ancient past, computers didn't have operating systems; they executed a single program from beginning to end, and that program had direct access to all the resources of the machine. Not only was it difficult to write programs that ran on the bare metal, but running only a single program at a time was an inefficient use of expensive and scarce computer resources.

Operating systems evolved to allow more than one program to run at once, running individual programs in processes: isolated, independently executing programs to which the operating system allocates resources such as memory, file handles, and security credentials. If they needed to, processes could communicate with one another through a variety of coarse-grained communication mechanisms: sockets, signal handlers, shared memory, semaphores, and files.

Several motivating factors led to the development of operating systems that allowed multiple programs to execute simultaneously:

Resource utilization. Programs sometimes have to wait for external operations such as input or output, and while waiting can do no useful work. It is more efficient to use that wait time to let another program run.

Fairness. Multiple users and programs may have equal claims on the machine's resources. It is preferable to let them share the computer via finer-grained time slicing than to let one program run to completion and then start another.

Convenience. It is often easier or more desirable to write several programs that each perform a single task and have them coordinate with each other as necessary than to write a single program that performs all the tasks.

In early timesharing systems, each process was a virtual von Neumann computer; it had a memory space storing both instructions and data, executing instructions sequentially according to the semantics of the machine language, and interacting with the outside world via the operating system through a set of I/O primitives. For each instruction executed there was a clearly defined "next instruction", and control flowed through the program according to the rules of the instruction set. Nearly all widely used programming languages today follow this sequential programming model, where the language specification clearly defines "what comes next" after a given action is executed.

The sequential programming model is intuitive and natural, as it models the way humans work: do one thing at a time, in sequence, mostly. Get out of bed, put on your bathrobe, go downstairs and start the tea. As in programming languages, each of these real-world actions is an abstraction for a sequence of finer-grained actions: open the cupboard, select a flavor of tea, measure some tea into the pot, see if there's enough water in the teakettle, if not put some more water in, set it on the stove, turn the stove on, wait for the water to boil, and so on. This last step, waiting for the water to boil, also involves a degree of asynchrony. While the water is heating, you have a choice of what to do: just wait, or do other tasks in that time such as starting the toast (another asynchronous task) or fetching the newspaper, while remaining aware that your attention will soon be needed by the teakettle. The manufacturers of teakettles and toasters know their products are often used in an asynchronous manner, so they raise an audible signal when they complete their task. Finding the right balance of sequentiality and asynchrony is often a characteristic of efficient people, and the same is true of programs.
The same concerns (resource utilization, fairness, and convenience) that motivated the development of processes also motivated the development of threads. Threads allow multiple streams of program control flow to coexist within a process. They share process-wide resources such as memory and file handles, but each thread has its own program counter, stack, and local variables. Threads also provide a natural decomposition for exploiting hardware parallelism on multiprocessor systems; multiple threads within the same program can be scheduled simultaneously on multiple CPUs.

Threads are sometimes called lightweight processes, and most modern operating systems treat threads, not processes, as the basic units of scheduling. In the absence of explicit coordination, threads execute simultaneously and asynchronously with respect to one another. Since threads share the memory address space of their owning process, all threads within a process have access to the same variables and allocate objects from the same heap, which allows finer-grained data sharing than inter-process mechanisms. But without explicit synchronization to coordinate access to shared data, a thread may modify variables that another thread is in the middle of using, with unpredictable results.
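To make that last hazard concrete, here is a minimal sketch (not from the book text; the class name and iteration counts are illustrative) in which two threads increment a shared counter without synchronization. Because count++ is really three operations (read the value, add one, write it back), the threads can interleave those steps and overwrite each other's updates, so the final total is usually less than expected.

public class UnsafeCounterDemo {
    // Shared mutable state with no synchronization; illustrative only.
    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++; // read-modify-write: not atomic
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000, but typically prints a smaller number
        // because interleaved increments were lost.
        System.out.println("count = " + count);
    }
}

Guarding the increment with a synchronized block, or replacing the int with a java.util.concurrent.atomic.AtomicInteger, makes the program reliably print 200000; that is the explicit synchronization the paragraph above calls for.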