952 CONCURRENCY, DISTRIBUTION, CLIENT-SERVER AND THE INTERNET §30.1

The extension covering full-fledged concurrency and distribution will be as minimal as it can get starting from a sequential notation: a single new keyword, separate. How is this possible? We use the fundamental scheme of O-O computation: the feature call x.f (a), executed on behalf of some object O1 and calling f on the object O2 attached to x, with the argument a. But instead of a single processor that handles operations on all objects, we may now rely on different processors for O1 and O2, so that the computation on O1 can move ahead without waiting for the call to terminate, since another processor handles it.

Because the effect of a call now depends on whether the objects are handled by the same processor or by different ones, the software text must tell us unambiguously what the intent is for any x. Hence the need for the new keyword: rather than just x: SOME_TYPE, we declare x: separate SOME_TYPE to indicate that x is handled by a different processor, so that calls of target x can proceed in parallel with the rest of the computation. With such a declaration, any creation instruction !! x.make (…) will spawn off a new processor, a new thread of control, to handle future calls on x.

Nowhere in the software text should we have to specify which processor to use.
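As a sketch of these declarations (the class and feature names here are illustrative, not taken from the text):

```eiffel
class SPOOLER feature

    printer: separate PRINTER
            -- Handled by a different processor than the current object.

    start
            -- Create the object attached to `printer'; the creation
            -- spawns a new processor (a new thread of control) that
            -- will handle all future calls on `printer'.
        do
            !! printer.make
        end

end
```

Nothing in this text says which physical resource will implement the new processor; that binding is deferred to run time.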
All we state, through the separate declaration, is that two objects are handled by different processors, since this radically affects the system's semantics. Actual processor assignment can wait until run time. Nor do we settle too early on the exact nature of processors: a processor can be implemented by a piece of hardware (a computer), but just as well by a task (process) of the operating system, or, on a multithreaded OS, just a thread of such a task. Viewed by the software, "processor" is an abstract concept; you can execute the same concurrent application on widely different architectures (time-sharing on one computer, distributed network with many computers, threads within one Unix or Windows task…) without any change to its source text. All you will change is a "Concurrency Configuration File" which specifies the last-minute mapping of abstract processors to physical resources.

We need to specify synchronization constraints. The conventions are straightforward:

• No special mechanism is required for a client to resynchronize with its supplier after a separate call x.f (a) has gone off in parallel. The client will wait when and if it needs to: when it requests information on the object through a query call, as in value := x.some_query. This automatic mechanism is called wait by necessity.

• To obtain exclusive access to a separate object O2, it suffices to use the attached entity a as an argument to the corresponding call, as in r (a).

• A routine precondition involving a separate argument such as a causes the client to wait until the precondition holds.

• To guarantee that we can control our software and predict the result (in particular, rest assured that class invariants will be maintained), we must allow the processor in charge of an object to execute at most one routine at any given time.

• We may, however, need to interrupt the execution of a routine to let a new, high-priority client take over.
This will cause an exception, so that the spurned client can take the appropriate corrective measures, most likely retrying after a while.

This covers most of the mechanism, which will enable us to build the most advanced concurrent and distributed applications through the full extent of O-O techniques, from multiple inheritance to Design by Contract, as we will now study in detail, forgetting for a while all that we have read in this short preview. (A complete summary appears in 30.11, page 1025.)