of the operation to be performed, and the argument. The current thread cid first tries to perform Treiber stack's push (line 5). It returns if the push succeeds. Otherwise, it writes its descriptor in the global loc array (line 6) to allow other threads to eliminate its push. The elimination array loc[1..n] has one slot for each thread, which holds the pointer to a thread descriptor. The thread randomly reads a slot him in loc (line 7). If the descriptor q says him is doing pop, cid tries to eliminate itself with him by two cas instructions. The first clears cid's entry in loc so that no other thread could eliminate with cid (line 9). The second attempts to mark the entry of him in loc as "eliminated with cid" (line 10). If successful, this cas is the LP of both the push of cid and the pop of him, with the push happening immediately before the pop.
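As a concrete reference point for this walkthrough, the following is a C sketch of the elimination-based push in the shape described above. It is only a sketch under stated assumptions: the names (descr, loc, tryPush, NTHREADS), the handling of a failed cas on cid's own slot, and the omission of descriptor reclamation are illustrative choices, not the code of Fig. 1(b).

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdlib.h>

#define NTHREADS 8
enum { PUSH, POP };

/* Thread descriptor: which operation the thread performs and its argument.
   Field names are hypothetical; Fig. 1(b) in the paper is authoritative. */
typedef struct {
    int id;   /* owning thread */
    int op;   /* PUSH or POP   */
    int arg;  /* value to push */
} descr;

/* Elimination array loc[1..n]: one slot per thread, holding a descriptor pointer. */
static _Atomic(descr *) loc[NTHREADS + 1];

/* Assumed helper: one attempt of Treiber stack's push, failing under contention. */
extern bool tryPush(int v);

void push(int cid, int v) {
    descr *p = malloc(sizeof *p);
    p->id = cid; p->op = PUSH; p->arg = v;
    while (true) {
        if (tryPush(v)) return;                        /* line 5: ordinary push      */
        atomic_store(&loc[cid], p);                    /* line 6: publish descriptor */
        int him = rand() % NTHREADS + 1;               /* line 7: read a random slot */
        descr *q = atomic_load(&loc[him]);
        if (q != NULL && q->id == him && q->op == POP) {
            descr *mine = p;
            /* line 9: clear our own slot so no other thread eliminates with cid */
            if (atomic_compare_exchange_strong(&loc[cid], &mine, NULL)) {
                descr *his = q;
                /* line 10: mark him's slot as "eliminated with cid"; on success
                   this is where both the push and him's pop take effect */
                if (atomic_compare_exchange_strong(&loc[him], &his, p))
                    return;
            } else {
                /* Our descriptor was already taken: some popper has eliminated
                   with cid, i.e., the push has been helped to completion. */
                return;
            }
        }
        /* otherwise retry the whole loop */
    }
}
```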
The helping mechanism allows the current thread to linearize the operations of other threads, which cannot be expressed in the basic logic. It also breaks modularity and makes thread-local verification difficult. For the thread cid, its concrete step could correspond to the steps of both cid and him at the abstract level. For him, a step from its environment could fulfill its abstract operation. We must ensure in the thread-local verification that the two threads cid and him always take consistent views on whether and how the abstract operation of him is done. For example, if we let a concrete step in cid fulfill the abstract pop of him, we must know that him is indeed doing pop and that its pop has not been done before. Otherwise, we will not be able to compose cid and him in parallel.

We extend the basic logic to express the helping mechanism. First we introduce a new auxiliary command lin(t) to linearize a specific thread t. For instance, in Fig. 1(b) we insert line 10' at the LP to execute both the push of cid and the pop of him at the abstract level. We also extend the auxiliary state to record both abstract operations of cid and him. More generally, we embed a pending thread pool U, which maps threads to their abstract operations. It specifies a set of threads whose operations might be helped by others. Then under the new state (σ, (U, θ)), the semantics of lin(t) simply executes thread t's abstract operation in U, similarly to the semantics of linself discussed before.

The shared pending thread pool U allows us to recover thread modularity when verifying the helping mechanism. A concrete step of cid could fulfill the operation of him in U as well as its own abstract operation; and conversely, the thread him running in parallel could check U to know whether its operation has been finished by others (such as cid) or not. We gain consistent abstract information about other threads in the thread-local verification. Note that the need for U itself does not break modularity, because the required information about other threads' abstract operations can be inferred from the concrete state. In the HSY stack example, we know him is doing pop by looking at its thread descriptor in the elimination array. In this case U can be viewed as an abstract representation of the elimination array.
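Continuing the sketch above, the auxiliary command lin(t) would be attached to the successful elimination cas roughly as follows; the ghost code is shown only as comments, and its exact placement and syntax at line 10' of Fig. 1(b) may differ.

```c
/* line 10 of the sketch, with the auxiliary annotation 10' attached */
bool b = atomic_compare_exchange_strong(&loc[him], &his, p);
/* 10'  ghost code, executed atomically with the cas when b is true:
          lin(cid);   -- run cid's abstract PUSH(v) on the abstract stack
          lin(him);   -- then run him's abstract POP(), which pops that v
        lin(him) is permitted only if the pending thread pool U records that
        him is doing a pop that has not yet been linearized. */
if (b) return;
```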
2.3 Try-Commit Commands for Future-Dependent LPs

Another challenge is to reason about optimistic algorithms whose LPs depend on future interleavings.

We give a toy example, pair snapshot [27], in Fig. 1(c). The object is an array m, each slot of which contains two fields: d for the data and v for the version number. The write(i,d) method (line 9) updates the data stored at address i and increments the version number instantaneously. The readPair(i,j) method intends to perform an atomic read of two slots i and j in the presence of concurrent writes. It reads the data at slots i and j separately at lines 4 and 5, and validates the first read at line 6. If i's version number has not been increased, the thread knows that when it read j's data at line 5, i's data had not yet been updated. This means the two reads were at a consistent state, so the thread can return. We can see that the LP of readPair should be at line 5, when the thread reads j's data, but only if the validation at line 6 succeeds. That is, whether we should linearize the operation at line 5 depends on the unpredictable future behavior of line 6.
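The following is a C sketch of the pair snapshot object matching the description above. The paper's Fig. 1(c) uses atomic blocks; here each slot packs (data, version) into one 64-bit atomic word so a slot is read and written in a single step, and the writer uses a cas loop, both of which are assumptions of this sketch. The method called write(i,d) in the paper is renamed write_slot here to avoid clashing with POSIX write.

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define SIZE 64

/* m[i] packs (data, version) into one word: low 32 bits data, high 32 bits version. */
static _Atomic uint64_t m[SIZE];

static inline uint64_t pack(uint32_t d, uint32_t v) { return ((uint64_t)v << 32) | d; }
static inline uint32_t data_of(uint64_t s)          { return (uint32_t)s; }
static inline uint32_t version_of(uint64_t s)       { return (uint32_t)(s >> 32); }

/* write(i,d) of the paper: store the new data and bump the version in one
   atomic step (line 9), realized here with a cas loop over the packed word. */
void write_slot(int i, uint32_t d) {
    uint64_t old, updated;
    do {
        old     = atomic_load(&m[i]);
        updated = pack(d, version_of(old) + 1);
    } while (!atomic_compare_exchange_weak(&m[i], &old, updated));
}

/* readPair(i,j): read two slots so that the result looks like one atomic read. */
void readPair(int i, int j, uint32_t *a, uint32_t *b) {
    while (true) {
        uint64_t si = atomic_load(&m[i]);   /* line 4: read (a, v) from slot i */
        uint64_t sj = atomic_load(&m[j]);   /* line 5: read (b, _) from slot j */
        /* line 6: validate the first read; if i's version is unchanged, slot i
           was not written between lines 4 and 5, so the two reads are consistent. */
        if (version_of(atomic_load(&m[i])) == version_of(si)) {
            *a = data_of(si);
            *b = data_of(sj);
            return;
        }
        /* validation failed: retry */
    }
}
```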
As discussed extensively in previous work (e.g., [1, 32]), future-dependent LPs cannot be handled by introducing history variables, which are auxiliary variables storing values or events from past executions. We have to refer to events coming from the unpredictable future. Thus prophecy variables [1, 32] have been proposed as the dual of history variables to store future behaviors. But as far as we know, there is no semantics of prophecy variables suitable for Hoare-style local and compositional reasoning.

Instead of resorting to prophecy variables, we follow the speculation idea [31]. For the concrete step at a potential LP (e.g., line 5 of readPair), we execute the abstract operation speculatively and keep both the result and the original abstract configuration. Later, based on the result of the validation (e.g., line 6 in readPair), we keep the appropriate branch and discard the other.

For the logic, we introduce two new auxiliary commands: trylinself does the speculation, and commit(p) commits to the appropriate branch satisfying the assertion p. In Fig. 1(c), we insert lines 5' and 6', where cid (end, (a, b)) means that the current thread cid should have done its abstract operation and would return (a, b). We also extend the auxiliary state to record the multiple possibilities of abstract operations and abstract states after speculation.
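Reusing the helpers from the previous sketch, the speculative ghost commands just described would sit in readPair roughly as follows; trylinself and commit(...) are auxiliary commands shown only as comments, and the precise assertions at lines 5' and 6' are those of Fig. 1(c), not the schematic ones here.

```c
/* readPair(i,j) with the auxiliary commands of lines 5' and 6' sketched in. */
void readPair_annotated(int i, int j, uint32_t *a, uint32_t *b) {
    while (true) {
        uint64_t si = atomic_load(&m[i]);   /* line 4 */
        uint64_t sj = atomic_load(&m[j]);   /* line 5: the potential LP */
        /* 5'  ghost: trylinself;
               speculate: keep both the abstract configuration in which the
               abstract readPair has just been done (and would return (a, b))
               and the one in which it has not been done yet. */
        if (version_of(atomic_load(&m[i])) == version_of(si)) {   /* line 6 */
            /* 6'  ghost: commit(cid (end, (a, b)));
                   the validation succeeded, so keep the branch in which the
                   operation was linearized at line 5 and discard the other. */
            *a = data_of(si);
            *b = data_of(sj);
            return;
        }
        /* validation failed: before retrying, the speculation in which the
           operation was already done is presumably discarded by a commit to
           the not-yet-done branch; see Fig. 1(c) for the authoritative form. */
    }
}
```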
Furthermore, we can combine the speculation idea with the pending thread pool. We allow the abstract operations in the pending thread pool, as well as the current thread, to speculate. Then we can handle some trickier algorithms, such as RDCSS [12], in which the location of the LP for thread t may be in the code of some other thread and may also depend on the future behaviors of that thread. Please see Sec. 6 for one such example.

[Figure 2. Simulation Diagrams: (a) Simple Simulation; (b) Pending Thread Pool; (c) Speculation]

2.4 Simulation as Meta-Theory

The LP proof method can be understood as building simulations between the concrete implementations and the abstract atomic operations, such as the simple weak simulation in Fig. 2(a). The lower-level and higher-level arrows are the steps of the implementation and of the abstract operation respectively, and the dashed lines denote the simulation relation. We use dark nodes and white nodes at the abstract level to distinguish whether the operation has been finished or not. The only step at the concrete side corresponding to the single abstract step should be the LP of the implementation (labeled "LP" in the diagram). Since our program logic is based on the LP method, we can expect simulations to justify its soundness. In particular, we want a thread-local simulation which can handle both the helping mechanism and future-dependent LPs and can ensure linearizability.

To support helping in the simulation, we should allow the LP step at the concrete level to correspond to an abstract step made by a thread other than the one being verified. This requires information from other threads at the abstract side, and thus makes it difficult to build a thread-local simulation. To address the problem, we introduce the pending thread pool at the abstract level of the simulation, just as in the development of our logic in Sec. 2.2. The new simula-