local simulations do not support non-fixed LPs (except the recent work [31], which we will discuss in Sec. 7). We will explain the challenges in detail in Sec. 2.
Our work is inspired by the earlier work on linearizability verification, in particular the use of auxiliary code and states by Vafeiadis [32] and our previous work on thread-local simulation RGSim [19], but makes the following new contributions:

• We propose the first program logic that has a formal soundness proof for linearizability with non-fixed LPs. Our logic is built upon the unary program logic LRG [8], but we give a relational interpretation of assertions and rely/guarantee conditions. We also introduce new logic rules for auxiliary commands used specifically for linearizability proofs.

• We give a light instrumentation mechanism to relate concrete implementations with abstract operations. The systematic use of auxiliary states and commands makes it possible to execute the abstract operations synchronously with the concrete code. The try-commit clauses allow us to reason about future-dependent uncertainty without resorting to prophecy variables [1, 32], whose existing semantics (e.g., [1]) is unsuitable for Hoare-style verification.

• We design a novel thread-local simulation as the meta-theory for our logic. It generalizes RGSim [19] and other compositional reasoning of refinement (e.g., [5, 30]) with the support for non-fixed LPs.

• Instead of ensuring linearizability directly, the program logic and the simulation both establish contextual refinement, which we prove is equivalent to linearizability. A program logic for contextual refinement is interesting in its own right, since contextual refinement is also a widely accepted (and probably more natural) correctness criterion for library code.

• We successfully apply our logic to verify 12 well-known algorithms. Some of them are used in the java.util.concurrent package, such as MS non-blocking queue [23] and Harris-Michael lock-free list [11, 22].

In the rest of this paper, we first analyze the challenges in the logic design and explain our approach informally in Sec. 2. Then we give the basic technical setting in Sec. 3, including a formal operational definition of linearizability. We present our program logic in Sec. 4, and the new simulation relation as the meta-theory in Sec. 5. In Sec. 6 we summarize all the algorithms we have verified and sketch the proofs of three representative algorithms. We discuss related work and conclude in Sec. 7.

2. Challenges and Our Approach

Below we start from a simple program logic for linearizability with fixed LPs, and extend it to support algorithms with non-fixed LPs. We also discuss the problems with the underlying meta-theory, which establishes the soundness of the logic w.r.t. linearizability.

2.1 Basic Logic for Fixed LPs

We first show a simple and intuitive logic which follows the LP approach. As a working example, Fig. 1(a) shows the implementation of push in Treiber stack [29] (let's first ignore the blue code at line 7'). The stack object is implemented as a linked list pointed to by S, and push(v) repeatedly tries to update S to point to the new node using compare-and-swap (cas) until it succeeds.

To verify linearizability, we first locate the LP in the code. The LP of push(v) is at the cas statement when it succeeds (line 7). That is, the successful cas can correspond to the abstract atomic PUSH(v) operation: Stk := v::Stk; and all the other concrete steps cannot. Here we simply represent the abstract stack Stk as a sequence of values with "::" for concatenation. Then push(v) can be linearized at the successful cas since it is the single point where the operation takes effect.
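To make the abstract side of this correspondence concrete, the following is a minimal executable model of the atomic stack specification. It is our own illustrative sketch rather than anything defined in the paper: the Python rendering, the class name AbstractStack, the list representation of Stk, and the lock used to model spec-level atomicity are assumptions introduced here; only the idea that PUSH(v) is the single atomic step Stk := v::Stk comes from the text above.

    import threading

    class AbstractStack:
        """Abstract stack object: every operation is one atomic state change."""
        def __init__(self):
            self._stk = []                 # Stk, most recent element first
            self._lock = threading.Lock()  # models the atomicity of the spec

        def PUSH(self, v):
            with self._lock:               # the whole operation is a single step
                self._stk = [v] + self._stk    # Stk := v::Stk

        def POP(self):
            with self._lock:
                if not self._stk:
                    return None            # stack is empty
                v, self._stk = self._stk[0], self._stk[1:]
                return v

Linearizability of the Treiber stack then means that every concurrent history of its concrete push and pop can be explained as if each call performed exactly one such atomic PUSH or POP, at its LP.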
(a) Treiber Stack

  1   push(int v) {
  2     local x, t, b;
  3     x := new node(v);
  4     do {
  5       t := S;
  6       x.next := t;
  7       <b := cas(&S, t, x);
  7'      if (b) linself; >
  8     } while (!b);
  9   }

(b) HSY Elimination-Based Stack

  1   push(int v) {
  2     local p, him, q;
  3     p := new thrdDescriptor(cid, PUSH, v);
  4     while (true) {
  5       if (tryPush(v)) return;
  6       loc[cid] := p;
  7       him := rand(); q := loc[him];
  8       if (q != null && q.id = him && q.op = POP)
  9         if (cas(&loc[cid], p, null)) {
 10           <b := cas(&loc[him], q, p);
 10'          if (b) { lin(cid); lin(him); } >
 11           if (b) return; }
 12       ...
 13   } }

(c) Pair Snapshot

  1   readPair(int i, j) {
  2     local a, b, v, w;
  3     while (true) {
  4       <a := m[i].d; v := m[i].v;>
  5       <b := m[j].d; w := m[j].v;
  5'      trylinself; >
  6       if (v = m[i].v) {
  6'        commit(cid ⇝ (end, (a, b)));
  7         return (a, b); }
  8   }}
  9   write(int i, d) {
 10     <m[i].d := d; m[i].v++;> }

Figure 1. LPs and Instrumented Auxiliary Commands

We can encode the above reasoning in an existing (unary) concurrent program logic, such as Rely-Guarantee reasoning [17] and CSL [24]. Inspired by Vafeiadis [32], we embed the abstract operation γ and the abstract state θ as auxiliary states on the concrete side, so the program state now becomes (σ, (γ, θ)), where σ is the original concrete state. Then we instrument the concrete implementation with an auxiliary command linself (shorthand for "linearize self") at the LP to update the auxiliary state. Intuitively, linself will execute the abstract operation γ over the abstract state θ, as described in the following operational semantics rule:

                  (γ, θ) ⇝ (end, θ′)
  ──────────────────────────────────────────────────
  (linself, (σ, (γ, θ))) −→ (skip, (σ, (end, θ′)))

Here ⇝ encodes the transition of γ at the abstract level, and end is a termination marker. We insert linself into the same atomic block with the concrete statement at the LP, such as line 7' in Fig. 1(a), so that the concrete and abstract sides are executed simultaneously. Here the atomic block <C> means C is executed atomically. Then we reason about the instrumented code using a traditional concurrent logic extended with a new inference rule for linself.

The idea is intuitive, but it cannot handle more advanced algorithms with non-fixed LPs, including the algorithms with the helping mechanism and those whose locations of LPs depend on the future interleavings. Below we analyze the two challenges in detail and explain our solutions using two representative algorithms, the HSY stack and the pair snapshot.

2.2 Support Helping Mechanism with Pending Thread Pool

HSY elimination-based stack [14] is a typical example using the helping mechanism. Figure 1(b) shows part of its push method implementation. The basic idea behind the algorithm is to let a push and a pop cancel out each other.

At the beginning of the method in Fig. 1(b), the thread allocates its thread descriptor (line 3), which contains the thread id, the name of the operation (PUSH), and its argument (v).
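To summarize the instrumentation informally, the following executable sketch models the instrumented program state and the auxiliary commands. It is our own model under explicit assumptions, not the paper's formal semantics (which is the linself rule given earlier): the class Instrumented, the method invoke, the sentinel END, and the use of a Python dict for the per-thread pending operations are all invented here for illustration. The lin(tid) method also hints at the helping case, where lin(cid) and lin(him) at line 10' of Fig. 1(b) linearize both the current thread and its partner inside one atomic block.

    END = 'end'   # termination marker for a linearized operation

    class Instrumented:
        def __init__(self, sigma, theta):
            self.sigma = sigma    # concrete state (e.g., the linked list)
            self.theta = theta    # abstract state (e.g., the sequence Stk)
            self.pending = {}     # thread id -> pending abstract operation, or END

        def invoke(self, tid, gamma):
            # A method invocation installs its abstract operation as pending.
            self.pending[tid] = gamma

        def lin(self, tid):
            # Run thread tid's abstract operation on theta and mark it finished,
            # mirroring (gamma, theta) ~> (end, theta') in the linself rule.
            # linself corresponds to lin(cid) for the current thread; lin(him)
            # is the helping case of Fig. 1(b), performed in the same atomic
            # block as the successful cas.
            gamma = self.pending[tid]
            assert gamma is not END, "operation already linearized"
            self.theta = gamma(self.theta)
            self.pending[tid] = END

    # Tiny usage example: thread 1 is pushing 42; its abstract PUSH is Stk := v::Stk.
    s = Instrumented(sigma=None, theta=[])
    s.invoke(1, lambda stk: [42] + stk)
    s.lin(1)                      # performed at the LP, e.g. the successful cas
    assert s.theta == [42] and s.pending[1] is END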