(MName)  f ∈ String
(Expr)   E ::= x | n | E + E | ...
(BExp)   B ::= true | false | E = E | !B | ...
(Instr)  c ::= x := E | x := [E] | [E] := E | print(E)
             | x := cons(E, ..., E) | dispose(E) | ...
(Stmt)   C ::= skip | c | x := f(E) | return E | noret
             | ⟨C⟩ | C; C | if (B) C else C | while (B){ C }
(Prog)   W ::= skip | let Π in C ‖ ... ‖ C
(ODecl)  Π ::= {f1 ↪ (x1, C1), ..., fn ↪ (xn, Cn)}

Figure 3. Syntax of the Programming Language
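To make the grammar of Fig. 3 concrete, here is a minimal sketch (ours, not from the paper) of the syntax rendered as OCaml datatypes; the constructor names are our own choices, and the elided "..." productions are dropped.

```ocaml
(* A sketch of the syntax of Fig. 3 as OCaml datatypes.
   Constructor names are ours; the "..." productions are elided. *)
type expr =
  | Var of string                    (* x *)
  | Num of int                       (* n *)
  | Add of expr * expr               (* E + E *)

type bexp =
  | True
  | False
  | Eq of expr * expr                (* E = E *)
  | Not of bexp                      (* !B *)

type instr =
  | Assign of string * expr          (* x := E *)
  | Load of string * expr            (* x := [E] *)
  | Store of expr * expr             (* [E] := E *)
  | Print of expr                    (* print(E) *)
  | Cons of string * expr list       (* x := cons(E, ..., E) *)
  | Dispose of expr                  (* dispose(E) *)

type stmt =
  | Skip
  | Instr of instr
  | Call of string * string * expr   (* x := f(E): result var, method, argument *)
  | Return of expr                   (* return E *)
  | Noret                            (* runtime-only command *)
  | Atomic of stmt                   (* <C>, the atomic block *)
  | Seq of stmt * stmt               (* C; C *)
  | If of bexp * stmt * stmt
  | While of bexp * stmt

(* A method is a pair (formal argument, body); an object declaration
   maps method names to methods; a program runs n client threads. *)
type odecl = (string * (string * stmt)) list
type prog = ProgSkip | Let of odecl * stmt list
```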
The new simulation is shown in Fig. 2(b). We can see that a concrete step of thread t could help linearize the operation of t′ in the pending thread pool as well as its own operation. Thus the new simulation intuitively supports the helping mechanism.

As forward simulations, neither of the simulations in Fig. 2(a) and (b) supports future-dependent LPs. For each step along the concrete execution in those simulations, we need to decide immediately whether the step is at the LP, and cannot postpone the decision to the future. As discussed extensively in previous work (e.g., [1, 3, 6, 21]), we have to introduce backward simulations or hybrid simulations to support future-dependent LPs. Here we exploit the speculation idea and develop a forward-backward simulation [21]. As shown in Fig. 2(c), we keep both speculations after the potential LP, where the higher black nodes result from executing the abstract operation and the lower white nodes record the original abstract configuration. Then at the validation step we commit to the correct branch.

Finally, to ensure linearizability, the thread-local simulation has to be compositional. As a counterexample, we can construct a simple simulation (like the one in Fig. 2(a)) between the following implementation C and the abstract atomic increment operation γ, but C is not linearizable w.r.t. γ.

    C : local t; t := x; x := t + 1;        γ : x++

The reason is that the simple simulation is not compositional w.r.t. parallel compositions: for instance, two parallel copies of C may both read x = 0 and both write 1, losing an increment, which no sequential interleaving of two atomic x++ operations can produce. To address this problem, we proposed a compositional simulation RGSim [19] in previous work. The idea is to parameterize the simple simulation with the interference from the environment, in the form of rely/guarantee conditions (R and G) [17]. RGSim says that the concrete executions are simulated by the abstract executions under interference from the environment R, and all the related state transitions of the thread being verified should satisfy G. For parallel composition, we check that the guarantee G of each thread is permitted in the rely R of the other. Then the simulation becomes compositional and can ensure linearizability.

We combine the above ideas and develop a new compositional simulation with support for non-fixed LPs as the meta-theory of our logic. We will discuss our simulation formally in Sec. 5.

3. Basic Technical Settings and Linearizability

In this section, we formalize linearizability of an object implementation w.r.t. its specification, and show that linearizability is equivalent to contextual refinement.

3.1 Language and Semantics

As shown in Fig. 3, a program W contains several client threads in parallel, each of which could call the methods declared in the object Π. A method is defined as a pair (x, C), where x is the formal argument and C is the method body. For simplicity, we assume there is only one object in W and each method takes one argument only, but it is easy to extend our work with multiple objects and arguments.

(ThrdID)   t ∈ Nat
(Mem)      σ ∈ (PVar ∪ Nat) ⇀ Int
(CallStk)  κ ::= (σl, x, C) | ◦
(ThrdPool) K ::= {t1 ↪ κ1, ..., tn ↪ κn}
(PState)   S ::= (σc, σo, K)
(LState)   s ::= (σc, σo, κ)
(Evt)      e ::= (t, f, n) | (t, ok, n) | (t, obj, abort)
               | (t, out, n) | (t, clt, abort)
(ETrace)   H ::= ϵ | e::H

Figure 4. States and Event Traces

We use a runtime command noret to abort methods that terminate but do not execute return E. It is automatically appended to the method code and is not supposed to be used by programmers. Other commands are mostly standard. Clients can use print(E) to produce observable external events. We do not allow the object's methods to produce external events. To simplify the semantics, we also assume there are no nested method calls.

Figure 4 gives the model of program states. Here we partition a global state S into the client memory σc, the object memory σo, and a thread pool K. A client can only access the client memory σc, unless it calls object methods. The thread pool maps each thread id t to its local call stack frame. A call stack κ could be either empty (◦) when the thread is not executing a method, or a triple (σl, x, C), where σl maps the method's formal argument and local variables (if any) to their values, x is the caller's variable to receive the return value, and C is the caller's remaining code to be executed after the method returns. To give a thread-local semantics, we also define the thread-local view s of the state.

Figure 5 gives selected rules of the operational semantics. We show three kinds of transitions: −→ for top-level program transitions, −→t,Π for the transitions of thread t with the methods' declaration Π, and −→t for the steps inside method calls of thread t. To describe the operational semantics for threads, we use an execution context E:

(ExecContext) E ::= [ ] | E; C

The hole [ ] shows the place where the execution of code occurs. E[C] represents the code resulting from placing C into the hole.
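As a small illustration (ours, building on the stmt type sketched earlier), an execution context can be encoded as the list of pending continuations, with decompose finding the active redex and plug computing E[C]:

```ocaml
(* A sketch (ours) of execution contexts E ::= [ ] | E; C over the
   stmt type from the earlier sketch: a context is the list of
   pending continuations, innermost first. *)
type ectx = stmt list

(* decompose C splits C into the active redex and its context, e.g.
   decompose (Seq (Seq (a, b), c)) = (a, [b; c]). *)
let rec decompose (c : stmt) : stmt * ectx =
  match c with
  | Seq (c1, c2) ->
      let redex, e = decompose c1 in
      (redex, e @ [c2])
  | _ -> (c, [])

(* plug c e computes E[c], re-attaching the pending continuations,
   so plug a [b; c] rebuilds Seq (Seq (a, b), c). *)
let plug (c : stmt) (e : ectx) : stmt =
  List.fold_left (fun acc k -> Seq (acc, k)) c e
```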
We label transitions with events e, defined in Fig. 4. An event could be a method invocation (t, f, n) or return (t, ok, n), a fault (t, obj, abort) produced by the object method code, an output (t, out, n) generated by print(E), or a fault (t, clt, abort) from the client code. The first two events are called object events, and the last two are observable external events. The third one, (t, obj, abort), belongs to both classes. An event trace H is then defined as a finite sequence of events.

3.2 Object Specification and Linearizability

Next we formalize the object specification Γ, which maps method names to their abstract operations γ, as shown in Fig. 6. γ transforms an argument value and an initial abstract object to a return value with a resulting abstract object in a single step. It specifies the intended sequential behaviors of the method. The abstract object representation θ is defined as a mapping from program variables to abstract values. We leave the abstract values unspecified here; they can be instantiated by programmers.

Then we give an abstract version of programs W, where clients interact with the abstract object specification Γ instead of its implementation Π. The semantics is almost the same as that of the concrete language in Sec. 3.1, except that the abstract atomic operation γ is executed when the method is called, and it now operates over the abstract object θ instead of over the concrete one σo.
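For concreteness, the following is a minimal sketch (ours; the name inc and the instantiation of abstract values as integers are assumptions) of a specification Γ in the sense of Sec. 3.2, with the atomic increment x++ from the earlier counterexample as its single abstract operation γ:

```ocaml
(* A sketch (ours) of the object specification of Sec. 3.2, with
   abstract values instantiated as integers for illustration. *)
module AbsObj = Map.Make (String)

(* theta: abstract object, mapping program variables to values *)
type absobj = int AbsObj.t

(* gamma: (argument, initial abstract object) ->
          (return value, resulting abstract object), in one step;
   None models an undefined abstract step. *)
type gamma = int * absobj -> (int * absobj) option

(* The atomic increment "x++" used as gamma in the counterexample:
   it ignores its argument and returns the old value of x. *)
let inc : gamma = fun (_, theta) ->
  match AbsObj.find_opt "x" theta with
  | Some v -> Some (v, AbsObj.add "x" (v + 1) theta)
  | None -> None

(* Gamma maps method names to their abstract operations. *)
type spec = (string * gamma) list
let counter_spec : spec = [ ("inc", inc) ]
```

Here γ is partial, matching its role as a single-step transformer: the abstract step is simply undefined on abstract objects where the operation makes no sense.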