An Introduction to Separation Logic
(Preliminary Draft)

John C. Reynolds
Computer Science Department
Carnegie Mellon University

© John C. Reynolds 2008

ITU University, Copenhagen, October 20–22, 2008; corrected October 23
Chapter 1

An Overview

Separation logic is a novel system for reasoning about imperative programs. It extends Hoare logic with enriched assertions that can describe the separation of storage and other resources concisely. The original goal of the logic was to facilitate reasoning about shared mutable data structures, i.e., structures where updatable fields can be referenced from more than one point. More recently, the logic has been extended to deal with shared-variable concurrency and information hiding, and the notion of separation has proven applicable to a wider conceptual range, where access to memory is replaced by permission to exercise capabilities, or by knowledge of structure. In a few years, the logic has become a significant research area, with a growing literature produced by a variety of researchers.

1.1 An Example of the Problem

The use of shared mutable data structures is widespread in areas as diverse as systems programming and artificial intelligence. Approaches to reasoning about this technique have been studied for three decades, but the result has been methods that suffer from either limited applicability or extreme complexity, and scale poorly to programs of even moderate size.

For conventional logics, the problem with sharing is that it is the default in the logic, while nonsharing is the default in programming, so that declaring all of the instances where sharing does not occur (or at least those instances necessary for correctness) can be extremely tedious.

For example, consider the following program, which performs an in-place
reversal of a list:

    LREV  ≝  j := nil ; while i ≠ nil do (k := [i + 1] ; [i + 1] := j ; j := i ; i := k)

(Here the notation [e] denotes the contents of the storage at address e.)

The invariant of this program must state that i and j are lists representing two sequences α and β such that the reflection of the initial value α₀ can be obtained by concatenating the reflection of α onto β:

$$\exists\alpha,\beta.\ \mathrm{list}\ \alpha\ i \,\wedge\, \mathrm{list}\ \beta\ j \,\wedge\, \alpha_0^\dagger = \alpha^\dagger{\cdot}\beta,$$

where the predicate list α i is defined by induction on the length of α:

$$\mathrm{list}\ \epsilon\ i \;\stackrel{\mathrm{def}}{=}\; i = \mathrm{nil}
\qquad
\mathrm{list}\,(a{\cdot}\alpha)\ i \;\stackrel{\mathrm{def}}{=}\; \exists j.\ i \hookrightarrow a, j \,\wedge\, \mathrm{list}\ \alpha\ j$$

(and $\hookrightarrow$ can be read as "points to").

Unfortunately, however, this is not enough, since the program will malfunction if there is any sharing between the lists i and j. To prohibit this we must extend the invariant to assert that only nil is reachable from both i and j:

$$(\exists\alpha,\beta.\ \mathrm{list}\ \alpha\ i \,\wedge\, \mathrm{list}\ \beta\ j \,\wedge\, \alpha_0^\dagger = \alpha^\dagger{\cdot}\beta)
\;\wedge\; (\forall k.\ \mathrm{reachable}(i,k) \wedge \mathrm{reachable}(j,k) \Rightarrow k = \mathrm{nil}), \tag{1.1}$$

where

$$\begin{aligned}
\mathrm{reachable}(i,j) &\;\stackrel{\mathrm{def}}{=}\; \exists n \geq 0.\ \mathrm{reachable}_n(i,j) \\
\mathrm{reachable}_0(i,j) &\;\stackrel{\mathrm{def}}{=}\; i = j \\
\mathrm{reachable}_{n+1}(i,j) &\;\stackrel{\mathrm{def}}{=}\; \exists a, k.\ i \hookrightarrow a, k \,\wedge\, \mathrm{reachable}_n(k,j).
\end{aligned}$$

Even worse, suppose there is some other list x, representing a sequence γ, that is not supposed to be affected by the execution of our program. Then it must not share with either i or j, so that the invariant becomes

$$\begin{aligned}
&(\exists\alpha,\beta.\ \mathrm{list}\ \alpha\ i \,\wedge\, \mathrm{list}\ \beta\ j \,\wedge\, \alpha_0^\dagger = \alpha^\dagger{\cdot}\beta) \\
&\wedge\ (\forall k.\ \mathrm{reachable}(i,k) \wedge \mathrm{reachable}(j,k) \Rightarrow k = \mathrm{nil}) \\
&\wedge\ \mathrm{list}\ \gamma\ x \\
&\wedge\ (\forall k.\ \mathrm{reachable}(x,k) \wedge (\mathrm{reachable}(i,k) \vee \mathrm{reachable}(j,k)) \Rightarrow k = \mathrm{nil}).
\end{aligned} \tag{1.2}$$
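Since LREV manipulates the heap directly, it can help to see it run. The following OCaml fragment is only an illustration, not part of the formal development: it models the heap as a hash table from integer addresses to integer values, represents nil by 0 (an assumption made purely for this sketch), and transliterates LREV command by command.

    (* A runnable model of LREV. A list cell at address i holds its element
       at i and its next-pointer at i + 1, matching the address-arithmetic
       reading of [i + 1]. Representing nil by 0 is an assumption of this
       sketch only. *)
    let nil = 0

    let lrev (heap : (int, int) Hashtbl.t) (i0 : int) : int =
      let i = ref i0 and j = ref nil in
      while !i <> nil do
        let k = Hashtbl.find heap (!i + 1) in  (* k := [i + 1]; a dangling
                                                  address raises Not_found *)
        Hashtbl.replace heap (!i + 1) !j;      (* [i + 1] := j *)
        j := !i;                               (* j := i *)
        i := k                                 (* i := k *)
      done;
      !j                                       (* j now heads the reversed list *)

    let () =
      let h = Hashtbl.create 16 in
      (* the list 1·2: a cell at 10 holds (1, next = 20),
         a cell at 20 holds (2, next = nil) *)
      Hashtbl.replace h 10 1; Hashtbl.replace h 11 20;
      Hashtbl.replace h 20 2; Hashtbl.replace h 21 nil;
      assert (lrev h 10 = 20)   (* the reversed list 2·1 starts at 20 *)

Notice that the sharing problem is invisible in such code: if the cells reachable from i overlap other live structure, the loop still executes, silently corrupting that structure, which is precisely why the invariant must make nonsharing explicit.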
Even in this trivial situation, where all sharing is prohibited, it is evident that this form of reasoning scales poorly.

In separation logic, however, this kind of difficulty can be avoided by using a novel logical operation P ∗ Q, called the separating conjunction, that asserts that P and Q hold for disjoint portions of the addressable storage. Since the prohibition of sharing is built into this operation, Invariant (1.1) can be written more succinctly as

$$(\exists\alpha,\beta.\ \mathrm{list}\ \alpha\ i \,*\, \mathrm{list}\ \beta\ j) \,\wedge\, \alpha_0^\dagger = \alpha^\dagger{\cdot}\beta, \tag{1.3}$$

and Invariant (1.2) as

$$(\exists\alpha,\beta.\ \mathrm{list}\ \alpha\ i \,*\, \mathrm{list}\ \beta\ j \,*\, \mathrm{list}\ \gamma\ x) \,\wedge\, \alpha_0^\dagger = \alpha^\dagger{\cdot}\beta. \tag{1.4}$$

A more general advantage is the support that separation logic gives to local reasoning, which underlies the scalability of the logic. For example, one can use (1.3) to prove a local specification:

$$\{\mathrm{list}\ \alpha\ i\}\ \mathrm{LREV}\ \{\mathrm{list}\ \alpha^\dagger\ j\}.$$

In separation logic, this specification implies not only that the program expects to find a list at i representing α, but also that this list is the only addressable storage touched by the execution of LREV (often called the footprint of LREV). If LREV is a part of a larger program that also manipulates some separate storage, say containing the list x, then one can use an inference rule due to O'Hearn, called the frame rule, to infer directly that the additional storage is unaffected by LREV:

$$\{\mathrm{list}\ \alpha\ i \,*\, \mathrm{list}\ \gamma\ x\}\ \mathrm{LREV}\ \{\mathrm{list}\ \alpha^\dagger\ j \,*\, \mathrm{list}\ \gamma\ x\},$$

thereby avoiding the extra complexity of Invariant (1.4).

In a realistic situation, of course, LREV might be a substantial subprogram, and the description of the separate storage might also be voluminous. Nevertheless, one can still reason locally about LREV, i.e., while ignoring the separate storage, and then scale up to the combined storage by using the frame rule.

There is little need for local reasoning in proving toy examples. But it provides scalability that is critical for more complex programs.
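For orientation, the frame rule itself has the following form (its standard formulation; the side condition is what makes the rule sound, since the store, unlike the heap, is not split by ∗):

$$\frac{\{p\}\ c\ \{q\}}{\{p \,*\, r\}\ c\ \{q \,*\, r\}}
\qquad\text{where no variable occurring free in } r \text{ is modified by } c.$$

In the application above, the frame r is list γ x; the side condition is satisfied because LREV modifies only the variables i, j, and k, none of which occurs free in the frame.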
1.2 Background

A partial bibliography of early work on reasoning about shared mutable data structure is given in Reference [1].

The central concept of separating conjunction is implicit in Burstall's early idea of a "distinct nonrepeating tree system" [2]. In lectures in the fall of 1999, the author described the concept explicitly, and embedded it in a flawed extension of Hoare logic [3, 4]. Soon thereafter, a sound intuitionistic version of the logic was discovered independently by Ishtiaq and O'Hearn [5] and by the author [1]. Realizing that this logic was an instance of the logic of bunched implications [6, 7], Ishtiaq and O'Hearn also introduced a separating implication P −∗ Q.

The intuitionistic character of this logic implied a monotonicity property: that an assertion true for some portion of the addressable storage would remain true for any extension of that portion, such as might be created by later storage allocation.

In their paper, however, Ishtiaq and O'Hearn also presented a classical version of the logic that does not impose this monotonicity property, and can therefore be used to reason about explicit storage deallocation; they showed that this version is more expressive than the intuitionistic logic, since the latter can be translated into the classical logic. O'Hearn also went on to devise the frame rule and illustrate its importance [8, 9, 10, 5].

Originally, in both the intuitionistic and classical versions of the logic, addresses were assumed to be disjoint from integers, and to refer to entire records rather than particular fields, so that address arithmetic was precluded. Later, the author generalized the logic to permit reasoning about unrestricted address arithmetic, by regarding addresses as integers which refer to individual fields [8, 11]. It is this form of the logic that will be described and used in most of these notes.

Since these logics are based on the idea that the structure of an assertion can describe the separation of storage into disjoint components, we have come to use the term separation logic, both for the extension of predicate calculus with the separation operators and for the resulting extension of Hoare logic.

At present, separation logic has been used to manually verify a variety of small programs, as well as a few that are large enough to demonstrate the potential of local reasoning for scalability [12, 10, 13, 14, 15]. In addition:
1. It has been shown that deciding the validity of an assertion in separation logic is not recursively enumerable, even when address arithmetic and the characteristic operations emp, ↦, ∗, and −∗ (but not ↪) are prohibited [16, 10]. On the other hand, it has also been shown that, if the characteristic operations are permitted but quantifiers are prohibited, then the validity of assertions is algorithmically decidable within the complexity class PSPACE [16].

2. An iterated form of separating conjunction has been introduced to reason about arrays [17].

3. The logic has been extended to procedures with global variables, where a "hypothetical frame rule" permits reasoning with information hiding [18, 19]. Recently, a further extension to higher-order procedures (in the sense of Algol-like languages) has been developed [20].

4. The logic has been integrated with data refinement [21, 22], and with object-oriented programming (i.e., with a subset of Java) [23, 24].

5. The logic has been extended to shared-variable concurrency with conditional critical regions, where one can reason about the transfer of ownership of storage from one process to another [25, 26]. Further extensions have been made to nonblocking algorithms [27] and to rely/guarantee reasoning [28].

6. In the context of proof-carrying code, separation logic has inspired work on proving run-time library code for dynamic allocation [29].

7. A decision procedure has been devised for a restricted form of the logic that is capable of shape analysis of lists [30].

8. Fractional permissions (in the sense of Boyland [31]) and counting permissions have been introduced so that one can permit several concurrent processes to have read-only access to an area of the heap [32]. This approach has also been applied to program variables [33].

9. Separation logic itself has been extended to a higher-order logic [34].

10. Separation logic has been implemented in Isabelle/HOL [15].
It should also be mentioned that separation logic is related to other recent logics that embody a notion of separation, such as spatial logics or ambient logic [35, 36, 37, 38, 39, 40].

1.3 The Programming Language

The programming language we will use is a low-level imperative language: specifically, the simple imperative language originally axiomatized by Hoare [3, 4], extended with new commands for the manipulation of mutable shared data structures:

    ⟨comm⟩ ::= · · ·
             | ⟨var⟩ := cons(⟨exp⟩, . . . , ⟨exp⟩)    allocation
             | ⟨var⟩ := [⟨exp⟩]                       lookup
             | [⟨exp⟩] := ⟨exp⟩                       mutation
             | dispose ⟨exp⟩                          deallocation

Memory management is explicit; there is no garbage collection. As we will see, any dereferencing of dangling addresses will cause a fault.

Semantically, we extend computational states to contain two components: a store (sometimes called a stack), mapping variables into values (as in the semantics of the unextended simple imperative language), and a heap, mapping addresses into values (and representing the mutable structures).

In the early versions of separation logic, integers, atoms, and addresses were regarded as distinct kinds of value, and heaps were mappings from finite sets of addresses to nonempty tuples of values:

$$\begin{aligned}
&\mathrm{Values} = \mathrm{Integers} \cup \mathrm{Atoms} \cup \mathrm{Addresses}, \text{ where Integers, Atoms, and Addresses are disjoint} \\
&\mathrm{nil} \in \mathrm{Atoms} \\
&\mathrm{Stores}_V = V \to \mathrm{Values} \\
&\mathrm{Heaps} = \textstyle\bigcup_{A \subseteq_{\mathrm{fin}} \mathrm{Addresses}} (A \to \mathrm{Values}^{+}) \\
&\mathrm{States}_V = \mathrm{Stores}_V \times \mathrm{Heaps},
\end{aligned}$$

where V is a finite set of variables.
(Actually, in most work using this kind of state, authors have imposed restricted formats on the records in the heap, to reflect the specific usage of the program they are specifying.)

To permit unrestricted address arithmetic, however, in the version of the logic used in most of these notes we will assume that all values are integers, an infinite number of which are addresses; we also assume that atoms are integers that are not addresses, and that heaps map addresses into single values:

$$\begin{aligned}
&\mathrm{Values} = \mathrm{Integers} \\
&\mathrm{Atoms} \cup \mathrm{Addresses} \subseteq \mathrm{Integers}, \text{ where Atoms and Addresses are disjoint} \\
&\mathrm{nil} \in \mathrm{Atoms} \\
&\mathrm{Stores}_V = V \to \mathrm{Values} \\
&\mathrm{Heaps} = \textstyle\bigcup_{A \subseteq_{\mathrm{fin}} \mathrm{Addresses}} (A \to \mathrm{Values}) \\
&\mathrm{States}_V = \mathrm{Stores}_V \times \mathrm{Heaps},
\end{aligned}$$

where V is a finite set of variables.

(To permit unlimited allocation of records of arbitrary size, we require that, for all n ≥ 0, the set of addresses must contain infinitely many consecutive sequences of length n. For instance, this will occur if only a finite number of positive integers are not addresses.)

Our intent is to capture the low-level character of machine language. One can think of the store as describing the contents of registers, and the heap as describing the contents of an addressable memory. This view is enhanced by assuming that each address is equipped with an "activity bit"; then the domain of the heap is the finite set of active addresses.

The semantics of ordinary and boolean expressions is the same as in the simple imperative language:

$$[\![\, e \in \langle\mathrm{exp}\rangle \,]\!]_{\mathrm{exp}} \in \Big( \textstyle\bigcup_{V \supseteq_{\mathrm{fin}} \mathrm{FV}(e)} \mathrm{Stores}_V \Big) \to \mathrm{Values}$$

$$[\![\, b \in \langle\mathrm{boolexp}\rangle \,]\!]_{\mathrm{bexp}} \in \Big( \textstyle\bigcup_{V \supseteq_{\mathrm{fin}} \mathrm{FV}(b)} \mathrm{Stores}_V \Big) \to \{\mathrm{true}, \mathrm{false}\}$$

(where FV(p) is the set of variables occurring free in the phrase p).
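As a concrete aside (an illustration only, not part of the formal development), these definitions transcribe almost directly into OCaml, with finite maps standing in for functions with finite domains. The choices of nil and of which integers are addresses below are assumptions of the sketch; note that the expression evaluator at the end consumes only a store, mirroring the signatures just given.

    (* A sketch of States_V = Stores_V x Heaps, with all values integers. *)
    module VarMap  = Map.Make (String)  (* stores: variables to values *)
    module AddrMap = Map.Make (Int)     (* heaps: active addresses to values *)

    type value = int                    (* Values = Integers *)
    type store = value VarMap.t         (* Stores_V, for a finite V *)
    type heap  = value AddrMap.t        (* domain = finite set of active addresses *)
    type state = store * heap           (* States_V *)

    (* Assumptions of this sketch only: nil is 0, and every positive integer
       is an address, so arbitrarily long runs of consecutive inactive
       addresses always exist, as the allocation requirement demands. *)
    let nil : value = 0
    let is_addr (n : value) : bool = n > 0

    (* Expressions are evaluated relative to the store alone; the heap
       plays no role. *)
    type exp =
      | Const of value
      | Var of string
      | Plus of exp * exp               (* other pure operators elided *)

    let rec eval (s : store) (e : exp) : value =
      match e with
      | Const n -> n
      | Var x -> VarMap.find x s
      | Plus (e1, e2) -> eval s e1 + eval s e2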
In particular, expressions do not depend upon the heap, so that they are always well-defined and never cause side-effects.

Thus expressions do not contain notations, such as cons or [−], that refer to the heap; instead these notations are part of the structure of commands (and thus cannot be nested). It follows that none of the new heap-manipulating commands are instances of the simple assignment command ⟨var⟩ := ⟨exp⟩ (even though we write the allocation, lookup, and mutation commands with the familiar operator :=). In fact, these commands will not obey Hoare's inference rule for assignment. However, since they alter the store at the variable v, we will say that the commands v := cons(· · ·) and v := [e], as well as v := e (but not [v] := e or dispose v), modify v.

Our strict avoidance of side-effects in expressions will allow us to use them in assertions with the same freedom as in ordinary mathematics. This will substantially simplify the logic, at the expense of occasional complications in programs.

The semantics of the new commands is simple enough to be conveyed by example. If we begin with a state where the store maps the variables x and y into three and four, and the heap is empty, then the typical effect of each kind of heap-manipulating command is:

                            Store: x: 3, y: 4      Heap: empty
    Allocation      x := cons(1, 2)
                            Store: x: 37, y: 4     Heap: 37: 1, 38: 2
    Lookup          y := [x]
                            Store: x: 37, y: 1     Heap: 37: 1, 38: 2
    Mutation        [x + 1] := 3
                            Store: x: 37, y: 1     Heap: 37: 1, 38: 3
    Deallocation    dispose(x + 1)
                            Store: x: 37, y: 1     Heap: 37: 1

The allocation operation cons(e1, . . . , en) activates and initializes n cells in the heap. It is important to notice that, aside from the requirement that the addresses of these cells be consecutive and previously inactive, the choice of addresses is indeterminate.
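The heap-manipulating commands can be modeled in the same OCaml sketch (the basic definitions are repeated so the fragment stands alone). A Fault exception anticipates the abort configuration discussed next, and scanning upward from address 1 is merely one way of resolving the indeterminate choice of fresh addresses.

    module AddrMap = Map.Make (Int)
    type value = int
    type heap = value AddrMap.t

    exception Fault  (* plays the role of the terminal configuration abort *)

    (* v := [e] : lookup faults if the address is inactive *)
    let lookup (h : heap) (a : value) : value =
      match AddrMap.find_opt a h with Some v -> v | None -> raise Fault

    (* [e] := e' : mutation requires an active address *)
    let mutate (h : heap) (a : value) (v : value) : heap =
      if AddrMap.mem a h then AddrMap.add a v h else raise Fault

    (* dispose e : deallocation also requires an active address *)
    let dispose (h : heap) (a : value) : heap =
      if AddrMap.mem a h then AddrMap.remove a h else raise Fault

    (* v := cons(e1, ..., en) : activate and initialize n consecutive,
       previously inactive cells *)
    let cons (h : heap) (vs : value list) : value * heap =
      let n = List.length vs in
      (* find the start of a run of n consecutive inactive addresses *)
      let rec free_from a k =
        if k = n then a
        else if AddrMap.mem (a + k) h then free_from (a + k + 1) 0
        else free_from a (k + 1)
      in
      let a = free_from 1 0 in
      let h', _ =
        List.fold_left (fun (h, i) v -> (AddrMap.add i v h, i + 1)) (h, a) vs
      in
      (a, h')

    let () =
      let h0 = AddrMap.empty in
      let x, h1 = cons h0 [1; 2] in      (* x := cons(1, 2); here x = 1 *)
      let y = lookup h1 x in             (* y := [x] *)
      assert (y = 1);
      let h2 = mutate h1 (x + 1) 3 in    (* [x + 1] := 3 *)
      let _h3 = dispose h2 (x + 1) in    (* dispose(x + 1) *)
      (* [x + 2] := 3 faults: x + 2 was never activated *)
      assert (try ignore (mutate h2 (x + 2) 3); false with Fault -> true)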
The remaining operations, for mutation, lookup, and deallocation, all cause memory faults (denoted by the terminal configuration abort) if an inactive address is dereferenced or deallocated. For example:

                            Store: x: 3, y: 4      Heap: empty
    Allocation      x := cons(1, 2)
                            Store: x: 37, y: 4     Heap: 37: 1, 38: 2
    Lookup          y := [x]
                            Store: x: 37, y: 1     Heap: 37: 1, 38: 2
    Mutation        [x + 2] := 3
                            abort

1.4 Assertions

As in Hoare logic, assertions describe states, but now states contain heaps as well as stores. Thus, in addition to the usual operations and quantifiers of predicate logic, we have four new forms of assertion that describe the heap:

  • emp (empty heap)
    The heap is empty.

  • e ↦ e′ (singleton heap)
    The heap contains one cell, at address e with contents e′.

  • p1 ∗ p2 (separating conjunction)
    The heap can be split into two disjoint parts such that p1 holds for one part and p2 holds for the other.

  • p1 −∗ p2 (separating implication)
    If the heap is extended with a disjoint part in which p1 holds, then p2 holds for the extended heap.

It is also useful to introduce abbreviations for asserting that an address e is active:

$$e \mapsto - \;\stackrel{\mathrm{def}}{=}\; \exists x'.\ e \mapsto x' \quad \text{where } x' \text{ not free in } e$$
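To see how the separating conjunction differs from ordinary conjunction, consider a standard contrast. The assertion

$$x \mapsto 1 \;*\; y \mapsto 2$$

holds for a heap consisting of exactly two distinct cells, one at address x containing 1 and one at address y containing 2, whereas

$$x \mapsto 1 \,\wedge\, y \mapsto 2$$

is unsatisfiable: each conjunct asserts that the heap consists of a single cell, so both can hold of the same heap only if x and y are the same address whose contents is both 1 and 2.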