Improving the Compositionality of Separation Algebras
Aquinas Hobor⋆
National University of Singapore
Abstract. We show how to improve one of the constructors of separation algebras to increase modularity and expressibility. Our technique has advantages for
both paper and mechanized proofs. Our results are implemented in Coq.
1 Introduction
Separation algebras are mathematical structures that track resource accounting and are
most commonly used in semantic models of separation logic [1]. Dockins et al. showed
that new separation algebras could be constructed from old ones by using some of the
standard constructors of category theory [2]. This modular approach is extremely practical, especially in a machine-checked setting, and often provides theoretical insight.
However, one of the common constructors presented by Dockins, the so-called “lift”
operator, is not as modular as one could hope for because it limits the expressibility of
the composed objects in an inconvenient way. Moreover, its implementation in Coq
leads to undue hassle due to an awkwardly-placed dependent type.
We present a new pair of constructors whose composition is the lift operator but
that allow for greater modularity “in the middle”. Our key insight is that while the
intermediate structure is not a separation algebra, it is a well-behaved structure in its
own right, and by utilizing this structure directly we can allow for greater expressibility
than we can by skipping over it. In addition, the new structure puts the dependent type
in a much more convenient location, allowing for a much smoother mechanization.
2 Separation Algebras
There are several related definitions of separation algebras; here we use a variant called
a disjoint multi-unit separation algebra (DSA) [2]. Briefly, a DSA is a set S and an
associated three-place partial join relation ⊕, written x ⊕ y = z, such that:
(a) Functional: x ⊕ y = z₁ ⇒ x ⊕ y = z₂ ⇒ z₁ = z₂
(b) Commutative: x ⊕ y = z ⇒ y ⊕ x = z
(c) Associative: x ⊕ (y ⊕ z) = (x ⊕ y) ⊕ z
(d) Cancellative: x₁ ⊕ y = z ⇒ x₂ ⊕ y = z ⇒ x₁ = x₂
(e) Multiple units: ∀x. ∃uₓ. x ⊕ uₓ = x
(f) Disjointness: x ⊕ x = y ⇒ x = y
We say that x ∈ S is an identity if ∀y, z. x ⊕ y = z ⇒ y = z. Observe that “identity”
is much stronger than “unit” as given in axiom (e): a unit uₓ need only behave as an
identity for the associated element x; the axiom says nothing about how uₓ behaves
with other elements. However, as it happens, the axioms together imply that all units
are identities. Thus, x is an identity if and only if x ⊕ x = x.
⋆ Supported by a Lee Kuan Yew Postdoctoral Fellowship.
The elements in S can be partitioned into equivalence classes indexed by the identity
elements. For each identity element i, two elements a and b are in the same equivalence
class if a ⊕ i and b ⊕ i are both defined. Within an equivalence class there is only a single
identity; however, since the ⊕ operation is partial, multiple identities are possible for
the set S as a whole. Elements from distinct equivalence classes never join together.
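To make the axioms concrete, here is a small Python sketch (ours, not drawn from the paper or its Coq development) of a classic DSA: heaplets as finite partial maps joined by disjoint union. The partial join is encoded as a function returning None when undefined, and the axioms are checked exhaustively on a small carrier.

```python
from itertools import product

# A classic DSA: heaplets as finite partial maps (Python dicts),
# joined by disjoint union; None encodes "undefined" (overlapping domains).
def join(x, y):
    if set(x) & set(y):
        return None
    return {**x, **y}

# x is an identity iff x (+) x = x; here, exactly the empty heaplet.
def is_identity(x):
    return join(x, x) == x

H = [{}, {1: 'a'}, {2: 'b'}, {1: 'a', 2: 'b'}]

for x, y in product(H, repeat=2):
    # (b) Commutative: x (+) y = z implies y (+) x = z.
    assert join(x, y) == join(y, x)
    # (f) Disjointness: x (+) x = y forces x = y.
    if join(x, x) is not None:
        assert join(x, x) == x

for x1, x2, y in product(H, repeat=3):
    # (d) Cancellative: x1 (+) y = z and x2 (+) y = z imply x1 = x2.
    if join(x1, y) is not None and join(x1, y) == join(x2, y):
        assert x1 == x2

# (e) Multiple units: the empty heaplet is a unit for every element.
assert all(join(x, {}) == x for x in H)
assert [x for x in H if is_identity(x)] == [{}]
```

Note that this particular DSA has a single identity; the multi-unit axiom (e) still holds because each element has *a* unit.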
3 Constructors Over Separation Algebras
Dockins develops a series of standard constructions for building DSAs. One simple
construction is the discrete DSA, where any set S is given DSA structure by defining
s₁ ⊕₌ s₂ = s₃  ⟺  s₁ = s₂ = s₃  (1)
In the discrete DSA, every element is an identity, and no distinct elements join together.
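As a one-line sketch (our encoding, with None for “undefined”):

```python
# Discrete DSA on any set: s1 (+) s2 = s3 iff s1 = s2 = s3;
# None encodes "undefined".
def discrete_join(s1, s2):
    return s1 if s1 == s2 else None

# Every element is an identity (x (+) x = x); distinct elements never join.
assert discrete_join('v', 'v') == 'v'
assert discrete_join('v', 'w') is None
```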
More powerful is the ability to build complicated DSAs from simpler DSAs, including products, sums, functions, etc. Most of these constructions define the joining
operation in the “obvious” way. For example, if (A, ⊕A ) and (B, ⊕B ) are two DSAs,
then the join function for the product DSA on A × B is defined componentwise:
(a₁, b₁) ⊕A×B (a₂, b₂) = (a₃, b₃)  ⟺  (a₁ ⊕A a₂ = a₃) ∧ (b₁ ⊕B b₂ = b₃)  (2)
Similarly, the function DSA from a set A to a DSA (B, ⊕B ) is defined pointwise:
f ⊕A→B g = h  ⟺  ∀a. f(a) ⊕B g(a) = h(a)  (3)
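Both constructions can be sketched as higher-order functions over component joins (our encoding; partial addition on [0, 1] is used as a sample component join, purely for illustration):

```python
from fractions import Fraction

# Product DSA: join componentwise; undefined if either component join is.
def product_join(joinA, joinB):
    def j(p, q):
        a, b = joinA(p[0], q[0]), joinB(p[1], q[1])
        return None if a is None or b is None else (a, b)
    return j

# Function DSA over a finite domain: join pointwise; an undefined join
# at any point makes the whole join undefined.
def function_join(joinB, domain):
    def j(f, g):
        h = {a: joinB(f[a], g[a]) for a in domain}
        return None if None in h.values() else h
    return j

def share_join(x, y):  # sample component join: partial addition on [0, 1]
    z = x + y
    return z if z <= 1 else None

half, quarter = Fraction(1, 2), Fraction(1, 4)
pj = product_join(share_join, share_join)
assert pj((half, quarter), (half, quarter)) == (Fraction(1), half)

fj = function_join(share_join, domain=[0, 1])
assert fj({0: half, 1: quarter}, {0: half, 1: half}) == {0: Fraction(1), 1: Fraction(3, 4)}
```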
Dockins also defines a “lift” operator, which is a kind of coercion between multi-unit separation algebras and single-unit separation algebras. Hereafter we will call this
the “smash” operator because of its relationship to the “smash product” of category
theory. Starting from a DSA (A, ⊕) one first constructs the related set A+ as follows:
A+ = {a ∈ A | ¬identity a}  (4)
That is, A+ is obtained from A by removing all the identity elements over ⊕. One then
adds some distinguished element ⊥ ∉ A+ to reach the set A+⊥ and defines the join
relation ⊕⊥ as the least relation satisfying the following rules:
1. ⊥ ⊕⊥ a = a
2. a ⊕⊥ ⊥ = a
3. a₁ ⊕ a₂ = a₃ ⇒ a₁ ⊕⊥ a₂ = a₃  (5)
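A sketch of the smash construction (our encoding: None is “undefined”, and the string '⊥' stands in for the fresh bottom element; the heaplet example is illustrative, with A+ being the nonempty heaplets):

```python
BOT = '⊥'  # fresh distinguished element, assumed not in A+

def smash_join(join):
    # Join on A+ with bottom adjoined, given the underlying DSA join on A.
    def j(x, y):
        if x == BOT:
            return y        # rule 1: bot (+)_bot a = a
        if y == BOT:
            return x        # rule 2: a (+)_bot bot = a
        return join(x, y)   # rule 3: inherited from (+) on A+
    return j

# Example with heaplets under disjoint union; A+ is the nonempty heaplets.
def heaplet_join(x, y):
    return None if set(x) & set(y) else {**x, **y}

j = smash_join(heaplet_join)
assert j(BOT, {1: 'a'}) == {1: 'a'}
assert j({1: 'a'}, {2: 'b'}) == {1: 'a', 2: 'b'}
assert j({1: 'a'}, {1: 'c'}) is None
```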
One common use for the smash operator is to construct a separation algebra for
resource sharing in a concurrent language as follows. We start with a share model: a
separation algebra (S, ⊕S ) that models what kinds of sharing we would like to support.
One simple share model defines S = {q | 0 ≤ q ≤ 1} and ⊕S as partial addition (i.e.,
undefined when the sum is greater than 1).¹ In a concurrent setting, a share of 1 (of
some resource) will denote full ownership; 0 will denote no ownership; and 0 < x < 1
will denote partial ownership. Let L be a set of heap locations (addresses) and V be a
set of values in our operational semantics. Now we can define heaps H as follows:
H = L → (S × V=)⊥  (6)
This simple-looking equation describes both the type of heaps (functions from locations
to a pointed set containing pairs of shares and values) and how the join function should
be constructed: start with the function constructor (3) from L to the smashed (5) product
(2) of shares S and values V under the discrete construction (1). That is, each location
is associated with either the distinguished element ⊥, indicating that the thread has no
ownership, or with a (¬identity) pair of a share and a value. Using the constructors
guarantees that axioms (a)–(f) hold on heaps H without having to prove them directly.
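Equation (6) can be transcribed directly into a runnable sketch (our encoding; rational shares and string values are illustrative only, and heaps are assumed to share a common set of locations):

```python
from fractions import Fraction

BOT = '⊥'  # the distinguished bottom element of the smashed product

def share_join(x, y):               # partial addition on shares in [0, 1]
    z = x + y
    return z if z <= 1 else None

def cell_join(c, d):                # smashed product of share and discrete value
    if c == BOT:
        return d
    if d == BOT:
        return c
    s = share_join(c[0], d[0])
    v = c[1] if c[1] == d[1] else None   # discrete join on values
    return None if s is None or v is None else (s, v)

def heap_join(h1, h2):              # pointwise over a common set of locations
    h = {l: cell_join(h1[l], h2[l]) for l in h1}
    return None if None in h.values() else h

# Two threads each hold half of location 1 (agreeing on its value 'v');
# only the second thread owns location 2.
h1 = {1: (Fraction(1, 2), 'v'), 2: BOT}
h2 = {1: (Fraction(1, 2), 'v'), 2: (Fraction(1), 'w')}
assert heap_join(h1, h2) == {1: (Fraction(1), 'v'), 2: (Fraction(1), 'w')}
```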
4 The Problem with Smash
As elegant as equation (6) is, its form conceals some problems, particularly with regard to the smash operator. The first problem is the location of the side condition from
equation (4): that is, the side condition that (S × V=)+ contains no identity elements. A
quick reflection reveals that under the product DSA constructor a pair is not an identity
if and only if at least one of its components is not an identity, i.e.:
¬identity (a, b)  ⟺  (¬identity a) ∨ (¬identity b)  (7)
Since every element in V= is an identity, however, for (S × V=)+ we have the following:
¬identity (s, v)  ⟺  ¬identity s  (8)
In other words, in set theory (S × V=)+ equals S+ × V=. However, in type theory the
two are not quite equal, since the non-identity side condition is carried around via a
dependent (Σ-) type, and the definitions differ as to where this type is placed. That is,
(S × V=)+ = Σp : (S × V=). ¬identity p  (9)
is what one gets by applying the smash operator, instead of the preferable
S+ × V= = (Σs : S. ¬identity s) × V=  (10)
One reason to prefer equation (10) over equation (9) is that the dependent type is
“closer” to the object being restricted. Consider the operation of updating a heap cell:
one takes apart a share/value pair (s, v) and reconstitutes the new pair (s, v′). If we are
using equation (9), we discover a new proof obligation during reconstitution:
¬identity (s, v) ⇒ ¬identity (s, v′)  (11)
¹ Dockins develops more powerful kinds of share models, but this is sufficient here [2].
Of course, this obligation is not very hard to satisfy using equation (8), but it is inconvenient that it shows up at all. In fact, in a fully-formal development (mechanized
or paper), almost every use of heaps runs into similar irritating problems. Although it
may seem minor, these kinds of “stupid obligations” can take up a surprising amount of
effort: for example, in a recent proof development approximately 5% of the Coq code
(more than 500 lines, distributed over more than a hundred places in the development!)
was spent dealing with these (and related) obligations [3]. Needless to say, these were not
the most interesting parts of the mechanization effort.
Using equation (10), breaking apart an (s, v) pair is simpler since one does not end
up with an associated ¬identity (s, v) proof, and updating the value component of a pair
is simpler since the dependency is directly attached to the share, which is reconstituted
into the new pair unchanged. Only updating the share itself requires an update of the
associated proof—and even then, the value does not get involved.
Beyond the engineering concerns outlined above, there are good theoretical reasons
to prefer a style closer to equation (10). In the particular example given in equation (6),
the values V were only given the trivial discrete separation structure, but in general one
wants to be able to use richer separation structures. However, if one does give V a richer
structure, one runs into the problem that equation (8) no longer holds. Instead, even if
one’s intention is to restrict the shares to the nonidentity elements, one must return
to the more general disjunctive equivalence given in equation (7). This inconvenient
disjunction may require the imposition of additional side conditions in awkward places.
Why do we not apply the smash operator directly to shares S, as equation (10)
would seem to suggest? Unfortunately, if we do so then we produce not the positive
shares, but the positive shares plus a fresh bottom element. That is, we end up with
S⊥ × V= instead of (S+ × V=)⊥. These are not the same: the left-hand side has infinitely
many identity elements ((⊥, v) for all v), whereas the right-hand side has only one (⊥).
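This difference can be checked concretely (our sketch; a two-value, two-positive-share carrier stands in for the general case, and the string '⊥' encodes the bottom element):

```python
from fractions import Fraction

BOT = '⊥'

def share_join_bot(x, y):          # join on S⊥: positive shares plus bottom
    if x == BOT:
        return y
    if y == BOT:
        return x
    z = x + y
    return z if z <= 1 else None

def discrete_join(v, w):
    return v if v == w else None

def left_join(p, q):               # S⊥ × V=: componentwise product
    s = share_join_bot(p[0], q[0])
    v = discrete_join(p[1], q[1])
    return None if s is None or v is None else (s, v)

def right_join(p, q):              # (S+ × V=)⊥: smashed product
    if p == BOT:
        return q
    if q == BOT:
        return p
    s = p[0] + q[0]
    if s > 1 or p[1] != q[1]:
        return None
    return (s, p[1])

pos_shares = [Fraction(1, 2), Fraction(1)]
values = ['v', 'w']
left = [(s, v) for s in [BOT] + pos_shares for v in values]
right = [BOT] + [(s, v) for s in pos_shares for v in values]

# An element x is an identity iff joining x with itself gives back x.
left_ids = [x for x in left if left_join(x, x) == x]
right_ids = [x for x in right if right_join(x, x) == x]
assert left_ids == [(BOT, 'v'), (BOT, 'w')]   # one identity per value
assert right_ids == [BOT]                     # a single identity
```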
Another question is why we do not simply define some constructor from a DSA
(A, ⊕A) to some positive subset DSA (A+, ⊕A+). The answer is that the positive subset
lacks identities and thus does not satisfy axiom (e)—that is, (A+, ⊕A+) is not a DSA.
5 Positive Disjoint Separation Algebras
The good news is that (S+ × V=)⊥ does satisfy the DSA axioms; the bad news is that it
is not decomposable componentwise into DSA subparts: we have lost modularity.
However, S+ × V= is almost a DSA; indeed, it is a member of a well-behaved mathematical structure hereby christened a positive disjoint separation algebra (PDSA). A
PDSA, like a DSA, is a set S+ and an associated three-place positive join relation ⊕+.
A PDSA’s ⊕+ satisfies axioms (a)–(d) of a DSA. Since it does not have any units, it
drops axiom (e), and to enforce positivity it uses a modified version of axiom (f):
(f′) Positive Disjointness: x ⊕ x = y ⇒ False
That is, (f′) says that no element joins with itself. PDSAs are well-behaved because
they also enjoy many of the constructions from category theory (e.g., products, sums).
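Axiom (f′) is easy to observe on the nonempty heaplets under disjoint union (our example, not the paper’s share model; we use heaplets here because their positivity is immediate — a nonempty heaplet always overlaps itself):

```python
from itertools import product

# Heaplets under disjoint union form a DSA whose only identity is {}.
# Removing {} leaves the nonempty heaplets, which form a PDSA.
def join(x, y):
    if set(x) & set(y):
        return None
    return {**x, **y}

pos = [{1: 'a'}, {2: 'b'}, {1: 'a', 2: 'b'}]  # S+: nonempty heaplets

# (f') Positive disjointness: x (+) x = y is never derivable,
# because a nonempty heaplet always overlaps itself.
for x in pos:
    assert join(x, x) is None

# Axioms (a)-(d) are unaffected; e.g. commutativity still holds.
for x, y in product(pos, repeat=2):
    assert join(x, y) == join(y, x)
```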
What is particularly interesting is how PDSAs and DSAs can be constructed from
each other. For example, let us suppose that we have a PDSA such as S+ × V=; we can
define the lowering constructor, which takes a PDSA (S+, ⊕+) and turns it into a DSA
by adding a fresh unit ⊥ and by defining ⊕↓ as the least relation satisfying:
1. ⊥ ⊕↓ a = a
2. a ⊕↓ ⊥ = a
3. a₁ ⊕+ a₂ = a₃ ⇒ a₁ ⊕↓ a₂ = a₃  (12)
By design, this construction is remarkably similar to the construction for the smash
operator; the difference is that while the smash operator takes a DSA and produces
another DSA, the lowering operator takes a PDSA and produces a DSA.
In contrast, the lift constructor takes a DSA and produces a PDSA by removing all
of the units. If (S, ⊕) is a DSA, then the lifted join operation on S+, written ⊕↑, is:
s₁ ⊕↑ s₂ = s₃  ⟺  s₁ ⊕ s₂ = s₃  (13)
In other words, the lift operator gets its join structure directly from the underlying operation. The purpose is to move from a DSA to a PDSA by exchanging S with S + .
We observe that the new lower and lift operators are at least as powerful as the old
smash operator because for any DSA (S, ⊕) we have S⊥ ≅ (S↑)↓. The question then
is if we can get any additional leverage out of the intermediate structure. The answer,
happily, is that we can by defining other operators that connect DSAs and PDSAs.
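The isomorphism S⊥ ≅ (S↑)↓ can be spot-checked on a finite carrier (our encoding; the heaplet DSA is illustrative, and lift leaves the join function unchanged since (13) simply inherits the underlying operation):

```python
from itertools import product

BOT = '⊥'

def heaplet_join(x, y):                      # underlying DSA join on heaplets
    return None if set(x) & set(y) else {**x, **y}

def smash(join):                             # DSA -> DSA: one-step smash
    def j(x, y):
        if x == BOT:
            return y
        if y == BOT:
            return x
        return join(x, y)
    return j

def lift(join):                              # DSA -> PDSA: same join, on non-identities
    return join

def lower(pos_join):                         # PDSA -> DSA: adjoin a fresh unit
    def j(x, y):
        if x == BOT:
            return y
        if y == BOT:
            return x
        return pos_join(x, y)
    return j

carrier = [BOT, {1: 'a'}, {2: 'b'}, {1: 'a', 2: 'b'}]
smashed = smash(heaplet_join)
composed = lower(lift(heaplet_join))
for x, y in product(carrier, repeat=2):
    assert smashed(x, y) == composed(x, y)   # S⊥ agrees with (S lift) lower
```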
One important additional operator we can define is the semiproduct constructor,
which takes a PDSA (A+, ⊕+A+) and a DSA (B, ⊕B) and forms a PDSA over the set
A+ × B by defining the positive join operation ⊕+A+×B as follows:
(a₁, b₁) ⊕+A+×B (a₂, b₂) = (a₃, b₃)  ⟺  (a₁ ⊕+A+ a₂ = a₃) ∧ (b₁ ⊕B b₂ = b₃)  (14)
That is, we define the semiproduct constructor componentwise in exactly the same fashion as the regular product constructor on DSAs (and also the regular product constructor on PDSAs). However, since the operation takes different structures on the left- and
right-hand sides, we prefer to write A+ ∝ B to indicate the semiproduct.²
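A sketch of the semiproduct (our encoding; nonempty heaplets serve as the PDSA on the left because their positivity is easy to check, and discrete values as the DSA on the right):

```python
# Semiproduct A+ ∝ B: componentwise join of a PDSA and a DSA.
def semiproduct_join(pos_joinA, joinB):
    def j(p, q):
        a = pos_joinA(p[0], q[0])
        b = joinB(p[1], q[1])
        return None if a is None or b is None else (a, b)
    return j

def pos_join(x, y):                       # PDSA: nonempty heaplets, disjoint union
    return None if set(x) & set(y) else {**x, **y}

def discrete_join(v, w):                  # DSA: discrete values
    return v if v == w else None

sp = semiproduct_join(pos_join, discrete_join)
p, q = ({1: 'a'}, 'v'), ({2: 'b'}, 'v')
assert sp(p, q) == ({1: 'a', 2: 'b'}, 'v')
assert sp(p, p) is None                   # positivity is inherited from the left
```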
There are other kinds of constructions that illustrate the connections between DSAs
and PDSAs, but we already have enough to reformulate heaps H the way we want:
H = L → (S↑ ∝ V=)↓  (15)
That is, heaps H are functions (3) from locations L to a lowered (12) semiproduct (14)
of a lifted (13) share S and discrete (1) value V. Elements of H are functions to a pointed
set whose nonidentity elements are exactly of the desirable form given in equation (10):
that is, the dependent type is attached to the share instead of the share-value pair.
We can illustrate the additional expressivity of our PDSAs with the following observation. Suppose we have an additional DSA (O, ⊕O) that we want to attach to the heap
cells. We can easily modify our heaps to accommodate this as follows:
H′ = L → (S↑ ∝ (V= × O))↓  (16)
² We could also define a semiproduct that takes a DSA on the left and a PDSA on the right.
This is better than what we would obtain with the smash operator because it specifies
exactly whence positivity is obtained. If we tried to do the same with smash:
H″ = L → (S × V= × O)⊥  (17)
Now if we have a nonidentity (s, v, o) triple from H″, then we do not know whether
the share s is positive or the other data o is positive. In contrast, the same triple from
H′ (equation 16) must have s positive. This is a feature, not a bug, but if we did want
o to be positive as well then we could use (S↑ ∝ V=) × O↑. We could allow the same
ambiguity that the smash operator does by using ((S × O × V=)↑)↓, and so forth. The
key point is that we have greater expressibility with lift/lower than we have with smash.
6 Conclusion
References
1. Cristiano Calcagno, Peter W. O’Hearn, and Hongseok Yang. Local action and abstract separation logic. In Symposium on Logic in Computer Science, 2007.
2. Robert Dockins, Aquinas Hobor, and Andrew W. Appel. A fresh look at separation algebras and share accounting. In The 7th Asian Symposium on Programming Languages and Systems (APLAS 2009), pages 161–177. Springer LNCS, 2009.
3. Aquinas Hobor and Cristian Gherghina. Barriers in concurrent separation logic. In 20th European Symposium on Programming (ESOP 2011), 2011. To appear.