Linearity, Non-determinism and Solvability∗
Michele Pagani
Laboratoire d’Informatique de Paris-Nord – Université Paris 13
[email protected]
Simona Ronchi Della Rocca
Dipartimento di Informatica – Università di Torino
[email protected]
Abstract
We study the notion of solvability in the resource calculus, an extension
of the λ-calculus modelling resource consumption. Since this calculus is
non-deterministic, two different notions of solvability arise, one optimistic
(angelical, may) and one pessimistic (demoniac, must). We give a syntactical, operational and logical characterization for the may-solvability and
only a partial characterization of the must-solvability. Finally, we discuss
the open problem of a complete characterization of the must-solvability.
1 Introduction
We investigate the notion of solvability in the resource calculus (Λr), an extension of the λ-calculus where both the features of linearity and non-determinism are present. Λr is a calculus that models resource consumption. Namely,
the argument of a function comes as a finite multiset of resources, which in turn
can be either linear or reusable. A linear resource needs to be used exactly once,
while a reusable one can be called ad libitum, also zero times. Hence, the evaluation of a function applied to a multiset of resources gives rise to different possible
choices, because of the different possibilities of distributing the resources among
the occurrences of the formal parameter. So the calculus is not deterministic,
but rather than having a purely non-deterministic notion of evaluation, the reduction introduces a formal sum of terms, representing all possible results of a
non-deterministic computation. The presence of linear resources not only is a
further source of non-determinism but it also introduces a notion of failure of
the computation, the empty sum, distinct from non-termination: whenever the
number of the available resources does not fit exactly the number of occurrences
of the variable abstracted in a redex, then this latter evaluates to the empty
sum. The resource calculus is a useful framework for studying the notions of
linearity and non-determinism, and the relation between them.
Λr is an evolution of the calculus of multiplicities, introduced by
Boudol in order to study the semantics of the lazy λ-calculus [Bou93]. Ehrhard
∗ Partially funded by the Italian MIUR project CONCERTO and the French ANR projet
blanc CHOCO, ANR-07-BLAN-0324.
and Regnier designed the differential λ-calculus [ER03], drawing on insights
gained from an analysis of some denotational models of linear logic. As the
authors remarked, the differential λ-calculus seemed quite similar to Boudol’s
calculus of multiplicities. Indeed, this was formalized by Tranquilli, who defined
the Λr syntax, and showed a Curry-Howard correspondence between this calculus and Ehrhard and Regnier’s differential nets [Tra08]. The main differences
between Boudol’s calculus and Λr are that the former is equipped with explicit
substitution and lazy operational semantics, while the latter is a true extension
of the classical λ-calculus.
One way to appreciate the resource calculus is by observing the various
subcalculi it contains. Intuitively, usual λ-calculus can be embedded into Λr
translating the application M N into M [N ! ], where [N ! ] represents the multiset
containing one copy of the resource N , which is reusable (see the grammar of
Figure 1(a)). Forbidding linear terms but allowing non-empty finite multisets of
reusable terms yields a purely non-deterministic extension of λ-calculus, which
recalls de’ Liguoro and Piperno’s λ⊕ -calculus [dP95] (studied also in [BEM09]).
This fragment is also a variant of Vaux’s algebraic λ-calculus [Vau09] without
coefficients, where one moreover forbids the formation of the zero sum.
We will deal extensively with this fragment (denoted here Λk and called the parallel
calculus); in fact, all the results we have obtained for Λr hold also for Λk. On the
other side, allowing only multisets of linear terms yields the linear fragment of
Λr , used by Ehrhard and Regnier for giving a quantitative account to λ-calculus
β-reduction through Taylor expansion [ER06, ER08].
As far as the operational behaviour of Λr is concerned, the properties of
confluence and a sort of standardization have been proved in [PT09]. In fact,
confluence does not clash with non-determinism since, as written above, the
outcome of a computation is a sum carrying all the possible results.
In this paper we will study the solvability property. Following the λ-calculus
terminology, the word solvable denotes a term that can interact operationally
with the environment, i.e., that can produce a given output when inserted into
a context supplying it with suitable resources. According to this definition, in a
computer science setting the solvable terms represent the meaningful programs.
In the λ-calculus, a closed term M is called solvable if and only if there is a
sequence of arguments N⃗ such that M N⃗ reduces to the identity. λ-solvability
has been completely characterized from different points of view. Syntactically,
a term is solvable if and only if it reduces to a head-normal form [Bar84]; operationally, if and only if the head reduction strategy applied to it eventually
stops [Bar84]; logically, if and only if it can be typed in a suitable intersection
type assignment system [CDCV81]; denotationally, if and only if its denotation
is not minimal in a suitable sensible model [Hyl76, RDRP04]. Our aim is to
characterize the notion of solvability in Λr and Λk , following the same lines.
Actually, because of the non-deterministic behaviour of the calculus, two
different notions of solvability arise, one optimistic (angelical, may) and one
pessimistic (demoniac, must). A closed term M is may-solvable if there is a
sequence of bags P⃗ such that M P⃗ reduces to a sum in which at least one term
of the sum is the identity, while it is must-solvable when in the final sum all the
terms are the identity. We stress that the two notions collapse and coincide with the
usual notion of solvability in the fragment of Λr corresponding to the λ-calculus.
Our result is a characterization of the may-solvability in Λr (Theorem 25),
and a characterization of the must-solvability in Λk (Theorem 36).
Theorem 25 characterizes the may-solvability from a syntactical, operational
and logical point of view. An extended notion of head-normal form can be
defined (called may-outer-normal form), such that a term is solvable if and
only if it can reduce to a term of such form. From an operational point of
view, we use the notion of outer-reduction strategy, defined in [PT09], where no
reduction is made inside reusable resources, and we prove that in order to reach
the outer-normal form we can restrict ourselves to use just reduction strategies
of this kind. Also, we give a logical characterization of solvability, through a
type system, assigning to terms suitable non-idempotent intersection types.
The characterization of must-solvability is a difficult problem, since it would
require separating the empty sum from the non-terminating terms (see the
discussion in Section 5). We achieve a complete characterization of must-solvability only for Λk, the parallel fragment of Λr, where all resources are
reusable and there is no empty sum. Inside this fragment, we characterize
the must-solvability from a syntactical, operational and logical point of view,
through a notion of must-outer-normal form.
All these characterizations are conservative with respect to the λ-calculus.
The type assignment system characterizing may-solvability is strongly related to the relational semantics of linear logic. It can be seen, basically, as an
extension to Λr of the type system introduced by de Carvalho in the restricted
case of λ-calculus [dC09]. On the other side, the type assignment system characterizing must-solvability is based on a type theory supplying a logical description
of the D∞ model of Scott [Sco76, PRDR04].
The characterization of the may-solvability in Λr has been already presented,
in a preliminary version, at FOSSACS 2010 [PRDR10].
The paper is organized as follows. Section 2 contains a syntactical description of the resource calculus. Section 3 is dedicated to the definition of may- and must-solvability and of outer-normal form. In Section 4, the complete
characterization of may-solvability is given. In Section 5, the problem of the
characterization of the must-solvability is discussed, and its characterization is
given, inside the non-deterministic fragment Λk .
2 Resource λ-calculus
The syntax of Λr . Basically, we have three syntactical sorts: terms, that
are in functional position, bags, that are in argument position and represent
multisets of resources, and finite formal sums, that represent the possible results
of a computation. Precisely, Figure 1(a) gives the grammar for generating the
set Λr of terms and the set Λb of bags (which are in fact finite multisets of
resources Λ(!)) together with their typical metavariables. A resource can be
linear (it must be used exactly once) or not (it can be used ad libitum, also zero
times); in the latter case it is written with a ! superscript. Bags are multisets presented
in multiplicative notation, so that P ·Q is the multiset union, and 1 = [ ] is the
empty bag: that means, P ·1 = P and P ·Q = Q·P . It must be noted though
that we will never omit the dot ·, to avoid confusion with application.
Sums are multisets in additive notation, with 0 referring to the empty multiset, so that: M+0 = M and M+N = N+M. We use two different notations for
multisets in order to underline the different role of bags and sums. The former
Λr :        M, N, L ::= x | λx.M | M P                       terms
Λ(!) :      M(!), N(!) ::= M | M!                            resources
Λb :        P, Q, R ::= 1 | [M(!)] | P·Q                     bags
Λ(b) :      A, B ::= M | P                                   expressions
Nat⟨Λr⟩ :   M, N, L ::= 0 | M | M + N                        sums of terms
Nat⟨Λb⟩ :   P, Q, R ::= 0 | P | P + Q                        sums of bags
            A, B ∈ Nat⟨Λ(b)⟩ := Nat⟨Λr⟩ ∪ Nat⟨Λb⟩            sums of expressions

(a) Grammar of terms, bags, sums, expressions.
λx.(Σi Mi) := Σi λx.Mi
(Σi Mi)P := Σi Mi P
[(Σi Mi)]·P := Σi [Mi]·P
M(Σi Pi) := Σi M Pi
[(Σi Mi)!]·P := [M1!, . . . , Mk!]·P

(b) Notation on Nat⟨Λ(b)⟩.
Figure 1: Syntax of the resource calculus.
are multisets in argument position, while the latter are in functional position:
the two define notions of parallel composition behaving quite differently, as we
will see shortly.
An expression (whose set is denoted by Λ(b) ) is either a term or a bag.
Though in practice only sums of terms are needed, for the sake of the proofs
we also introduce sums of bags and of expressions. The symbol Nat denotes the
set of natural numbers, so that Nat⟨Λr⟩ (resp. Nat⟨Λb⟩) can be seen as the set
of finite formal sums of terms (resp. bags). In particular, for n ∈ Nat and A an
expression, the writing nA denotes the sum A + · · · + A (n times). In writing Nat⟨Λ(b)⟩
we are abusing the notation, as it does not denote the Nat-module generated
over Λ(b) = Λr ∪ Λb but rather the union of the two Nat-modules. This amounts
to saying that sums may be taken only within the same sort.
The grammar for terms and bags does not include sums at any point, so
that in a sense they may arise only as a top-level constructor. However, as an
inductive notation (and not in the actual syntax) we extend all the constructors
to sums as shown in Figure 1(b). In fact, all constructors but (·)! are,
as expected, linear in the algebraic sense, i.e. they commute with sums.¹ In
particular, we have that 0 is always absorbing except for the (·)! constructor, in
which case we have [0!] = 1. Notice the similarity between the equations [0!] = 1
and [(M + N)!] = [M!]·[N!] and, respectively, e^0 = 1 and e^(x+y) = e^x·e^y: this
is far from a coincidence, as the relation between Taylor expansion and linear
logic semantics shows [ER08]. We refer to [Tra08, Tra09] for the mathematical
intuitions underlying the resource calculus.
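The linearity of the constructors described by Figure 1(b) can be illustrated concretely. The sketch below uses a hypothetical Python encoding of our own (a sum is a list of simple terms, a bag is a tuple of resources tagged "lin" or "reu"); it is an illustration of the notation, not an implementation from the paper.

```python
# Sketch of Figure 1(b): every constructor except (.)^! distributes
# over sums (here: Python lists of simple terms).

def abs_(x, terms):
    """λx.(M1 + ... + Mn) := λx.M1 + ... + λx.Mn"""
    return [("abs", x, m) for m in terms]

def app(terms, bag):
    """(M1 + ... + Mn)P := M1 P + ... + Mn P"""
    return [("app", m, bag) for m in terms]

def linear_bag(terms):
    """[(M1 + ... + Mn)] := [M1] + ... + [Mn]   (a sum of bags)"""
    return [(("lin", m),) for m in terms]

def reusable_bag(terms):
    """[(M1 + ... + Mn)^!] := [M1^!, ..., Mn^!]   (a single bag!)"""
    return [tuple(("reu", m) for m in terms)]

M, N = ("var", "M"), ("var", "N")
# abstraction, application and linear bags are linear: a 2-term sum
# stays a 2-term sum of results
assert len(abs_("x", [M, N])) == 2
assert len(app([M, N], ())) == 2
assert len(linear_bag([M, N])) == 2
# the (.)^! constructor is the exception: [(M+N)^!] is ONE bag [M^!, N^!]
assert len(reusable_bag([M, N])) == 1
```

The last assertion is the analogue of e^(x+y) = e^x·e^y mentioned in the text: a sum under the exponential does not split the bag.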
We adopt the usual λ-calculus conventions as in [Bar84]. Also, we use the
¹ However, the constructors do not commute with bag product, i.e. M(P·Q) ≠ M P + M Q.
This is a first motivation for the different multiset notation between sums and bags.
y⟨N/x⟩ := N if y = x, 0 otherwise,
(λy.M)⟨N/x⟩ := λy.(M⟨N/x⟩),
(M P)⟨N/x⟩ := M⟨N/x⟩P + M(P⟨N/x⟩),
1⟨N/x⟩ := 0,
[M]⟨N/x⟩ := [M⟨N/x⟩],
[M!]⟨N/x⟩ := [M⟨N/x⟩, M!],
(P·R)⟨N/x⟩ := P⟨N/x⟩·R + P·R⟨N/x⟩.

Figure 2: Linear substitution. In the abstraction case we suppose y ∉ FV(N) ∪ {x}.
following notation for terms, useful to build examples:

I := λx.x ,   F := λxy.y ,   ∆ := λx.x[x!] ,   Ω := ∆[∆!] .
There is no technical difficulty in defining α-equivalence and the set FV(A)
of free variables as in ordinary λ-calculus. The symbol = will denote the α-equivalence between expressions.
The pair reusable/linear has a counterpart in the following two different
notions of substitutions: their definition, hence that of reduction, heavily uses
the notation of Figure 1(b).
Definition 1 (Substitutions). We define the following substitution operations.
1. A{N/x} is the usual λ-calculus (i.e. capture-free) substitution of N for
x. It is extended to sums as in A{N/x} by linearity in A² and using the
notations of Figure 1(b) for N. The form A{x + N/x} is called partial
substitution.
2. AhN/xi is the linear substitution defined inductively in Figure 2. It is
extended to AhN/xi by bilinearity in both A and N.
Roughly speaking, the linear substitution corresponds to the replacement of
the resource for exactly one linear occurrence of the variable. In the presence
of multiple occurrences, all the possible choices are made, and the result is the
sum of them. For example (y[x][x])⟨N/x⟩ = y[N][x] + y[x][N]. In case there
are no free linear occurrences, linear substitution returns 0, morally an
error message. For example (λy.y)⟨N/x⟩ = λy.(y⟨N/x⟩) = λy.0 = 0. Finally, in
case of reusable occurrences of the variable, linear substitution acts on a linear
copy of the variable, e.g. [x!]⟨N/x⟩ = [N, x!]. Indeed linear substitution bears
resemblance to differentiation, as in Ehrhard and Regnier's differential λ-calculus [ER03]: deriving a constant function (as is λy.y with respect to the
parameter x) returns 0, and deriving a product of functions (as is y[x][x] with
respect to x) gives a sum. Also the rule [M!]⟨N/x⟩ = [M⟨N/x⟩, M!] resembles
the chain rule. The following are examples with sums:

(x[x!])⟨(M + N)/x⟩ = (x[x!])⟨M/x⟩ + (x[x!])⟨N/x⟩
                   = M[x!] + x[M, x!] + N[x!] + x[N, x!],
(x[x!]){(M + N)/x} = (M + N)[(M + N)!] = M[M!, N!] + N[M!, N!].
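The clauses of Figure 2 are directly implementable. Below is a minimal Python sketch on a tuple encoding of our own devising (terms are ("var", x), ("abs", x, M), ("app", M, bag); resources are tagged "lin"/"reu"; a sum is a Python list, [] standing for 0); it reproduces the examples just given.

```python
# A sketch (not the paper's implementation) of the linear substitution
# A<N/x> of Figure 2, returning the list of summands of the result.

def lsub(t, x, N):
    tag = t[0]
    if tag == "var":
        return [N] if t[1] == x else []          # y<N/x> = N if y = x, else 0
    if tag == "abs":
        y, body = t[1], t[2]                     # we assume y ∉ FV(N) ∪ {x}
        return [("abs", y, m) for m in lsub(body, x, N)]
    f, bag = t[1], t[2]                          # (M P)<N/x> = M<N/x>P + M(P<N/x>)
    return ([("app", m, bag) for m in lsub(f, x, N)]
            + [("app", f, b) for b in lsub_bag(bag, x, N)])

def lsub_bag(bag, x, N):
    out = []
    for i, (kind, m) in enumerate(bag):          # (P·R)<N/x> = P<N/x>·R + P·R<N/x>
        rest = bag[:i] + bag[i + 1:]
        if kind == "lin":                        # [M]<N/x> = [M<N/x>]
            out += [(("lin", m2),) + rest for m2 in lsub(m, x, N)]
        else:                                    # [M^!]<N/x> = [M<N/x>, M^!]
            out += [(("lin", m2), (kind, m)) + rest for m2 in lsub(m, x, N)]
    return out

xv, yv, Nv = ("var", "x"), ("var", "y"), ("var", "N")
# (y[x][x])<N/x> = y[N][x] + y[x][N]
t = ("app", ("app", yv, (("lin", xv),)), (("lin", xv),))
assert set(lsub(t, "x", Nv)) == {
    ("app", ("app", yv, (("lin", Nv),)), (("lin", xv),)),
    ("app", ("app", yv, (("lin", xv),)), (("lin", Nv),)),
}
# (λy.y)<N/x> = 0 (the empty sum)
assert lsub(("abs", "y", yv), "x", Nv) == []
# [x^!]<N/x> = [N, x^!]
assert lsub_bag((("reu", xv),), "x", Nv) == [(("lin", Nv), ("reu", xv))]
```

Note how the sum of all possible placements arises from the per-position loop over the bag, mirroring the "sum over all choices" reading of linear substitution.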
Substitutions commute as stated in the following.
² F(A) (resp. F(A, B)) is extended by linearity (resp. bilinearity) by setting F(Σi Ai) := Σi F(Ai) (resp. F(Σi Ai, Σj Bj) := Σi,j F(Ai, Bj)).
Lemma 2 ([ER03, Tra08, Tra09]). For A a sum of expressions, M, N sums of
terms and x, y variables such that y ∉ FV(M) ∪ FV(N), we have

A⟨N/y⟩⟨M/x⟩ = A⟨M/x⟩⟨N/y⟩ + A⟨N⟨M/x⟩/y⟩ ,
A{y + N/y}⟨M/x⟩ = A⟨M/x⟩{y + N/y} + A⟨N⟨M/x⟩/y⟩{y + N/y} .

In particular, if x ∉ FV(N) then the second term of both sums is 0 and the two
substitutions commute. Furthermore, if x ∉ FV(M) ∪ FV(N),

(A{x + M/x}){x + N/x} = A{x + M + N/x} = (A{x + N/x}){x + M/x} .
The reductions of Λr. A (monic) context C(·) is a term that uses a distinguished free variable, called its hole, exactly once. Formally, the set of simple
contexts is given by the following grammar:

Λ(·) :   C(·), D(·) ::= (·) | λx.C(·) | C(·)P | M[C(·)]·P | M[(C(·))!]·P

A context C(·) is a simple context in Λ(·) summed to any sum in Nat⟨Λr⟩. The
expression C(M) denotes the result of blindly replacing M for the hole (allowing
variable capture) in C(·). We generalize to sums applying the notations of
Figure 1(b). For example C(·) := λx.y[(·)!] and D(·) := λx.y[(·)] are simple
contexts. If M = x + y, then C(M) = λx.y[x!, y!] and D(M) = λx.y[x] + λx.y[y].

A relation r̃ in Λr × Nat⟨Λr⟩ is extended to one in Nat⟨Λr⟩ × Nat⟨Λr⟩ by
context closure³ by setting: M r̃ N iff there exist C(·) and M′ r̃ N′ such that M =
C(M′), N = C(N′).
A context is linear if its hole is not under the scope of a (·)! operator.
Linear contexts can be defined inductively omitting the case M[(C(·))!]·P in
the Λ(·) definition. An outer-context is a context having the hole not in a
bag. Outer-contexts can be defined inductively omitting the cases M[C(·)]·P
and M[(C(·))!]·P in the Λ(·) definition. Notice that the composition C(D(·)) of
two outer- (resp. linear) contexts C(·), D(·) is an outer- (resp. linear) context.
We define two kinds of reduction rule, baby-step and giant-step reduction,
the former being a decomposition of the latter. Both are meaningful: baby-step
is more atomic, performing one substitution at a time, while the giant-step is
closer to λ-calculus β-reduction, wholly consuming its redex in one shot.
Definition 3 ([Tra08, Tra09]). The baby-step reduction →_b is defined by the
context closure of the following relation (assuming x not free in N):

(λx.M)1 →_b M{0/x}
(λx.M)[N]·P →_b (λx.M⟨N/x⟩)P
(λx.M)[N!]·P →_b (λx.M{N + x/x})P

The giant-step reduction →_g is defined by the context closure of the following
relation, for ℓ, n ≥ 0 (assuming x not free in the Li's and Ni's):

(λx.M)[L1, . . . , Lℓ, N1!, . . . , Nn!] →_g M⟨L1/x⟩ . . . ⟨Lℓ/x⟩{N1 + · · · + Nn/x} .

In the case n = 0, the reduct is M⟨L1/x⟩ . . . ⟨Lℓ/x⟩{0/x}. For any reduction →_x,
we denote by →_x+ and →_x* its transitive and reflexive-transitive closures, respectively.
³ In [Tra08, Tra09] bag contexts are defined too, so that context closure extends a relation
to Nat⟨Λ(b)⟩ × Nat⟨Λ(b)⟩. In fact we prefer to introduce term contexts only, making clear
that the set Nat⟨Λr⟩ is the actual protagonist of the calculus. However, our choice is a matter
of taste, affecting no main property of the calculus.
(λx.x[∆!])[∆!, I] →_b (λx.(∆ + x)[∆!])[I] →_b (λx.x[∆!])[I] →_b (λx.I[∆!])1 →_b I[∆!] →_b (λx.∆)1 →_b ∆

(λx.x[∆!])[∆!, I] →_g I[∆!] →_g ∆

Figure 3: An example of baby-step and giant-step reductions. We use the notation of
Figure 1(b): after the first b-step the term (λx.(∆ + x)[∆!])[I] stands for (λx.∆[∆!])[I] +
(λx.x[∆!])[I], and the term after the following step is equal to 0 + (λx.x[∆!])[I]. In fact 0 is
the neutral element of the sum and (λx.∆[∆!])[I] →_b 0.
Notice that giant-step reduction is defined independently of the ordering of
the linear substitutions, as shown by the substitution commutations stated in
Lemma 2. Baby-step and giant-step reductions are clearly related to each other.
Proposition 4 ([Tra08, Tra09]). We have →_g ⊂ →_b* ⊂ →_g* ←_g*, where the last
denotes the relational composition between →_g* and its inverse ←_g*.
Figure 3 shows an example of baby-step and giant-step reduction sequences.
The reader can check, in this example, that the two reductions are related to
each other as stated in Proposition 4. By the way, let us mention that although
giving the same normal forms, baby-step and giant-step reductions might have
different properties: for example, the starting term in Figure 3 is strongly
normalizing for giant-step but only weakly normalizing for baby-step reduction
(an infinite reduction sequence can be obtained by firing the ∆[∆! ] redex in the
first term of the sum (λx.(∆ + x)[∆! ])[I]).
The fragment Λk . We will consider a notable fragment of Λr , denoted by Λk
and called the parallel fragment. It corresponds to a purely non-deterministic
extension of λ-calculus which recalls de’ Liguoro and Piperno’s λ⊕ -calculus
[dP95]. Seeing the product of bags as a convolution product and taking into
account that [(M + N)!] = [M!]·[N!], the fragment Λk is also a fragment of
Vaux's algebraic λ-calculus [Vau09], without coefficients nor zero sums.
Λk is defined by permitting only non-empty bags of reusable resources and
non-zero sums. Formally, this is obtained by replacing the grammar defining Λb
(Figure 1(a)) by the following:
Λbk :   P, Q, R ::= [M!] | P·Q
and by considering only non-zero sums, i.e. the sets Nat+⟨Λk⟩ and Nat+⟨Λbk⟩
of formal sums with coefficients in the set Nat+ of non-zero natural numbers.
Notice Λk is not closed under baby-step reduction, since the latter can create
empty bags. We therefore slightly adapt such a reduction as follows.
Definition 5. The middle-step reduction →_m is defined by the context closure
of the following relation (assuming x not free in N):

(λx.M)[N!] →_m M{N/x} ,
(λx.M)[N!]·[L!]·P →_m (λx.M{N + x/x})[L!]·P.
Clearly middle-step reduction does not alter the Λr operational semantics.
Proposition 6. We have →_g ⊂ →_m* ⊂ →_b* in Λr.
The outer reduction. The notion of head-reduction is extended to such a
setting as follows.
Definition 7 ([PT09]). Let ⋆ ∈ {b, g, m}. The outer ⋆-reduction →_o⋆ is the
closure to linear contexts of the steps given in Definitions 3 and 5.

Notice that an outer redex (i.e. a redex for →_o⋆) is a redex not under the
scope of a (·)! constructor. For the terms in Λk (so in particular for the terms
corresponding to the λ-terms) the outer redexes are exactly the head-redexes.
However, in terms with linear resources it is possible to have outer redexes also
in argument position, e.g. in x[I[y]].
3 May and Must Solvability
A λ-term is solvable whenever there is an outer-context reducing it to the identity
[Bar84]. In the resource calculus, terms appear in formal sums, where repetitions
do matter, so various notions of solvability arise, depending on the number of
times one gets the identity. We deal extensively with the two different notions
of solvability, related to a may and must operational semantics, respectively.
Definition 8. A simple term M is may-solvable whenever there are a simple
outer-context C(·) and a sum of terms N, possibly 0, such that C(M) →_g* I + N;
M is must-solvable whenever there are a simple outer-context C(·) and a
number n ≠ 0 such that C(M) →_g* nI. We will say M is may-solvable (resp.
must-solvable) in Λk in case the context C(·) is in Λk.
The above definition considers the giant-step reduction; however, one can
replace it with the baby-step or middle-step reductions, obtaining an equivalent
notion of solvability, as easily argued from Propositions 4 and 6. The restriction
to simple and outer-contexts is crucial: there are general contexts reducing
constantly to I, disregarding the term they are applied to. Let C(·) be either
the outer- but non-simple context I + (·)[I1] or the simple but non-outer-context
(λx.I)[(·)!]: we have C(M) →_g I for every term M.
Clearly, must-solvability implies may-solvability but not vice versa. Consider, for example, the following reductions, where I and Ω have been defined
in the previous section:
I[I, Ω] →_g 0          (1)
I[I!, Ω] →_g Ω         (2)
I[I, Ω!] →_g I         (3)
I[I!, Ω!] →_g I + Ω    (4)
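These four reductions can be checked mechanically for this special shape of redex. The sketch below uses an encoding of our own (a resource is a ("lin", name) or ("reu", name) pair, and a result is the list of summands, with [] standing for the empty sum 0); it exploits the fact that I = λx.x has exactly one occurrence of its bound variable, so it is not a general reducer.

```python
# Sketch of the giant-step result for the special redex I[bag],
# where I = λx.x has exactly one occurrence of x.

def giant_step_identity(bag):
    linear = [t for kind, t in bag if kind == "lin"]
    reusable = [t for kind, t in bag if kind == "reu"]
    if len(linear) > 1:        # more linear resources than occurrences of x:
        return []              # the redex collapses to the empty sum 0
    if len(linear) == 1:       # the single occurrence consumes the linear one,
        return linear          # reusable resources are then erased by {0/x}
    return reusable            # x{N1 + ... + Nn/x} = N1 + ... + Nn

# the four reductions (1)-(4) of the text:
assert giant_step_identity([("lin", "I"), ("lin", "Ω")]) == []           # (1) 0
assert giant_step_identity([("reu", "I"), ("lin", "Ω")]) == ["Ω"]        # (2) Ω
assert giant_step_identity([("lin", "I"), ("reu", "Ω")]) == ["I"]        # (3) I
assert giant_step_identity([("reu", "I"), ("reu", "Ω")]) == ["I", "Ω"]   # (4) I + Ω
```

The case analysis makes visible why (1) is a failure (too many linear resources for one occurrence) while (4) is a genuine non-deterministic sum.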
We can reasonably guess that Ω is neither may-solvable nor must-solvable (this
will be proved in what follows), as well as any term reducing to 0. Hence, (1) and
(2) give examples of unsolvable terms, (3) is both may- and must-solvable and
(4) is a may-solvable but not must-solvable term. One major outcome of this
paper is the operational characterization of solvability by means of the following
notions of outer normalizability.
Definition 9. An expression is an outer-normal form, onf for short, if it
has no redex except under the scope of a (·)!. The set onf can be defined inductively
as follows:

M ∈ onf ⟹ λx.M ∈ onf          P1, . . . , Pp ∈ onf (p ≥ 0) ⟹ xP1 . . . Pp ∈ onf
1 ∈ onf                        M ∈ onf ⟹ [M] ∈ onf
[M!] ∈ onf                     P ∈ onf, Q ∈ onf ⟹ P·Q ∈ onf
A sum M of terms is a may-outer-normal form, monf for short, whenever
it contains a term in outer-normal form; M is a must-outer-normal form,
Monf for short, whenever it is a non-empty sum of outer-normal forms. A term
M is may-/must-outer normalizable iff it is reducible to a may-/must-outer-normal form.
Notice that the Monf are the normal forms with respect to the outer reductions defined in Definition 7. Let us restrict ourselves to consider resource terms
which correspond to λ-terms, according to the standard correspondence recalled
in the introduction. Then the two definitions of may and must solvability coincide, and they become the standard λ-solvability. Moreover the two notions of
may and must-outer-normal form collapse, and they become the classical definition of head-normal form; indeed the only possible outer redex in such terms
is the head-redex.
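The inductive definition of onf, and the derived monf/Monf predicates, can be sketched in code. The tuple encoding below is our own illustration, not the paper's (terms are ("var", x), ("abs", x, M), ("app", M, bag); a bag is a tuple of resources tagged "lin"/"reu"; a sum is a Python list of terms).

```python
# A sketch of Definition 9 on a hypothetical tuple encoding of Λr.

def is_onf(t):
    if t[0] == "abs":
        return is_onf(t[2])
    head, bags = t, []
    while head[0] == "app":            # unfold the application spine
        bags.append(head[2])
        head = head[1]
    if head[0] != "var":               # (λx.M)P1...Pp is an outer redex
        return False
    # reusable resources sit under (.)^!, so only linear ones are checked
    return all(is_onf(m) for bag in bags for kind, m in bag if kind == "lin")

def is_monf(s):  # may-outer-normal form: some addend is a onf
    return any(is_onf(t) for t in s)

def is_Monf(s):  # must-outer-normal form: non-empty sum of onfs
    return bool(s) and all(is_onf(t) for t in s)

x = ("var", "x")
delta = ("abs", "x", ("app", x, (("reu", x),)))   # ∆ = λx.x[x^!]
omega = ("app", delta, (("reu", delta),))         # Ω = ∆[∆^!]
t = ("abs", "x",
     ("app", ("app", ("var", "y"), ()), (("reu", x), ("reu", omega))))
assert is_onf(t)           # λx.y1[x^!, Ω^!] is a onf: Ω hides under (.)^!
assert not is_onf(omega)   # Ω itself has an outer redex
assert is_monf([t, omega]) and not is_Monf([t, omega])
```

The last line matches the text: a sum containing one onf addend is a monf but not a Monf.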
The term λx.y1[x!, Ω!] is a onf, and it is must-solvable (hence may-solvable)
via (λy.(·))[F!][I]: indeed, (λy.(λx.y1[x!, Ω!]))[F!][I] →_g (λx.F1[x!, Ω!])[I] →_g
(λx.I[x!, Ω!])[I] →_g I[I] + (λx.Ω)[I] →_g* I + 0 = I. The terms F[I!, Ω!], I[I!, Ω!] are
not onf but the first is must-outer normalizable, the second is only may-outer
normalizable (the former reducing to I, the latter to I + Ω); indeed, the first is
also must-solvable, the second only may-solvable. The terms F[x] and I[Ω!] are
not may-/must-outer normalizable: they reduce to 0 and Ω, respectively.
The main result of this section is to prove that may-outer and must-outer
normalizability entail may-solvability and must-solvability respectively (Theorem 13). Section 4 will prove the converse of this implication with respect to
the may-solvability, while Section 5 discusses the case of the must-solvability,
where the implication must-solvable ⇒ must-outer normalizable does not hold
in general, but does hold in Λk.
In contrast to the λ-calculus, where having a head-normal form straightforwardly implies being solvable (see e.g. [Bar84]), Theorem 13 is not immediate,
for two reasons. In the λ-calculus one can replace the head-variable of a onf with
a term erasing all its arguments and returning the identity. In the resource calculus this is not possible, because of the linear resources, which cannot be erased
but must be used. A further difficulty arises in the must case. In fact, given
a term M in must-outer normal form, one should define a simple outer-context
C(·) such that C(M) →_g* nM I for all terms M in M. This is not at all obvious
since the terms in M can be quite different from each other. Such a context is built
in the proof of Lemma 12 and it is based on the notion of resource underlying
this calculus. In Definition 10 we give a notion of query of resources of a onf;
then, roughly speaking, what Lemma 12 shows is that one can build for any
term M in the sum M a context CM(·) such that CM(M′) →_g* nM′ I whenever
M′ is a term querying the same number of resources as M, and CM(M′) →_g* 0
whenever M′ queries more resources than M. Then Theorem 13 is achieved
taking CM(·) for M a term of the sum M querying the minimum amount of
resources among the simple terms in M.
Definition 10. The query of resources of a onf A is the number r(A) of variable occurrences (both bound and free) which are not under the scope of a (·)!
constructor, i.e.:

r(λx.M) := r(M)        r(xP1 . . . Pp) := 1 + Σi=1..p r(Pi)
r(1) := 0              r([M]) := r(M)
r([M!]) := 0           r(P·Q) := r(P) + r(Q)

Clearly for any onf yP⃗ and term N, we have r(yP⃗) = r(yP⃗[N!]).
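The clauses of Definition 10 translate directly into a recursive function. The sketch below uses the same hypothetical tuple encoding as before (our own, not the paper's): terms ("var", x), ("abs", x, M), ("app", M, bag); bag elements tagged "lin"/"reu".

```python
# Sketch of the query of resources r(A): count variable occurrences
# not under the scope of a (.)^! constructor.

def r(t):
    tag = t[0]
    if tag == "var":
        return 1                 # a variable occurrence queries one resource
    if tag == "abs":
        return r(t[2])           # r(λx.M) = r(M): binders do not count
    return r(t[1]) + r_bag(t[2]) # r(M P) = r(M) + r(P)

def r_bag(bag):
    # r([M]) = r(M), r([M^!]) = 0, r(P·Q) = r(P) + r(Q)
    return sum(r(m) for kind, m in bag if kind == "lin")

xv, yv = ("var", "x"), ("var", "y")
# r(y[x][x]) = 3: the head y plus two linear occurrences of x
t = ("app", ("app", yv, (("lin", xv),)), (("lin", xv),))
assert r(t) == 3
# reusable occurrences count 0: r(λx.y[x^!]) = 1
assert r(("abs", "x", ("app", yv, (("reu", xv),)))) == 1
# appending a reusable bag [N^!] does not change r, as the text remarks
assert r(("app", t, (("reu", yv),))) == r(t)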
Lemma 11. For every n ∈ Nat, let Xn := λx1 . . . xn+1.xn+1[x1!, . . . , xn!]. Let
A be a onf, x be a variable free in A, and n be any number greater than the size
of A. Then A{Xn/x} g-reduces to a onf having the same query of resources as
A and no free occurrence of x.

Proof. By induction on the structure of A. The only interesting case is when
A = xP1 . . . Pp, where by hypothesis p ≤ n. By induction each Pi{Xn/x}
g-reduces to a bag P̂i in onf with query of resources equal to that of Pi, so

A{Xn/x} = Xn P1{Xn/x} . . . Pp{Xn/x} →_g* λxp+1 . . . xn+1.xn+1 P̂1 · . . . · P̂p · [xp+1!, . . . , xn!]

The last term is a onf with query of resources equal to 1 + Σi=1..p r(P̂i) = 1 + Σi=1..p r(Pi) = r(A).
Lemma 12. Let M be a must-outer-normal form. There is a simple outer-context C(·) such that C(M) →_g* nI, for n ∈ Nat>0. If moreover M ∈ Λk, then
also C(·) ∈ Λk.

Proof. In fact we prove a stronger statement:

(∗) let M = Σi=1..m Mi be a must-outer-normal form, s.t. r(M1) = min i≤m r(Mi);
then there is a simple outer-context C(·) and numbers {ni}i≤m such that
for every i ≤ m,

1. C(Mi) →_g* ni I,
2. r(Mi) ≠ r(M1) ⇒ ni = 0,
3. n1 ≠ 0.

We prove (∗) by induction on Σi=1..m r(Mi). The context C(·) is the composition
of the three contexts D(·), E(·), F(·) defined in the equations (5), (6), (7). The
contexts D(·) and E(·) are always in Λk, while F(·) is in Λk if M is.
The first context reduces every Mi to an applicative form. Let ℓ ≥ 0 be
the maximal length of the prefixes of abstractions of the Mi's and x be a fresh
variable; the context

D(·) := (·)[x!] . . . [x!]    (ℓ times)    (5)

reduces each Mi to a onf M̂i of the form yi Ri,1 . . . Ri,ri having the same query
of resources as Mi, indeed r([x!]) = 0.
The second context mainly performs three tasks: it closes every bag Ri,1, . . . ,
Ri,ri (namely, it replaces all the free variables in the bags with closed terms);
consequently, the head occurrence of yi will become the only free occurrence of
a variable; the last task consists in gathering the contents of every bag Ri,1, . . . ,
Ri,ri into their product Ri,1 · . . . · Ri,ri. Let J := λxz.z[x!] and notice that
JR[J!] reduces to JR for any bag R. Let {z1, . . . , zh} be the free variables in
Σi M̂i (in particular every yi is in {z1, . . . , zh}), let y be a fresh variable and ℓ
be the size of Σi M̂i. Define the context

E(·) := (λz1 . . . zh.(·)) [Xℓ!] . . . [Xℓ!] [J!] . . . [J!] [y!]    (h times [Xℓ!], ℓ+1 times [J!])    (6)

By Lemma 11 each Ri,j{Xℓ/z1} . . . {Xℓ/zh} reduces to a closed onf R̂i,j such
that r(R̂i,j) = r(Ri,j). So we have, denoting the sequence {Xℓ/z1} . . . {Xℓ/zh}
simply as {Xℓ/~z}:

E(yi Ri,1 . . . Ri,ri) →_g* Xℓ (Ri,1{Xℓ/~z}) . . . (Ri,ri{Xℓ/~z}) [J!] . . . [J!] [y!]
  →_g* Xℓ R̂i,1 . . . R̂i,ri [J!] . . . [J!] [y!]
  →_g* (λxri+1 . . . xℓ+1.xℓ+1 R̂i,1 · . . . · R̂i,ri · [xri+1!, . . . , xℓ!]) [J!] . . . [J!] [y!]
  →_g* J (R̂i,1 · . . . · R̂i,ri · [J!, . . . , J!]) [J!] . . . [J!] [y!]    (ℓ−ri copies of J! in the bag, ri trailing bags [J!])
  →_g* y (R̂i,1 · . . . · R̂i,ri · [J!, . . . , J!]).

To sum up, the composition of D(·) and E(·) gives E(D(Σi=1..m Mi)) →_g* Σi=1..m y Qi,
where for every i ≤ m, Qi is a bag of closed terms and r(yQi) = r(Mi). Notice
also that both contexts D(·) and E(·) are in Λk.
Every bag Qi can be decomposed into a bag Q̂i = [Li,1, . . . , Li,ki] of closed
linear onf and a bag Q̂̂i of closed exponentiated simple terms. Of course one
or both between Q̂i and Q̂̂i can be equal to 1. For every j ≤ k1 consider the
following family Fj, possibly empty, of pairs of indexes:

Fj := {(i, h) s.t. i ≤ m, h ≤ ki and r(L1,j) ≤ r(Li,h)}

Remark that Σ(i,h)∈Fj r(Li,h) < Σi r(Mi), hence in case Fj is non-empty we
can apply the induction hypothesis on every Fj := Σ(i,h)∈Fj Li,h, choosing L1,j
as the onf of minimal query of resources. This yields a family of simple outer-contexts {Cj(·)}j≤k1 such that for every (i, h) ∈ Fj,

1. Cj(Li,h) →_g* nj,i,h I,
2. r(Li,h) ≠ r(L1,j) ⇒ nj,i,h = 0,
3. nj,1,j ≠ 0.

Actually, since every Li,h is closed one can suppose each Cj(·) to be just applicative, i.e. of the form (·)Pj,1 . . . Pj,pj. In the following, we will denote simply
by P⃗j the sequence of bags Pj,1 . . . Pj,pj. Let us define, with z a fresh variable:

H := λz.I[zP⃗1] . . . [zP⃗k1].

Notice that whenever k1 = 0 (i.e. Q1 contains no linear simple term), we do not
apply the induction hypothesis and we simply define H := λz.I, which is in Λk.
Notice that k1 is always equal to 0 in the case we started with a M in Λk.
Let us consider HQi for every i ≤ m. We claim that HQi →g∗ ni I, with n1 nonzero and ni = 0 whenever r(Qi) > r(Q1). Notice that z occurs in H linearly k1 times and never exponentially. This means that whenever ki > k1 we have HQi →g∗ 0, since H does not succeed in using every linear term in Qi.
g∗
Also in case ki < k1 , we have HQi → 0. Indeed, since r(Qi ) ≥ r(Q1 ), if ki <
k1 there should be a Li,h ∈ Qi such that for every L1,j ∈ Q1 , r(Li,h ) > r(L1,j ).
g∗
This means for every j ≤ k1 , (i, h) ∈ Fj and hence Li,h P~j → 0. Thus, denoting
by Qi /Li,h the bag obtained erasing Li,h in Qi ,
HQi →b ∑_{j=1}^{k1} (λz.I[z P~1] . . . [Li,h P~j] . . . [z P~k1]) Qi/Li,h →g∗ ∑_{j=1}^{k1} (λz.I[z P~1] . . . 0 . . . [z P~k1]) Qi/Li,h = 0.
Finally, if ki = k1, we have HQi →g∗ ∑_{σ∈Sk1} I[Li,σ(1) P~1] . . . [Li,σ(k1) P~k1]. For every permutation σ ∈ Sk1, if for a j ≤ k1 we have r(Li,σ(j)) ≠ r(L1,j), then there should be j′ ≤ k1 (possibly j′ = j) s.t. r(Li,σ(j′)) > r(L1,j′), since by hypothesis r(Mi) = ∑_{h=1}^{ki} r(Li,h) ≥ ∑_{h=1}^{k1} r(L1,h) and ki = k1. We deduce that Li,σ(j′) P~j′ reduces to 0, and so does I[Li,σ(1) P~1] . . . [Li,σ(k1) P~k1]. Otherwise, for every j ≤ k1, r(Li,σ(j)) = r(L1,j), and so Li,σ(j) P~j →g∗ nj,i,σ(j) I, for a number nj,i,σ(j). We deduce I[Li,σ(1) P~1] . . . [Li,σ(k1) P~k1] →g∗ (∏_{j=1}^{k1} nj,i,σ(j)) I. In particular, if σ is the identity and i = 1, then every nj,i,σ(j) is nonzero, and so is ∏_{j=1}^{k1} nj,i,σ(j). We conclude, for an integer ni:

∑_{σ∈Sk1} I[Li,σ(1) P~1] . . . [Li,σ(k1) P~k1] →g∗ ni I.

In particular, for i = 1, ni is not zero. Moreover, if ni is nonzero, then there is a permutation σ ∈ Sk1 such that ∏_{j=1}^{k1} nj,i,σ(j) is nonzero. This means that for every j ≤ k1, r(Li,σ(j)) = r(L1,j), and so r(Qi) = r(Q1).
Finally, we define the third context, with x a fresh variable:

F(·) := λy.(·) [λx.H[x!]],        (7)
x : σ ⊢m x : σ  (var)        ⊢m 1 : ω  (1)

Γ, x : σ1, . . . , x : σn ⊢m M : τ    x ∉ d(Γ)
─────────────────────────────────────────── (→In)
Γ ⊢m λx.M : σ1 ∧ . . . ∧ σn → τ

Γ ⊢m Ai : π
──────────── (⊕)
Γ ⊢m ∑i Ai : π

Γ ⊢m M : π → τ    ∆ ⊢m P : π
──────────────────────────── (→E)
Γ, ∆ ⊢m M P : τ

Γi ⊢m M : σi (for 1 ≤ i ≤ n)    ∆ ⊢m P : π
────────────────────────────────────────── (!n)
Γ1, . . . , Γn, ∆ ⊢m [M!]·P : σ1 ∧ . . . ∧ σn ∧ π

Γ ⊢m M : σ    ∆ ⊢m P : π
──────────────────────── (ℓ)
Γ, ∆ ⊢m [M]·P : σ ∧ π

Figure 4: The type assignment system ⊢m. The rules →In and !n are parametrized by a natural number n; their 0-ary versions →I0 and !0 yield ω → τ and π respectively.
and we conclude: F(yQi) →g∗ HQi →g∗ ni I, where ni is nonzero in case i = 1, and it is zero in case r(Qi) > r(Q1).
Notice that linear resources play a crucial role in the above proof. For an example, consider the must-outer normal form y[x] + y[x, Ω!]. The resource context satisfying Lemma 12 is C(·) := λyx.(·) [λk.I[k]]; indeed C(y[x]) + C(y[x, Ω!]) reduces to 2I. If you consider the λ-context Cλ(·) := λyx.(·) [(λk.I[k!])!], we have Cλ(y[x]) + Cλ(y[x, Ω!]) →g∗ 2I + λx.Ω.
Theorem 13. Let M be a term with resources. If M has a may-outer-normal form (resp. must-outer-normal form) then M is may-solvable (resp. must-solvable); moreover, if M ∈ Λk then M is may-solvable (resp. must-solvable) in Λk.

Proof. By applying Lemma 12 to the outer-normal forms in a may/must-outer-normal form of M.
4  Characterization of May Solvability
In this section we study the converse of Theorem 13 for the "may" semantics: the properties of may-outer normalizability and may-solvability are equivalent in Λr and in Λk. The key ingredient we use is an intersection type system ⊢m assigning types to all and only the expressions having a may-outer-normal form, thus giving a complete characterization of may-solvability from the syntactical, operational and logical points of view (Theorem 25).

The system ⊢m lacks idempotency (σ ∧ σ ≠ σ): in fact, we use the intersection as the logical counterpart of the multiset union. The system has some similarities with the one in [BCL99], which supplies a logical semantics for the language in [Bou93]. The main logical difference between the two systems is that the one in [BCL99] is affine and describes a lazy operational semantics. In the restricted setting of the λ-calculus, similar non-idempotent systems have been considered starting from [CDCV80], e.g. [Kfo00, WDMT02, NM04, dC09].
Definition 14. The set of types is the union of the set of linear types and that of intersection types, given by the following grammars:

σ, τ ::= a | π → σ        (linear types)
π, ζ ::= σ | ω | π ∧ ζ    (intersection types)

where a ranges over a countable set of atoms and ω is a constant. We consider types modulo the equivalence ∼ generated by the following rules:

π ∧ ζ ∼ ζ ∧ π,    π ∧ ω ∼ π,    π1 ∧ (π2 ∧ π3) ∼ (π1 ∧ π2) ∧ π3

i.e., ∼ states that ∧ defines a commutative monoid with ω as the neutral element. The last two rules allow us to consider n-ary intersections σ1 ∧ . . . ∧ σn, for any n ∈ Nat, ω being the 0-ary intersection.
A basis is a finite multiset of assignments of the shape x : σ, where x is a variable and σ is a linear type. Capital Greek letters Γ, ∆ range over bases. We denote by d(Γ) the set of variables occurring in Γ and by Γ, ∆ the multiset union of the bases Γ and ∆. A typing judgement is a sequent Γ ⊢m A : π. The ⊢m type assignment system derives typing judgements for Nat⟨Λ(b)⟩. Its rules are defined in Figure 4. Capital Greek letters Φ, Ψ range over derivations, Φ :: Γ ⊢m A : π denoting a derivation Φ with conclusion Γ ⊢m A : π.
Some comments are in order. The bases have a multiplicative behaviour, and there is neither weakening nor contraction, in either explicit or implicit form. In the rule !n the parameter n takes into account the number of times a reusable resource will be called, whereas the rule ℓ assigns just one type to the linear resource M. Duplication and erasure are handled at the level of types by the intersection. For example, in (λx.y)[M!], λx.y is typed by ω → σ (by the rules var and →I0), for some σ, and [M!] is typed by ω (by the rules 1 and !0); in (λx.y[x][x])[y!], λx.y[x][x] is typed by σ ∧ σ → τ, using the rule →I2, and [y!] is typed by σ ∧ σ, using the rule !2, for any σ and τ.

All other rules are almost standard.
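As an illustration, the second example above can be spelled out as a derivation (a sketch in our own notation; the choice of basis for y is ours). Writing Ξ for the derivation of x : σ ⊢m [x] : σ (a var rule and a 1 rule combined by an ℓ rule, using σ ∧ ω ∼ σ):

```latex
\begin{array}{llr}
(1) & y:\sigma\to\sigma\to\tau \;\vdash_m\; y : \sigma\to\sigma\to\tau & \mathsf{var}\\
(2) & y:\sigma\to\sigma\to\tau,\; x:\sigma \;\vdash_m\; y[x] : \sigma\to\tau & {\to}E\ \text{on}\ (1),\,\Xi\\
(3) & y:\sigma\to\sigma\to\tau,\; x:\sigma,\; x:\sigma \;\vdash_m\; y[x][x] : \tau & {\to}E\ \text{on}\ (2),\,\Xi\\
(4) & y:\sigma\to\sigma\to\tau \;\vdash_m\; \lambda x.y[x][x] : \sigma\wedge\sigma\to\tau & {\to}I_2\ \text{on}\ (3)
\end{array}
```

Note how the two occurrences of x : σ in the basis of (3) are gathered into the non-idempotent intersection σ ∧ σ by the rule →I2.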
Definition 15. The measure of a derivation Φ is the number m(Φ) of axioms (i.e. var and 1 rules) in Φ. The measure m(A) of a sum of expressions A is

m(A) := inf{m(Φ) ; Φ :: Γ ⊢m A : π, for Γ a basis and π a type}.
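For instance (a worked example of our own), in the derivation of (λx.y)[M!] sketched in the comments above — which uses the rules var, →I0, 1, !0 and →E — the only axioms are the var rule typing y and the 1 rule closing the bag:

```latex
m(\Phi) \;=\; \underbrace{1}_{\mathsf{var}\ (y)} \;+\; \underbrace{1}_{\mathsf{1}} \;=\; 2,
\qquad\text{hence}\qquad
m\big((\lambda x.y)[M^!]\big) \;\le\; 2 .
```

This derivation is also minimal, since it types the reusable resource M zero times; typing M n > 0 times can only add axioms.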
An easy inspection of the rules in Figure 4 will convince the reader that the shape of a type derivation is strictly related to that of the expression it types, as formally stated by the following lemma.
Lemma 16 (Generation). 1. Π :: Γ ⊢m x : π implies π = σ and Γ = x : σ, and Π is an instance of the axiom var.

2. Π :: Γ ⊢m 1 : π implies Π is an instance of the axiom 1, hence Γ = ∅ and π = ω.

3. Π :: Γ ⊢m λx.M : π implies π = (σ1 ∧ . . . ∧ σn) → τ and Π is an application of the rule →In with premise Π′ :: Γ, x : σ1, . . . , x : σn ⊢m M : τ; clearly, m(Π) = m(Π′).
4. Π :: Γ ⊢m M P : π implies π = σ, Γ = Γ1, Γ2 and Π is an application of the rule →E with premises Π1 :: Γ1 ⊢m M : π′ → σ and Π2 :: Γ2 ⊢m P : π′ for an intersection type π′; also, m(Π) = m(Π1) + m(Π2).

5. Π :: Γ ⊢m [M]·P : π implies there are two derivations Π1 :: Γ1 ⊢m M : σ and Π2 :: Γ2 ⊢m P : π′ such that π = σ ∧ π′, Γ = Γ1, Γ2 and m(Π) = m(Π1) + m(Π2).

6. Π :: Γ ⊢m [M!]·P : π implies there are a number n ≥ 0 of derivations Πi :: Γi ⊢m M : σi, and a derivation Πn+1 :: Γn+1 ⊢m P : π′ such that π = σ1 ∧ . . . ∧ σn ∧ π′, Γ = Γ1, . . . , Γn+1 and m(Π) = ∑_{i=1}^{n+1} m(Πi).

7. Π :: Γ ⊢m A : π implies there is an expression A in the sum A and a derivation Π′ :: Γ ⊢m A : π such that m(Π) = m(Π′).
Notice in the above lemma that the cases of the bag decomposition (items 5, 6) refer to proofs that can be different from the subproofs of Π. This is due to the fact that bags can be broken up in various ways: different decompositions yield typing derivations having some swapped rules ℓ and !n. Also, notice that terms and sums of terms are typed by linear types, bags and sums of bags by intersection types.

The following lemmata (Lemmas 17–21) state, basically, that the typing system behaves well with respect to the substitutions of the resource calculus. They are needed to prove that typing judgements are invariant under baby, middle and giant-step reductions (Proposition 22).
Lemma 17 (Linear Substitution). Let Φ :: Γ, x : τ ⊢m A : π and Ψ :: ∆ ⊢m N : τ. There is a derivation L(Φ, Ψ) :: Γ, ∆ ⊢m A⟨N/x⟩ : π with m(L(Φ, Ψ)) = m(Φ) + m(Ψ) − 1.
Proof. By induction on the structure of the derivation Φ. We treat in detail only the case of a terminal !n rule. The base of induction is trivial: var is immediate, while 1 does not meet the condition of having x : τ in the basis. The cases →In, ⊕ are immediate consequences of the induction hypothesis, and the cases →E, ℓ are easier variants of the !n case. So let us assume
Φ :=
      Φi :: Γi ⊢m M : σi  (for 1 ≤ i ≤ n)        ΦP :: ΓP ⊢m P : ζ
    ────────────────────────────────────────────────────────────── (!n)
      Γ1, . . . , Γn, ΓP ⊢m [M!]·P : σ1 ∧ . . . ∧ σn ∧ ζ
We suppose the underlined hypothesis x : τ is in Γ1, i.e. Γ1 = Γ′1, x : τ (the case where x : τ is in another Γi or in ΓP being similar). Notice that supposing x : τ in Γ1 entails n ≥ 1. By induction there is L(Φ1, Ψ) :: Γ′1, ∆ ⊢m M⟨N/x⟩ : σ1 s.t. m(L(Φ1, Ψ)) = m(Φ1) + m(Ψ) − 1. Let M⟨N/x⟩ = ∑_{j=1}^{k} Lj. By Generation (Lemma 16, item 7) M⟨N/x⟩ must have a term (i.e. k > 0), say L1, and a proof Φ′1 :: Γ′1, ∆ ⊢m L1 : σ1 s.t. m(Φ′1) = m(L(Φ1, Ψ)). We define
L(Φ, Ψ) :=
                                    Φi :: Γi ⊢m M : σi  (for 2 ≤ i ≤ n)        ΦP :: ΓP ⊢m P : ζ
                                  ─────────────────────────────────────────────────────────────── (!n−1)
      Φ′1 :: Γ′1, ∆ ⊢m L1 : σ1      Γ2, . . . , Γn, ΓP ⊢m [M!]·P : σ2 ∧ . . . ∧ σn ∧ ζ
    ──────────────────────────────────────────────────────────────────────────────────────────── (ℓ)
      Γ′1, ∆, Γ2, . . . , Γn, ΓP ⊢m [L1, M!]·P : σ1 ∧ . . . ∧ σn ∧ ζ
    ──────────────────────────────────────────────────────────────────────────────────────────── (⊕)
      Γ′1, ∆, Γ2, . . . , Γn, ΓP ⊢m ∑_{j=1}^{k} [Lj, M!]·P : σ1 ∧ . . . ∧ σn ∧ ζ

where by definition [M⟨N/x⟩, M!]·P = ∑_{j=1}^{k} [Lj, M!]·P, and if k = 1 the last ⊕ rule is omitted. Moreover, m(L(Φ, Ψ)) = m(Φ′1) + m(ΦP) + ∑_{i=2}^{n} m(Φi) = m(L(Φ1, Ψ)) + m(ΦP) + ∑_{i=2}^{n} m(Φi) = m(Φ1) + m(Ψ) − 1 + m(ΦP) + ∑_{i=2}^{n} m(Φi) = m(Φ) + m(Ψ) − 1.
Lemma 18 (Linear Expansion). Let Φ :: Γ ⊢m A⟨N/x⟩ : π. There are a linear type τ and derivations Φ1 :: Γ1, x : τ ⊢m A : π and Φ2 :: Γ2 ⊢m N : τ with Γ = Γ1, Γ2.
Proof. By structural induction on the sum A. We detail the case A = [M!]·P, the other cases being easy variants. In this case, A⟨N/x⟩ = [M⟨N/x⟩, M!]·P + [M!]·P⟨N/x⟩, so Φ types only one term of the sum A⟨N/x⟩ through a ⊕ rule. Let us suppose this term is in [M⟨N/x⟩, M!]·P (the case where it is in [M!]·P⟨N/x⟩ being easier), so being of the form [M′, M!]·P, with M⟨N/x⟩ = M′ + 𝓜. By Generation (Lemma 16, items 5 and 6) there is a derivation Φ¹ :: Γ1 ⊢m M′ : σ and a derivation Φ² :: Γ2 ⊢m [M!]·P : π′ s.t. Γ = Γ1, Γ2, π = σ ∧ π′ and Φ² ends in a !n rule with n premises typing M and one premise typing P. Possibly applying one ⊕ rule to Φ¹ we get a derivation of Γ1 ⊢m M⟨N/x⟩ : σ, hence by induction hypothesis we have Φ¹1 :: Γ¹1, x : τ ⊢m M : σ and Φ¹2 :: Γ¹2 ⊢m N : τ. Then we define Φ1 :: Γ1, x : τ ⊢m [M!]·P : π as a !n+1 rule with premise Φ¹1 plus the premises of the !n rule ending Φ², and Φ2 as Φ¹2.
Lemma 19 (Partial Substitution). Let m ≥ 0, Φ :: Γ, x : σ1, . . . , x : σm ⊢m A : π and, for all i ≤ m, Ψi :: ∆i ⊢m N : σi with ∆ = ∆1, . . . , ∆m. There is P(Φ, Ψi≤m) :: Γ, ∆ ⊢m A{(N + x)/x} : π with m(P(Φ, Ψi≤m)) = m(Φ) − m + ∑_{i=1}^{m} m(Ψi).
Proof. Like in the proof of Linear Substitution (Lemma 17), we proceed by induction on the structure of Φ. In the var case, we have A = y and m = 0 or 1. If y ≠ x, then m = 0 and we set P(Φ, Ψi≤m) = Φ. Otherwise y = x, so A{N + x/x} = N + x. Hence we define P(Φ, Ψi≤m) by adding one ⊕ rule to Φ or to Ψ1, depending on whether m = 0 or m = 1. In both cases, we easily deduce m(P(Φ, Ψi≤m)) = m(Φ) − m + ∑_{i=1}^{m} m(Ψi). The case where 1 is the last rule is immediate.
As for the induction step, we detail only the case of a terminal !n rule, the other cases being immediate or easier variants. So let

Φ :=
      Φj :: Γj, Γxj ⊢m M : τj  (for 1 ≤ j ≤ n)        Φn+1 :: Γn+1, Γxn+1 ⊢m P : ζ
    ──────────────────────────────────────────────────────────────────────────── (!n)
      Γ, x : σ1, . . . , x : σm ⊢m [M!]·P : τ1 ∧ . . . ∧ τn ∧ ζ

where Γ = Γ1, . . . , Γn+1 and x : σ1, . . . , x : σm = Γx1, . . . , Γxn+1.⁴ Notice m(Φ) = ∑_{j=1}^{n+1} m(Φj). For every j ≤ n + 1, let Iʲ be the set of i ≤ m s.t. x : σi is in Γxj, mj being the cardinality of Iʲ, possibly 0. Notice m = ∑_{j=1}^{n+1} mj. Let ∆Iʲ be the multiset union of the ∆i bases with i ∈ Iʲ, i.e. ∆Iʲ = ⊎_{i∈Iʲ} ∆i. We apply the induction hypothesis to each pair Φj and Ψi∈Iʲ, getting a derivation P(Φj, Ψi∈Iʲ) :: Γj, ∆Iʲ ⊢m M{N + x/x} : τj for every j ≤ n, and P(Φn+1, ΨIⁿ⁺¹) :: Γn+1, ∆Iⁿ⁺¹ ⊢m P{N + x/x} : ζ, such that m(P(Φj, Ψi∈Iʲ)) = m(Φj) − mj + ∑_{i∈Iʲ} m(Ψi) for every j ≤ n + 1. Moreover, in case mj = 0 we have m(P(Φj, Ψi∈Iʲ)) = m(Φj).

⁴ Be careful not to confuse m (the number of underlined occurrences of x) with n (the number of premises typing M in the !n rule).

As always, M{N + x/x} (resp. P{N + x/x}) is in general a sum ∑_{h=1}^{k} Mh (resp. P). Let us suppose k ≥ 2, the case k = 0 not holding since M{N + x/x} is typed, and the case k = 1 being immediate. Applying Generation (Lemma 16) to each P(Φj, Ψi∈Iʲ) we obtain a function f : {1, . . . , n} → {1, . . . , k}, a term P′ in the sum P, and a family of derivations Φ′j :: Γj, ∆Iʲ ⊢m Mf(j) : τj for j ≤ n, and Φ′n+1 :: Γn+1, ∆Iⁿ⁺¹ ⊢m P′ : ζ, s.t. m(P(Φj, Ψi∈Iʲ)) = m(Φ′j) for j ≤ n + 1. For every h ≤ k, let Jh = f⁻¹(h) and lh be the cardinality of Jh; for 0 ≤ h < k, let π0 = ζ, πh+1 = πh ∧ ⋀_{j∈f⁻¹(h+1)} τj, Γ0 = Γn+1, Γh+1 = Γh, Γf⁻¹(h+1), and ∆0 = ∆Iⁿ⁺¹, ∆h+1 = ∆h, ∆j∈f⁻¹(h+1), where, as before, Γf⁻¹(h+1) (resp. ∆j∈f⁻¹(h+1)) denotes the multiset union of the Γj (resp. ∆i) bases with j ∈ f⁻¹(h + 1) (resp. i ∈ ⋃_{j∈f⁻¹(h+1)} Iʲ). Recalling πk = τ1 ∧ . . . ∧ τn ∧ ζ = π, we have P(Φ, Ψi≤m) :=

      Φ′j  (for j ∈ J1)   · · ·   Φ′j  (for j ∈ Jk)        Φ′n+1 :: Γn+1, ∆Iⁿ⁺¹ ⊢m P′ : ζ
    ─────────────────────────────────────────────────────────────────────────────────────
      Γ1, ∆1 ⊢m [M1^{!l1}]·P′ : π1
        ⋮
      Γk−1, ∆k−1 ⊢m [M1^{!l1}, . . . , Mk−1^{!lk−1}]·P′ : πk−1
    ─────────────────────────────────────────────────────────────────────────────────────
      Γ, ∆ ⊢m [(M{N + x/x})!]·P′ : τ1 ∧ . . . ∧ τn ∧ ζ
    ───────────────────────────────────────────────────────────────────────────────────── (⊕)
      Γ, ∆ ⊢m [(M{N + x/x})!]·P : τ1 ∧ . . . ∧ τn ∧ ζ

where the stack of ! rules builds, for each h ≤ k, the bag [M1^{!l1}, . . . , Mh^{!lh}]·P′, a term of the sum [(M{N + x/x})!]·P′.

We have m(P(Φ, Ψi≤m)) = ∑_{j=1}^{n+1} m(Φ′j) = ∑_{j=1}^{n+1} m(P(Φj, Ψi∈Iʲ)) = ∑_{j=1}^{n+1} m(Φj) + ∑_{j=1}^{n+1} ∑_{i∈Iʲ} m(Ψi) − ∑_{j=1}^{n+1} mj = m(Φ) − m + ∑_{i=1}^{m} m(Ψi).
The next lemmata have proofs similar to those of the previous Lemmas 17–19 (by induction on the structure of A or Φ). We omit their proofs.
Lemma 20 (Partial Expansion). Let Φ :: Γ ⊢m A{N + x/x} : π. Then there are a number m ≥ 0, linear types τ1, . . . , τm and derivations Φ1 :: Γ1, x : τ1, . . . , x : τm ⊢m A : π and Ψi :: ∆i ⊢m N : τi for i ≤ m, with Γ = Γ1, ∆1, . . . , ∆m.
Lemma 21. Let x ∉ d(Γ); then for every Φ :: Γ ⊢m A : π there is Ψ :: Γ ⊢m A{0/x} : π with m(Φ) = m(Ψ), and vice versa.
The following proposition states that the type system ⊢m enjoys both subject reduction and subject expansion.

Proposition 22 (Invariance of ⊢m typings). Let • ∈ {b, m, g} and M →• 𝓜; then M and 𝓜 share the same judgements, i.e. Γ ⊢m M : τ iff Γ ⊢m 𝓜 : τ. Also, M →og 𝓜 entails m(M) > m(𝓜).
Proof. The proof is by induction on the context enclosing the redex reduced in M →• 𝓜. All induction steps follow by induction, taking into account that, whenever the redex is inside a reusable resource N! (so the reduction is not outer), the measure m may not decrease, since (the bag containing) N! may be typed by ω.

The base of induction is when M is the redex fired by the reduction M →• 𝓜. One can consider only the baby-step cases; the giant and the middle ones will
      Ψ1 :: Γ1, x : τ, ~x : ~τ ⊢m L : σ             Ψ2 :: Γ2 ⊢m N : τ    Ψ3 :: Γ3 ⊢m P : ζ
    ───────────────────────────────── (→In)       ─────────────────────────────────────── (ℓ)
      Γ1 ⊢m λx.L : τ ∧ ζ → σ                        Γ2, Γ3 ⊢m [N]·P : τ ∧ ζ
    ──────────────────────────────────────────────────────────────────────────────────── (→E)
      Γ ⊢m (λx.L)[N]·P : σ

(a) definition of Ψ

      Φ1 :: Γ1, Γ2, ~x : ~τ ⊢m L : σ
    ───────────────────────────────── (→In)
      Γ1, Γ2 ⊢m λx.L : ζ → σ                        Φ3 :: Γ3 ⊢m P : ζ
    ──────────────────────────────────────────────────────────────── (→E)
      Γ ⊢m (λx.L)P : σ
    ───────────────────────────────── (⊕)
      Γ ⊢m (λx.L⟨N/x⟩)P : σ

(b) definition of Φ

Figure 5: Definition of the derivations Ψ and Φ used in the proof of Proposition 22 (case 2).
follow since they correspond to a sequence of baby-steps. In particular, one proves that the measure m is monotone strictly decreasing on every baby-step but the one choosing a bang element from the bag, in which case m is monotone decreasing (case 3 below). Then m strictly decreases on giant-steps since they correspond to sequences of baby-steps always ending in an empty-bag baby-step.

Case 1 (empty bag). Let M = (λx.L)1 →b L{0/x} = 𝓜, and suppose Ψ :: Γ ⊢m (λx.L)1 : σ. By Generation (Lemma 16) Ψ ends in a →E rule with premises Ψ′ :: Γ ⊢m λx.L : ω → σ and an instance of the 1 rule, and m(Ψ) = m(Ψ′) + 1. Again by Generation, Ψ′ ends in a →I0 rule with premise Ψ″ :: Γ ⊢m L : σ and x ∉ d(Γ), m(Ψ″) = m(Ψ′). By Lemma 21 there is a derivation Φ :: Γ ⊢m L{0/x} : σ with m(Φ) = m(Ψ″) = m(Ψ) − 1. We conclude that the judgements of (λx.L)1 are also judgements of L{0/x} and m((λx.L)1) ≥ m(L{0/x}) + 1.

Conversely, suppose Φ :: Γ ⊢m L{0/x} : σ. By definition of substitution, L{0/x} is either 0 or a term with no free occurrence of x. The first case cannot hold, L{0/x} being typable, hence L{0/x} is a simple term having no free occurrence of x, so x ∉ d(Γ). Then we can apply Lemma 21, from which we deduce Ψ :: Γ ⊢m L : σ. Adding to Ψ one →I0 rule, and then composing it with one 1 rule through a →E, yields a derivation Ψ′ :: Γ ⊢m (λx.L)1 : σ. We thus conclude that (λx.L)1 and L{0/x} share the same types and m((λx.L)1) > m(L{0/x}).
Case 2 (bag with linear resource). Let M = (λx.L)[N]·P →b (λx.L⟨N/x⟩)P = 𝓜 and suppose Ψ :: Γ ⊢m (λx.L)[N]·P : σ. By Generation (Lemma 16) we can assume Ψ to be as in Figure 5(a), where by ~x : ~τ we mean x : τ1, . . . , x : τm, and ζ is τ1 ∧ . . . ∧ τm (in case m = 0, ζ = ω). By Linear Substitution (Lemma 17) we get L(Ψ1, Ψ2) :: Γ1, ~x : ~τ, Γ2 ⊢m L⟨N/x⟩ : σ, with m(L(Ψ1, Ψ2)) = m(Ψ1) + m(Ψ2) − 1. As usual, we should notice that L⟨N/x⟩ might not be a simple term but a sum: in that case L(Ψ1, Ψ2) ends in a ⊕ rule whose premise is a derivation Φ1 :: Γ1, ~x : ~τ, Γ2 ⊢m L : σ with L a simple term in the sum L⟨N/x⟩ and m(Φ1) = m(L(Ψ1, Ψ2)). Then we define Φ as in Figure 5(b), with Φ3 = Ψ3.

We remark that m(Φ) = m(Φ1) + m(Φ3) = m(L(Ψ1, Ψ2)) + m(Ψ3) = m(Ψ) − 1. We conclude that every type of (λx.L)[N]·P is also a type of (λx.L⟨N/x⟩)P and m((λx.L)[N]·P) ≥ m((λx.L⟨N/x⟩)P) + 1.

Conversely, assume Φ :: Γ ⊢m (λx.L⟨N/x⟩)P : σ. We can suppose Φ as in Figure 5(b), where as above ~x : ~τ denotes the basis x : τ1, . . . , x : τm with ζ = τ1 ∧ . . . ∧ τm, and in case L⟨N/x⟩ is a simple term the terminal ⊕ rule is omitted. By possibly adding one ⊕ rule to Φ1 one gets a derivation of Γ1, Γ2, ~x : ~τ ⊢m L⟨N/x⟩ : σ. Applying Linear Expansion (Lemma 18) we have Ψ1 :: Γ1, ~x : ~τ, x : τ ⊢m L : σ and Ψ2 :: Γ2 ⊢m N : τ (where we recall x ∉ FV(N)). Then we set Ψ as in Figure 5(a). This proves that the types of (λx.L⟨N/x⟩)P are also types of (λx.L)[N]·P.
Case 3 (bag with exponential resource). Suppose M = (λx.L)[N!]·P →b (λx.L{N + x/x})P = 𝓜 and assume Ψ :: Γ ⊢m (λx.L)[N!]·P : σ. As in the previous case, Generation (Lemma 16) allows us to suppose

Ψ =
      Ψ1 :: Γ1, x : τ1, . . . , x : τn, ~x : ~σ ⊢m L : σ        Ψⁱ :: Γⁱ2 ⊢m N : τi (for i ≤ n)    Ψ3 :: Γ3 ⊢m P : ζ
    ─────────────────────────────────────────── (→In)        ──────────────────────────────────────────────── (!n)
      Γ1 ⊢m λx.L : ⋀_{i=1}^{n} τi ∧ ζ → σ                      Γ2, Γ3 ⊢m [N!]·P : ⋀_{i=1}^{n} τi ∧ ζ
    ──────────────────────────────────────────────────────────────────────────────────────────────────── (→E)
      Γ ⊢m (λx.L)[N!]·P : σ

where Γ2 = Γ¹2, . . . , Γⁿ2, and as usual ~x : ~σ denotes the basis x : σ1, . . . , x : σm for ζ = σ1 ∧ . . . ∧ σm. By Partial Substitution (Lemma 19) we get P(Ψ1, Ψi≤n) :: Γ1, ~x : ~σ, Γ2 ⊢m L{N + x/x} : σ, with m(P(Ψ1, Ψi≤n)) = m(Ψ1) − n + ∑_{i=1}^{n} m(Ψⁱ). Let us suppose L{N + x/x} is a sum of terms (the case where it is a single term being an easier variant): under this hypothesis Generation (Lemma 16) states that P(Ψ1, Ψi≤n) ends in a ⊕ rule whose premise is a derivation Ψ̄ :: Γ1, ~x : ~σ, Γ2 ⊢m L̄ : σ with L̄ a simple term in the sum L{N + x/x} and m(Ψ̄) = m(P(Ψ1, Ψi≤n)). Then we define

Φ :=
      Ψ̄ :: Γ1, ~x : ~σ, Γ2 ⊢m L̄ : σ
    ───────────────────────────────── (→In)
      Γ1, Γ2 ⊢m λx.L̄ : ζ → σ                        Ψ3 :: Γ3 ⊢m P : ζ
    ──────────────────────────────────────────────────────────────── (→E)
      Γ ⊢m (λx.L̄)P : σ
    ───────────────────────────────── (⊕)
      Γ ⊢m (λx.L{N + x/x})P : σ

We remark that m(Φ) = m(Ψ̄) + m(Ψ3) = m(P(Ψ1, Ψi≤n)) + m(Ψ3) = m(Ψ) − n. In particular, notice that whenever n = 0 the measure is constant.

For the converse we use a variant of case 2 above, using Partial Expansion (Lemma 20) instead of Linear Expansion. We conclude that (λx.L)[N!]·P and (λx.L{N + x/x})P share the same judgements and m((λx.L)[N!]·P) ≥ m((λx.L{N + x/x})P).
We prove the equivalence among may-solvability, typability in ⊢m and may-outer normalizability (Theorem 25). As a byproduct we also achieve an operational characterization through the notion of outer reduction (Definition 7). In various λ-calculi the implication typable ⇒ head-normalizable is often proven using suitable notions of computability or saturated sets (e.g. [Kri93], and our proof of Theorem 42), whereas the implication solvable ⇒ head-normalizable is argued through a standardization theorem (e.g. [Bar84]). Our proof is instead based on a different method: both implications are easy consequences of Lemma 23, which is argued by induction on the measure on type derivations given in Definition 15. In the λ-calculus setting, a somewhat similar approach can be found in [Val01]. More generally, the idea of measuring quantitative properties of terms using non-idempotent intersection types can be found also in [dC09], springing out from the analysis of the β-reduction via the notion of Taylor expansion [ER06, ER08]. As an aside let us point out also [dCPTdF08], where similar methods have been used to study (outer-)normalizability of linear logic proof-nets.
Lemma 23. Let M be a resource term and C(·) be a simple outer-context. If C(M) is typable, then M is reducible to a monf by outer reduction.

Proof. By induction on m(C(M)), which is a finite number, C(M) being typable. If M is a monf we are done. Otherwise, it has an outer redex, so let M →og 𝓜. Since C(·) is an outer-context, every outer redex of M is outer in C(M), hence we have C(M) →og C(𝓜). By Proposition 22, m(C(M)) > m(C(𝓜)). Let 𝓜 = M′ + 𝓜″ be such that m(C(M′)) = m(C(𝓜)): the fact that M′ exists is due to C(·) being simple, as every term in the sum C(𝓜) is obtained by plugging a term of 𝓜 into C(·). By induction hypothesis M′ is outer reducible to a monf L. We conclude by context closure: M →og M′ + 𝓜″ →og∗ L + 𝓜″.
Lemma 24. Every may-outer-normal form is typable in ⊢m.

Proof. By induction on the structure of a monf M. The only interesting case is when M is of the form xP1 . . . Pp with each Pi of the form [Mi,1, . . . , Mi,mi]·[N!i,1, . . . , N!i,ni], with mi, ni ≥ 0 and, for each j ≤ mi, Mi,j an onf. By induction hypothesis we have derivations Ψi,j :: Γi,j ⊢m Mi,j : τi,j for each i ≤ p, j ≤ mi, hence we can construct a derivation Φi :: Γi,1, . . . , Γi,mi ⊢m Pi : τi,1 ∧ . . . ∧ τi,mi by applying a tree of mi rules ℓ having as premises the Ψi,1, . . . , Ψi,mi respectively and, as the rightmost leaf, a derivation of ⊢m [N!i,1, . . . , N!i,ni] : ω made of ni rules !0 and one rule 1. Similarly we get a derivation typing xP1 . . . Pp by applying a tree of p rules →E having as premises the Φi's and, as the leftmost leaf, a var rule typing x with (⋀_{j≤m1} τ1,j) → . . . → (⋀_{j≤mp} τp,j) → σ, for a linear type σ.
Theorem 25. Given a resource term M ∈ Λr, the following are equivalent:

1. M is may-outer normalizable,
2. M is typable in ⊢m,
3. M is reducible to a monf by outer reduction,
4. M is may-solvable.

Moreover, if M ∈ Λk then 4 can be replaced by

4′. M is may-solvable in Λk.

Proof. 1 ⇒ 2: by Proposition 22 and Lemma 24. 2 ⇒ 3: by Lemma 23, merely taking the hole as the simple head-context. 3 ⇒ 4 (resp. 3 ⇒ 4′): by Theorem 13 and context closure. 4 ⇒ 1: if there is a simple head-context C(·) s.t. C(M) has a monf, then by the already proven implication 1 ⇒ 2, C(M) is typable, and we conclude by Lemma 23.
The implication 1 ⇒ 3 can also be argued as a corollary of the standardization proven in [PT09]. However, our proof uses the type assignment system,
namely Lemma 23, so it adopts a different approach with respect to the techniques in [PT09].
We can now justify Examples 1–4 of Section 3. In fact Ω, having no monf, is may (hence must) unsolvable.
5  The problem of Must Solvability
The characterization of must solvability in Λr is a tough problem. In Λr we have the empty sum (i.e. 0) and diverging terms (like Ω). They are both considered unsolvable since they do not communicate with the environment, but there is a crucial difference between them: 0 is the neutral element of the sum, hence it disappears when added to a solvable term, while Ω does not. This difference is harmless when considering may-solvability but becomes relevant for must-solvability. Consider the terms:

M1 = λx.I[Ω!, x!] →g λx.Ω + λx.x
M2 = λx.I[(Ω[x])!, x!] →g λx.Ω[x] + λx.x

We claim that M1 is must-solvable whilst M2 is not. In fact, both λx.Ω and λx.Ω[x] have no monf, so they are not may-solvable (Theorem 25). Hence the only possibility for obtaining a sum of identities from M1 (resp. M2) is to find a context reducing λx.Ω (resp. λx.Ω[x]) to 0 and keeping λx.x different from 0. Such a context exists for M1, e.g. (·)[I], but not for M2, both λx.Ω[x] and λx.x using x linearly.
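The claim for M1 can be spelled out as follows (a sketch of our own, assuming the usual distribution of sums over contexts). In the context (·)[I] the linear resource I must be used exactly once; since x does not occur in Ω, the term (λx.Ω)[I] reduces to the empty sum:

```latex
(\lambda x.\Omega + \lambda x.x)[I]
  \;=\; (\lambda x.\Omega)[I] + (\lambda x.x)[I]
  \;\xrightarrow{\,g*\,}\; 0 + I
  \;=\; I
```

so the divergent summand disappears and M1 is must-solvable. For M2 no such context exists: any bag letting λx.x produce I also feeds exactly one resource to λx.Ω[x], keeping the divergent summand alive.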
This example shows, first, that the implication must-solvable ⇒ must-outer normalizable, i.e. the converse of Theorem 13 for the must case, does not hold in Λr, M1 being a counter-example; second, that in order to decide whether a sum is must-solvable, one should know if there are contexts reducing the diverging terms of the sum to 0 while keeping at least one of the others different from 0. With respect to this point the situation can be even worse; consider:

M3 = λx.I[(Ω[x])!, y!] →g λx.Ω[x] + λx.y

The term M3 is must-solvable via the context λy.(·)1. This means that the term λx.Ω[x], occurring in the sums that are reducts of both M2 and M3, has a different behaviour depending on the shape of the other terms of the sum. In particular, the property of being must-solvable does not commute with the sum.

Such kinds of problems are too difficult for us at present, so we study must-solvability in the simpler setting of Λk, the purely non-deterministic fragment of Λr. In this fragment we have only non-empty bags of reusable resources and non-empty sums, so no term reduces to 0, and the problem of must-solvability becomes smoother than in Λr.
5.1  Characterization of Must Solvability in Λk
From now on we consider Λk, the purely non-deterministic fragment of Λr defined at the end of Section 2.

A type assignment system characterizing must-solvability logically must assign types to all the elements of a sum, so as to type the sum itself (by definition of must-solvability); moreover, in order to type an application (λx.M)P, the types of all elements of P need to match the types of all the free occurrences of x in M. Namely, the type assignment needs to have a correct behaviour with respect to the λ-calculus, which is a proper fragment of Λk: a term of Λ needs to be typable in it if and only if it has a head-normal form, in the standard sense.

Therefore, we use a well-known λ-model, namely the D∞ of Scott [Sco76], described in logical form in [PRDR04]. In fact, the underlying structure is a
π ≤ ω        π ≤ π ∧ π        ω ≤ ω → ω        π1 ∧ π2 ≤ πi (i = 1, 2)

π → ω ≤ ω → ω        ω → φ ≤ φ        φ ≤ ω → φ        (π → ρ) ∧ (π → ζ) ≤ π → (ρ ∧ ζ)

π ≤ π′    ρ ≤ ρ′                 π′ ≤ π    ρ ≤ ρ′
─────────────────               ─────────────────
π ∧ ρ ≤ π′ ∧ ρ′                 π → ρ ≤ π′ → ρ′

(a) rules defining the preorder ≤ over types

Γ, x : π ⊢M x : π  (var)        Γ ⊢M M : ω  (ω)

Γ, x : π ⊢M M : ρ
──────────────────── (→I)
Γ ⊢M λx.M : π → ρ

Γ ⊢M M : π
────────────── (!)
Γ ⊢M [M!] : π

Γ ⊢M M : π    Γ ⊢M N : π
──────────────────────── (mix+)
Γ ⊢M M + N : π

Γ ⊢M M : π → τ    Γ ⊢M P : π
──────────────────────────── (→E)
Γ ⊢M M P : τ

Γ ⊢M M : π    Γ ⊢M P : π
──────────────────────── (mix·)
Γ ⊢M [M!]·P : π

Γ ⊢M M : π    Γ ⊢M M : ρ
──────────────────────── (∧I)
Γ ⊢M M : π ∧ ρ

Γ ⊢M M : ρ    ρ ≤ π
─────────────────── (≤)
Γ ⊢M M : π

(b) deductive rules of ⊢M

Figure 6: The type assignment system ⊢M.
complete lattice with infinite descending chains, so every pair of compact points has a least upper bound (lub) greater than ⊥ and a greatest lower bound (glb) less than ⊤. From a logical point of view, since types are names for compact elements, this means that every two typable terms have a non-trivial type in common. Since D∞ is a sensible model, giving a non-trivial interpretation to all and only the solvable terms, this implies that every pair of solvable terms has at least one type in common.

The type assignment system ⊢M characterizing must-solvability is exactly the logical description of D∞, plus two new rules for typing bags and sums. Note that, thanks to the previously described property, these rules are very simple, since they ask that all the elements of a bag or a sum have the same type.
Definition 26. The set of types for ⊢M is given by the following grammar:

π, ρ, ζ ::= φ | ω | π → ρ | π ∧ ρ

where φ and ω are two different constants. We consider the smallest pre-order relation ≤ over types satisfying the rules in Figure 6(a). The relation ≈ is the equivalence generated by ≤. The symbol = denotes the identity relation on types.

A basis is a function from variables to types, with finite domain. A typing judgement is a sequent Γ ⊢M A : π, where Γ is a basis. The rules of the type assignment system ⊢M are defined in Figure 6(b).

We extend to ⊢M the notations used in the previous section for ⊢m.
The system ⊢M is very different from ⊢m. Essentially, ⊢m has a quantitative aspect missing in ⊢M. This is reflected in the fact that the intersection is idempotent in the latter and not in the former. Indeed, in ⊢m the non-idempotency of the intersection is crucial to account for the consumption of linear resources, while in ⊢M we do not need to count the number of resources in a bag, since Λk allows only reusable resources. Moreover, in order to type a sum, all elements of the sum need to be typed with the same type. The constant type ω represents the empty bag in ⊢m, while in ⊢M it is a trivial type, which can be assigned to all terms; indeed, the empty bag does not belong to Λk. Another difference is that in ⊢M we allow intersections on the right side of an arrow, but this is done to make the types match exactly the compact elements of D∞. Since π1 ∧ π2 ≤ πi, the ≤ rule entails the ∧-elimination rule, which we have therefore omitted from the definition of ⊢M:

Γ ⊢M M : π1 ∧ π2
───────────────── (∧E)
Γ ⊢M M : πi

Also the following rules are admissible:

Γ ⊢M M : ρ    x ∉ d(Γ)                Γ, x : ζ ⊢M M : ρ    π ≤ ζ
────────────────────── (weak)         ───────────────────────── (≤L)
Γ, x : π ⊢M M : ρ                     Γ, x : π ⊢M M : ρ
The following lemma states well-known properties of ≈ that we will use in the sequel. We refer to [RDRP04] (chapter 11.1.1, p. 132) for its proof.

Lemma 27 (Properties of ≤, [RDRP04]). The constants ω and φ belong to the ≈-classes that are, respectively, the ≤-maximum and the ≤-minimum. Indeed such classes can be characterized as follows:

φ ≉ ω
π ∧ ρ ≈ ω iff π ≈ ω and ρ ≈ ω,        π ∧ ρ ≈ φ iff π ≈ φ or ρ ≈ φ,
π → ρ ≈ ω iff ρ ≈ ω,                  π → ρ ≈ φ iff π ≈ ω and ρ ≈ φ.

For every type π there is a minimum n ∈ Nat, and types π11, . . . , π1n, π21, . . . , π2n, such that π ≈ ⋀_{i≤n} (π1i → π2i). Moreover, we have

⋀_{i≤n} (π1i → π2i) ≤ ⋀_{j≤m} (ρ1j → ρ2j) iff
ρ2j ≉ ω implies ∃J ⊆ {1, . . . , n} with ⋀_{i∈J} π1i ≥ ρ1j and ⋀_{i∈J} π2i ≤ ρ2j (1 ≤ j ≤ m)        (8)
Definition 28 extends to our setting the aggregation operation ⊔ of [BEM09]. That operation was introduced to interpret a parallel constructor corresponding to the + of Λk. Indeed, our setting allows us to prove that the ⊔ of [BEM09] is the sup of the preorder ≤ (Proposition 29).

Definition 28. We define the following operation ⊔ over types, by induction on their lengths:

φ ⊔ π := π
ω ⊔ π := ω
(π1 → π2) ⊔ (ρ1 → ρ2) := (π1 ∧ ρ1) → (π2 ⊔ ρ2)
(π1 → π2) ⊔ ρ := ρ ⊔ (π1 → π2)        if ρ is not an arrow
(π1 ∧ π2) ⊔ ρ := (π1 ⊔ ρ) ∧ (π2 ⊔ ρ)
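As a sanity check (a worked example of our own), one can compute ⊔ on two small types and compare with the sup property stated in Proposition 29 below:

```latex
% Since \omega\to\varphi \approx \varphi is the minimum (Lemma 27),
% the sup of \varphi\to\varphi and \omega\to\varphi should be \varphi\to\varphi:
(\varphi\to\varphi) \sqcup (\omega\to\varphi)
  \;=\; (\varphi\wedge\omega) \to (\varphi \sqcup \varphi)
  \;=\; (\varphi\wedge\omega) \to \varphi
  \;\approx\; \varphi\to\varphi .
% And, \omega being the maximum, \sqcup with \omega is absorbing:
(\varphi\to\varphi) \sqcup \omega \;=\; \omega \sqcup (\varphi\to\varphi) \;=\; \omega .
```

In the first computation the arrow clause of Definition 28 applies, together with φ ⊔ φ = φ and φ ∧ ω ≈ φ; in the second, the swap clause reduces to the ω clause.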
Notice that ⊔ is associative and commutative modulo the associativity and commutativity of ∧; in fact (assuming π1, π2, ρ1, ρ2 arrows),

(π1 ∧ π2) ⊔ (ρ1 ∧ ρ2) = ((ρ1 ⊔ π1) ∧ (ρ2 ⊔ π1)) ∧ ((ρ1 ⊔ π2) ∧ (ρ2 ⊔ π2))
  ≈ ((π1 ⊔ ρ1) ∧ (π2 ⊔ ρ1)) ∧ ((π1 ⊔ ρ2) ∧ (π2 ⊔ ρ2)) = (ρ1 ∧ ρ2) ⊔ (π1 ∧ π2)

Besides, ⊔ is compatible with ≈, i.e. π ≈ π′ and ρ ≈ ρ′ entail π ⊔ ρ ≈ π′ ⊔ ρ′.
Proposition 29. For any two types π, ρ, we have:

π ∧ ρ ≈ inf(π, ρ)        π ⊔ ρ ≈ sup(π, ρ)

Also, if π, ρ are not ≈-equivalent to ω (resp. φ) then neither is π ⊔ ρ (resp. π ∧ ρ).

Proof. By the characterization of the ≈-classes of ω and φ given in Lemma 27 one can easily deduce that π, ρ ≉ ω (resp. ≉ φ) entails π ⊔ ρ ≉ ω (resp. π ∧ ρ ≉ φ).
That intersection is the inf is an immediate consequence of the rules of
Figure 6(a). As for ⊔, we proceed by structural induction on π and ρ.

The cases where one of π, ρ is equal to φ or ω are immediate consequences of
the fact that the two constants are, respectively, the minimum and the maximum
of ≤ (Lemma 27).

If π = ⋀_{i≤n} (πi1 → πi2) for an n > 0, then we have π ⊔ ρ ≈ ⋀_{i≤n} ((πi1 →
πi2) ⊔ ρ). Hence by induction hypothesis π ⊔ ρ ≈ ⋀_{i≤n} sup(πi1 → πi2, ρ).
This clearly implies π ⊔ ρ ≥ π, ρ. Let now ζ ≥ π, ρ; we claim ζ ≥ π ⊔ ρ. By
Lemma 27, there is a minimum m such that ζ ≈ ⋀_{j≤m} (ζj1 → ζj2). Note that
the minimality condition implies either ζ ≈ ω or ζj2 ≉ ω, for all j ≤ m. Since
π ≤ ζ, for every j ≤ m there is J ⊆ {1, . . . , n} s.t. ⋀_{i∈J} (πi1 → πi2) ≤ ζj1 → ζj2.
Since also ρ ≤ ζ, we have:

    ⋀_{i≤n} sup(πi1 → πi2, ρ) ≤ ⋀_{i∈J} sup(πi1 → πi2, ρ) ≤ sup(⋀_{i∈J} (πi1 → πi2), ρ) ≤ ζj1 → ζj2

The above inequality holds for every j ≤ m, hence π ⊔ ρ ≤ ζ.
If π = π1 → π2, then the definition of π ⊔ ρ depends on ρ. The cases ρ = φ, ω
have already been considered, and ρ = ρ1 ∧ ρ2 reduces to the previous one. So, let
ρ = ρ1 → ρ2 and π ⊔ ρ = (π1 ∧ ρ1) → (π2 ⊔ ρ2). By induction hypothesis this
gives π ⊔ ρ ≈ inf(π1, ρ1) → sup(π2, ρ2), which immediately implies π, ρ ≤ π ⊔ ρ.
The proof that ζ ≥ π, ρ implies ζ ≥ π ⊔ ρ is an easy variant of the
case where π is a proper intersection.
The next lemma is an easy variant, in the setting of Λk, of the Generation
Lemma in [RDRP04] (Lemma 10.1.7).

Lemma 30 (Generation).
1. Γ ⊢M x : π implies Γ(x) ≤ π.
2. Γ ⊢M λx.M : π implies either π ≈ ω or there are π1, . . . , πn, with n ≥ 1, such
   that π1 ∧ . . . ∧ πn ≤ π, πi = ζi → ρi and Γ, x : ζi ⊢M M : ρi for every
   i ∈ {1, . . . , n}.
3. Γ ⊢M λx.M : π → ρ implies Γ, x : π ⊢M M : ρ.
4. Γ ⊢M M P : π implies there are π1, . . . , πn and ρ1, . . . , ρn, for n ≥ 1, s.t.
   π1 ∧ . . . ∧ πn ≤ π and, for all 1 ≤ i ≤ n, Γ ⊢M M : ρi → πi and Γ ⊢M P : ρi.
5. Γ ⊢M [M!]·P : π implies Γ ⊢M M : π and, if P is non-empty, also Γ ⊢M P : π.
6. Γ ⊢M M1 + . . . + Mk : π implies Γ ⊢M Mi : π for every i ≤ k.
7. Γ, x : π ⊢M A : ρ and x ∉ FV(A) imply Γ ⊢M A : ρ.
In what follows we prove that the set of types assigned by ⊢M is invariant
under middle (hence giant) reduction (Proposition 34).

Lemma 31 (Substitution). If Γ, x : π ⊢M A : ρ and Γ ⊢M N : π, then there are
derivations of Γ, x : π ⊢M A{(N + x)/x} : ρ and of Γ ⊢M A{N/x} : ρ.
Proof. By structural induction on A. If A = x, Lemma 30.1 states π ≤ ρ. Then
a derivation of Γ ⊢M x{N/x} : ρ, i.e. of Γ ⊢M N : ρ, can be obtained by applying
the ≤ rule to the derivation of Γ ⊢M N : π. Moreover, we can build the following
derivation of Γ, x : π ⊢M x{(N + x)/x} : ρ, i.e. of Γ, x : π ⊢M N + x : ρ:

       Γ ⊢M N : π
    ───────────────── weak
    Γ, x : π ⊢M N : π              Γ, x : π ⊢M x : π
    ───────────────── ≤            ───────────────── ≤
    Γ, x : π ⊢M N : ρ              Γ, x : π ⊢M x : ρ
    ──────────────────────────────────────────────── mix+
                Γ, x : π ⊢M N + x : ρ
If A = [M!]·P, we consider P non-empty and build only the derivation of
Γ, x : π ⊢M ([M!]·P){(N + x)/x} : ρ; the case of P empty and the derivation of
Γ ⊢M A{N/x} : ρ are easy variants. So, let A{(N + x)/x} = [M1!, . . . , Mk!] ·
P{(N + x)/x}, where M{(N + x)/x} = Σ_{i=1}^{k} Mi and P{(N + x)/x} is a
simple bag, P not containing linear resources. By Lemma 30.5, we have two
derivations of Γ, x : π ⊢M M : ρ and Γ, x : π ⊢M P : ρ. The induction hypothesis
yields Γ, x : π ⊢M M{(N + x)/x} : ρ and Γ, x : π ⊢M P{(N + x)/x} : ρ. By
Lemma 30.6, Γ, x : π ⊢M Mi : ρ for every i ≤ k, and the result follows by
applying k times the rule mix·.

The other cases are straightforward or easy variants of the previous one.
Similarly, the following lemmas are proven by easy structural inductions on A.

Lemma 32 (Partial expansion). Let Γ, x : π ⊢M A{(N + x)/x} : ρ; then there
is a type ζ ≥ π such that Γ, x : ζ ⊢M A : ρ and Γ ⊢M N : ζ.
Lemma 33 (Total expansion). Let x ∉ d(Γ); if Γ ⊢M A{N/x} : ρ, then there is
a type π such that Γ, x : π ⊢M A : ρ and Γ ⊢M N : π.
Proposition 34 (Invariance of ⊢M typings). Let ◦ ∈ {m, g} and M →◦ M′; then
M and M′ share the same judgements, i.e. Γ ⊢M M : π iff Γ ⊢M M′ : π.

Proof. The proof is by induction on the context enclosing the redex reduced in
M →◦ M′. The induction steps are immediate, while the base of the induction is
when M is the redex fired by the reduction M →◦ M′. One can consider only the
middle-step cases; the giant one follows since it corresponds to a sequence
of middle-steps.
Suppose M = (λx.L)[N!]·P; we suppose also that P is non-empty, the case of P
empty being simpler (using Lemma 33 instead of Lemma 32). So, let M →m
(λx.L{(N + x)/x})P = M′ and assume Γ ⊢M (λx.L)[N!]·P : π. Generation
(Lemma 30) allows us to say that π ≈ π1 ∧ . . . ∧ πn and that, for all i, there is ρi
such that Γ, x : ρi ⊢M L : πi, Γ ⊢M N : ρi and Γ ⊢M P : ρi. By Lemma 31 and the
first two judgements, we get a derivation Φi :: Γ, x : ρi ⊢M L{(N + x)/x} : πi. For
each i ≤ n, we get a derivation Ψi :: Γ ⊢M (λx.L{(N + x)/x})P : πi by applying
to Φi one →I rule and one →E rule with right premise Γ ⊢M P : ρi. Then,
gathering all the Ψi's with n − 1 rules ∧I and applying one ≤ rule yields a derivation
with conclusion Γ ⊢M (λx.L{(N + x)/x})P : π.
For the converse, let Γ ⊢M (λx.L{(N + x)/x})P = M′ : π. Since all bags contain
just reusable resources, the variable x can have at most one linear occurrence
in L (and an arbitrary number of reusable ones). M′ is either a simple term or a
sum of two terms. Let us consider the latter case, the former being an easier
variant. So, we have L{(N + x)/x} = L1 + L2 and M′ = (λx.L1)P + (λx.L2)P.
By applying Generation, we get, for i = 1, 2, Γ ⊢M (λx.Li)P : π and then
πi1 ∧ . . . ∧ πini ≤ π and ρi1, . . . , ρini such that, for every j ≤ ni, Γ ⊢M λx.Li : ρij → πij
and Γ ⊢M P : ρij. Notice that, for every i, j, ⋀_{i,j} ρij ≤ ρij, so ρij → πij ≤
(⋀_{i,j} ρij) → πij, and so by a ≤ rule we have Γ ⊢M λx.Li : (⋀_{i,j} ρij) → πij.
Thus, a number of ∧I rules gives us Γ ⊢M λx.Li : ⋀_{j≤ni} ((⋀_{i,j} ρij) → πij) and,
since ⋀_{j≤ni} ((⋀_{i,j} ρij) → πij) ≤ (⋀_{i,j} ρij) → (⋀_{j≤ni} πij) ≤ (⋀_{i,j} ρij) → π, we have
Γ ⊢M λx.Li : (⋀_{i,j} ρij) → π. By Generation (Lemma 30), Γ, x : ⋀_{i,j} ρij ⊢M Li : π,
and by a mix+ rule Γ, x : ⋀_{i,j} ρij ⊢M L{(N + x)/x} : π. We can now apply
Lemma 32, getting a type ζ ≥ ⋀_{i,j} ρij and derivations Φ :: Γ, x : ζ ⊢M L : π and
Ψ :: Γ ⊢M N : ζ. Since Γ ⊢M P : ⋀_{i,j} ρij, by one ≤ rule we have Γ ⊢M P : ζ, hence,
by a mix· rule, Γ ⊢M P·[N!] : ζ. We conclude by one →I rule and one →E rule.
Lemma 35. Every must-outer-normal form of Λk is typable in ⊢M with a
non-trivial type, i.e. a type ≉ ω.
Proof. We proceed by induction on the number n ≥ 1 of terms in a Monf M. Let n = 1;
then M has the following shape: M = λx1 . . . xs.yP1 . . . Pp. Let y = xj, for some
j ≤ s (the case of y free being an easy variant). Let Γ(xk) = ω for every k ≠ j,
and let Γ(xj) = ω → . . . → ω → φ, with p occurrences of ω. Recall that, for all
i ≤ p, Γ ⊢M Pi : ω; hence by p rules →E we obtain a derivation of Γ ⊢M
xjP1 . . . Pp : φ, and then by s rules →I we get ⊢M λx1 . . . xs.xjP1 . . . Pp : π, where

    π = ω → . . . → ω (j − 1 times) → (ω → . . . → ω (p times) → φ) → ω → . . . → ω (s − j times) → φ,

which is a type ≉ ω by Lemma 27.
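As a concrete instance of the base case (our example, not in the paper), take s = 2, j = 1, p = 2, i.e. M = λx1x2.x1P1P2; the construction above then yields:

```latex
% s = 2, j = 1, p = 2: the context is \Gamma(x_1) = \omega \to \omega \to \phi
% and \Gamma(x_2) = \omega, and the derived type is
\vdash_{\mathsf{M}} \lambda x_1 x_2 . x_1 P_1 P_2 \;:\;
  (\omega \to \omega \to \phi) \to \omega \to \phi
```

The target φ propagates through the arrows, so the resulting type is ≉ ω by Lemma 27.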
As for the induction case, let M = M′ + M″, with both M′ and M″ different from
0. By the definition of Monf, both M′ and M″ are Monfs, and hence by the induction
hypothesis Γ′ ⊢M M′ : σ′ and Γ″ ⊢M M″ : σ″, with σ′, σ″ ≉ ω.

Define the following context, having domain equal to d(Γ′) ∪ d(Γ″):

    Γ(x) = π         if x ∉ d(Γ′) ∩ d(Γ″) and Γ′(x) or Γ″(x) is π,
    Γ(x) = π′ ∧ π″   if Γ′(x) = π′ and Γ″(x) = π″.

Since the rules weak and ≤L are admissible, we get Γ ⊢M M′ : σ′ and Γ ⊢M
M″ : σ″. Then, since σ′ ⊔ σ″ ≥ σ′, σ″ (Proposition 29), we have Γ ⊢M M′ : σ′ ⊔ σ″
and Γ ⊢M M″ : σ′ ⊔ σ″, and so Γ ⊢M M : σ′ ⊔ σ″ by one mix+ rule. We have
σ′ ⊔ σ″ ≉ ω, still by Proposition 29.
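The context Γ built in this induction step is just a pointwise merge of Γ′ and Γ″, taking the conjunction on shared variables. A small executable sketch (ours, not from the paper; contexts are Python dicts mapping variable names to encoded types, with `('and', π, ρ)` for conjunctions):

```python
def merge_contexts(g1, g2):
    """Pointwise merge of two typing contexts: ∧ where both are
    defined, plain copy where only one is defined."""
    merged = {}
    for x in g1.keys() | g2.keys():
        if x in g1 and x in g2:
            merged[x] = ('and', g1[x], g2[x])   # Γ(x) = Γ'(x) ∧ Γ''(x)
        elif x in g1:
            merged[x] = g1[x]                   # only Γ'(x) is defined
        else:
            merged[x] = g2[x]                   # only Γ''(x) is defined
    return merged
```

The domain of the result is d(Γ′) ∪ d(Γ″), as required by the proof.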
In order to prove that having a non-trivial type in the system ⊢M implies
being must-outer normalizable by means of an outer reduction sequence, we
will use a computability argument. The proof is an adaptation of the proof of
approximation in the model D∞ given in [PRDR04], and it is presented in the next
subsection.
Theorem 36. Given a term M ∈ Λk, the following are equivalent:

1. M is must-outer normalizable;
2. M is non-trivially typable in ⊢M;
3. M is reducible to a Monf by the ◦ outer reduction, for ◦ ∈ {m, g};
4. M is must-solvable in Λk.

Proof. 1 ⇒ 2: by Lemma 35 and Proposition 34. 2 ⇒ 3: by Theorem 42.
3 ⇒ 4: by Theorem 13. 4 ⇒ 1: if M is must-solvable, then there is an outer
context C(·) such that C(M) →g* nI. Since the latter is a Monf, by the
already proven implication 1 ⇒ 2 the term C(M) is typable with a type ρ ≉ ω. Since C(·) is an
outer context, M is typable with a type π ≉ ω (this can be easily deduced
by inspecting the rules of Figure 6(b) and the characterization of the ≈-class of ω
in Lemma 27). Then, by Theorem 42, M has a Monf.
5.2  Non-trivial typability entails the finiteness of →o

We adapt to our setting a variant of the saturated sets technique due to [Kri93].
In this section, outer reduction is short for both middle and giant outer reduction,
and →o denotes the outer version of →◦, with ◦ ∈ {m, g}.
Definition 37. We interpret types as sets of resource terms as follows:

    JφK := {M | M →o* Σ_{i≤n} xiPi,1 . . . Pi,pi, with n > 0, pi ≥ 0, Pi,j ∈ Λbk};
    JωK := Nat+⟨Λk⟩;
    Jπ → ρK := {M | ∀N ∈ JπK, M[N!] ∈ JρK};
    Jπ ∧ ρK := JπK ∩ JρK.
The following three lemmas show that this interpretation respects the preorder
on types, ensures outer normalization for non-trivial types, and is closed
under outer expansion.
Lemma 38 (Preorder). For any types π and ρ, π ≤ ρ entails JπK ⊆ JρK. In
particular, JφK ⊆ JπK and, for any variable x, x ∈ JπK.
Proof. By induction on π, considering all the cases defining ≤ (Figure 6(a)). Then
Lemma 27 allows us to deduce JφK ⊆ JπK and x ∈ JπK, since x ∈ JφK
by definition.

The only delicate case is φ ≤ ω → φ. Let M ∈ JφK, i.e. M →o* Σi xiPi,1 . . . Pi,pi.
We prove that for any N ∈ Nat+⟨Λk⟩, M[N!] ∈ JφK. Indeed, no term of a
sum that is an outer reduct of M can be a λ-abstraction, since otherwise we would
not have M →o* Σi xiPi,1 . . . Pi,pi. Hence M[N!] →o* Σi xiPi,1 . . . Pi,pi[N!], i.e.
M[N!] ∈ JφK.
Lemma 39 (Outer normalization). For every type π ≉ ω and every M ∈ JπK,
M is reducible to a must-outer-normal form by the outer reduction.

Proof. By induction on π. The case π = φ holds by definition of JφK. If π = π1 ∧ π2,
then by Lemma 27 there is an i such that πi ≉ ω. By induction hypothesis the terms in JπiK
are outer normalizable, and we conclude by JπK ⊆ JπiK. If π = π1 → π2, then
let M ∈ Jπ1 → π2K. By Lemma 38 any variable x ∈ Jπ1K, so M[x!] ∈ Jπ2K. By
Lemma 27, π2 ≉ ω, so by induction M[x!] is outer normalizable. We conclude
by the simple remark that if M[x!] is outer normalizable then so is M.
Lemma 40 (Saturation). For every type π, if M{N/x}P1 . . . Pn ∈ JπK, we have
(λx.M)[N!]P1 . . . Pn ∈ JπK.

Proof. By induction on π. The cases π = ω, φ are straightforward. If π =
π1 → π2, suppose M{N/x}P1 . . . Pn ∈ Jπ1 → π2K, and let L ∈ Jπ1K. We have
M{N/x}P1 . . . Pn[L!] ∈ Jπ2K and, by induction on π2, (λx.M)[N!]P1 . . . Pn[L!] ∈
Jπ2K. We conclude that (λx.M)[N!]P1 . . . Pn ∈ Jπ1 → π2K. The case π = π1 ∧ π2 is
an immediate consequence of the induction hypothesis applied to π1 and π2.
The adequacy lemma relates the interpretation of types to the derivations.

Lemma 41 (Adequacy). Let x1 : π1, . . . , xn : πn ⊢M M : ρ and, for every i ≤ n, let
Ni ∈ JπiK; then M{N1/x1} . . . {Nn/xn} ∈ JρK.
Proof. By structural induction on a derivation Φ of x1 : π1, . . . , xn : πn ⊢M M : ρ.
The cases var and ω are immediate. If Φ ends in a →I rule with premise
x1 : π1, . . . , xn : πn, y : ζ ⊢M M′ : ρ (where M = λy.M′), then by induction we
have, for any i ≤ n, Ni ∈ JπiK and L ∈ JζK, M′{N1/x1} . . . {Nn/xn}{L/y} ∈
JρK. By Lemma 40, we have (λy.M′{N1/x1} . . . {Nn/xn})[L!] ∈ JρK, hence
(λy.M′){N1/x1} . . . {Nn/xn} ∈ Jζ → ρK.

If Φ ends in a →E rule with premises x1 : π1, . . . , xn : πn ⊢M M′ : ζ → ρ
and x1 : π1, . . . , xn : πn ⊢M P : ζ (where M = M′P), then one concludes
easily by the induction hypothesis, after remarking that Φ must have a subproof of
x1 : π1, . . . , xn : πn ⊢M N : ζ for every term N in the bag P.

The case ∧I is an immediate consequence of the induction hypothesis, and
the case of a ≤ rule is a consequence of the induction hypothesis and of Lemma 38.
Theorem 42. For any term M ∈ Λk and any type π ≉ ω, if Γ ⊢M M : π,
then M is reducible to a must-outer-normal form by outer reduction.

Proof. Let x1 : π1, . . . , xn : πn ⊢M M : π. By Lemma 38, xi ∈ JπiK; then, by
Lemma 41, M = M{x1/x1} . . . {xn/xn} ∈ JπK. We conclude by Lemma 39.
6  Conclusion and future work
We studied the notions of may- and must-solvability in the resource calculus.
We characterized may-solvability completely, from the syntactical, operational
and logical points of view. The notion of must-solvability has
been completely characterized only for the purely non-deterministic fragment.
In fact, the problem of characterizing must-solvability in the whole calculus
seems to coincide with the problem of semantically separating the two notions
of failure and non-termination, which is for the moment an open problem.
The two type assignment systems we used for the logical characterization
of may-solvability in Λr and of must-solvability in Λk can be the starting point
for providing a denotational semantics in logical form for the two calculi. In
particular, ⊢m can be presented as a logical description of a denotational model of
the resource calculus in which all the may-unsolvable terms are equated. Such a
goal, however, does not seem immediate to us, since a quantitative account of resources
does not fit well with the contextual closure of the interpretation function. For
a discussion of this point see [dC09]. A possible solution might be achieved
following the ideas in [BEM07].
References
[Bar84] Henk Barendregt. The Lambda-Calculus, its Syntax and Semantics. Stud. Logic Found. Math., vol. 103. North-Holland, 1984.

[BCL99] Gérard Boudol, Pierre-Louis Curien, and Carolina Lavatelli. A Semantics for Lambda Calculi with Resources. MSCS, 9(5):437–482, 1999.

[BEM07] Antonio Bucciarelli, Thomas Ehrhard, and Giulio Manzonetto. Not Enough Points Is Enough. In CSL, volume 4646 of Lecture Notes in Comp. Sci., pages 298–312, 2007.

[BEM09] Antonio Bucciarelli, Thomas Ehrhard, and Giulio Manzonetto. A relational semantics for parallelism and non-determinism in a functional setting. Preprint, 2009.

[Bou93] Gérard Boudol. The Lambda-Calculus with Multiplicities. INRIA Report 2025, 1993.

[CDCV80] Mario Coppo, Mariangiola Dezani-Ciancaglini, and Betti Venneri. Principal Type Schemes and Lambda-Calculus Semantics. In To H. B. Curry. Essays on Combinatory Logic, Lambda-calculus and Formalism, pages 480–490. Academic Press, 1980.

[CDCV81] Mario Coppo, Mariangiola Dezani-Ciancaglini, and Betti Venneri. Functional Characters of Solvable Terms. Zeitschrift für Mathematische Logik, 27:45–58, 1981.

[dC09] Daniel de Carvalho. Execution Time of λ-Terms via Denotational Semantics and Intersection Types. Preprint, 2009.

[dCPTdF08] Daniel de Carvalho, Michele Pagani, and Lorenzo Tortora de Falco. A Semantic Measure of the Execution Time in Linear Logic. Th. Comp. Sc., 2008. To appear.

[dP95] Ugo de'Liguoro and Adolfo Piperno. Non Deterministic Extensions of Untyped Lambda-Calculus. Inf. Comput., 122(2):149–177, 1995.

[ER03] Thomas Ehrhard and Laurent Regnier. The Differential Lambda-Calculus. Theor. Comput. Sci., 309(1):1–41, 2003.

[ER06] Thomas Ehrhard and Laurent Regnier. Böhm Trees, Krivine's Machine and the Taylor Expansion of Lambda-Terms. In CiE, volume 3988 of LNCS, pages 186–197, 2006.

[ER08] Thomas Ehrhard and Laurent Regnier. Uniformity and the Taylor Expansion of Ordinary Lambda-Terms. Theor. Comput. Sci., 403(2-3):347–372, 2008.

[Hyl76] J. Martin E. Hyland. A Syntactic Characterization of the Equality in Some Models of the Lambda Calculus. J. London Math. Soc., 2(12):361–370, 1976.

[Kfo00] Assaf J. Kfoury. A Linearization of the Lambda-Calculus and Consequences. J. Logic Comp., 10(3):411–436, 2000.

[Kri93] Jean-Louis Krivine. Lambda-Calculus, Types and Models. Ellis Horwood, 1993.

[NM04] Peter M. Neergaard and Harry G. Mairson. Types, Potency, and Idempotency: Why Nonlinearity and Amnesia Make a Type System Work. In Okasaki and Fisher, editors, 9th ACM SIGPLAN ICFP 2004, pages 138–149. ACM, 2004.

[PRDR04] Luca Paolini and Simona Ronchi Della Rocca. The Parametric Parameter Passing λ-Calculus. Information and Computation, 189(1):87–106, February 2004.

[PRDR10] Michele Pagani and Simona Ronchi Della Rocca. Solvability in Resource Lambda-Calculus. In Luke Ong, editor, FOSSACS, volume 6014 of Lecture Notes in Comp. Sci., pages 358–373, 2010.

[PT09] Michele Pagani and Paolo Tranquilli. Parallel Reduction in Resource Lambda-Calculus. In APLAS, volume 5904 of LNCS, pages 226–242, 2009.

[RDRP04] Simona Ronchi Della Rocca and Luca Paolini. The Parametric λ-Calculus: a Metamodel for Computation. EATCS Series. Springer, Berlin, 2004.

[Sco76] Dana S. Scott. Data Types as Lattices. SIAM J. Comput., 5(3):522–587, 1976.

[Tra08] Paolo Tranquilli. Intuitionistic Differential Nets and Lambda-Calculus. Theor. Comput. Sci., 2008. To appear.

[Tra09] Paolo Tranquilli. Nets between Determinism and Nondeterminism. Ph.D. thesis, Università Roma Tre/Université Paris Diderot (Paris 7), April 2009.

[Val01] Silvio Valentini. An Elementary Proof of Strong Normalization for Intersection Types. Archive for Mathematical Logic, 40(7):475–488, October 2001.

[Vau09] Lionel Vaux. The Algebraic Lambda Calculus. Math. Struct. Comp. Sci., 19(5):1029–1059, 2009.

[WDMT02] J. B. Wells, Allyn Dimock, Robert Muller, and Franklyn Turbak. A Calculus with Polymorphic and Polyvariant Flow Types. J. Funct. Program., 12(3):183–227, 2002.