Wednesday, July 27, 2011

A new lambda calculus for bunched implications

I am going to put off syntactic extensions for a bit, and talk about an entirely different bit of proof theory instead. I am going to talk about a new lambda-calculus for the logic of bunched implications that I have recently been working on. $\newcommand{\lolli}{\multimap} \newcommand{\tensor}{\otimes}$

First, the logic of bunched implications (henceforth BI) is the substructural logic associated with things like separation logic. Now, linear logic basically works by replacing the single context of intuitionistic logic with two contexts, one for unrestricted hypotheses (which behaves the same as the intuitionistic one, and access to which is controlled by a modality $!A$), and one which is linear --- the rules of contraction and weakening are no longer allowed. The intuitionistic connectives are then encoded, so that $A \to B \equiv !A \lolli B$. BI also puts contraction and weakening under control, but does not do so by simply creating two zones. Instead, contexts now become trees, which lets BI simply add the substructural connectives $A \tensor B$ and $A \lolli B$ to intuitionistic logic. Here's what things look like in the implicational fragment:
$$
\begin{array}{lcl}
A & ::= & P \bnfalt A \lolli B \bnfalt A \to B \\
\Gamma & ::= & A \bnfalt \cdot_a \bnfalt \Gamma; \Gamma \bnfalt \cdot_m \bnfalt \Gamma, \Gamma \\
\end{array}
$$

Note that contexts are trees, since there are two context concatenation operators $\Gamma;\Gamma'$ and $\Gamma,\Gamma'$ (with units $\cdot_a$ and $\cdot_m$) which can be freely nested. They don't distribute over each other in any way, but they do satisfy the commutative monoid properties.
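To make the tree structure concrete, here is one way a typechecker might represent bunched contexts as a datatype. This is purely an illustrative sketch; the constructor names are my own and not part of the calculus.

```ocaml
(* A sketch of tree-shaped BI contexts: leaves are hypotheses, and the two
   kinds of nodes correspond to the additive (";") and multiplicative (",")
   ways of combining contexts, each with its own unit. *)
type ty =
  | Prop of string       (* atomic propositions P *)
  | Lolli of ty * ty     (* A -o B *)
  | Arrow of ty * ty     (* A -> B *)

type ctx =
  | Hyp of string * ty   (* a single hypothesis x : A *)
  | AddUnit              (* the additive unit *)
  | Add of ctx * ctx     (* Gamma ; Gamma' *)
  | MulUnit              (* the multiplicative unit *)
  | Mul of ctx * ctx     (* Gamma , Gamma' *)
```

Any operation on such contexts has to respect the commutative monoid laws for Add/AddUnit and Mul/MulUnit, which already hints at why working with derivations over them gets awkward. The natural deduction rules look like this (implicitly assuming the commutative monoid properties):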

$$
\begin{array}{ll}
\frac{}{A \vdash A} &
\\
\frac{\Gamma; A \vdash B}
{\Gamma \vdash A \to B}
&
\frac{\Gamma \vdash A \to B \qquad \Gamma' \vdash A}
{\Gamma;\Gamma' \vdash B}
\\
\frac{\Gamma, A \vdash B}
{\Gamma \vdash A \lolli B}
&
\frac{\Gamma \vdash A \lolli B \qquad \Gamma' \vdash A}
{\Gamma,\Gamma' \vdash B}
\\
\frac{\Gamma(\Delta) \vdash A}
{\Gamma(\Delta;\Delta') \vdash A}
&
\frac{\Gamma(\Delta;\Delta) \vdash A}
{\Gamma(\Delta) \vdash A}
\end{array}
$$

Note that the substructural implication adds hypotheses with a comma, and the intuitionistic implication uses a semicolon. I've given weakening and contraction explicitly as the final two rules, which lets the hypothesis rule stay in its simple $A \vdash A$ form.

Adding lambda-terms to this calculus is pretty straightforward, too:
$$
\begin{array}{ll}
\frac{}{x:A \vdash x:A} &
\\
\frac{\Gamma; x:A \vdash e:B}
{\Gamma \vdash \fun{x}{e} : A \to B}
&
\frac{\Gamma \vdash e : A \to B \qquad \Gamma' \vdash e' : A}
{\Gamma;\Gamma' \vdash e\;e' : B}
\\
\frac{\Gamma, x:A \vdash e : B}
{\Gamma \vdash \hat{\lambda}x.e : A \lolli B}
&
\frac{\Gamma \vdash e : A \lolli B \qquad \Gamma' \vdash e' : A}
{\Gamma,\Gamma' \vdash e\;e' : B}
\\
\frac{\Gamma(\Delta) \vdash e : A}
{\Gamma(\Delta;\Delta') \vdash e : A}
&
\frac{\Gamma(\Delta;\Delta') \vdash \rho(e) : A \qquad \Delta \equiv \rho \circ \Delta'}
{\Gamma(\Delta) \vdash e : A}
\end{array}
$$
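For reference, here is a sketch of how these terms might be represented; the constructor names are invented, and I've split application into two constructors for readability even though the calculus above writes both applications as $e\;e'$.

```ocaml
(* Terms of the calculus above: two lambdas and two applications, one pair
   for each implication.  Nothing in a term records how the context was
   split or where the structural rules were used, which is why typechecking
   them needs search, as discussed next. *)
type tm =
  | Var of string
  | Lam of string * tm    (* additive lambda:  fun x -> e   at type A -> B *)
  | App of tm * tm        (* additive application *)
  | SLam of string * tm   (* spatial lambda:  \hat{lambda} x. e  at type A -o B *)
  | SApp of tm * tm       (* spatial application *)
```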

Unfortunately, typechecking these terms is a knotty little problem. The reason is basically that we want lambda terms to tell us what the derivation should be, without requiring us to do much search. But contraction can rename a host of variables at once, which means that typechecking these terms involves a lot of search. (In fact, I personally don't know how to do it, though I expect that it's decidable, so somebody probably does.) What would be really nice is a calculus which is saturated with respect to weakening, so that the computational content of the weakening lemma is just the identity on derivation trees, and for which the computational content of contraction is an easy renaming of a single obvious variable.

Applying the usual PL methodology of redefining the problem to one that can be solved, we can give an alternate type theory for BI in the following way. First, we'll define nested contexts so that they make the alternation of spatial and intuitionistic parts explicit:
$$
\begin{array}{lcl}
\Gamma & ::= & \cdot \bnfalt \Gamma; x:A \bnfalt \Gamma; r[\Delta] \\
\Delta & ::= & \cdot \bnfalt \Delta, x:A \bnfalt \Delta, r[\Gamma] \\
& & \\
e & ::= & x \bnfalt \fun{x}{e} \bnfalt \hat{\lambda}x.\;e \bnfalt e\;e' \bnfalt r[e] \bnfalt \rho r.\;e \\
\end{array}
$$
The key idea is to make the alternation between the additive and spatial parts of the context syntactic, and to name each level shift with a variable $r$. So the $x$ are ordinary variables, and the $r$ are variables naming nested contexts; the new term forms $r[e]$ and $\rho r.e$ explicitly annotate the level shifts.
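As before, here is an illustrative sketch of how the alternating contexts and the extended terms might be represented, reusing the ty type from the first sketch; all of the constructor names are invented.

```ocaml
(* Alternating contexts: an additive context Gamma is a ";"-list whose
   entries are either hypotheses x:A or named spatial regions r[Delta],
   and dually for spatial contexts Delta.  The strict alternation is
   enforced by the two mutually recursive types. *)
type actx =                           (* Gamma *)
  | AEmpty
  | AHyp of actx * string * ty        (* Gamma ; x:A *)
  | ARegion of actx * string * mctx   (* Gamma ; r[Delta] *)
and mctx =                            (* Delta *)
  | MEmpty
  | MHyp of mctx * string * ty        (* Delta , x:A *)
  | MRegion of mctx * string * actx   (* Delta , r[Gamma] *)

(* The same terms as before, plus the two region forms. *)
type tm =
  | Var of string
  | Lam of string * tm
  | App of tm * tm
  | SLam of string * tm
  | SApp of tm * tm
  | Box of string * tm    (* r[e] *)
  | Rho of string * tm    (* rho r. e *)
```

Making $\Gamma$ and $\Delta$ mutually recursive types means the alternation of zones holds by construction, rather than being something to check up to monoid laws. The typing rules look like this: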
$$
\begin{array}{ll}
\frac{}{\Gamma; x:A \vdash x:A} &
\frac{}{\cdot,x:A \vdash x:A}
\\
\frac{\Gamma; x:A \vdash e:B}
{\Gamma \vdash \fun{x}{e} : A \to B}
&
\frac{\Gamma \vdash e : A \to B \qquad \Gamma \vdash e' : A}
{\Gamma \vdash e\;e' : B}
\\
\frac{\Delta, x:A \vdash e : B}
{\Delta \vdash \hat{\lambda}x.e : A \lolli B}
&
\frac{\Delta \vdash e : A \lolli B \qquad \Delta' \vdash e' : A}
{\Delta,\Delta' \vdash e\;e' : B}
\\
\frac{r[\Gamma] \vdash e : A}
{\Gamma \vdash \rho r.\;e : A}
&
\frac{r[\Delta] \vdash e : A}
{\Delta \vdash \rho r.\;e : A}
\\
\frac{\Gamma \vdash e : A}
{\cdot, r[\Gamma] \vdash r[e] : A}
&
\frac{\Delta \vdash e : A}
{\Gamma; r[\Delta] \vdash r[e] : A}
\end{array}
$$
It's fairly straightforward to prove that contraction and weakening are admissible, and doing a contraction is now pretty easy, since you just have to rename some occurrences of $r$ to two different variables, while otherwise leaving the branching contexts the same.
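To make that concrete, the only term-level operation contraction needs is renaming a region variable. Here's a sketch over the term representation above; an actual contraction renames only the occurrences that should refer to the duplicated copy of the region, but the traversal has exactly this shape.

```ocaml
(* Rename the region variable r to r' in a term.  Note that rho r. e binds
   r, so the renaming stops under a shadowing binder; ordinary variables x
   are never touched. *)
let rec rename_region r r' t =
  match t with
  | Var x -> Var x
  | Lam (x, e) -> Lam (x, rename_region r r' e)
  | App (e1, e2) -> App (rename_region r r' e1, rename_region r r' e2)
  | SLam (x, e) -> SLam (x, rename_region r r' e)
  | SApp (e1, e2) -> SApp (rename_region r r' e1, rename_region r r' e2)
  | Box (s, e) -> Box ((if s = r then r' else s), rename_region r r' e)
  | Rho (s, e) ->
      if s = r then Rho (s, e)    (* r is shadowed here *)
      else Rho (s, rename_region r r' e)
```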

It's obvious that you can take any derivation in this calculus and erase the $r$-variables to get a well-typed term of the $\alpha\lambda$-calculus (the original term calculus for BI). It's only a bit more work to go the other way: given an $\alpha\lambda$-derivation, you prove that for any context in the new calculus which erases to the $\alpha\lambda$-context, you can find a derivation in the new calculus proving the same thing. Then there's a straightforward proof that you can always find such a context which erases properly.
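The forgetful direction really is just an erasure; on the term representation sketched above it looks like this:

```ocaml
(* Erase the region annotations: r[e] and rho r. e both become the erasure
   of e, and everything else is mapped homomorphically.  The result never
   mentions Box or Rho, i.e. it is an alpha-lambda-style term. *)
let rec erase t =
  match t with
  | Var x -> Var x
  | Lam (x, e) -> Lam (x, erase e)
  | App (e1, e2) -> App (erase e1, erase e2)
  | SLam (x, e) -> SLam (x, erase e)
  | SApp (e1, e2) -> SApp (erase e1, erase e2)
  | Box (_, e) -> erase e
  | Rho (_, e) -> erase e
```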

One interesting feature of this calculus is that the $r$-variables resemble regions in region calculi. The connection is not entirely clear to me, since my variables don't show up in the types. This reminds me a little of the contextual modal type theory of Nanevski, Pfenning and Pientka. It reminds me even more of Bob Atkey's PhD thesis on generalizations of BI to allow arbitrary graph-structured sharing. But all of these connections are still speculative.

There is one remaining feature of these rules which is still non-algorithmic: as in linear logic, the context splitting in the spatial rules with multiple premises (for example, the spatial application rule) just assumes that you know how to divide the context into two pieces. I think the standard state-passing trick should still work, and it may even be nicer than the additives of linear logic. But I haven't worked out the details yet.
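For readers who haven't seen it, the state-passing trick makes the splitting algorithmic by having the checker return the hypotheses it did not consume, so the division of the context between the two premises of an application is computed rather than guessed. Here's a minimal sketch of the idea for a tiny purely linear language (not for BI itself, since adapting it to bunched contexts is exactly what I haven't worked out); I've also put a type annotation on the binder so that inference stays trivial.

```ocaml
(* The input/output-context discipline on a toy linear lambda calculus,
   entirely separate from the sketches above.  The context is threaded
   through the checker: a used hypothesis is marked None, and whatever is
   left over is returned for the next premise to consume. *)
type ty = Base of string | Lolli of ty * ty

type tm = Var of string | SLam of string * ty * tm | SApp of tm * tm

type ctx = (string * ty option) list   (* None marks an already-used hypothesis *)

(* Consume the hypothesis x, returning its type and the updated context. *)
let use x ctx =
  let rec go = function
    | [] -> None
    | (y, Some a) :: rest when x = y -> Some (a, (y, None) :: rest)
    | h :: rest -> Option.map (fun (a, r) -> (a, h :: r)) (go rest)
  in
  go ctx

(* infer : ctx -> tm -> (ty * ctx) option, where the returned context is
   the leftover part that this term did not use. *)
let rec infer ctx e =
  match e with
  | Var x -> use x ctx
  | SLam (x, a, body) ->
      (match infer ((x, Some a) :: ctx) body with
       | Some (b, (y, used) :: rest) when y = x && used = None ->
           (* the bound variable must have been consumed exactly once *)
           Some (Lolli (a, b), rest)
       | _ -> None)
  | SApp (e1, e2) ->
      (match infer ctx e1 with
       | Some (Lolli (a, b), leftover) ->
           (match infer leftover e2 with
            | Some (a', leftover') when a' = a -> Some (b, leftover')
            | _ -> None)
       | _ -> None)
```

Reading the application case bottom-up, the first premise is checked against the whole context and the second against whatever is left over; the open question is how to make the same discipline respect the tree structure of bunches.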
