Sunday, September 18, 2016

HOPE 2016 Talk Summaries

I'm attending ICFP 2016 in Nara, Japan (near Osaka) this year. In previous years, I've really enjoyed Edward Yang's conference talk summaries, and decided that this year, I would take notes about the talks I attended and put them online myself.

Today, I attended the HOPE 2016 workshop on higher-order programming with effects, and took the following notes. These are all rough notes, taken in haste and based on a highly-jetlagged listener's poor listening comprehension. So assume all errors and misunderstandings are my fault!

Keynote (Leo White)

Due to jetlag, I overslept and missed it! Alas.

Effects

  • Effects via capabilities, Fengyin Liu (speaker)

    The idea behind this work is to introduce explicit capability values that enable or disable effects. So you might have a capability c : IO, which must be passed to all IO operations; if you don't have the capability, you can't invoke them.

    print : IO -> string -> unit

    Naively, though, there is an issue arising from the ability of closures to capture effect capabilities.

    So two function spaces are introduced: A => B, which is the usual ML-style function space, and A -> B, a function space whose closures cannot capture capabilities or ML-style functions (which might themselves have captured capabilities).
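
    To make the capability idea concrete, here is a minimal OCaml sketch of my own (it is not the talk's Scala/Dotty design, and the module and function names are made up): IO operations demand an explicit capability value, and the "leak" at the end shows the closure-capture problem that the restricted A -> B space is meant to rule out.

    (* A sketch of effects-as-capabilities; all names here are illustrative. *)
    module Cap : sig
      type io                              (* abstract IO capability *)
      val print : io -> string -> unit     (* every IO operation demands it *)
      val with_io : (io -> 'a) -> 'a       (* the top level grants it *)
    end = struct
      type io = unit
      let print () s = print_endline s
      let with_io f = f ()
    end

    (* Fine: the capability is threaded explicitly. *)
    let greet c name = Cap.print c ("hello, " ^ name)

    (* The problem: an ordinary closure can capture the capability, so this
       innocently-typed thunk of type unit -> unit still performs IO when
       called. The capability-safe function space is meant to forbid this. *)
    let leak c = fun () -> Cap.print c "sneaky IO"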

    Interestingly, the full substitution lemma doesn't hold: you have to prove a value substitution theorem instead. This suggests to me that there might be a modal variant of this system, Davies-Pfenning style.

    Liu suggested some type equivalences like A -> (B => C) == A -> B -> C, because the outer -> prevents the capture of capabilities. They can't prove the soundness of this by induction, of course, so I suspect they'll eventually want a logical relation to establish soundness. (A logical relation for Dotty would be exciting -- maybe chat with Amal Ahmed & Nada Amin about this?)

    Polymorphism introduces some subtleties (ie, you might not know whether a type variable will be instantiated to a pure type), and they also worry about verbosity in practice (which might hinder adoption by Scala programmers).

  • A Logical Account of a Type-and-Effect System, Kasper Svendsen (speaker)

    They have a parallel ML-style calculus with reads, writes and CAS. Locations are approximated in the type system with regions.

    • The region context is split into a public and private part, to control which locations you can get interference on.

    • Function types are annotated with the public and private regions they use, and with their read/write effects.

    • New regions can be introduced with dynamic extent. Private regions can be shared temporarily during parallel composition.

    • The type system makes region variables implicit -- this info is only in the typing derivation. So all terms look like ML terms.

    The goal is to prove the correctness of program transformations which rely on the semantics of effects. For example, a sequential composition can be equated to a parallel composition

    e; e' == e || e'

    when there is no interference between these two terms.

    He explains the intuition with a binary logical relation, which is formulated within the Iris program logic (POPL 2015). Here, the logical relation is a per-type family of simulation relations, encoded as Hoare triples in a unary program logic.

    Here, they use a really neat feature of Iris: since Iris supports arbitrary monoidal resources, they can use simulation relations themselves as resources! Then they express the parallel composition via a separating conjunction of simulations, one for e and one for e'. The fact that e and e' don't interfere means that the steps of e' are stuttering steps for e, and vice-versa. They even let simulation relations share state, when the state is read-only.

  • Simple Dependent Polymorphic IO Effects, Amin Timany (speaker)

    They want a logical relation for a type-and-effect system for IO. The IO effects are like regexps, but have effect variables, channels, and quantification over what is written -- eg, cat would have the effect:

    (forall x. Rd_a(x); Wr_b(x))*

    to indicate that it writes what it reads from a to b. They want the relation to relate (say) that a buffered implementation of cat refines the unbuffered version, and also to validate the parallel composition transformation for non-interfering operations.
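
    For concreteness, ordinary cat really does trace out exactly that pattern; here is a transliteration into OCaml (mine, not anything from the talk): each iteration reads one value from the input and immediately writes that same value to the output.

    (* Unbuffered cat: the trace is (read a line; write that line) repeated
       until end of file, matching the (forall x. read x; write x)* shape. *)
    let rec cat (ic : in_channel) (oc : out_channel) : unit =
      match input_line ic with
      | line -> output_string oc (line ^ "\n"); cat ic oc
      | exception End_of_file -> ()

    A buffered version reads ahead before writing, so it produces a different trace; showing that it still refines this one is exactly the kind of fact their relation is meant to establish.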

    The language of effects is very rich, so I don't have a good feel for what it can and can't express. Eg, the reasoning for the buffering optimization was entirely type-based, based on the effect subtyping order!

Verification

  • Concurrent Data Structures Linked in Time, German Delbianco (speaker)

    This talk was about Hoare-style verification of highly concurrent data structures, such as "concurrent snapshots".

    Eg, you have a shared array a, and a function write(i,x) which makes an atomic update to the array, and a function scan(), which returns a snapshot of the state of the array at a particular instant.
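
    To pin the specification down, here is a deliberately naive reference version in OCaml (my own sketch, with one global lock -- nothing like the fine-grained algorithm the talk verifies): scan must behave as if it read the whole array at a single instant.

    (* Naive write/scan: a global lock makes each operation atomic, so every
       scan trivially corresponds to one instant in time. The verification
       challenge is to meet the same spec without the lock. *)
    type snapshot = { data : int array; lock : Mutex.t }

    let create n = { data = Array.make n 0; lock = Mutex.create () }

    let write t i x =
      Mutex.lock t.lock;
      t.data.(i) <- x;
      Mutex.unlock t.lock

    let scan t =
      Mutex.lock t.lock;
      let snap = Array.copy t.data in
      Mutex.unlock t.lock;
      snap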

    He then presented Jayanti's algorithm for this, and gave a nice intuition for why the right notion of correctness for the snapshots is linearizability. What makes the algorithm tricky is that its linearization points are not fixed -- they can vary with the runtime inputs.

    Then he introduced the notion of events in the ghost state, which are organized for each thread into subject and environment events, and which are each connected to an instant of time. The variation in where linearization points occur could then arise via modifications of when each event happened -- which is an update on the "linked list" of time, which can be modelled as ghost state!

    This has all been verified in FCSL, a Coq-based program logic that Ilya Sergey and Aleks Nanevski are some of the main forces behind.

    Ralf Jung asked if they had verified any clients, and German said they had used it for a few small clients, but nothing big. Ilya said that people just assumed snapshots were "obviously" good, but no one had any really good/complex examples of them. He said that they hoped to figure some out now that they had a declarative spec of them.

  • Growing a Proof Assistant, William Bowman (speaker)

    WB began with an example of some notations for an informal type soundness proof. He pointed out that some of the notations (like BNF) are formal, and others (like defining let-binding in terms of lambdas) are informal.

    Then he pointed out that there are tools like Ott which compile BNF to Coq/Agda/LaTeX, and that most proof assistants (like Coq) have APIs to define notations, tactics, and plugins. Then he said that this wasn't enough, and that he wanted to build a proof assistant focused on extensibility.

    The core type theory was TT (basically MLTT with a countable universe hierarchy), and the extensibility framework was Racket (to reuse as much of the macro machinery as possible).

    Then he began explaining how cur could be used to implement pattern matching. Things like recursive functions required compile-time state to communicate information from outside the local syntax tree.

    Then he moved on to defining tactics, and gave a small language for this. He needed to support fully elaborating terms (so that user tactics could be defined solely on kernel syntax), and also a way to run cur code at expansion-time to support Mtac style tactics.

    This made me wonder about phase separation, but he immediately set this aside, along with several other issues (such as how to preserve properties like parametricity while supporting reflection on syntax).

    Someone asked about how to combine extensions. WB replied that macros support this because of the architecture of Racket macros, which essentially support open recursion to permit multiple macros to run at once. He also remarked that this creates classical problems from OO like diamond inheritance.

Compilation

  • Algebraic Effects for Functional Programming, Daan Leijen (speaker)

    Daan Leijen gave a talk about adding algebraic effects to Koka. Koka had effect types based on monads, but (a) he wanted a better story for composition, and (b) he wanted to be able to handle exotic control operators (like yield, exceptions, and async/await) in a uniform way.

    He has a full compiler for effect handlers in Koka, with row-polymorphic effect typing. The compilation scheme uses a type-directed CPS translation to implement effect handlers targeting JS, .NET, JVM (which are rather direct-style). Then he talked a bit about the semantics and syntax of Koka to set the stage.

    Next, he talked about examples, starting with exceptions. He mentioned that row types in Koka permit duplicated labels. I wonder how Koka picks which one to use?

    I also wondered about using algebraic effects for the "mock object" pattern from OO testing.

    Then he showed iteration, and remarked that the handler gets to control whether iteration stops or continues. Then he showed asynchronous callbacks (Node-style), and showed that you could pass the handler's resume continuation as an argument.
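
    To get a feel for this, here is a small sketch using OCaml 5's effect handlers, which are close in spirit to Koka's even though the typing story is very different (this is my own example, not Koka code): the handler receives the captured continuation and decides whether to resume the iteration or stop it.

    (* An iterator that performs a Yield effect; the handler chooses whether
       to continue the iteration (by resuming the continuation) or to stop. *)
    open Effect
    open Effect.Deep

    type _ Effect.t += Yield : int -> unit Effect.t

    let iota n = for i = 1 to n do perform (Yield i) done

    (* Run f, stopping at the first yielded value greater than limit. *)
    let first_over limit f =
      match_with f ()
        { retc = (fun () -> None);
          exnc = raise;
          effc = (fun (type a) (eff : a Effect.t) ->
            match eff with
            | Yield i ->
                Some (fun (k : (a, _) continuation) ->
                  if i > limit then Some i   (* stop: just don't resume k *)
                  else continue k ())        (* keep iterating *)
            | _ -> None) }

    let () =
      match first_over 3 (fun () -> iota 10) with
      | Some i -> Printf.printf "stopped at %d\n" i
      | None -> print_endline "ran to completion"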

    Then he talked about the compilation to managed code, where first-class continuations don't exist. This uses a type-directed approach to avoid CPS when it isn't needed. The main issue is effect polymorphism, which he resolves by compiling polymorphic code both ways, and then using a worker-wrapper transformation to pick the right representation.

    SPJ asked if he depended upon a defined order of evaluation. Daan said this was about effects for Haskell, and handed off to Sam Lindley, who said that the situation in Haskell was the same as in Koka -- you have to say what the evaluation order is (?).

  • Administrative Normal Form, Continued, Luke Maurer

    CPS is good for optimization, but due to the costs, not everyone uses it. Flanagan et al. pointed out that some CPS optimizations work in direct style, but not all of them do.

    Eg, case-of-case often creates values that flow to known call sites, and these shouldn't have to be allocated. This work extends ANF to support this, with a significant effect on heap allocation rates.

    Case-of-case can be solved by pushing in the context, and while there are engineering issues of code duplication, it can often be helpful. Many compilers do this.

    However, it doesn't work for recursive functions. In CPS this transformation happens "almost for free". That is, you can do the static argument transformation, and then inline the static continuation to get the beta-reductions to fire. This is "contification".

    So how does contification work in ANF? He notes that the static argument transformation only worked because the program was a tail call. So they tag functions which are always tail called (they are called "labels"). Then they add a new rewrite which moves outer cases into a label.

    He gave the operational semantics for join points, which are pretty sensible.

    Then he pointed out that it helps with list fusion. An unfold uses a step function state -> option (a x state), which lets you eliminate intermediate lists, but introduces useless Just/Nothing cruft. But the new label/case-of-case machinery handles this and reduces many of them, and some benchmarks kill all of them.
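
    To see the shape of the problem, here is a sketch of mine (in OCaml rather than GHC Core) of what fused code looks like: the producer is a step function returning an option, and the consumer immediately cases on it, which is exactly the intermediate value that case-of-case and join points let the compiler eliminate.

    (* Stream-style producer/consumer: each step allocates a Some/None that is
       scrutinised straight away and then thrown apart again. *)
    type ('a, 's) step = 's -> ('a * 's) option

    let sum_stream (next : (int, 's) step) (s0 : 's) : int =
      let rec go acc s =
        match next s with
        | None -> acc
        | Some (x, s') -> go (acc + x) s'
      in
      go 0 s0

    (* Sum of 1..n without ever building the list [1; ...; n]. *)
    let sum_to n = sum_stream (fun i -> if i > n then None else Some (i, i + 1)) 1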

    This is in a new branch of GHC.

    I asked how they would know if they had all the new constructs they needed. SPJ said he didn't know any CPS optimizations that could not be handled now, but that he didn't have a proof. Someone asked about one benchmark which allocated less but ran more slowly. I didn't understand the answer, though. :( Sam Lindley asked about the connection to SSA and phi-functions. Luke said that labels are much like phi-functions in SSA.

Semantics

  • Functional models of full ground, and general, reference cells, Ohad Kammar

    How can we understand reference cells? They have multiple axes:

    • Locality -- freshness of newly allocated cells. (ie, ref 0 == ref 0 is always false)
    • Ground vs non-ground -- can cells hold computations which can themselves modify the heap (ref (int -> int))?
    • Full storage -- the semantics can depend on the shape of the heap: ref (ref bool)

    This talk is about full ground state. The approach is to give a denotational semantics for full ground state, in order to get an equational theory for programs. He wants to use it to analyze the value restriction, and relax it if possible. He also wants to validate runST.

    He found a bug in his semantics last night, so he will discuss the general techniques used rather than a particular model.

    • Levy introduced a trick to handle circular ground state, using names for reference types. (A la class names in OO).

    • Kripke worlds are a finite set of locations, and a name for each location. A world morphism is a name-preserving injection. Worlds have monoidal structure. The language lives in the functor category over worlds, and so can interpret CCCs, and references at a world w are interpreted as the set of locations of that type.

    • To model computations, we need a monad.

    For the equational theory, we have the usual dereference/assignment/allocation equations. There is no complete axiomatization of state, so we have to just pick some; they picked a nontrivial one, the equation for runST.

    Then he gave the first monad -- the wrong one -- which used ends to model stores, but I don't have a good feel for those. It validated masking, but not dereferencing. Then he gave a second monad which had too little structure, in that it modelled dereferencing but not masking.

    Then he gave a small type system for runST. It interpreted each region label in runST as a world (as above).

    Lars Birkedal asked about which equations he was interested in, and Ohad named some of them. Daan Leijen said he had done an operational proof of runST and didn't like it, and encouraged Ohad to keep going.

Monday, May 9, 2016

Löb's theorem is (almost) the Y combinator

One of the stranger sociological facts you'll encounter as a computer scientist is that regular mathematicians tend to dislike computer science (and its cousin, formal logic). The reason for that is nicely encapsulated in the following anecdote that Danny Hillis tells about Richard Feynman and the design of the Connection Machine computer:

[...] Richard had completed his analysis of the behavior of the router, and much to our surprise and amusement, he presented his answer in the form of a set of partial differential equations. To a physicist this may seem natural, but to a computer designer, treating a set of boolean circuits as a continuous, differentiable system is a bit strange. Feynman's router equations were in terms of variables representing continuous quantities such as "the average number of 1 bits in a message address." I was much more accustomed to seeing analysis in terms of inductive proof and case analysis than taking the derivative of "the number of 1's" with respect to time. Our discrete analysis said we needed seven buffers per chip; Feynman's equations suggested that we only needed five. We decided to play it safe and ignore Feynman.

The decision to ignore Feynman's analysis was made in September, but by next spring we were up against a wall. The chips that we had designed were slightly too big to manufacture and the only way to solve the problem was to cut the number of buffers per chip back to five. Since Feynman's equations claimed we could do this safely, his unconventional methods of analysis started looking better and better to us. We decided to go ahead and make the chips with the smaller number of buffers.

Fortunately, he was right. When we put together the chips the machine worked.

The relevant bit is Hillis's line about being "much more accustomed to seeing analysis in terms of inductive proof and case analysis". To most mathematicians, inductive proofs are the tool of last resort. You always try to weave the known properties of your object together in a creative way, and do an exhaustive case analysis only if all else fails. However, to logicians and computer scientists, induction is the tool of first resort – as soon as we have an object, we grind it to dust with induction and reassemble it bit by bit with the properties we want.

Regular mathematicians find this process roughly as appealing as filling in tax forms, but it does come with its compensations. One of those compensations is that we get to collect a wide variety of fixed point theorems, to justify an equally wide variety of recursive constructions.

The modal version of Löb's theorem is one of the more exotic fixed point theorems in logic. It is often glossed as representing the "abstract content" of Gödel's incompleteness theorem, which is the sort of explanation that conceals as much as it reveals. If you look at the proof, though, you'll see that its structure is a very close match for something very familiar to computer scientists – the proof of Löb's theorem is (almost) a derivation of the Y combinator!

To understand this, let's try formulating the fragment of provability logic we need as a natural deduction system. The types we will use are the following: $$ \begin{array}{lcl} A & ::= & b \bnfalt A \to B \bnfalt \mu a.\,A(a) \bnfalt \Box A \end{array} $$

This looks like the simply-typed lambda calculus, extended with recursive types and a special $\Box A$ type former. This is almost, but not quite, correct! $\Box A$ is indeed a modal operator -- in particular, it satisfies the axioms of the K4 modality. However, the recursive type $\mu a.\,A$ is not an ordinary recursive type, because unrestricted recursive types lead to logical inconsistency. Instead, $\mu a.\,A$ is a modal fixed point. That is, it satisfies the isomorphism: $$ \mu a.\,A \simeq A{[\Box(\mu a.\,A)/a]} $$

Hopefully, this will become a bit clearer if we present typing and inference rules for this logic. We'll use a variant of Davies and Pfenning's judgemental formulation of modal logic, modified to support the K4 modality rather than the S4 modality. The basic judgement will be $\judge{\Delta; \Gamma}{e}{A}$, where $\Delta$ are modal variables and $\Gamma$ are ordinary variables. $$ \begin{array}{ll} \rule{ x:A \in \Gamma} { \judge{\Delta; \Gamma}{x}{A} } & \\[1em] \rule{ \judge{\Delta; \Gamma, x:A}{e}{B} } { \judge{\Delta; \Gamma}{\fun{x}{e}}{A \to B} } & \rule{ \judge{\Delta; \Gamma}{e}{A \to B} & \judge{\Delta; \Gamma}{e'}{A} } { \judge{\Delta; \Gamma}{e\;e'}{B} } \\[1em] \rule{ \judge{\Delta; \Gamma}{e}{A{[\Box(\mu a.\,A)/a]}} } { \judge{\Delta; \Gamma}{\fold(e)}{\mu a.\,A} } & \rule{ \judge{\Delta; \Gamma}{e}{\mu a.\,A} } { \judge{\Delta; \Gamma}{\unfold(e)}{A{[\Box(\mu a.\,A)/a]}} } \\[1em] \rule{ \judge{\Delta; \Delta}{e}{A} } { \judge{\Delta; \Gamma}{\box(e)}{\Box A} } & \rule{ \judge{\Delta; \Gamma}{e}{\Box A} & \judge{\Delta, x:A; \Gamma}{e'}{C} } { \judge{\Delta; \Gamma}{\letv{\box(x)}{e}{e'}}{C} } \end{array} $$

The two key rules are the introduction and elimination rules for $\Box A$.

The elimination rule is the standard Davies-Pfenning rule. It says that if you can prove $\Box A$, you can add an $A$ to the modal context $\Delta$ and carry on with your proof.

The introduction rule is different, and weaker than the DP S4 rule. It says you can introduce a $\Box A$, if you can prove $A$ only using the modal hypotheses in $\Delta$. So the variables in $\Gamma$ are deleted, and the ones in $\Delta$ are copied into the ordinary hypothetical context. This setup is weaker than the DP introduction rule, and captures the fact that in K4 you can't derive the comonadic unit rule that $\Box A \to A$.
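
As a quick sanity check (my own worked example, not part of the original argument), the rules do let you derive the 4 axiom $\Box A \to \Box\Box A$: $$ \judge{\cdot;\,\cdot}{\fun{x}{\letv{\box(y)}{x}{\box(\box(y))}}}{\Box A \to \Box\Box A} $$ The let-binding puts $y$ into $\Delta$, so both the inner and the outer box introductions are allowed to mention it. The analogous term for $\Box A \to A$ gets stuck: once $y$ is in $\Delta$, the only way to use it is underneath another box.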

Once we have this, it becomes possible to derive Löb's theorem. Before we do that, let's look at the derivation of the Y combinator. To derive a fixed point at a type $P$, we introduce a recursive type $μa.\, a \to P$, and then we can add fold's and unfold's to the standard definition of the Y combinator:

let y : (P → P) → P = 
  λf. let f' = f in                   
      let g =                         
         (λr. let r' = r in          
              f' (unfold r' (r')))
      in 
      (g (fold g))     

You could also write this more compactly as λf. let g = λx. f (unfold x x) in g (fold g), but I have introduced some spurious let-bindings for a reason which will become apparent in a second.

Namely, virtually the same construction works with Löb's theorem! We will take the modal recursive type $\mu a.\,a \to P$, and then add some boxing and unboxing to manage the boxes (I'll annotate the variables with their types so that you can see what's going on):

let lob : □(□P → P) → □P = 
  λf. let box(f') = f in                          // f' modal, □P → P
      let box(g) =                                // g modal, □(μa. a → P) → P
         box(λr. let box(r') = r in               // r' modal, μa. a → P
                 f' (box (unfold r' (box r'))))   // application has type P
      in 
      box(g (box (fold g)))     

Note that if you took $\Box$ to be the identity constructor, and erased all the $\box$'s from this term, then this proof of Löb's theorem is actually exactly the same as Curry's Y combinator!
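
Incidentally, the erased version really runs. Here is a transcription into OCaml (my own; OCaml's iso-recursive datatypes play the role of μ, with the constructor acting as fold and pattern matching as unfold, and since OCaml is call-by-value I take the fixed point at a function type and eta-expand to avoid divergence):

(* The recursive type mu a. a -> ('i -> 'o), with Fold playing fold. *)
type ('i, 'o) mu = Fold of (('i, 'o) mu -> 'i -> 'o)

(* No 'let rec' anywhere: the recursion comes from unfolding r against itself. *)
let fix : (('i -> 'o) -> 'i -> 'o) -> 'i -> 'o =
  fun f ->
    let g (Fold r) = fun x -> f (fun y -> r (Fold r) y) x in
    g (Fold g)

let fact = fix (fun self n -> if n = 0 then 1 else n * self (n - 1))
let () = assert (fact 5 = 120)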

However, people more typically take $\Box$ to mean something like "quotation" or "provability". What this proof shows is that there's nothing mysterious going on with self-reference and quotation -- or rather, it is precisely the same mystery as the universality of the lambda-calculus.

Postscript: One of the nice things about blogging is that you can be more informal than in a paper, but it's also one of the pitfalls. I got an email asking about the history of this system, and of course a blog post doesn't have a related work section! Basically, a few years ago, I worked out a calculus giving a syntax for applicative functors. An applicative is basically a monoidal functor with a strength, and I noticed that the modal logic K is basically a monoidal functor without a strength. There matters stood until last year, when I read Kenneth Foner's Haskell Symposium paper Getting a Quick Fix on Comonads, which was about programming with Löb's theorem. This was interesting, and after reading Wikipedia's proof of Löb's theorem, I saw that (a) the proof was basically Curry's paradox, and (b) I could adapt the rules for applicatives to model the modal logic K4, which could (with a small bit of tinkering) have a very similar calculus to applicatives. (In fact, the proof I give of Löb's theorem is a transliteration of the proof on Wikipedia into the system in this blog post!)

I should emphasize that I haven't really proved anything about these systems: I think the proofs should be straightforward, but I haven't done them. Luckily, at least for K, I don't have to -- recently, Alex Kavvos has put a draft paper on the arXiv, System K: a simple modal λ-calculus, which works out the metatheory of K with a dual-zone Davies/Pfenning-style calculus. If you need syntax for applicative functors minus strength, do take a look!

Friday, May 6, 2016

Life Update: Moving to Cambridge

This August I'll be changing jobs, by leaving Birmingham and joining the Computer Laboratory at the University of Cambridge.

I lived in Cambridge when I did a postdoc with Nick Benton at Microsoft Research, so it'll be nice to be able to reconnect with friends I haven't been able to see regularly for a while. At the same time, of course, I will miss the friends I have made here, such as Dan Ghica and Paul Levy.

Thursday, April 21, 2016

The Essence of Event-Driven Programming

Together with Jennifer Paykin and Steve Zdancewic, I have a new draft paper, The Essence of Event-Driven Programming:

Event-driven programming is based on a natural abstraction: an event is a computation that can eventually return a value. This paper exploits the intuition relating events and time by drawing a Curry-Howard correspondence between a functional event-driven programming language and a linear-time temporal logic. In this logic, the eventually proposition ◇A describes the type of events, and Girard’s linear logic describes the effectful and concurrent nature of the programs. The correspondence reveals many interesting insights into the nature of event-driven programming, including a generalization of selective choice for synchronizing events, and an implementation in terms of callbacks where ◇A is just ¬□¬A.

If you've been following the work on temporal logic and functional reactive programming that I and others (e.g., Alan Jeffrey, Wolfgang Jeltsch, Andrew Cave et al) have been pursuing for a while, then you'll have heard that there's a really compelling view of reactive programs as proof terms for linear temporal logic.

However, the programs that come out seem to be most naturally conceived of as a kind of "higher-order synchronous dataflow" (in the same space as languages like ReactiveML or Lucid). These languages are based on the synchrony hypothesis, which postulates that programs do relatively little work on each trip through the event loop, and so a polling implementation where programs wake up once per trip is reasonable. (If you've done game programming, you may be familiar with the term immediate mode GUI, which is basically the same idea.)

But most GUI toolkits are not implemented in this style. Instead, they work via callbacks, in an event-driven style. That is, you write little bits of code that get invoked for you whenever an event occurs. The advantage of this is that it lets your program sleep until an event actually happens, and when one does, only the relevant pieces of your program run. For applications (like a text editor) where the display doesn't change often and events arrive at a relatively low rate, this avoids a lot of useless computation.

In this paper, we show that the event-based style also fits into the temporal logic framework! Basically, we can encode events (aka promises or futures) using the double-negation encoding of the possibility monad from modal logic. And then the axioms of temporal logic (such as the linearity of time) work out to be natural primitives of event-based programming, such as the select operator.
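
To give a flavour of the callback reading (a toy sketch of my own, not the calculus or implementation from the paper), you can read ◇A concretely as "something you can hand an A-callback to":

(* Events as callback-acceptors: the double-negation-style reading of ◇A. *)
type 'a event = ('a -> unit) -> unit

(* An event that has already happened. *)
let now (x : 'a) : 'a event = fun k -> k x

(* Sequencing: when e fires, use its value to decide what to wait for next. *)
let bind (e : 'a event) (f : 'a -> 'b event) : 'b event =
  fun k -> e (fun x -> f x k)

(* A crude select: fire once, with whichever of the two events fires first. *)
let select (e1 : 'a event) (e2 : 'a event) : 'a event =
  fun k ->
    let fired = ref false in
    let once x = if not !fired then (fired := true; k x) in
    e1 once;
    e2 once

The linear, temporal typing in the paper is what makes operations like select respectable: it is the programming counterpart of the linearity-of-time axiom mentioned above.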

Also, before you ask: an implementation is in progress, but not yet done. I'll post a link when it works!

Tuesday, April 19, 2016

Postdoctoral Position in Recursion, Guarded Recursion and Computational Effects

We are looking for a Postdoctoral Research Fellow to work on an EPSRC-funded project Recursion, Guarded Recursion and Computational Effects.

We shall be investigating fine-grained typed calculi and operational, denotational and categorical semantics for languages that combine either guarded or general recursion and type recursion with a range of computational effects. This will build on existing work such as call-by-push-value and Nakano's guarded recursion calculus.

The successful candidate will have a PhD in programming language semantics or a related discipline, and an ability to conduct collaborative mathematical research. Knowledge of operational and denotational semantics, category theory, type theory and related subjects is desirable.

You will work with Paul Levy (principal investigator) and Neelakantan Krishnaswami (co-investigator) as part of the Theoretical Computer Science group at the University of Birmingham.

The position lasts from 1 October 2016 until 30 September 2019.

You can read more and apply for the job at:

Don't hesitate to contact us (particularly Paul Levy) informally to discuss this position.

The closing date is soon: 22 May 2016.

Reference: 54824. Grade: Point 7. Starting salary: £28,982 a year, in a range up to £37,768 a year.

Monday, March 21, 2016

Agda is not a purely functional language

One of the more surprising things I've learned as a lecturer is the importance of telling lies to students. Basically, the real story regarding nearly everything in computer science is diabolically complicated, and if we try to tell the whole truth up front, our students will get too confused to learn anything.

So what we have to do is to start by telling a simple story which is true in broad outline, but which may be false in some particulars. Then, when they understand the simple story, we can point out the places where the story breaks down, and then use that to tell a slightly less-false story, and so on and so on, until they have a collection of models, which range from the simple-but-inaccurate to the complex-but-accurate. Then our students can use the simplest model they can get away with to solve their problem.

The same thing happens when teaching physics, where we start with Newtonian mechanics, and then refine it with special relativity and quantum mechanics, and then refine it further (in two incompatible ways!) with general relativity and quantum field theories. The art of problem solving is to pick the least complicated model that will solve the problem.

But despite how essential lies are, there is always a risk that our students will discover too early that we are telling them lies -- because their understanding is fragile, they don't have a sense of the zone of applicability of each story, and so an early failure can erode the confidence they need to try tackling problems. Happily, though, one advantage that computer science has over physics is that computers are more programmable than reality: we can build artificial worlds which actually do exactly obey the rules we lay out.

This is why my colleague Martín Escardó says that when teaching beginners, he prefers teaching Agda to teaching Haskell -- he has to tell fewer lies. The simple story that we want to start with when teaching functional programming is that data types are sets, and computer functions are mathematical functions.

In the presence of general recursion, though, this is a lie. Martín has done a lot of work on more accurate stories, such as data types are topological spaces and functions are continuous maps, but this is not what we want to tell someone who has never written a computer program before (unless they happen to have a PhD in mathematics)!

Agda's termination checking means that every definable function is total, and so in Agda it's much closer to true that types are sets and functions are functions. But it's not quite true!

Here's a little example, as an Agda program. First, let's get the mandatory bureaucracy out of the way -- we're defining the module name, and importing the boolean and propositional equality (identity type) libraries.

module wat where 

open import Data.Bool
open import Relation.Binary.PropositionalEquality using (_≡_; refl)

Next, let's define the identity function on booleans.


f : Bool → Bool
f true = true
f false = false

In fact, let's define it a second time!

g : Bool → Bool
g true = true
g false = false

So now we have two definitions of the identity function, f and g, so they must be equal, right? Let's see if we can write a proof of this fact.

wat : f ≡ g
wat = refl

Oops! We see that refl is in red -- Agda doesn't think it's a proof. In other words, two functions which are identical except for their name are not considered equal -- α-equivalence is not a program equivalence as far as Agda is concerned!

This is somewhat baffling, but does have an explanation. Basically, an idea that was positively useful in non-dependent functional languages has turned toxic in Agda. Namely, type definitions in ML and Haskell are generative -- two declarations of the same type will create two fresh names which cannot be substituted for each other. For example, in Ocaml the following two type definitions

type foo = A | B 
type bar = A | B 

are not interchangeable -- Ocaml will complain if you try to use a foo where a bar is expected:

OCaml version 4.02.3

let x : foo = A 

let bar_id (b : bar) = b

let _ = bar_id x;;

Characters 96-97:
  let _ = bar_id x;;
                 ^
Error: This expression has type foo but an
        expression was expected of type bar

In other words, type definitions have an effect -- they create fresh type names. Haskell alludes to this fact with the newtype keyword, and in fact that side-effect is essential to its design: the type class mechanism permits overloading by type, and the generativity of type declarations is the key feature that lets users define custom overloads for their application-specific types.

Basically, Agda inherited this semantics of definitions from Haskell and ML. Unfortunately, this creates problems for Agda, by complicating its equality story. So we can't quite say that Agda programs are sets and functions. We could say that they are sets and functions internal to the Schanuel topos, but (a) that's something we may not want to hit students with right off the bat, and (b) Agda doesn't actually include the axioms that would let you talk about this fact internally. (In fact, Andy Pitts tells me that the right formulation of syntax for nominal dependent types is still an open research question.)

Thursday, March 17, 2016

Datafun: A Functional Datalog

Together with Michael Arntzenius, I have a new draft paper, Datafun: a Functional Datalog:

Datalog may be considered either an unusually powerful query language or a carefully limited logic programming language. It has been applied successfully in a wide variety of problem domains thanks to its "sweet spot" combination of expressivity, optimizability, and declarativeness. However, most use-cases require extending Datalog in an application-specific manner. In this paper we define Datafun, an analogue of Datalog supporting higher-order functional programming. The key idea is to track monotonicity via types.

I've always liked domain specific languages, but have never perpetrated one before. Thanks to Michael, now I have! Even better, it's a higher-order version of Datalog, which is the language behind some of my favorite applications, such as John Whaley and Monica Lam's BDDBDDB tool for writing source code analyses.
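
To show the flavour of the Datalog side (a toy of my own in OCaml, not Datafun itself): a Datalog query denotes the least fixed point of a monotone function over finite sets of facts, the classic example being reachability as the transitive closure of an edge relation.

(* Facts are pairs; a "rule" is a monotone step function on sets of facts. *)
module P = Set.Make (struct type t = int * int let compare = compare end)

(* path(a, c) :- edge(a, b), path(b, c).  One round of applying the rule. *)
let step edges known =
  P.fold (fun (a, b) acc ->
      P.fold (fun (b', c) acc ->
          if b = b' then P.add (a, c) acc else acc)
        known acc)
    edges (P.union edges known)

(* Naive least-fixed-point iteration; it terminates because step is monotone
   and only finitely many facts can ever be derived. *)
let rec lfp f x = let y = f x in if P.equal y x then x else lfp f y

let reachable edges = lfp (step edges) edges

The point of Datafun's type system is to guarantee that the functions you take fixed points of, like step here, really are monotone, even in a higher-order language.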

You can find the Github repo here, as well. Michael decided to implement it in Racket, which I had not looked at closely in several years. It's quite nice how little code it took to implement everything!