They told you in school that types and functions in the simply typed lambda calculus form a category. They lied. Specifically, this is only true once you identify terms up to enough of the equational theory (or normalize them) that composition becomes strictly associative.
But I don’t like to normalize things. I want category theory to model actual programming, not idealized programming, because the devil is in the details.
So here’s my proposed structure for what the associated categorical structure to the STLC should be. It’s a bit silly, but I think there’s an insight here.
Consider a virtual double category where:
- The objects are types.
- The vertical morphisms are just identities.
- The horizontal morphisms are functions in STLC.
- There is a 2-cell from X_0 \xrightarrow{f_1} \cdots \xrightarrow{f_n} X_n to X_0 \xrightarrow{g} X_n if and only if [f_n] \circ \cdots \circ [f_1] = [g], where [f_i] \colon [X_{i-1}] \to [X_i] is the denotation of f_i in \mathbf{Set}.
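To make the 2-cell condition concrete, here’s a toy sketch in Haskell (purely illustrative: the fragment, and names like `den` and `twoCell`, are made up for this post). I restrict to endofunctions on a single finite base type so that equality of denotations is actually decidable by exhaustive checking.

```haskell
-- A toy fragment of the idea: endofunctions on one finite base type,
-- so equality of denotations can be checked by exhaustion.
data Term = Id | Not | ConstT | ConstF
  deriving (Show, Eq)

-- Denotation [f] : Bool -> Bool in Set (here, as a Haskell function).
den :: Term -> (Bool -> Bool)
den Id     = id
den Not    = not
den ConstT = const True
den ConstF = const False

-- Extensional equality of denotations, decidable because Bool is finite.
extEq :: (Bool -> Bool) -> (Bool -> Bool) -> Bool
extEq h k = all (\b -> h b == k b) [False, True]

-- There is a 2-cell from the path  f1, ..., fn  (listed left to right)
-- to g exactly when  [fn] . ... . [f1]  =  [g].
twoCell :: [Term] -> Term -> Bool
twoCell path g = extEq (foldl (flip (.)) id (map den path)) (den g)
```

For example, `twoCell [Not, Not] Id` is `True`: in this toy setting the path `Not, Not` admits `Id` as a composite.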
Now, this virtual double category turns out to be representable. That is, given any composable sequence of horizontal morphisms, there exists a single horizontal morphism which has the universal property of the composite. However, the representation is only unique up to isomorphism.
In my opinion, this presentation says something important: it says that there are many composites of a given pair of functions, and the one you actually get depends on how clever your optimizer is. For instance, if [g] = [f]^{-1}, then a very clever compiler should simply emit a no-op when I ask it to compose g and f.
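Here’s a toy version of the “clever compiler” point (again just an illustration I cooked up, not a real optimizer): programs are lists of primitive steps, and two different composition operators give syntactically different composites with the same denotation.

```haskell
-- Programs are lists of primitive steps on Int, run left to right.
data Step = Inc | Dec deriving (Show, Eq)
type Program = [Step]

runStep :: Step -> Int -> Int
runStep Inc = (+ 1)
runStep Dec = subtract 1

-- Denotation of a program in Set.
run :: Program -> Int -> Int
run prog x = foldl (flip runStep) x prog

-- The naive composite: just concatenate.
naiveCompose :: Program -> Program -> Program
naiveCompose = (++)

-- A slightly cleverer composite: cancel one inverse pair at the seam,
-- so composing [Inc] with [Dec] emits a no-op (the empty program).
optCompose :: Program -> Program -> Program
optCompose f g = case (reverse f, g) of
  (Inc : f', Dec : g') -> reverse f' ++ g'
  (Dec : f', Inc : g') -> reverse f' ++ g'
  _                    -> f ++ g
```

Both `naiveCompose [Inc] [Dec]` and `optCompose [Inc] [Dec]` are composites in the sense above, since `run` agrees on them, but the first is `[Inc, Dec]` and the second is `[]`. And `optCompose` really isn’t associative on the nose: `optCompose (optCompose [Inc, Inc] [Dec]) [Dec]` is `[]` while `optCompose [Inc, Inc] (optCompose [Dec] [Dec])` is `[Inc, Dec]`, even though both denote the identity.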
Thinking of functions as forming this virtual double category, rather than an actual category, gives me a structure in which I can talk about how “optimized composition” is not associative on the nose but is still canonical in a certain sense: whatever representative it picks has to have the right denotation. This feels right to me.
That’s all for now!
When I tried to learn about the lambda-calculus and cartesian closed categories I was very confused, because on the one hand they told me that this was connected to computation, but on the other hand the processes of computation are not the morphisms in a cartesian closed category.
I decided that processes of computation are 2-morphisms in a cartesian closed 2-category, and I taught a year-long course about it. You can see course notes starting here, and I got into 2-categories here.
The really interesting part to me is how beta- and eta-reduction correspond to the “zig-zagurator” 2-morphisms in a pseudoadjunction - the appropriately weakened concept of adjunction between 2-categories. And these zig-zagurators, in turn, give Thom’s fold catastrophes when you draw them as surface diagrams! So there is a connection between computation, 2-categories, and catastrophe theory.
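To give a flavour of the “rewrites as 2-morphisms” reading in code (my own illustrative sketch, not the formalism of the papers below): lambda terms are the 1-dimensional data, and a beta or eta step is recorded as an explicit cell with a source and a target, rather than collapsed into an equation.

```haskell
-- Lambda terms: the 1-dimensional data.
data Term
  = Var String
  | Lam String Term
  | App Term Term
  deriving (Show, Eq)

-- Naive substitution; fine for illustration as long as bound names are
-- kept distinct (capture-avoidance elided).
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (Lam y b) = if x == y then Lam y b else Lam y (subst x s b)
subst x s (App f a) = App (subst x s f) (subst x s a)

-- A 2-cell here is a named rewrite with an explicit source and target.
data Rewrite = Rewrite { ruleName :: String, source :: Term, target :: Term }
  deriving Show

-- Beta: (\x. b) a  ~>  b[a/x]
betaStep :: Term -> Maybe Rewrite
betaStep t@(App (Lam x b) a) = Just (Rewrite "beta" t (subst x a b))
betaStep _                   = Nothing

-- Eta-expansion: f  ~>  \x. f x   (x assumed not free in f)
etaExpand :: String -> Term -> Rewrite
etaExpand x f = Rewrite "eta" f (Lam x (App f (Var x)))
```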
I don’t doubt that virtual double categories can help, but I hope that these insights - which turn out to go back to Barnaby Hilken, R. A. G. Seely, C. Barry Jay and Neil Ghani - become more widely known!
- Barnaby P. Hilken, Towards a proof theory of rewriting: the simply-typed 2λ-calculus, Theor. Comp. Sci. 170 (1996), 407–444.
- R. A. G. Seely, Weak adjointness in proof theory, in Proc. Durham Conf. on Applications of Sheaves, Springer Lecture Notes in Mathematics 753, Springer, Berlin, 1979, pp. 697–701.
- R. A. G. Seely, Modeling computations: a 2-categorical framework, in Proc. Symposium on Logic in Computer Science 1987, Computer Society of the IEEE, pp. 65–71.
- C. Barry Jay and Neil Ghani, The virtues of eta-expansion, J. Functional Programming 5(2) (1995), 135–154.
Also pertinent is the work of Philip Saville and his PhD supervisor Marcelo Fiore on cartesian closed bicategories, which (to my recollection) follows the philosophy that @johncarlosbaez mentions, viewing beta-reduction and eta-expansion as “zig-zagurators”. See “Cartesian closed bicategories: type theory and coherence” (arXiv:2007.00624) and https://philipsaville.co.uk/lics2020.pdf.
They proved a coherence/normalisation theorem using a bicategorical version of Artin gluing following Fiore’s semantic analysis of (1-dimensional) normalisation-by-evaluation. In particular, they establish that in the free cartesian closed bicategory on a set, any two parallel 2-cells are equal.
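For readers who haven’t met it, here is roughly what 1-dimensional normalisation-by-evaluation looks like (a standard textbook-style sketch, not the construction in the paper): evaluate syntax into a semantic domain, then read the result back as a normal form.

```haskell
-- Syntax with de Bruijn indices.
data Term = Var Int | Lam Term | App Term Term
  deriving (Show, Eq)

-- Semantic domain: functions for lambdas, plus "stuck" neutral terms.
data Sem     = SLam (Sem -> Sem) | SNeu Neutral
data Neutral = NVar Int | NApp Neutral Sem

eval :: [Sem] -> Term -> Sem
eval env (Var i)   = env !! i
eval env (Lam b)   = SLam (\v -> eval (v : env) b)
eval env (App f a) = case eval env f of
  SLam g -> g (eval env a)
  SNeu n -> SNeu (NApp n (eval env a))

-- Read a semantic value back into syntax at binding depth d,
-- yielding a beta-normal form (no type-directed eta here).
reify :: Int -> Sem -> Term
reify d (SLam g) = Lam (reify (d + 1) (g (SNeu (NVar d))))
reify d (SNeu n) = reifyNeu d n

reifyNeu :: Int -> Neutral -> Term
reifyNeu d (NVar k)   = Var (d - k - 1)   -- de Bruijn level back to index
reifyNeu d (NApp n v) = App (reifyNeu d n) (reify d v)

-- Untyped here, so this terminates only on normalizing terms.
normalize :: Term -> Term
normalize t = reify 0 (eval [] t)
```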
Thanks Jo(h)ns! This makes a lot of sense, and I very much like the perspective that the natural notion of 2-cell is not (as I proposed) the setoidal notion of “denotational equivalence” but something more “concrete”: an actual rewrite between the two programs! That’s far more interesting, and far more implementable!
One question I still have, though: rewriting is all well and good as a theoretical semantics of computation, but is there an account of computation that works the same way real compilers/interpreters work? That is, via translation to a machine language, via tree-walking, or via other such techniques?
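To be concrete about the contrast I have in mind (just a sketch of mine): a tree-walking, environment-passing evaluator like the toy one below never rewrites the program text at all; it walks the tree carrying an environment, which seems quite far from the 2-cell/rewriting picture.

```haskell
-- A closure-based, tree-walking evaluator: it never rewrites the term,
-- it just walks it carrying an environment (call-by-value, de Bruijn).
data Term  = Var Int | Lam Term | App Term Term
data Value = Closure [Value] Term   -- a lambda body paired with its environment

evalTW :: [Value] -> Term -> Value
evalTW env (Var i)   = env !! i
evalTW env (Lam b)   = Closure env b
evalTW env (App f a) =
  case evalTW env f of
    Closure env' b -> evalTW (evalTW env a : env') b
```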