One question we get a lot in AlgebraicJulia is why we use Julia instead of a dependently typed language. Before lean4, we would normally say that we used Julia because it was much faster than Idris or Agda, but now lean4 is supposed to be pretty fast.
I've been meaning to learn more lean for a while, and after today's Topos Colloquium on lean, I thought I'd give it a whirl and see if I could do something basic like add vectors.
Of course, if this was going to be fast, it was going to need to use `Array` from lean4, which uses a contiguous block of memory under the hood. I made a quick wrapper for `Array` which took the length as a type parameter:
```lean
structure Vector (n : Nat) (α : Type u) where
  data : Array α
  prf  : data.size = n
```
Then it was on to defining addition! Fortunately, there was already `zipWith` in the standard library, so defining addition should have been as easy as a single call to `zipWith`.
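The naive definition would look something like the following sketch (the field names follow the `Vector` wrapper above; I'm assuming the `Array.zipWith` signature that takes the two arrays before the function, which may differ between toolchain versions):

```lean
-- Sketch: pointwise addition via Array.zipWith.
-- The `prf` field demands a proof that the result has size n,
-- and that proof obligation is where the trouble starts.
def add [Add α] (v w : Vector n α) : Vector n α :=
  { data := v.data.zipWith w.data (· + ·)
    prf  := by
      -- goal: (v.data.zipWith w.data (· + ·)).size = n
      sorry }
```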
But not so fast… How do we prove that `zipWith` of two `Array`s of the same size returns a third array of that same size? Maybe there is something in the standard library, but I didn't see it. So off I went to roll my own.
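The missing lemma, stated roughly as I needed it (the name here is hypothetical; a standard-library or Mathlib lemma along these lines may well exist under another name), is:

```lean
-- Hypothetical lemma: zipping two equal-size arrays preserves
-- that size. The actual proof goes by induction on the index
-- of the underlying zipWith loop, and it is not short.
theorem size_zipWith_eq (as : Array α) (bs : Array β)
    (f : α → β → γ) (h : as.size = bs.size) :
    (as.zipWith bs f).size = as.size := by
  sorry
```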
I did it… but at what cost???
The code is here: https://github.com/olynch/nalgebra/blob/main/src/nalgebra.lean. I'm sure that an experienced lean developer could have written this much more cleanly. But in Julia, we add vectors with `a + b`.
So what have I learned from this? First of all, it was pretty cool to use the interactive environment for lean, and this definitely wouldn't have been possible without it. Secondly, if I ever use lean in the future, I'm going to be unapologetic about the use of `sorry`; actually writing out this proof was a pointless exercise. So I guess the real argument for lean is that you get to choose where you put proofs and where you don't.
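Concretely, "choosing where to put proofs" means something like this sketch: keep the size invariant in the type, but discharge the obligation with `sorry` until (or unless) it actually matters:

```lean
-- The unapologetic version: the type still promises the sizes
-- match, but we decline to prove it.
def add' [Add α] (v w : Vector n α) : Vector n α :=
  ⟨v.data.zipWith w.data (· + ·), sorry⟩
```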
Ultimately… I don't really see the point of this for scientific computing. Even with support for `Array`s, using them seems really clunky. It seems like the only way to create new `Array`s is to push to them; you can't allocate a big chunk of memory and then assign to it in a for-loop, which is often the way to get the best performance. It seems like the best-case scenario for lean is something like a dependently typed `numpy` clone, but it's hard to go back to `numpy` when you are used to the tight integration of arrays into Julia.
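For concreteness, here is the push-based construction pattern I mean (a sketch; the function name is made up, and I'm using lean4's `do`-notation with a mutable binding):

```lean
-- Building an array by repeatedly pushing, as described above:
-- the array grows from empty rather than being written into
-- preallocated memory.
def iota (n : Nat) : Array Nat := Id.run do
  let mut a : Array Nat := #[]
  for i in [0:n] do
    a := a.push i
  return a
```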
But I hope to be proved wrong! If you read this and feel the urge to tell me I'm wrong, I will happily admit it, on the condition that you show me how to work with n-dimensional arrays in lean in a way that comes close to the convenience of Julia, and would easily support things like numerical solutions to differential equations, custom convolution kernels, Einstein summation, etc.