AlgebraicJulia in Lean 4?

One question we get a lot in AlgebraicJulia is why we use Julia instead of a dependently typed language. Before Lean 4, we would normally say that we used Julia because it was much faster than Idris or Agda, but now Lean 4 is supposed to be pretty fast.

I’ve been meaning to learn more Lean for a while, and after today’s Topos Colloquium on Lean, I thought I’d give it a whirl and see if I could do something basic like adding vectors.

Of course, if this was going to be fast, it was going to need to use Array from Lean 4, which uses a contiguous block of memory under the hood.

I made a quick wrapper for Array which took the length as a type parameter:

structure Vector (n : Nat) (α : Type u) where
   data : Array α
   prf : data.size = n
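
For instance, constructing one of these means supplying the size proof up front; for an array literal, rfl discharges it by computation (a quick usage sketch, with v3 as an illustrative name):

-- the size of a literal reduces by computation, so `rfl` closes the proof
def v3 : Vector 3 Nat := ⟨#[1, 2, 3], rfl⟩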

Then it was on to defining addition! Fortunately, there was already a zipWith in the standard library, so it should have been as easy as zipWith add.

But not so fast… How do we prove that zipWith of two Arrays of the same size returns a third array of that same size? Maybe there is something in the standard library, but I didn’t see it. So off I went to roll my own.
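
Concretely, the fact I needed is something like this (stated my way, assuming the as.zipWith bs f argument order, with sorry standing in for the proof):

-- zipWith of two equal-size arrays preserves that size
theorem size_zipWith {α β γ} (as : Array α) (bs : Array β) (f : α → β → γ)
    (h : as.size = bs.size) : (as.zipWith bs f).size = as.size :=
  sorry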

I did it… but at what cost???

The code is here: https://github.com/olynch/nalgebra/blob/main/src/nalgebra.lean. I’m sure that an experienced Lean developer could have written this much more cleanly. But in Julia, we add vectors with a + b.

So what have I learned from this? First of all, it was pretty cool to use the interactive environment for Lean; this definitely wouldn’t have been possible without it. Secondly, if I ever use Lean in the future, I’m going to be unapologetic about the use of sorry; actually writing out this proof was a pointless exercise. So I guess the real argument for Lean is that you get to choose where you put proofs and where you don’t.
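
To make that concrete, the unapologetic version of vector addition looks something like this (a sketch against the Vector wrapper above, not the code from the repo):

def Vector.add {n : Nat} {α : Type u} [Add α] (a b : Vector n α) : Vector n α :=
  { data := a.data.zipWith b.data (· + ·),
    -- the size fact from before, left as an unapologetic stub
    prf := sorry }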

Ultimately… I don’t really see the point of this for scientific computing. Even with support for Arrays, using them seems really clunky. It seems like the only way to create new Arrays is to push to them; you can’t allocate a big chunk of memory and then assign into it in a for-loop, which is often the way to get the best performance. The best-case scenario for Lean seems to be something like a dependently typed numpy clone, but it’s hard to go back to numpy when you are used to the tight integration of Arrays into Julia.

But I hope to be proved wrong! If you read this and feel the urge to tell me I’m wrong, I will happily admit it, on the condition that you show me how to work with n-dimensional arrays in Lean in a way that comes close to the convenience of Julia and would easily support things like numerical solutions to differential equations, custom convolution kernels, Einstein summation, etc.

1 Like

Oh, and I just learned from here that Array in Lean is always an array of pointers; i.e., you can’t have an Array of floating-point numbers where the floats sit in a contiguous block of memory. So that’s a hard limit on performance as well.

Wait, what? Array is always Array{Ptr}? Like, what is the point? Just use a linked list at that point. At least then you get O(1) push and pop.

1 Like

There is also ByteArray, which stores contiguous data. You can then manually cast to and from the raw bytes if you want: https://github.com/lecopivo/SciLean/blob/b100653d2c80a931eb261f9fa038895d3556aa52/SciLean/Data/DataArray/DataArray.lean#L9.
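
On its own, ByteArray already works like an ordinary contiguous buffer; for instance (plain core Lean 4; zeros is just an illustrative name):

def zeros (n : Nat) : ByteArray := Id.run do
  -- reserve the full capacity up front, then fill byte by byte
  let mut b := ByteArray.mkEmpty n
  for _ in [0:n] do
    b := b.push 0
  return b

#eval (zeros 4).size -- 4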

OK, this is pretty neat: it supports in-place addition of vectors!

Basically, the way Lean does mutation is that it notices when a value’s reference count is exactly 1, i.e. nothing else references it, and then mutates in place instead of copying.

So for this: https://github.com/lecopivo/SciLean/blob/b100653d2c80a931eb261f9fa038895d3556aa52/SciLean/Data/ArrayType/Basic.lean#L147, it will add in-place to the first argument if the first argument isn’t referenced anywhere else.
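
In plain Lean 4 terms, the effect is roughly this (an illustrative sketch of the same pattern, not SciLean’s actual code):

def addInPlace (x y : Array Float) : Array Float := Id.run do
  let mut x := x
  for i in [0:x.size] do
    -- if the caller's `x` is not shared, `set!` writes into the
    -- existing buffer instead of allocating a copy
    x := x.set! i (x[i]! + y[i]!)
  return x

If x is still referenced somewhere else, the runtime silently falls back to copying, so the program stays pure either way.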

And DataArray, which uses a contiguous block of memory to store floats, implements ArrayType, so it gets these methods.

So after about 3000 lines of code, you get a less performant implementation of x .+= y :upside_down_face:

Sarcasm aside, this is still miles ahead of any other dependently typed programming language, and most of the rest of the code is generic over any implementation of vector spaces, so you don’t have to think about the nasty details. I’m definitely glad that someone is working on this.

3 Likes

+1 for ‘unapologetic about the use of sorry’

I’m glad someone picked up on that :laughing:

1 Like

Hey, author of SciLean here; I just stumbled on this :slight_smile: I totally agree with being “unapologetic about the use of sorry”: proving stuff is not the point of what I’m trying to achieve with SciLean. In fact, my feeling is that dependent types have a lot to offer besides proving correctness. Unfortunately, not many people see that: either they care about 100% correctness, or they think you have to prove everything and that sounds like a chore/waste of time.

My experience in writing numerical software is that you usually start on paper: you formulate your problem and do some calculations until you get it into a form that can be implemented. I want this process to be part of the program: formulate your problem, and with a series of transformations and approximations turn it into a program that can be executed. Lean’s interactivity is amazing for this. The big advantage is that if you want to generalize/extend your problem, you do not have to redo all those calculations; the compiler will tell you where they break.

The support for n-dimensional arrays is very limited at this point, as my current focus is on general automatic differentiation. The API is not very well developed yet, but you can, for example, write matrix-vector multiplication as ⊞ i => ∑ j, A[i,j] * x[j]. The notation ⊞ i => f i creates an array with values f i. I’m taking inspiration from dex-lang here, which stresses the point that arrays are nothing but memoized functions, hence the similarity to the lambda notation fun i => f i. Furthermore, I’m not too concerned about performance right now. My goal is to generate code that roughly satisfies “can’t get much faster if directly written in C”. Later on I can worry about writing a specialized compiler for a subset of Lean that achieves peak performance.
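
For comparison, the closest thing in core Lean 4 is Array.ofFn, which tabulates a function over Fin n; the ⊞ notation plays the same role for SciLean’s own array types. A sketch of the matrix-vector product written that way (matVec is an illustrative name, not SciLean API):

def matVec {n m : Nat} (A : Fin n → Fin m → Float) (x : Fin m → Float) : Array Float :=
  Array.ofFn fun i =>
    -- ∑ j, A i j * x j, written as a fold over a tabulated row
    (Array.ofFn fun j => A i j * x j).foldl (· + ·) 0.0

#eval matVec (fun _ _ => 1.0) (fun _ => 2.0) (n := 2) (m := 3) -- #[6.000000, 6.000000]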

A recent achievement I managed to get is correctly differentiating convolution; the result is just convolution with a flipped kernel:

example  (w : K ^ Idx' (-5) 5)
  : (∇ (x : K ^ Idx 100), ⊞ (i : Idx 100) => ∑ j, w[j] * x[j.1 +ᵥ i])
    = 
    fun _ dy => ⊞ i => ∑ (j : Idx' (-5) 5), w[j] * dy[-j.1 +ᵥ i] :=
by
  conv => lhs; autodiff; autodiff; simp

Here I take an array x of 100 elements and do a convolution with a kernel w of width 5. The proof conv => lhs; autodiff; autodiff; simp just focuses on the left-hand side (lhs), executes automatic differentiation, and then checks that the lhs is equal to the rhs. If I’m not mistaken, most AD systems have the derivative of convolution hardcoded.

3 Likes

This is very slick! I like the notation for creating arrays.

I have to wonder, however, how it works for things like dynamic programming, where you reference previously set values while making new ones. Do you support fixing the iteration order?

The notation ⊞ i => f i is really reserved only for converting a function f : I -> X to an array with values X indexed by I. (It is inspired by the paper Verified Tensor-Program Optimization Via High-Level Scheduling Rewrites, but I do not think that is the origin of the notation.)

In the case you describe, you would create an empty array with a specified capacity and then write a for loop that iteratively pushes new elements, like this:

def fib (n : Nat) : Array Nat := Id.run do
  let mut a : Array Nat := .mkEmpty n
  a := a.push 0
  a := a.push 1
  for i in [2:n] do
    -- no reallocation because `a` is preallocated to size `n` 
    a := a.push (a[i-1]! + a[i-2]!)
  a

#eval fib 13 -- #[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]

The same will be possible with SciLean’s DataArray, which stores the data as a contiguous block of memory. I just never had a need for that, so the API is not there yet; so far I have done only the bare minimum needed to make DataArrayN a vector space.

2 Likes

Here is a super quick attempt at introducing notation like ⊞ i => f i where you have access to previous values:

def dynInit {α} (f : ∀ i, (Fin i → α) → α) (n : Nat) : Array α := Id.run do
  let mut a : Array α := .mkEmpty n
  for i in [0:n] do
    -- `a` holds exactly `i` elements here: easy to believe, tedious to prove
    have h : i = a.size := sorry
    let val := f i (fun j => a.get (h ▸ j))
    a := a.push val
  a


macro "⊞[" n:term "] " i:ident prev:ident " => " b:term : term =>
  `(dynInit (fun $i $prev => $b) $n)


-- #[0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
#eval ⊞[13] i prev =>
          if i = 0 then
            0
          else if i = 1 then
            1
          else
            prev ⟨i-1,sorry⟩ + prev ⟨i-2,sorry⟩

1 Like

Whoa, this is awesome! That was not so hard to do at all; I’m very impressed.