Hey, author of SciLean here, I just stumbled on this. I totally agree with being “unapologetic about the use of `sorry`”; proving stuff is not the point of what I'm trying to achieve with SciLean. In fact, my feeling is that dependent types have a lot to offer besides proving correctness. Unfortunately, not many people see that: either they care about 100% correctness, or they think you have to prove everything, which sounds like a chore/waste of time.
My experience writing numerical software is that you usually start on paper: formulate your problem and do some calculations until you get it into a form that can be implemented. I want this process to be part of the program: formulate your problem and, through a series of transformations and approximations, turn it into a program that can be executed. Lean's interactivity is amazing for this. The big advantage is that if you want to generalize/extend your problem, you do not have to redo all those calculations; the compiler will tell you where they break.
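To make that concrete, here is a minimal, hypothetical sketch of the pattern; the names `problem`, `impl`, and `impl_eq_problem` are made up for illustration and are not SciLean API. The specification is stated as it appears on paper, the implementation is the outcome of the pen-and-paper calculation, and the calculation itself is recorded as an equality that Lean re-checks whenever the problem changes:

```lean
-- Hypothetical sketch of the workflow, not actual SciLean API.

-- The problem as formulated on paper (stand-in body).
def problem (x : Float) : Float := sorry

-- The executable program obtained after a series of manual
-- transformations/approximations of `problem`.
def impl (x : Float) : Float := x * x

-- The calculation connecting the two, recorded as a theorem
-- (possibly unapologetically `sorry`d). If `problem` is later
-- generalized, this is exactly where Lean will complain.
theorem impl_eq_problem : impl = problem := sorry
```

If `problem` later gains an extra parameter or a weaker hypothesis, `impl_eq_problem` stops elaborating, which is the "compiler tells you where the calculations break" part.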
The support for n-dimensional arrays is very limited at this point, as my current focus is on general automatic differentiation. The API is not very well developed yet, but you can, for example, write matrix multiplication as `⊞ i => ∑ j, A[i,j] * x[j]`. The notation `⊞ i => f i` creates an array with values `f i`. I'm taking inspiration from dex-lang here, which stresses the point that arrays are nothing but memoized functions, hence the notation similar to lambda functions `fun i => f i`.
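Spelled out as a definition, that looks roughly like the following. This is a sketch assuming SciLean's `Float ^ Idx n` array types and the `⊞`/`∑` notation above, with an illustrative signature and arbitrarily chosen sizes rather than the actual library API:

```lean
import SciLean
open SciLean

-- Matrix-vector multiplication as "the array whose i-th entry is the
-- dot product of the i-th row of A with x" (illustrative signature).
def matVec (A : Float ^ (Idx 3 × Idx 4)) (x : Float ^ Idx 4) : Float ^ Idx 3 :=
  ⊞ i => ∑ j, A[i,j] * x[j]
```

The point of the dex-lang view is that this reads exactly like the function `fun i => ∑ j, A[i,j] * x[j]`, except that `⊞` materializes the results into memory.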
Furthermore, I'm not too concerned about performance right now. My goal is to generate code that roughly satisfies: “can't get much faster if written directly in C”. Later on I can worry about writing a specialized compiler for a subset of Lean that achieves peak performance.
A recent achievement I managed to get is correctly differentiating convolution; the result is just convolution with a flipped kernel:
```lean
example (w : K ^ Idx' (-5) 5)
  : (∂† (x : K ^ Idx 100), ⊞ (i : Idx 100) => ∑ j, w[j] * x[j.1 +ᵥ i])
    =
    fun _ dy => ⊞ i => ∑ (j : Idx' (-5) 5), w[j] * dy[-j.1 +ᵥ i] :=
by
  conv => lhs; autodiff; autodiff; simp
```
Here I take an array `x` of 100 elements and do convolution with a kernel `w` indexed from -5 to 5. The proof `conv => lhs; autodiff; autodiff; simp` just focuses on the left-hand side (lhs), executes automatic differentiation, and then checks that the lhs is equal to the rhs. If I'm not mistaken, most AD systems have the derivative of convolution hardcoded.
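For reference, here is the pen-and-paper calculation behind that result, using the convention in the example, $f(x)_i = \sum_j w_j\, x_{i+j}$, and ignoring boundary handling. Since $f$ is linear in $x$, its adjoint falls out of one index substitution $k = i + j$:

$$
\langle f(x), y \rangle
  = \sum_{i}\sum_{j} w_j\, x_{i+j}\, y_i
  = \sum_{k} x_k \sum_{j} w_j\, y_{k-j}
  = \langle x, f^{\dagger}(y) \rangle ,
\qquad
f^{\dagger}(y)_i = \sum_{j} w_j\, y_{i-j},
$$

which is exactly the `dy[-j.1 +ᵥ i]` term in the statement: the same convolution, with the kernel index flipped.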