- Incremental computing. Use fine-grained caching and purity to enable intermediate results to be shared across multiple similar computations. Examples:
- Partial evaluation. Make the performance of interpreted domain specific languages competitive with native code. Enable performant abstraction by doing appropriate specializations. Examples:
- Propagation networks and friends. Write down a bunch of assumptions, and then have the computer derive all of the conclusions from those assumptions via specified inference rules. Examples:
- Structured diffs. When state changes from A to B, record in a meaningful way how that state changed. This is useful both for reconciling this change with a different change A → B′ (for purposes of collaboration), and for incremental updates of derived quantities as in (1.). Examples:
- Capabilities-based effects. Tie the ability to make side effects to the ability to access certain variables; these variables are “capability bearers”. Use lexical analysis to figure out which parts of your code can access those variables. Controlling side effects is key for enabling principled caching and thus incremental computing; this is the way to do that without making your type system annoying. Examples:
- Probabilistic programming. Reason about the world in the presence of uncertainty. Make predictions, explore counterfactuals and causality, don’t just fit a big old neural net and pray to the “generalize to things not in the training distribution” gods. Use your GPU compute on Markov Chain Monte Carlo! Examples:
- Scientific models as objects. A scientific model (which might involve differential equations, random variables, etc.) should be seen as an assumption about the world, and be a first class citizen in both probabilistic programming (6.) and “assumption propagation” (3.). Derivation of conclusions from assumptions might involve composing several scientific models. If we are describing scientific models by data (i.e., the AST of a differential equation), then when we simulate them, we need (2.) in order for the code in the hot loop of the differential equation solver to be fast (it would be slow if we tree-walked the AST every time step). Examples:
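To make (1.) concrete, here’s a minimal Python sketch (the word-counting functions and documents are hypothetical, not from any particular system) of fine-grained caching over pure functions: when one document changes, only that document is re-counted, and the rest of the work is shared.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def word_counts(doc: str) -> int:
    # Pure function of its input, so caching it is sound.
    return len(doc.split())

@lru_cache(maxsize=None)
def total_words(docs: tuple) -> int:
    # Each per-document count is cached independently, so recomputing
    # the total after one document changes only re-counts that document.
    return sum(word_counts(d) for d in docs)

total_words(("a b c", "d e", "f"))       # counts all three documents
total_words(("a b c", "d e", "x y"))     # reuses cached counts for the first two
```

Real incremental frameworks track dependencies and invalidate automatically; this only shows the purity-plus-caching core of the idea.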
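For (2.), a toy sketch of partial evaluation (all names hypothetical): a general interpreter loops over its “program” (here just an exponent) every call, while the specializer pays that cost once and emits straight-line code.

```python
def power_interpreted(x, n):
    # General interpreter: the loop over n runs on every call.
    result = 1
    for _ in range(n):
        result *= x
    return result

def specialize_power(n):
    # Partial evaluation: unroll the loop once, at specialization time,
    # so the residual code has no interpretation overhead.
    body = " * ".join(["x"] * n) or "1"
    return eval(f"lambda x: {body}")

power5 = specialize_power(5)  # residual program: lambda x: x * x * x * x * x
power5(2)                     # == 32, with no loop in the hot path
```

The same move scales up to interpreters for real DSLs: fix the program, specialize the interpreter against it, and the residual code competes with hand-written native code.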
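A minimal sketch of (3.) (the `Cell`/`adder` design is illustrative, not any particular propagator library): cells hold partial information, propagators watch them, and conclusions flow in whatever direction the assumptions allow.

```python
class Cell:
    def __init__(self):
        self.value = None
        self.watchers = []

    def set(self, v):
        # Only fill in a cell once; re-derivations of the same value are no-ops.
        if self.value is None:
            self.value = v
            for w in self.watchers:
                w()

def adder(a, b, total):
    # Enforce a + b = total in every direction the known values permit.
    def propagate():
        if a.value is not None and b.value is not None:
            total.set(a.value + b.value)
        if total.value is not None and a.value is not None:
            b.set(total.value - a.value)
        if total.value is not None and b.value is not None:
            a.set(total.value - b.value)
    for c in (a, b, total):
        c.watchers.append(propagate)

a, b, total = Cell(), Cell(), Cell()
adder(a, b, total)
total.set(10)  # assumption: the total is 10
a.set(3)       # assumption: a is 3
# b.value has now been derived by the network: 7
```

Note the inference runs backwards from the total, something an ordinary function call can’t do.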
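For (4.), a tiny sketch over flat dictionaries (the helper names are made up): instead of storing snapshot B, store *how* A became B, so the same diff can be replayed onto a concurrent variant B′.

```python
def diff(a, b):
    # Record how dict `a` changed into dict `b`, per key.
    # (Sketch caveat: None doubles as "deleted", so real values can't be None.)
    return {k: b.get(k) for k in set(a) | set(b) if a.get(k) != b.get(k)}

def apply_diff(state, d):
    new = dict(state)
    for k, v in d.items():
        if v is None:
            new.pop(k, None)
        else:
            new[k] = v
    return new

A = {"x": 1, "y": 2}
B = {"x": 1, "y": 3, "z": 4}
d = diff(A, B)                    # {"y": 3, "z": 4}

# Replay the same change onto a concurrent edit A -> B' to reconcile them:
B_prime = {"x": 5, "y": 2}
merged = apply_diff(B_prime, d)   # {"x": 5, "y": 3, "z": 4}
```

A derived quantity (say, a sum over the values) only needs to look at the keys in `d` to update itself, which is exactly the hook incremental computing wants.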
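A rough sketch of (5.) (the `Console` capability is a hypothetical example, and real designs lean on the language to prevent smuggling): the only route to the side effect is through a capability object, so scanning a function’s parameters and lexical scope tells you whether it can perform that effect.

```python
class Console:
    """Capability bearer: holding a Console is what grants the right to print."""
    def log(self, msg):
        print(msg)

def pure_double(x):
    # No capability is lexically in scope here, so this function provably
    # can't log; it is trivially safe to cache.
    return 2 * x

def noisy_double(x, console: Console):
    # The effect is tied to the capability passed in explicitly.
    console.log(f"doubling {x}")
    return 2 * x
```

A caching layer could then admit `pure_double` automatically and refuse (or special-case) anything whose scope reaches a capability, with no effect system in the types.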
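For (6.), a bare-bones Markov Chain Monte Carlo sketch in plain Python (the coin-flip setup is an invented example): Metropolis–Hastings sampling from the posterior over a coin’s bias after observing 7 heads in 10 flips, under a uniform prior.

```python
import math
import random

def log_posterior(p):
    # Unnormalized log posterior: binomial likelihood, uniform prior.
    if not 0 < p < 1:
        return float("-inf")
    return 7 * math.log(p) + 3 * math.log(1 - p)

def metropolis(steps=20000, seed=0):
    rng = random.Random(seed)
    p, samples = 0.5, []
    for _ in range(steps):
        proposal = p + rng.gauss(0, 0.1)
        # Accept with probability min(1, posterior ratio).
        if math.log(rng.random()) < log_posterior(proposal) - log_posterior(p):
            p = proposal
        samples.append(p)
    return samples

samples = metropolis()
mean = sum(samples) / len(samples)  # close to the analytic mean 8/12 ≈ 0.67
```

Probabilistic programming languages generate this kind of sampler for you from the model description, and that inner loop is exactly the sort of thing worth pointing GPU compute at.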
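Finally, a sketch of the interaction between (7.) and (2.) (the tuple-based AST and helper names are made up for illustration): a differential equation represented as data, compiled once into a Python function so the solver’s hot loop doesn’t tree-walk the AST at every time step.

```python
# The model as a first-class object: an AST for dx/dt = -0.5 * x.
rhs_ast = ("mul", ("const", -0.5), ("var", "x"))

def compile_rhs(ast):
    # Specialize the AST into source code, once, ahead of simulation.
    kind = ast[0]
    if kind == "const":
        return f"{ast[1]}"
    if kind == "var":
        return ast[1]
    if kind == "mul":
        return f"({compile_rhs(ast[1])} * {compile_rhs(ast[2])})"
    raise ValueError(kind)

rhs = eval(f"lambda x: {compile_rhs(rhs_ast)}")  # compiled residual: (-0.5 * x)

def euler(rhs, x0, dt, steps):
    x = x0
    for _ in range(steps):
        x += dt * rhs(x)  # hot loop calls the compiled function, not an interpreter
    return x

euler(rhs, 1.0, 0.01, 100)  # decays toward exp(-0.5) ≈ 0.61
```

Because the model is data, it can also be handed to the probabilistic-programming and assumption-propagation machinery above; the compilation step is only about making the simulation fast.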
My question is how you’re going to be able to work on all these problems in one short decade!
I’m not! I’m going to organize a community of people via an online forum to collectively work on all of these.
Cool list! What sorts of roles might involve working with some of these topics? Certain ones, such as propagation networks, seem as though they may be more exclusive to academia.
That’s a good, practical question! I was thinking about them in terms of research topics, but of course some of them are a great fit for industry. Specifically, Jane Street has made a lot of money on top of incremental computing, and JuliaHub’s JuliaSim product is very much in line with (7.). I imagine scientists and Wall Street firms using (6.), though I don’t have any specific examples. Where do you see these coming into play?