Description
As came up in #9 (comment), the performance could use improvement. The basic design simply uses callbacks on ODE solvers and, for pure Gillespie models, the order-0 `Discrete` ODE solver. This works, gives interpolations, event handling, etc., and most importantly it makes jumps naturally compose with ODEs/SDEs/PDEs/etc. So this is a good long-term solution and I don't want to pursue an alternative design. Instead, I want to find out how to make this approach faster.
The good thing is, we have @sdwfrost's Gillespie.jl as a benchmark target. For very large systems we should already be faster, since we use a sparse representation of the jump stoichiometry versus their dense matrix, so let's not focus there. Instead, we should focus on the performance of the small test cases vs. Gillespie.jl. The test case from that comment is a good one.
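For reference, the kind of tight hand-written loop we're benchmarking against looks roughly like this. The birth-death model here is a hypothetical stand-in for illustration, not the exact test case from that comment:

```julia
using Random

# Minimal Gillespie direct-method loop: compute propensities, sample the
# time to the next jump from an exponential, pick which jump fires.
# Model (an assumption for illustration): constant birth rate c1,
# per-capita death rate c2.
function gillespie_direct(u0::Int, c1, c2, tmax)
    t, u = 0.0, u0
    ts, us = [t], [u]
    while t < tmax
        r1 = c1            # birth propensity
        r2 = c2 * u        # death propensity
        rtot = r1 + r2
        rtot == 0 && break              # absorbed: no jumps possible
        t += randexp() / rtot           # time to next jump
        rand() * rtot < r1 ? (u += 1) : (u -= 1)
        push!(ts, t); push!(us, u)
    end
    ts, us
end
```

There is no callback machinery, no `tstops`, and no interpolation here, which is exactly why it sets a floor on achievable performance.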
There are two areas that likely need optimization. First of all, we are definitely hitting splatting penalties.
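A minimal illustration of the kind of pattern that triggers the penalty (hypothetical code, not the package's actual internals):

```julia
# Hypothetical rate function and two call styles. Splatting a Vector,
# whose length is unknown to the compiler, goes through varargs
# machinery; explicit indexing specializes cleanly.
rate(x, y, z) = 0.1 * x * y + z

splatted(u) = rate(u...)               # hits the splatting penalty
direct(u)   = rate(u[1], u[2], u[3])   # avoids it
```

Both return the same value; the difference only shows up in the generated code and allocations.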
I don't think we should work around what is hopefully a bug that will be patched before 1.0. The other thing is to check the generated code for event handling in the OrdinaryDiffEq solvers (and equivalently the StochasticDiffEq solvers, which handle it almost identically). Essentially, a benchmark of jumps is just a benchmark of the differential equation solvers' event handling. There is nothing that should theoretically create a large performance gap versus a tight loop here, other than the fact that `tstops`, the structure holding the next timepoints to hit, is a binary heap. It has to be a binary heap so that event handling can push timepoints into `tstops` in arbitrary order. However, I wouldn't expect it to be that bad...
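This self-contained sketch (not DiffEq's actual implementation) shows the behavior `tstops` needs: pushes arrive in arbitrary order from event handling, and pops must always return the smallest pending time, each in O(log n):

```julia
# Tiny binary min-heap over Float64 times, stored as a flat vector
# where node i has children 2i and 2i+1.
struct MinHeap
    data::Vector{Float64}
end
MinHeap() = MinHeap(Float64[])

function Base.push!(h::MinHeap, x::Float64)
    push!(h.data, x)
    i = length(h.data)
    while i > 1 && h.data[i] < h.data[i ÷ 2]   # sift up toward the root
        h.data[i], h.data[i ÷ 2] = h.data[i ÷ 2], h.data[i]
        i ÷= 2
    end
    h
end

function Base.pop!(h::MinHeap)
    top = h.data[1]                  # minimum lives at the root
    h.data[1] = h.data[end]
    pop!(h.data)
    i = 1
    while true                       # sift the moved element back down
        l, r, s = 2i, 2i + 1, i
        l <= length(h.data) && h.data[l] < h.data[s] && (s = l)
        r <= length(h.data) && h.data[r] < h.data[s] && (s = r)
        s == i && break
        h.data[i], h.data[s] = h.data[s], h.data[i]
        i = s
    end
    top
end
```

The O(log n) per operation is the only structural cost over a plain sorted iteration, which is why a large gap to a tight loop is surprising.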
So the aim is: don't lose any generality, but get within 2x of Gillespie.jl, likely closer to 1.5x. We may need to use generated functions for the rate equations to do so, and the splatting recursions may need to be checked. This should be doable.
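A hedged sketch of the generated-function idea: unroll the rate computation at compile time for a statically known number of jumps, so there is no splatting and no loop over a runtime-length container. The mass-action-style form here is a hypothetical illustration, not the package's actual rate API:

```julia
# For NTuple arguments, N is known to the compiler, so we can build the
# fully unrolled expression c[1]*u[1] + c[2]*u[2] + ... at compile time.
@generated function total_rate(u::NTuple{N,Float64}, c::NTuple{N,Float64}) where {N}
    terms = [:(c[$i] * u[$i]) for i in 1:N]
    :(+($(terms...)))   # returns the unrolled sum expression
end
```

The same trick applies to selecting which jump fires: a generated cumulative-sum search unrolls to straight-line code for small N, which is where the small-model benchmarks live.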