I was looking into switching from criterion to divan, and was wondering if there were any workflows for comparing benchmark results? I scanned the blog, but couldn't find anything. I'm thinking about the workflows enabled by tools I've built like cargo-benchcmp and critcmp. I would really love to not build a third tool.
The READMEs of those tools should outline the general idea of their usefulness. It's very useful to be able to compare benchmarks within the same run, and also useful to compare benchmarks across runs to determine whether there was an improvement or a regression. Criterion somewhat attempts to do the latter for you automatically, but I find it incredibly clunky overall.
i think this is already covered by the opening post, but i want to highlight one case in particular: benchmarking between git revisions.
i often have an existing function implementation and then try some changes to it. i do not like keeping two versions of the code in parallel, that's what git is for. especially when there are changes to types or other structural changes.
i then want to see if my changes improve the runtime.
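A sketch of how that workflow can be done today with Criterion plus critcmp (Divan does not, as far as I can tell, offer an equivalent yet): save a named baseline on the current revision, switch revisions, save a second baseline, and diff the two. The baseline names `before`/`after` and the branch name `my-optimization` are placeholders.

```shell
# On the current revision: run benchmarks, saving results as baseline "before"
cargo bench -- --save-baseline before

# Switch to the revision with the candidate changes (placeholder branch name)
git switch my-optimization

# Re-run the same benchmarks, saving results as baseline "after"
cargo bench -- --save-baseline after

# Print a side-by-side comparison of the two baselines
critcmp before after
```

Both runs need to happen in the same working tree, since critcmp reads the saved baselines out of `target/criterion`.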