# Benchmarks
The numbers below are measured end-to-end against the same medium-sized fixture (~1,400 real-world packages), using hyperfine to time each scenario under identical conditions.
## Methodology
Every scenario assumes a committed lockfile is present. The only axis that varies is how warm the tool's cache/store is before the command runs: **warm** clears `node_modules` but keeps the global store and packument cache populated; **cold** wipes the store and cache too. The fixture, scripts, and raw hyperfine output live at `benchmarks/`. Reproduce locally with `mise run bench`.
The chart highlights the warm-cache, no-`node_modules` case. aube uses its global virtual store on this path, so projects mostly link to package trees that already exist; pnpm v11 supports a global virtual store too, but `enableGlobalVirtualStore` is not the default for project installs.
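For an apples-to-apples comparison you can opt a pnpm project into the same behavior per project. A minimal sketch, assuming the `.npmrc` key follows pnpm's usual kebab-case mapping of the `enableGlobalVirtualStore` setting (it is experimental, so check pnpm's docs for your version):

```ini
# .npmrc — opt this project into pnpm's global virtual store
# (experimental; key name assumed from the enableGlobalVirtualStore setting)
enable-global-virtual-store=true
```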
Last updated: 2026-04-19 · Source: `benchmarks/results.json`, regenerated by `mise run bench:bump`.
## Why it's faster
aube uses the same on-disk model as pnpm (a global content-addressable store plus an isolated symlink layout), but the install pipeline is written in a faster, natively threaded language instead of JavaScript. Same layout, quicker engine.
## Scenarios
- CI install (warm cache, no node_modules) — frozen lockfile, `node_modules` wiped, store and packument cache warm. The common "CI install where the cache restored cleanly" path.
- CI install (cold cache, no node_modules) — frozen lockfile, but the store, packument cache, and `node_modules` are all wiped. Worst-case CI: every tarball is re-downloaded, extracted, and hashed into the CAS.
- `npm install && npm run test` (already installed) — models the developer loop after dependencies are already installed. Each timed run repeats the tool's normal "install if needed, then run tests" command. aube can skip the install work when its install-state file is fresh; other tools still revalidate their lockfile or install state before dispatching the script. The fixture's `test` script is a no-op `node -e`, so this scenario mostly measures install short-circuiting and script dispatch.
- Add dependency — lockfile and store both warm, then `<pm> add is-odd` to exercise the incremental resolve path.
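The warm-cache scenario, for example, can be timed along these lines. This is a sketch of the shape, not the real harness (the actual invocations live in `benchmarks/bench.sh`, and the install flags here are assumptions about each tool's CLI):

```shell
# Hypothetical timing of the warm-cache, no-node_modules scenario.
# --prepare runs before every timed iteration, so node_modules is wiped
# while the global store and packument cache stay warm.
hyperfine --warmup 3 --runs 10 \
  --prepare 'rm -rf node_modules' \
  --export-json results-warm.json \
  'aube install --frozen-lockfile' \
  'pnpm install --frozen-lockfile'
```

The cold variant would additionally clear each tool's store and cache inside `--prepare`, which is what makes it the worst case.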
## Reproducing
```shell
# One-shot: runs the package managers through the full scenario matrix,
# writes results to benchmarks/results.json, and emits the markdown
# summary to stdout.
mise run bench:bump
```

The task calls `benchmarks/bench.sh` through `mise x`, installs the benchmark toolchain declared on the task, sets `RUNS=10 WARMUP=3`, and points `RESULTS_JSON` at `benchmarks/results.json` so the file is rewritten in place. Commit the updated JSON to refresh the chart above: the VitePress data loader at `docs/benchmarks.data.ts` reads it at build time, so you don't need to edit `benchmarks.md` or hand-write any chart data.
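Based on that description, the task is roughly equivalent to invoking the script directly; the exact task definition lives in the repo's mise config, so treat this as a sketch:

```shell
# Approximate expansion of `mise run bench:bump` (flags and task wiring
# are as described above, not copied from the task file).
RUNS=10 WARMUP=3 RESULTS_JSON=benchmarks/results.json \
  mise x -- bash benchmarks/bench.sh
```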
For cross-machine reproducibility, ad-hoc runs use a hermetic local registry with the link throttled to a fixed 500mbit baseline (a "fast home broadband" speed), so two runs on different ISPs or CI runners produce comparable numbers:
```shell
flock /tmp/aube-bench.lock \
  env BENCH_HERMETIC=1 BENCH_BANDWIDTH=500mbit mise run bench
```
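One way such a throttle can be implemented on Linux is with traffic control; this is an illustrative sketch (it requires root, and the choice of device and qdisc here is an assumption, not necessarily what `bench.sh` does):

```shell
# Cap loopback bandwidth with a token-bucket filter so the local registry
# behaves like a fixed 500mbit link (illustrative; run as root).
sudo tc qdisc add dev lo root tbf rate 500mbit burst 128kbit latency 50ms

# ... run the benchmark against the loopback-hosted registry ...

# Remove the shaping afterwards.
sudo tc qdisc del dev lo root
```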