
# Benchmarks

The numbers below are measured end-to-end against the same medium-sized, real-world fixture of ~1400 packages, using hyperfine to time each scenario under identical conditions.

## Methodology

Every scenario assumes a committed lockfile is present. The only axis that varies is how warm the tool's cache/store is before the command runs: *warm* clears `node_modules` but keeps the global store and packument cache populated; *cold* wipes the store and cache too. The fixture, scripts, and raw hyperfine output live at `benchmarks/`. Reproduce locally with `mise run bench`.
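As a concrete sketch, the two cache states differ only in what gets wiped before the timed command runs. The store and packument-cache paths below are placeholders, not the tools' actual locations:

```shell
# Placeholder paths; each tool keeps its store and packument cache in its
# own configured location.
STORE=${STORE:-/tmp/demo-store}
PACKUMENTS=${PACKUMENTS:-/tmp/demo-packuments}

prepare_warm() {   # warm: install tree gone, caches kept
  rm -rf node_modules
}

prepare_cold() {   # cold: caches wiped as well
  rm -rf node_modules "$STORE" "$PACKUMENTS"
}
```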

aube 1.0.0-beta.3 · bun 1.3.12 · pnpm 10.33.0 · npm 11.12.1 · yarn 1.22.22 · node 24.15.0
| Scenario | aube | bun | pnpm | npm | yarn |
| --- | --- | --- | --- | --- | --- |
| CI install (warm cache, no `node_modules`) | 139ms | 416ms | 1.01s | 2.78s | 2.43s |
| CI install (cold cache, no `node_modules`) | 1.12s | 935ms | 1.57s | 4.21s | 6.60s |
| `npm install && npm run test` | 21ms | 42ms | 453ms | 615ms | 351ms |
| Add dependency | 209ms | 414ms | 1.33s | 2.89s | 2.55s |

The chart highlights the warm-cache, no-`node_modules` case. Aube uses its global virtual store on this path, so projects mostly link to package trees that already exist; pnpm v11 supports a global virtual store too, but `enableGlobalVirtualStore` is not the default for project installs.
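For an apples-to-apples pnpm comparison you would opt the project in explicitly. A sketch, assuming the setting above is exposed through `.npmrc` under the usual kebab-case key (check pnpm's settings docs for the exact name and stability):

```shell
# Assumed .npmrc spelling of the enableGlobalVirtualStore setting named
# above; verify against pnpm's documentation before relying on it.
echo "enable-global-virtual-store=true" >> .npmrc
```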

Last updated: 2026-04-19 · Source: benchmarks/results.json, regenerated by mise run bench:bump.

## Why it's faster

aube uses the same on-disk model as pnpm (a global content-addressable store plus an isolated symlink layout), but its install pipeline is written in a faster, natively threaded language instead of JavaScript. Same layout, quicker engine.
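The layout itself can be illustrated with a toy example. Every path and name below is invented for the demo (it is not aube's or pnpm's real directory structure); the point is the three layers — content-addressed file, hard link into a per-project virtual store, symlink from `node_modules`:

```shell
# Toy illustration of the CAS + virtual-store + symlink layout.
store=/tmp/demo-cas
proj=/tmp/demo-proj
mkdir -p "$store" \
  "$proj/node_modules/.store/is-odd@1.0.0/node_modules/is-odd"

# 1. A package file is stored once, keyed by its content hash.
printf 'module.exports = n => n %% 2 === 1\n' > /tmp/demo-index.js
hash=$(sha256sum /tmp/demo-index.js | cut -d' ' -f1)
cp /tmp/demo-index.js "$store/$hash"

# 2. The virtual store hard-links the file out of the CAS (no copy)...
ln -f "$store/$hash" \
  "$proj/node_modules/.store/is-odd@1.0.0/node_modules/is-odd/index.js"

# 3. ...and the project's node_modules symlinks to the virtual-store entry.
ln -sfn "$proj/node_modules/.store/is-odd@1.0.0/node_modules/is-odd" \
  "$proj/node_modules/is-odd"
```

A second project linking the same package version reuses the same store entry, which is why warm installs are mostly link operations rather than downloads.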

## Scenarios

- CI install (warm cache, no `node_modules`) — frozen lockfile, `node_modules` wiped, store and packument cache warm. The common "CI install where the cache restored cleanly" path.
- CI install (cold cache, no `node_modules`) — frozen lockfile, but the store, packument cache, and `node_modules` are all wiped. Worst-case CI: every tarball is re-downloaded, extracted, and hashed into the CAS.
- `npm install && npm run test` (already installed) — models the developer loop after dependencies are already installed. Each timed run repeats the tool's normal "install if needed, then run tests" command. aube can skip the install work when its install-state file is fresh; other tools still revalidate their lockfile or install state before dispatching the script. The fixture's test script is a no-op `node -e`, so this scenario mostly measures install short-circuiting and script dispatch.
- Add dependency — lockfile and store both warm, then `<pm> add is-odd` to exercise the incremental resolve path.
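Each scenario row boils down to a hyperfine invocation of roughly this shape. The flags are standard hyperfine options, but the helper below is illustrative; the project's exact commands live in `benchmarks/bench.sh`:

```shell
# Illustrative wrapper: time one scenario across several package managers.
# WARMUP, RUNS, and RESULTS_JSON mirror the env vars the bench task sets.
run_scenario() {
  hyperfine --warmup "${WARMUP:-3}" --runs "${RUNS:-10}" \
    --prepare 'rm -rf node_modules' \
    --export-json "${RESULTS_JSON:-results.json}" \
    "$@"   # one command string per tool, e.g. 'aube install' 'bun install'
}
```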

## Reproducing

```sh
# One-shot: runs the package managers through the full scenario matrix,
# writes results to benchmarks/results.json, and emits the markdown
# summary to stdout.
mise run bench:bump
```

The task calls `benchmarks/bench.sh` through `mise x`, installs the benchmark toolchain declared on the task, sets `RUNS=10 WARMUP=3`, and points `RESULTS_JSON` at `benchmarks/results.json` so the file is rewritten in place. Commit the updated JSON to refresh the chart above — the VitePress data loader at `docs/benchmarks.data.ts` reads it at build time, so you don't need to edit `benchmarks.md` or hand-write any chart data.
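To eyeball the regenerated numbers without rebuilding the docs, a small jq helper works. This assumes `benchmarks/results.json` follows hyperfine's `--export-json` schema (`results[].command`, `results[].mean` in seconds); adjust if the real file differs:

```shell
# Assumes hyperfine's --export-json schema; verify against the actual file.
extract_means() {
  jq -r '.results[] | "\(.command): \(.mean)s"' "$1"
}
# e.g. extract_means benchmarks/results.json
```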

For cross-machine reproducibility, ad-hoc runs use a hermetic local registry with the link throttled to a fixed 500mbit baseline — a "fast home broadband" speed — so two runs on different ISPs or CI runners produce comparable numbers:

```sh
flock /tmp/aube-bench.lock \
  env BENCH_HERMETIC=1 BENCH_BANDWIDTH=500mbit mise run bench
```
MIT License · Copyright © 2026 en.dev