
Turbopack Performance Benchmarks

Monday, October 31st, 2022
Tobias Koppers (@wSokra) and Alex Kirszenberg (@alexkirsz)

Summary

  • We are thankful for the work of the entire OSS ecosystem and the incredible interest in and reception of the Turbopack release. We look forward to continuing our collaboration with and integration into the broader Web ecosystem of tooling and frameworks.
  • In this article, you’ll find our methodology and the documentation supporting the benchmarks that show Turbopack is up to 10x faster than Vite and 700x faster than Webpack.
  • Turbopack and Next.js 13.0.1 are out, addressing a regression that snuck in after the initial benchmarks were taken but before the public release. We also fixed a rounding bug on our website (0.01s → 15ms). We appreciate Evan You's work that helped us identify and correct this.
  • We’re excited to continue to evolve the incremental build architecture of Turbopack. We believe that there are still significant performance wins on the table.

At Next.js Conf, we announced our latest open-source project: Turbopack, an incremental bundler and build system optimized for JavaScript and TypeScript, written in Rust.

The project began as an exploration into improving Webpack’s performance and making it easier to integrate with other tooling going forward. In doing so, the team realized that a greater effort was necessary. While we saw opportunities for better performance, the premise of a new architecture that could scale to the largest projects in the world was inspiring.

In this post, we’ll explore why Turbopack is so fast, how its incremental engine works, and benchmark it against existing approaches.

Why is Turbopack blazing fast?

Turbopack’s speed comes from its incremental computation engine. Similar to trends we've seen in frontend state libraries, computational work is split into reactive functions that enable Turbopack to apply updates to an existing compilation without going through a full graph recomputation and bundling lifecycle.

This does not work like traditional caching where you look up a result from a cache before an operation and then decide whether or not to use it. That would be too slow.

Instead, Turbopack skips work altogether for cached results and only recomputes affected parts of its internal dependency graph of functions. This makes updates independent of the size of the whole compilation, and eliminates the usual overhead of traditional caching.
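To make this concrete, here is a deliberately tiny TypeScript sketch of the idea. It is not how Turbopack’s engine actually works internally (the real engine, turbo-tasks, is written in Rust and far more sophisticated); it only illustrates the principle that cached results are skipped entirely and that a change re-runs only the functions that read the changed input.

// Toy model of incremental computation: cells remember which tasks read them,
// and a write re-runs only those tasks. Everything else keeps its cached result.
type Task = () => void;

class Cell<T> {
  private dependents = new Set<Task>();
  constructor(private value: T) {}

  read(reader?: Task): T {
    if (reader) this.dependents.add(reader); // record who depends on this cell
    return this.value;
  }

  write(next: T): void {
    if (Object.is(next, this.value)) return; // unchanged input: no work at all
    this.value = next;
    for (const task of this.dependents) task(); // recompute only affected tasks
  }
}

// Two "source files" and a "bundle" task that reads both.
const fileA = new Cell('export const a = 1;');
const fileB = new Cell('export const b = 2;');

let bundleRuns = 0;
let output = '';
const bundle: Task = () => {
  bundleRuns++;
  output = `${fileA.read(bundle)}\n${fileB.read(bundle)}`;
};

bundle();                            // initial compilation: bundleRuns === 1
fileB.write('export const b = 3;');  // editing one "file"...
console.log(bundleRuns);             // ...re-ran only the dependent task: 2

Because only the tasks that depend on the edited input re-run, the cost of an update is proportional to the change rather than to the size of the whole compilation.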

Benchmarking Turbopack, Webpack, and Vite

We created a test generator that produces an application with a variable number of modules, which we use to benchmark cold startup and file update tasks. This generated app includes entries for these tools:

  • Next.js 11
  • Next.js 12
  • Next.js 13 with Turbopack
  • Vite

As the current state of the art, we're including Vite along with Webpack-based Next.js solutions. All of these toolchains point to the same generated component tree, assembling a Sierpiński triangle in the browser, where every triangle is a separate module.

Screenshot of the test application used in our benchmarks: a Sierpiński triangle in which every individual triangle is its own component in its own file. In this example, 3,000 triangles are rendered to the screen.
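To give a sense of what the generated app looks like, the sketch below shows a simplified TypeScript take on such a generator. The real generator lives in the vercel/turbo repository and arranges the triangles recursively into the Sierpiński pattern shown above; this hypothetical flat variant only illustrates the key property that every triangle is a separate module, so the module count can be dialed up precisely.

// Simplified, hypothetical generator sketch; the actual generator differs.
import { mkdirSync, writeFileSync } from "node:fs";

function generateTriangles(dir: string, count: number): void {
  mkdirSync(dir, { recursive: true });

  // One component file per triangle: each triangle is its own module.
  for (let i = 0; i < count; i++) {
    writeFileSync(
      `${dir}/triangle-${i}.jsx`,
      `export default function Triangle${i}() {\n` +
        `  return <div className="triangle" id="t${i}" />;\n` +
        `}\n`
    );
  }

  // A root component that imports and renders every triangle module.
  const imports = Array.from(
    { length: count },
    (_, i) => `import Triangle${i} from "./triangle-${i}";`
  ).join("\n");
  const children = Array.from({ length: count }, (_, i) => `<Triangle${i} />`).join("");
  writeFileSync(
    `${dir}/index.jsx`,
    `${imports}\n\nexport default function App() {\n  return <>${children}</>;\n}\n`
  );
}

generateTriangles("./generated-app/components", 3000); // e.g. 3,000 modules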

Cold startup time

This test measures how fast a local development server starts up on an application of various sizes. We measure this as the time from startup (without cache) until the app is rendered in the browser. We do not wait for the app to be interactive or hydrated in the browser for this dataset.
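The actual harness is part of the benchmark suite in the vercel/turbo repository; purely as an illustration of the measurement window, a script along the following lines (using Playwright, with a hypothetical dev-server command and a Next.js-style root selector) would capture the same span from process start until the app has rendered.

// Illustrative sketch only; the real harness lives in the vercel/turbo repo.
// Measures from spawning the dev server (no cache) until the app's root
// element has rendered in a headless browser.
import { spawn } from "node:child_process";
import { chromium } from "playwright"; // assumes Playwright is installed

async function measureColdStartup(cmd: string, args: string[], url: string): Promise<number> {
  const start = performance.now();
  const server = spawn(cmd, args, { stdio: "ignore" });

  const browser = await chromium.launch();
  const page = await browser.newPage();
  // Retry until the dev server answers and the app root appears.
  // "#__next" is the Next.js root element; other tools need a different selector.
  for (;;) {
    try {
      await page.goto(url, { waitUntil: "domcontentloaded" });
      await page.waitForSelector("#__next", { timeout: 1_000 });
      break;
    } catch {
      await new Promise((r) => setTimeout(r, 100)); // not ready yet, try again
    }
  }
  const elapsed = performance.now() - start;

  await browser.close();
  server.kill();
  return elapsed;
}

// e.g. measureColdStartup("pnpm", ["next", "dev"], "http://localhost:3000")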

At 1,000 modules, the measured cold startup times are:

  • Next.js 13 with Turbopack: 1.1s
  • Next.js 12: 3.4s
  • Vite: 4.8s
  • Next.js 11: 7.7s
Startup time by module count. Benchmark data generated from 14” MacBook Pro 2021, M1 Max, 64GB RAM, macOS 12.6 (21G115).

Data

To run this benchmark yourself, clone vercel/turbo and then use this command from the root:

TURBOPACK_BENCH_COUNTS=1000,5000,10000,30000 TURBOPACK_BENCH_BUNDLERS=all cargo bench -p next-dev "startup/(Turbopack SSR|Next.js 12 SSR|Next.js 11 SSR|Vite CSR)."

Here are the numbers we were able to produce on a 14” MacBook Pro 2021, M1 Max, 64GB RAM, macOS 12.6 (21G115):

bench_startup/Next.js 11 SSR/1000 modules              7.7±0.06s
bench_startup/Next.js 11 SSR/5000 modules             24.8±0.11s
bench_startup/Next.js 11 SSR/10000 modules            49.4±0.28s
bench_startup/Next.js 11 SSR/30000 modules           176.2±1.42s
bench_startup/Next.js 12 SSR/1000 modules              3.4±0.01s
bench_startup/Next.js 12 SSR/5000 modules             10.7±0.02s
bench_startup/Next.js 12 SSR/10000 modules            20.1±0.07s
bench_startup/Next.js 12 SSR/30000 modules            76.6±0.66s
bench_startup/Turbopack SSR/1000 modules          1125.3±44.54ms
bench_startup/Turbopack SSR/5000 modules               3.6±0.02s
bench_startup/Turbopack SSR/10000 modules              7.5±0.44s
bench_startup/Turbopack SSR/30000 modules             22.3±1.29s
bench_startup/Vite CSR/1000 modules                    4.8±0.02s
bench_startup/Vite CSR/5000 modules                   19.2±0.15s
bench_startup/Vite CSR/10000 modules                  37.2±0.29s
bench_startup/Vite CSR/30000 modules                 109.5±1.14s

File updates (HMR)

We also measure how quickly the development server reacts to changes: the time from when an update is applied to a source file to when we receive a custom browser event signaling that the modified code was executed.

Note that executing modified code does not mean that the changes are visible to the user yet. When a React component changes, it still needs to be re-rendered by the browser. With this methodology, we are focusing only on the time the compiler takes to do its work and the time it takes for the browser to evaluate an updated module.

For Hot Module Reloading (HMR) benchmarks, we first start the dev server on a fresh installation with the test application. We then run five changes to warm up the tooling. This step is important as it prevents discrepancies that can arise with cold processes.

Once our tooling is warmed up, we measure the sixth file update. After repeating this 10 times, we use the average as our final number.
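As with cold startup, the real harness is part of the benchmark suite in vercel/turbo; the hypothetical Playwright sketch below only shows the shape of such a run. The file path and the "hmr-update-applied" event name are illustrative: the measured edit appends a line that dispatches a browser event when the module is re-evaluated, which is what lets us time code execution rather than just the server push.

// Hypothetical sketch of the HMR measurement loop; paths and event names
// are illustrative, not those of the real harness.
import { readFileSync, writeFileSync } from "node:fs";
import { chromium, type Page } from "playwright";

async function timedEdit(page: Page, file: string, edit: number): Promise<number> {
  const source = readFileSync(file, "utf8");
  // Arm a flag in the browser before touching the file.
  await page.evaluate(() => {
    (window as any).__hmrApplied = false;
    window.addEventListener(
      "hmr-update-applied",
      () => { (window as any).__hmrApplied = true; },
      { once: true }
    );
  });

  const start = performance.now();
  // The appended line only runs once the modified module is re-evaluated in the
  // browser, so we measure code execution, not just the update being sent.
  writeFileSync(
    file,
    `${source}\nif (typeof window !== "undefined") ` +
      `window.dispatchEvent(new Event("hmr-update-applied")); // edit ${edit}\n`
  );
  await page.waitForFunction(() => (window as any).__hmrApplied === true);
  const elapsed = performance.now() - start;

  writeFileSync(file, source); // restore the file for the next edit
  return elapsed;
}

async function main(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();
  await page.goto("http://localhost:3000"); // dev server already running

  const file = "./generated-app/components/triangle-0.jsx";
  const samples: number[] = [];
  for (let run = 0; run < 10; run++) {
    for (let warm = 0; warm < 5; warm++) await timedEdit(page, file, run * 6 + warm); // warm-up edits
    samples.push(await timedEdit(page, file, run * 6 + 5)); // the sixth edit is measured
  }
  console.log(`average: ${samples.reduce((a, b) => a + b, 0) / samples.length} ms`);
  await browser.close();
}

main();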

At 1,000 modules, the measured HMR update times are:

  • Next.js 13 with Turbopack: 15ms
  • Vite: 87ms
  • Next.js 12: 134ms
  • Next.js 11: 273ms

Turbopack vs. Next.js (Webpack) vs. Vite HMR by module count. Benchmark data generated from 14” MacBook Pro 2021, M1 Max, 64GB RAM, macOS 12.6 (21G115).
Turbopack vs. Vite HMR by module count. Benchmark data generated from 14” MacBook Pro 2021, M1 Max, 64GB RAM, macOS 12.6 (21G115).

From our benchmarks, we find the following:

  • Turbopack HMR is 10x faster than Vite once the application scales above 30k modules. The gap continues to widen with more modules, reaching 20x above 50k modules.
  • Turbopack HMR is 700x faster than Webpack-based Next.js 11 for large applications with over 50k modules.

The takeaway: Turbopack performance is a function of the size of an update, not the size of an application.

For more info, visit the comparison docs for Vite and Webpack.

Data

To run this benchmark yourself, clone vercel/turbo and then use this command from the root:

TURBOPACK_BENCH_COUNTS=10,100,200,500,1000,2000,3000,4000,5000,10000,20000,30000,50000 TURBOPACK_BENCH_BUNDLERS=all cargo bench -p next-dev "hmr_to_eval/(Turbopack SSR|Next.js 12 SSR|Next.js 11 SSR|Vite CSR)"

Here are the numbers we were able to produce on a 14” MacBook Pro 2021, M1 Max, 64GB RAM, macOS 12.6 (21G115):

bench_hmr_to_eval/Next.js 11 SSR/10 modules         77.9±21.03ms
bench_hmr_to_eval/Next.js 11 SSR/100 modules        100.5±2.04ms
bench_hmr_to_eval/Next.js 11 SSR/200 modules         98.7±4.48ms
bench_hmr_to_eval/Next.js 11 SSR/500 modules       140.0±14.06ms
bench_hmr_to_eval/Next.js 11 SSR/1000 modules      273.2±17.11ms
bench_hmr_to_eval/Next.js 11 SSR/2000 modules      404.0±24.81ms
bench_hmr_to_eval/Next.js 11 SSR/3000 modules      498.3±22.10ms
bench_hmr_to_eval/Next.js 11 SSR/4000 modules      698.8±54.46ms
bench_hmr_to_eval/Next.js 11 SSR/5000 modules      849.7±14.64ms
bench_hmr_to_eval/Next.js 11 SSR/10000 modules    1713.9±32.96ms
bench_hmr_to_eval/Next.js 11 SSR/20000 modules         5.0±0.12s
bench_hmr_to_eval/Next.js 11 SSR/30000 modules         6.9±1.75s
bench_hmr_to_eval/Next.js 11 SSR/50000 modules        11.6±0.50s
bench_hmr_to_eval/Next.js 12 SSR/10 modules          50.2±5.68ms
bench_hmr_to_eval/Next.js 12 SSR/100 modules         45.7±2.99ms
bench_hmr_to_eval/Next.js 12 SSR/200 modules         50.6±8.27ms
bench_hmr_to_eval/Next.js 12 SSR/500 modules        93.9±19.93ms
bench_hmr_to_eval/Next.js 12 SSR/1000 modules      133.9±12.69ms
bench_hmr_to_eval/Next.js 12 SSR/2000 modules      147.4±24.38ms
bench_hmr_to_eval/Next.js 12 SSR/3000 modules      210.7±33.00ms
bench_hmr_to_eval/Next.js 12 SSR/4000 modules      295.0±21.65ms
bench_hmr_to_eval/Next.js 12 SSR/5000 modules      386.2±84.54ms
bench_hmr_to_eval/Next.js 12 SSR/10000 modules    1067.1±133.67ms
bench_hmr_to_eval/Next.js 12 SSR/20000 modules         2.8±0.09s
bench_hmr_to_eval/Next.js 12 SSR/30000 modules         5.4±0.50s
bench_hmr_to_eval/Next.js 12 SSR/50000 modules         8.5±0.32s
bench_hmr_to_eval/Turbopack SSR/10 modules           13.3±0.49ms
bench_hmr_to_eval/Turbopack SSR/100 modules          13.8±1.10ms
bench_hmr_to_eval/Turbopack SSR/200 modules          14.5±2.18ms
bench_hmr_to_eval/Turbopack SSR/500 modules          14.3±1.63ms
bench_hmr_to_eval/Turbopack SSR/1000 modules         15.3±0.27ms
bench_hmr_to_eval/Turbopack SSR/2000 modules         14.1±0.14ms
bench_hmr_to_eval/Turbopack SSR/3000 modules         15.1±0.37ms
bench_hmr_to_eval/Turbopack SSR/4000 modules         16.0±0.31ms
bench_hmr_to_eval/Turbopack SSR/5000 modules         16.5±0.41ms
bench_hmr_to_eval/Turbopack SSR/10000 modules        14.9±1.35ms
bench_hmr_to_eval/Turbopack SSR/20000 modules        15.3±1.52ms
bench_hmr_to_eval/Turbopack SSR/30000 modules        15.1±1.03ms
bench_hmr_to_eval/Turbopack SSR/50000 modules        16.7±3.73ms
bench_hmr_to_eval/Vite CSR/10 modules                94.8±5.24ms
bench_hmr_to_eval/Vite CSR/100 modules              102.4±2.32ms
bench_hmr_to_eval/Vite CSR/200 modules              101.7±2.39ms
bench_hmr_to_eval/Vite CSR/500 modules              100.1±4.24ms
bench_hmr_to_eval/Vite CSR/1000 modules              86.5±9.13ms
bench_hmr_to_eval/Vite CSR/2000 modules             111.6±3.76ms
bench_hmr_to_eval/Vite CSR/3000 modules             105.0±2.51ms
bench_hmr_to_eval/Vite CSR/4000 modules             99.0±12.26ms
bench_hmr_to_eval/Vite CSR/5000 modules             92.5±22.77ms
bench_hmr_to_eval/Vite CSR/10000 modules           110.4±32.82ms
bench_hmr_to_eval/Vite CSR/20000 modules           204.4±61.72ms
bench_hmr_to_eval/Vite CSR/30000 modules           256.2±67.72ms
bench_hmr_to_eval/Vite CSR/50000 modules          509.8±137.92ms

Visit the Turbopack benchmark documentation to run the benchmarks yourself. If you have questions about the benchmark, please open an issue on GitHub.

The future of the open-source Web

Our team has taken the lessons from 10 years of Webpack, combined with the innovations in incremental computation from Turborepo and Google's Bazel, and created an architecture ready to support the coming decades of computing.

Our goal is to create a system of open source tooling that helps to build the future of the Web—powered by Turbopack. We're creating a reusable piece of architecture that will make both development and warm production builds faster for everyone.

For Turbopack's alpha, we are including it in Next.js 13. But, in time, we hope that Turbopack will power other frameworks and builders as a seamless, low-level, incremental engine to build great developer experiences with.

We look forward to being a part of the community bringing developers better tooling so that they can continue to deliver better experiences to end users. If you’d like to learn more about Turbopack benchmarks, visit turbo.build. To try out Turbopack in Next.js 13, visit nextjs.org.