Most of the pictures in this article are from the Internet

(Image: monorepo)

Foreword

On December 9, 2021, Vercel's official blog published a post titled "Vercel acquires Turborepo to accelerate build speed and improve developer experience". As the title says, Vercel acquired Turborepo to accelerate build speed and improve the developer experience.

(Image: Vercel + Turborepo)

Turborepo is a high-performance build system for JavaScript and TypeScript codebases. Through incremental builds, intelligent remote caching, and optimized task scheduling, Turborepo can speed up builds by 85% or more, enabling teams of all sizes to maintain a fast and efficient build system as the team grows.

The blog post succinctly highlights Turborepo's advantages. Starting from real-world scenarios, this article discusses some of the problems a large code repository (Monorepo) may encounter and, in light of existing solutions in the industry, looks at what innovations and breakthroughs Turborepo has made in task scheduling.

What a Qualified Monorepo Needs

As the business develops and teams change, the number of projects in a business-oriented Monorepo gradually grows. An extreme example is Google, which puts the entire company's code into a single repository, whose size has reached 80 TB.

Business Monorepo: different from a lib-type Monorepo (broadly, packages such as React, Vue 3, Next.js, and Babel), a business-type Monorepo organizes multiple business applications and the shared component or utility libraries they depend on into one repository. — "Eden Monorepo Series: Analysis of Eden Monorepo Engineering Construction"

A growing number of projects means that, alongside the advantages of a Monorepo, huge challenges arise. An excellent Monorepo tool lets developers enjoy those advantages without extra burden, while a bad one makes developers uneasy and can even make them doubt the point of having a Monorepo at all.

Some real scenarios I have encountered:

  1. Dependency version conflicts

    1. A newly created project cannot start due to dependency problems
    2. Creating a new project breaks other projects, which then cannot start due to dependency problems
  2. Slow dependency installation

    1. An initial install takes 20min+
    2. Adding a single dependency takes 3min+
  3. Slow execution of tasks such as build/test/lint

The author has prior hands-on experience with Rush. In practice, I found that beyond the most basic capability of code sharing, a Monorepo tool should provide at least three more capabilities:

  1. Dependency management. As the number of dependencies grows, the correctness, stability, and installation efficiency of the dependency structure can still be maintained.
  2. Task scheduling. The tasks of projects in the Monorepo (narrowly, npm scripts such as build, test, and lint) can be executed with maximum efficiency and in the correct order, and the complexity does not grow with the number of projects in the Monorepo.
  3. Version release. Based on the changed projects and the dependency graph, version bumps, CHANGELOG generation, and publishing can be performed correctly.

The supported capabilities of some popular tools are shown in the table below:

|  | Dependency management | Task scheduling | Version management |
| --- | --- | --- | --- |
| Pnpm Workspace | ✅ | ✅ (basic) |  |
| Rush | ✅ (by Pnpm) | ✅ | ✅ |
| Lage |  | ✅ |  |
| Turborepo |  | ✅ |  |
| Lerna |  | ✅ | ✅ |
  1. Pnpm: Pnpm has a certain task scheduling capability (the --filter parameter), so it is included here. At the same time, as a package manager, it is an indispensable part of a large Monorepo.
  2. Rush: Microsoft's open-source, scalable Monorepo management solution, with Pnpm built in and a Changesets-like publishing solution. Its plugin mechanism is a major highlight, making it extremely convenient to implement custom features on top of Rush's built-in capabilities and to take the first step into the Rush plugin ecosystem.
  3. Lage: also open-sourced by Microsoft. Personally, I consider Lage the predecessor of Turborepo; Turborepo is essentially a Go-language version of Lage. Lage calls itself a "Monorepo Task Runner", which is much more restrained than Turborepo's "High-Performance Build System", and their star counts differ by an order of magnitude (Lage 300+, Turborepo 5k+); more can be found in this PR. In the text below, Lage is treated as equivalent to Turborepo.
  4. Lerna: no longer maintained, so it is excluded from subsequent discussion.

Dependency management is low-level, and version management is relatively simple and mature, so it is hard to make breakthroughs in these two areas; in practice they are usually covered by combining Pnpm and Changesets. Some tools even specialize in a single point, namely task scheduling, which is exactly the focus of Lage and Turborepo.

(Image: Changesets)

How to choose the right Monorepo toolchain for yourself?

  1. Pnpm Workspace + Changesets: low cost, suitable for most scenarios
  2. Pnpm Workspace + Changesets + Turborepo/Lage: option 1 with enhanced task scheduling
  3. Rush: a comprehensive solution with strong extensibility

Task scheduling can be divided into three steps; each tool's support is as follows:

|  | Scoping | Parallel execution | Cloud cache |
| --- | --- | --- | --- |
| Pnpm | ✅ | ✅ |  |
| Rush | ✅ | ✅ | ✅ |
| Turborepo/Lage | ✅ (limited) | ✅ | ✅ |

Scoping: Execute a subset of tasks on demand

Filtering/Scoping/Selecting subsets of projects

This capability has rich usage scenarios in daily development.

For example, when pulling the repository for the first time, starting project app1 requires first building package1 and package2, the projects in the Monorepo that app1 depends on.

When packaging project app1 on SCM, app1 itself and its dependencies package1 and package2 in the Monorepo all need to be built.

In both cases, the projects to build should be filtered out on demand, without pulling in builds unrelated to the current intent.

This behavior is called differently in different Monorepo tools:

  1. Rush calls it Selecting subsets of projects. In this example, the following commands should be used:

```shell
# Start app1 locally in dev mode: app1 is the top of the dependency graph, but app1 itself does not need to be built
$ rush build --to-except @monorepo/app1

# Package app1 on SCM: app1 is the top of the dependency graph, and @monorepo/app1 itself needs to be built
$ rush build --to @monorepo/app1
```
  2. In Pnpm it is called Filtering, i.e. restricting a command to a specific subset of packages. In this example, the following commands should be used:

```shell
# Start app1 locally in dev mode: app1 is the top of the dependency graph, but app1 itself does not need to be built
$ pnpm build --filter @monorepo/app1^...

# Package app1 on SCM: app1 is the top of the dependency graph, and @monorepo/app1 itself needs to be built
$ pnpm build --filter @monorepo/app1...
```
  3. Turborepo/Lage calls it Scoped Tasks, but at present (2022/02/13) the capability is quite limited. The Vercel team is designing a filter syntax largely consistent with Pnpm's; for details see RFC: New Task Filtering Syntax

Scoping ensures that the number of executed tasks does not grow as unrelated projects are added to the Monorepo, and rich parameters help us select/filter/scope in various scenarios (package publishing, app building, and CI tasks).

For example, if package5 is modified, the CI environment of a Merge Request needs to ensure that neither package5 nor the projects that depend on it fail to build because of the change. The following commands can be used:

```shell
# Using Rush
$ rush build --to @monorepo/package5 --from @monorepo/package5

# Using Pnpm
$ pnpm build --filter ...@monorepo/package5...
```

In this example, package5 and app3 are eventually selected for building, which meets the minimum requirement for merging code on CI: not breaking other projects' builds.

Based on the package.json files of all projects in the workspace, the dependency relationships between projects can easily be obtained. Each project knows its upstream dependents and its downstream dependencies; combined with the parameters passed in by the developer, a subset of projects can easily be selected.
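As an illustration of that selection process (hypothetical helper names, not Rush's or Pnpm's actual source), the `--to`-style and `--from`-style subsets are simple graph traversals over the parsed dependency relationships:

```typescript
// deps: project -> the workspace projects it depends on (its dependencies).
type Graph = Map<string, string[]>;

// Everything `target` depends on, plus target itself (like `rush build --to`).
function selectTo(deps: Graph, target: string): Set<string> {
  const selected = new Set<string>();
  const visit = (p: string): void => {
    if (selected.has(p)) return;
    selected.add(p);
    for (const d of deps.get(p) ?? []) visit(d);
  };
  visit(target);
  return selected;
}

// Everything that depends on `target`, plus target itself (like `--from`).
function selectFrom(deps: Graph, target: string): Set<string> {
  // Invert the edges: dependency -> its dependents.
  const dependents = new Map<string, string[]>();
  for (const [p, ds] of deps) {
    for (const d of ds) {
      if (!dependents.has(d)) dependents.set(d, []);
      dependents.get(d)!.push(p);
    }
  }
  const selected = new Set<string>();
  const visit = (p: string): void => {
    if (selected.has(p)) return;
    selected.add(p);
    for (const up of dependents.get(p) ?? []) visit(up);
  };
  visit(target);
  return selected;
}
```

Real tools also combine these sets (e.g. `--to X --from X` is the union of both traversals) and support exclusion variants such as `--to-except`.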

Parallel Execution: Fully Unleash Machine Performance

Local task orchestration

Assuming 20 subset tasks are selected, how should these 20 tasks be executed to ensure both correctness and efficiency?

If there are dependencies between projects, then there are dependencies between tasks as well. Taking the build task as an example, the current project can be built only after its prerequisite dependencies have been built.

There is a popular interview question on the Internet about limiting the maximum number of concurrent requests. It goes roughly like this: given m URLs and a maximum of n parallel requests at any time, implement code that guarantees the concurrency limit.

(Image: max-request-count)

The idea behind this question is quite similar to parallel task execution in task scheduling, except that the URLs in the interview question have no dependencies among them, whereas tasks have a topological order; that is the only real difference.
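For reference, a minimal sketch of the interview question itself (illustrative code, not taken from any particular library):

```typescript
// Run `tasks` with at most `limit` of them in flight at any moment,
// returning results in the original order.
async function runWithLimit<T>(
  tasks: Array<() => Promise<T>>,
  limit: number
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0; // index of the next task to claim

  // Each worker repeatedly claims the next pending task until none remain.
  async function worker(): Promise<void> {
    while (next < tasks.length) {
      const index = next++;
      results[index] = await tasks[index]();
    }
  }

  const workerCount = Math.min(limit, tasks.length);
  await Promise.all(Array.from({ length: workerCount }, () => worker()));
  return results;
}
```

With URLs, each task would be a `() => fetch(url)` thunk; the worker-pool shape is what carries over to task scheduling.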

The execution strategy for tasks then presents itself:

  1. The initially executable tasks are those without any predecessor tasks

    • Their number of Dependencies is 0
  2. Whenever a task finishes, find the next executable tasks in the task queue and execute them immediately

    • After a task finishes, the Dependencies counts of its Dependents must be updated, removing the current task from them (number of Dependencies - 1)
    • Whether a task can be executed depends on whether all of its Dependencies have finished (its number of Dependencies is 0)
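The two rules above can be sketched as follows. This is a simplified model (no concurrency cap, no error handling, no logging), not the actual implementation of any of these tools:

```typescript
type TaskGraph = Map<string, string[]>; // task -> the tasks it depends on

// Execute every task as soon as all of its dependencies have finished,
// returning the order in which tasks completed.
async function runTopological(
  graph: TaskGraph,
  run: (task: string) => Promise<void>
): Promise<string[]> {
  const finished: string[] = [];
  if (graph.size === 0) return finished;

  // Remaining dependency count per task (its "number of Dependencies").
  const pending = new Map<string, number>();
  // Reverse edges: dependency -> its dependents.
  const dependents = new Map<string, string[]>();
  for (const [task, deps] of graph) {
    pending.set(task, deps.length);
    for (const dep of deps) {
      if (!dependents.has(dep)) dependents.set(dep, []);
      dependents.get(dep)!.push(task);
    }
  }

  let remaining = graph.size;
  let resolveAll!: () => void;
  const allDone = new Promise<void>((resolve) => (resolveAll = resolve));

  const start = (task: string): void => {
    void run(task).then(() => {
      finished.push(task);
      if (--remaining === 0) resolveAll();
      // Rule 2: a finished task decrements its dependents' counters;
      // any dependent reaching zero becomes runnable immediately.
      for (const next of dependents.get(task) ?? []) {
        const left = pending.get(next)! - 1;
        pending.set(next, left);
        if (left === 0) start(next);
      }
    });
  };

  // Rule 1: kick off every task that has no predecessor tasks.
  for (const [task, count] of pending) {
    if (count === 0) start(task);
  }
  await allDone;
  return finished;
}
```

With the earlier example graph, package1 and package2 start as soon as their dependency counts hit zero, and app1 starts the moment both of them finish.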

This article does not go into the code level; for a concrete implementation, see the task scheduling mechanism in 1620e623f8c14b, which implements topological-order parallel execution of tasks at the code level.

Breaking Task Boundaries

(Image: turborepo-lerna)

This image is from Turborepo: Pipelining Package Tasks

The discussion of task execution so far has all been within a single task type, such as build, lint, or test: while build tasks run in parallel, lint or test tasks are not considered. As the Lerna area of the figure above shows, the four tasks execute sequentially, each blocked by the previous one; even if execution within each task is parallel, resources are still wasted between the different task types.

Lage/Turborepo gives developers a way to make the relationships between tasks explicit (see turbo.json). Based on these relationships, Lage/Turborepo can schedule and optimize different kinds of tasks.

Compared with executing only one kind of task at a time, overlapping the waterfall of tasks is of course much more efficient.

turbo.json

```json
{
  "$schema": "https://turborepo.org/schema.json",
  "pipeline": {
    "build": {
      // Build after the build commands of its dependencies have completed
      "dependsOn": ["^build"]
    },
    "test": {
      // Test after the project's own build command has completed (hence the figure above contains an error)
      "dependsOn": ["build"]
    },
    "deploy": {
      // Deploy after the project's own build, test, and lint commands have completed
      "dependsOn": ["build", "test", "lint"]
    },
    // lint can start at any time
    "lint": {}
  }
}
```
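To make `dependsOn` concrete: a `^` prefix refers to the task in the package's workspace dependencies, while no prefix refers to another task in the same package. A hypothetical sketch (illustrative, not Turborepo's source) of expanding such a pipeline into package-level task edges:

```typescript
interface Pipeline {
  [task: string]: { dependsOn?: string[] };
}

// packageDeps: package -> its workspace dependencies.
// Returns "pkg#task" -> list of prerequisite "pkg#task" nodes.
function expandPipeline(
  pipeline: Pipeline,
  packageDeps: Map<string, string[]>
): Map<string, string[]> {
  const edges = new Map<string, string[]>();
  for (const [pkg, deps] of packageDeps) {
    for (const [task, config] of Object.entries(pipeline)) {
      const prereqs: string[] = [];
      for (const dep of config.dependsOn ?? []) {
        if (dep.startsWith("^")) {
          // "^task": the same task in every workspace dependency.
          for (const d of deps) prereqs.push(`${d}#${dep.slice(1)}`);
        } else {
          // "task": another task in the same package.
          prereqs.push(`${pkg}#${dep}`);
        }
      }
      edges.set(`${pkg}#${task}`, prereqs);
    }
  }
  return edges;
}
```

The resulting graph is exactly the kind of task graph the topological scheduler above can execute, which is how build, test, and lint tasks from different packages end up overlapping.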

The corrected sequence:

(Image: fix-turbo-pipeline)

Rush also discussed related designs in March and October 2020 and supported a similar feature at the end of 2021; see the PR "[rush] Add support for phased commands. #3113".

Cloud Cache: Reusing Caches Across Multiple Environments

Distributed computation caching

Rush has an incremental build feature that lets rush build skip projects whose input files have not changed since the last build; combined with a third-party storage service, the cache can be reused across multiple environments.

Rush introduced a plugin mechanism in version 5.57.0, which in turn enables third-party remote caching (before that, only Azure and Amazon were supported), giving developers the ability to build caching solutions on top of internal enterprise services.

Applied to daily development, the local development, CI, and SCM pipelines can all benefit from it.

As mentioned above, building the changed projects and their upstream and downstream projects in the CI pipeline ensures the quality of a Merge Request to a certain extent.

(Image: build changed projects, 1)

As shown in the figure above, suppose the code of package0 is modified. To ensure that its upstream and downstream builds are not affected, the following command is executed in the CI "Build Changed Projects" stage:

```shell
# Projects with source file changes are selected based on git diff; here it is package0
$ rush build --to package0 --from package0
```

Once the scope is determined, package0 and its upstream app1 are included in the build. Since app1 needs to be built, its prerequisite dependencies package1 through package5 also need to be built, even though these five packages have no dependency relationship with package0 and contain no changes; they are built solely to prepare for app1's build.

If the dependency graph is complicated, for example when a basic package is referenced by multiple applications, preparatory builds like package1-package5 multiply, making this CI stage very slow.

Number of actually built projects = downstream projects of the changed projects + upstream projects of the changed projects + prerequisite dependencies of those downstream projects + prerequisite dependencies of those upstream projects

(Image: build changed projects, 2)

Since the five projects package1-package5 have no direct or indirect dependency relationship with package0, and their input files have not changed, they can hit the cache (if one exists) and skip the build entirely.

This reduces the build scope from 7 projects to 2 projects.

Number of actually built projects = changed projects + upstream projects of the changed projects

How to determine whether to hit the cache?

Detecting affected projects/packages

In the cloud, the compressed archive of each project's build output is keyed by a cacheId computed from its input files. If the input files do not change, the computed cacheId (a content hash) does not change, and the corresponding cloud cache entry is hit.

The input files include the following:

  1. The project's source files
  2. The project's npm dependencies
  3. The cacheIds of the other internal Monorepo projects that the project depends on

If you're interested in the implementation, check out @rushstack/package-deps-hash .
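As a toy sketch of the idea (not the actual @rushstack/package-deps-hash implementation), a cacheId can be derived by hashing the three kinds of inputs listed above into one content hash:

```typescript
import { createHash } from "crypto";

interface ProjectInputs {
  sourceFiles: Map<string, string>;        // file path -> file content
  npmDependencies: Record<string, string>; // package name -> resolved version
  internalDepCacheIds: string[];           // cacheIds of internal Monorepo dependencies
}

// Deterministically derive a cacheId from everything that affects the build.
function computeCacheId(inputs: ProjectInputs): string {
  const hash = createHash("sha256");
  // Sort keys so the hash is independent of iteration order.
  for (const path of [...inputs.sourceFiles.keys()].sort()) {
    hash.update(path).update("\0").update(inputs.sourceFiles.get(path)!).update("\0");
  }
  for (const name of Object.keys(inputs.npmDependencies).sort()) {
    hash.update(`${name}@${inputs.npmDependencies[name]}\0`);
  }
  for (const id of [...inputs.internalDepCacheIds].sort()) {
    hash.update(id).update("\0");
  }
  return hash.digest("hex");
}
```

Because an internal dependency's cacheId is itself part of the input, a change anywhere in the dependency chain propagates upward and correctly invalidates every dependent project's cache.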

Epilogue

While writing this article, the author was reminded of the three magic weapons for speeding up builds mentioned in "Systematic Ideas for Speeding Up Front-End Builds", shared by @sorrycc at GMTC:

  1. Deferred processing: request-based on-demand compilation, deferred sourcemap generation
  2. Caching: Vite optimize, Webpack 5 persistent cache, Babel cache
  3. Native code: SWC, esbuild

For a task orchestration tool, native code offers no obvious advantage (although Turborepo is written in Go, the author of Lage believes that at current scales the efficiency bottleneck of task orchestration is not the orchestration tool itself), but deferred processing and caching have clear parallels here.

Finally, let the concise and pragmatic subtitle of Lage's official website close out this article's theme of "task scheduling":

Run all your npm scripts in topological order incrementally with cloud cache - @microsoft/lage


Author: 海秋