
Translator: Songbao Writes Code (松宝写代码)

Original article: https://bytecodealliance.org/articles/wasmtime-1-0-fast-safe-and-production-ready

Summary

On September 20th, Eastern Time, the Bytecode Alliance announced that after three years of development, Wasmtime had officially reached version 1.0. Wasmtime is a WebAssembly runtime built on top of the Cranelift compiler. It is written in Rust, fully open source, and WASI compliant. Wasmtime also integrates with languages such as C/C++, Python, .NET, and Go, and runs on Windows, Linux, and macOS.

Foreword

As of today, the Wasmtime WebAssembly runtime is now at version 1.0! This means that all of us in the Bytecode Alliance agree that it is fully ready for production use.

In fact, we could have called Wasmtime production-ready more than a year ago. But we didn't want to ship just any WebAssembly engine. We wanted a super fast and super safe WebAssembly engine. We wanted to feel fully confident when we recommend that people choose Wasmtime.

So to make sure it's production ready for all of you, some of us in the Bytecode Alliance have been running Wasmtime in our own production environments for the past year. Wasmtime has done a great job in these environments, providing a stable platform while also giving us security and speed benefits.


Here are some of our experiences with the new and improved Wasmtime:

Shopify - 14 months in production

Shopify switched from another WebAssembly engine to Wasmtime in July 2021. By switching, Shopify saw an average execution performance improvement of about 50%.

Fastly - 6 months in production

In March 2022, Fastly switched from another WebAssembly engine to Wasmtime. Fastly also saw a ~50% improvement in execution time, along with a 72% to 163% increase in the requests per second it could serve. Fastly has since served trillions of requests using Wasmtime.

DFINITY - 16 months in production

DFINITY launched the Internet Computer blockchain in May 2021 using Wasmtime. Since then, the Internet Computer has executed 10^18 (one quintillion) instructions for over 150,000 smart contracts without any production issues.

InfinyOn - 14 months in production

InfinyOn Cloud has been using Wasmtime in production since July 2021. With Wasmtime, InfinyOn has been able to deliver a greater than 5x improvement in throughput for end-to-end stream processing compared to Java-based platforms such as Kafka.

Fermyon - 6 months in production

Fermyon's Spin has been using Wasmtime since its March 2022 release. Since then, Fermyon has found that thousands of WebAssembly binaries can run in a single Spin instance while keeping startup times under a millisecond.

Embark - 2 years in production

Embark has been using Wasmtime in their game engine since 2020. Since then, Embark has been impressed with Wasmtime's excellent stability, security, and performance, keeping games running at 60 FPS.

SingleStore - 3 months in production

SingleStoreDB Cloud has been using Wasmtime since June 2022 to bring developers' code to the data securely, quickly, and at scale.

Microsoft - 11 months in preview

Microsoft has had a preview of WebAssembly System Interface (WASI) node pools in the Azure Kubernetes Service, using Wasmtime, since October 2021.

With all this experience running it in production, we believe we can recommend it to you too.

So now I want to tell you how we made it super fast and super safe. But first, why would you want to use a WebAssembly runtime in the first place?

Why use a WebAssembly runtime?

WebAssembly was originally created to make code run fast in the browser. This means you can run more complex applications like image editing applications or video games in the browser. Therefore, each major browser has its own WebAssembly runtime to run these types of applications.

But WebAssembly also opens up many use cases outside the browser. So in these cases you need to find a standalone WebAssembly runtime such as Wasmtime.

Here are some of the use cases where we see people using Wasmtime.

Microservices and serverless

WebAssembly runtimes like Wasmtime are great for microservices and serverless platforms, where you have independent pieces of code that need to scale up and down quickly. This is because WebAssembly's startup time is much lower than that of other similar technologies, such as JS isolates, containers, or virtual machines.

For example, the fastest alternative, JS isolates, takes about 5 milliseconds to start. By comparison, a Wasmtime instance takes only 5 microseconds to start.

WebAssembly's lightweight isolation is ideal for multi-tenant platforms because you can fit more workloads on the same machine than with virtual machines, containers, or other coarse-grained isolation techniques.
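To make the compile-once, instantiate-per-request pattern concrete, here is a minimal sketch using Wasmtime's Rust embedding API (with the anyhow crate for error handling). The WAT module and its `add` export are made up for the example; only the instantiation inside the loop happens per request, which is the microsecond-scale step described above.

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // Compile the module once, up front. Compilation is the expensive part.
    let engine = Engine::default();
    let module = Module::new(
        &engine,
        r#"(module
             (func (export "add") (param i32 i32) (result i32)
               local.get 0
               local.get 1
               i32.add))"#,
    )?;

    // Per "request": create a fresh, isolated instance of the already
    // compiled module. This is the cheap, fast-startup step.
    for request in 0..3 {
        let mut store = Store::new(&engine, ());
        let instance = Instance::new(&mut store, &module, &[])?;
        let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
        println!("request {request}: 2 + 3 = {}", add.call(&mut store, (2, 3))?);
    }
    Ok(())
}
```

A real platform would keep the Engine and Module around for the lifetime of the process and pay only the small per-instance cost on each incoming request.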

Third-party plugin systems

WebAssembly is great for platforms where you often want to run third-party code so that you can support many different specific use cases - for example, through a plugin marketplace where developers in the platform ecosystem can share code with users.

But running it brings up a trust issue - can the platform trust the code written by these ecosystem developers?

With WebAssembly, the platform can run untrusted code while still having security guarantees. Since WebAssembly is sandboxed by default and cannot access any resources you don't explicitly hand to it, the platform is not at risk. And communication between the platform and plugins is still fast.
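As a rough sketch of that sandboxing model, the example below (with hypothetical module and function names) instantiates untrusted plugin code through a wasmtime::Linker, so the plugin can only call the single host function the platform chose to expose.

```rust
use wasmtime::{Engine, Linker, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // Untrusted plugin code: it can only reach the host through imports
    // that the platform explicitly provides.
    let plugin = Module::new(
        &engine,
        r#"(module
             (import "host" "log" (func $log (param i32)))
             (func (export "run")
               i32.const 42
               call $log))"#,
    )?;

    // The platform decides exactly which capabilities the plugin gets.
    let mut linker = Linker::new(&engine);
    linker.func_wrap("host", "log", |value: i32| {
        println!("plugin logged: {value}");
    })?;

    let mut store = Store::new(&engine, ());
    let instance = linker.instantiate(&mut store, &plugin)?;
    instance
        .get_typed_func::<(), ()>(&mut store, "run")?
        .call(&mut store, ())?;
    Ok(())
}
```

Anything not wired up through the linker, including files, sockets, and other host resources, simply does not exist from the plugin's point of view.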

Database, Analytics and Event Streaming

For database-backed applications, WebAssembly can really speed things up. Database-backed applications often waste a lot of time repeatedly querying the database, performing some calculations based on the returned data, and then issuing more queries to get more data. One way to speed up this communication is to use user-defined functions (UDFs) to bring code to the data. With these, you can run code directly in the database and get rid of network calls between them.

With the help of the WebAssembly runtime, databases can use WebAssembly-based UDFs to co-locate code and data. This provides fast computations on the data without exposing the database itself to security risks.

Because it's WebAssembly, these databases can safely support many different languages in their UDFs without the risk of one UDF crashing and bringing down the entire database. This makes UDFs more accessible to users who are less familiar with a particular database.
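Here is a toy sketch of the UDF idea under the same assumptions as the earlier examples: a hypothetical `double` function compiled to WebAssembly is applied to column values in-process, with no network round trip. A real database would wire this into its query executor rather than a plain loop.

```rust
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // A user-defined function compiled to WebAssembly; here a tiny
    // hand-written module that doubles a 64-bit integer column value.
    let module = Module::new(
        &engine,
        r#"(module
             (func (export "double") (param i64) (result i64)
               local.get 0
               i64.const 2
               i64.mul))"#,
    )?;

    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let udf = instance.get_typed_func::<i64, i64>(&mut store, "double")?;

    // Stand-in for the query executor: apply the UDF to each row value
    // without ever leaving the database process.
    for value in [10_i64, 25, 40] {
        println!("double({value}) = {}", udf.call(&mut store, value)?);
    }
    Ok(())
}
```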

Trusted Execution Environment

A Trusted Execution Environment (TEE) is designed for situations where the user cannot or does not want to trust the lower levels of the system, such as the hypervisor, kernel, or other system software. The TEE provides a safe enclave on the CPU where code hosted by the TEE runs, isolated from all other software.

WebAssembly is ideal for these use cases because it supports many different languages and is independent of the CPU architecture. This makes it easier to run TEEs across different hardware platforms.

Portable Client

A browser is a good example of a portable client, and many applications can run in a browser. But sometimes you need a portable client that sits outside the browser - either for performance or for a richer user experience.

For these cases, you can create your own portable client using a WebAssembly runtime, like what the BBC does for their iPlayer. The WebAssembly runtime takes care of portability and ensures that guest code can run on different architectures. This means you can focus on the functionality you want your client to provide.

These are some use cases where you might want to use Wasmtime. Now let's talk about how we ensure Wasmtime performs well in these use cases.

How we made Wasmtime super fast

For almost all of these use cases, speed matters. That's why we care so much about performance.

As I said before, when we optimize, we think about two parts of performance: instantiation and runtime.

If you want all the details on how we made both of these faster, you can read Chris Fallin's blog post on Wasmtime's performance, but here's a basic breakdown.

Instantiation

Instantiation is the time from when new work arrives (such as a web request, plugin invocation, or database query) until the WebAssembly module instance is actually ready to run and has all memory and other state ready for it.

This speed is important for use cases that scale up and down quickly, such as some microservices architectures and serverless.

As Chris pointed out in his post, with some of our recent changes:

The instantiation time of SpiderMonkey.wasm went from about 2 milliseconds... to 5 microseconds, or 400 times faster. Not bad!

And that's just one example.

We achieve these results primarily by using two different optimizations: the virtual memory trick and lazy initialization. In both cases, what we're doing is deferring the work until it really needs to be done.

Using the virtual memory trick, we don't need to create new memory every time we create a new instance. Instead, we rely on operating system features to share as much memory as possible between instances, and only create new pages of memory when one of the instances needs to change data.

We apply the same idea to initialization. A module can declare lots of functions and state that it never ends up using. So we defer initialization of things like function tables until the functions are actually used. This speeds up startup, and it has the nice side benefit of reducing the overall work that needs to be done when functions or other state go unused.
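The sketch below makes the virtual-memory side of this visible from the embedder's point of view. It assumes the `Config::memory_init_cow` knob found in recent Wasmtime releases, which controls copy-on-write initialization of linear memories and is already on by default; treat the exact method name as an assumption and check the docs of your version.

```rust
use std::time::Instant;
use wasmtime::{Config, Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    // Copy-on-write memory initialization (the "virtual memory trick") is
    // enabled by default; it is spelled out here only to make the knob visible.
    let mut config = Config::new();
    config.memory_init_cow(true);
    let engine = Engine::new(&config)?;

    // A module with a 16-page (1 MiB) linear memory.
    let module = Module::new(&engine, r#"(module (memory (export "mem") 16))"#)?;

    // Each new instance shares the module's initialized memory image and only
    // copies pages that the instance actually writes to.
    let start = Instant::now();
    let mut store = Store::new(&engine, ());
    let _instance = Instance::new(&mut store, &module, &[])?;
    println!("instantiated in {:?}", start.elapsed());
    Ok(())
}
```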

So we speed things up during the instantiation phase by delaying work until it needs to be done. But we don't just need to make instantiation fast. We also need to make the runtime fast.

Runtime

Runtime is how fast the code actually runs after it starts. This is especially important when you have long-running code such as portable clients.

We have been able to improve runtime performance with various changes. Some of the biggest wins came from changes we made to our compiler, Cranelift, which takes WebAssembly code and turns it into machine code.

For example, the new register allocator makes SpiderMonkey.wasm run about 5% faster. And the new backend framework (which chooses the best machine instructions to use for blocks of code) makes SpiderMonkey.wasm run 22% faster on top of that!

We are also experimenting here. For example, we have started working on a new mid-end optimization framework. In an early prototype, we have already seen a 13% speedup in the runtime of SpiderMonkey.wasm.
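None of these compiler internals require anything from the embedder, but Cranelift's optimization level can be selected when the engine is built. The minimal sketch below shows that knob; it does not reproduce the register-allocator or backend work described above.

```rust
use wasmtime::{Config, Engine, OptLevel};

fn main() -> anyhow::Result<()> {
    // Ask Cranelift to optimize for execution speed. `OptLevel::SpeedAndSize`
    // and `OptLevel::None` are the other options.
    let mut config = Config::new();
    config.cranelift_opt_level(OptLevel::Speed);
    let engine = Engine::new(&config)?;

    // Compile and run modules with this engine as usual.
    let _ = engine;
    Ok(())
}
```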

This is how we increase speed. But what about security?

How we made Wasmtime super safe

Security is a big driver of our work in the Bytecode Alliance. We believe WebAssembly is uniquely positioned to address some of the biggest emerging security threats, and we're committed to making sure it lives up to that potential.

We've been pushing WebAssembly proposals forward to make it easier for developers to create applications that are secure by default. But none of this matters if the runtime the code runs on is not itself secure.

That's why we put so much effort into the security of Wasmtime itself.

Nick Fitzgerald wrote a great article about all the different ways we keep Wasmtime secure; you should read it for the details, but here are a few examples:

We protect our supply chain with cargo vet. We already use memory-safe languages, which helps us avoid introducing vulnerabilities that attackers could exploit. But that doesn't necessarily protect us from malicious code that an attacker slips into a dependency. To guard against this, we use cargo vet to ensure that dependencies are manually reviewed by trusted auditors.

We do a lot of fuzzing. Fuzzing is a way of finding hard-to-reach bugs in your code by throwing a bunch of pseudo-randomly generated input at it (a generic sketch of the technique follows these examples). As Nick puts it, "Our pervasive fuzzing is probably the single largest contributor to Wasmtime's code quality. We fuzz because writing tests by hand, while necessary, is not sufficient."

We formally verify security-critical parts of the code. With formal verification, you can actually prove that a program does what it is supposed to do, just like a proof in math class. This gives us a very high level of confidence in the code in question, and it helps us pinpoint exactly where the problem is if we ever accidentally introduce a bug.
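As mentioned above, here is a generic sketch of the fuzzing technique, not Wasmtime's actual fuzz targets: it assumes the libfuzzer-sys and wasm-smith crates, generates pseudo-random but valid WebAssembly modules, and checks that compiling them never crashes the runtime.

```rust
// Build and run with cargo-fuzz, e.g. `cargo fuzz run <target-name>`.
#![no_main]

use libfuzzer_sys::fuzz_target;
use wasmtime::{Engine, Module};

// wasm-smith turns the fuzzer's raw byte input into structurally valid
// WebAssembly modules via the Arbitrary trait.
fuzz_target!(|module: wasm_smith::Module| {
    let wasm_bytes = module.to_bytes();
    let engine = Engine::default();
    // Compilation may reject the module, but it must never panic or corrupt
    // memory, no matter what input the fuzzer produces.
    let _ = Module::new(&engine, &wasm_bytes);
});
```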

So these are some of the ways we can make Wasmtime more secure.

We feel strongly that all WebAssembly runtimes should follow the best practices Nick lists, and more, to make sure they are secure. That is the only way we can have the secure WebAssembly ecosystem we all want.

What comes after 1.0?

Now that we have reached 1.0, we plan to maintain frequent and predictable cycles of stable releases. We will release a new version of Wasmtime every month.

Each release will bump Wasmtime's major version number. This lets us keep Wasmtime's version number in sync with those of the language-specific embeddings. For example, if you are using wasmtime-py 7.0, you can be sure you are using Wasmtime 7.0. You can learn more about the release process here.

Thanks to all the contributors who made this possible, and if you want to help shape the future of Wasmtime, join our Zulip chat!

GitHub address:
https://github.com/bytecodealliance/wasmtime/releases/tag/v1.0.0

❤️ Thank you for following "Songbao Writes Code" (松宝写代码), where we write more than just code.

