Welcome to .NET 6. Today's release is the result of over a year of work by the .NET team and community. C# 10 and F# 6 provide language improvements that make your code simpler and better. Performance has improved dramatically, and we've seen Microsoft teams reduce the cost of hosting their cloud services as a result. .NET 6 is the first version to natively support Apple Silicon (Arm64), and it also has improvements for Windows Arm64. We built a new dynamic profile-guided optimization (PGO) system that delivers deep optimizations that are only possible at runtime. Cloud diagnostics have been improved with dotnet monitor and OpenTelemetry. WebAssembly support is more capable and performant. New APIs have been added for HTTP/3, JSON handling, math, and direct memory manipulation. .NET 6 will be supported for three years. Developers have already started upgrading applications to .NET 6, and we've heard good early results in production. .NET 6 is ready for your application.
You can download .NET 6 for Linux, macOS and Windows.
- Installers and binaries
- Container images
- Linux packages
- Release notes
- API differences
- Known issues
- GitHub issue tracker
See the ASP.NET Core, Entity Framework, Windows Forms, .NET MAUI, YARP, and dotnet monitor posts for what's new in various scenarios.
.NET 6 Highlights
.NET 6 is:
- Battle-tested in production by Microsoft services, cloud applications run by other companies, and open source projects.
- Supported for three years as the latest Long Term Support (LTS) release.
- A unified platform across browser, cloud, desktop, IoT, and mobile applications, all using the same .NET libraries and enabling easy code sharing.
- Greatly improved in performance, especially for file I/O, which together delivers reductions in execution time, latency, and memory usage.
- C# 10 offers language improvements such as record structs, implicit usings, and new lambda capabilities, while the compiler adds incremental source generators. F# 6 adds new features, including task-based async, pipeline debugging, and numerous performance improvements.
- Visual Basic has improvements in the Visual Studio experience and for the Windows Forms project open experience.
- Hot Reload enables you to skip rebuilding and restarting your application to see new changes while it is running; it is supported by Visual Studio 2022 and the .NET CLI, for C# and Visual Basic.
- Cloud diagnostics have been improved with OpenTelemetry and dotnet monitor, which is now supported in production and available in Azure App Service.
- JSON APIs are more capable and have higher performance thanks to a source generator for the serializer.
- Minimal APIs introduced in ASP.NET Core simplify the getting-started experience and improve the performance of HTTP services.
- Blazor components can now be rendered from JavaScript and integrated with existing JavaScript-based applications.
- WebAssembly AOT compilation of Blazor WebAssembly (Wasm) applications, with support for runtime relinking and native dependencies.
- Single-page applications built with ASP.NET Core now use a more flexible pattern that works with Angular, React, and other popular front-end JavaScript frameworks.
- HTTP/3 has been added so that ASP.NET Core, HttpClient, and gRPC can all interact with HTTP/3 clients and servers.
- File IO now supports symbolic links and has greatly improved performance with a rewritten-from-scratch FileStream.
- Security has been improved with support for OpenSSL 3, the ChaCha20Poly1305 encryption scheme, and runtime defense-in-depth mitigations, especially W^X and CET.
- Single-file applications (extraction-free) can be published for Linux, macOS, and Windows (previously only Linux).
- IL trimming is now more powerful and effective, and new warnings and analyzers ensure correct end results.
- Source generators and analyzers have been added to help you produce better, safer, and higher performance code.
- Source build enables organizations such as Red Hat to build .NET from source and offer their own builds to their users.
This release includes about ten thousand git commits. Even though this article is long, it skips many improvements; you'll have to download and try .NET 6 to see everything.
Support
.NET 6 is a Long Term Support (LTS) release and will be supported for three years. It supports multiple operating systems, including macOS Apple Silicon and Windows Arm64.
Red Hat works with the .NET team to support .NET on Red Hat Enterprise Linux. On RHEL 8 and later, .NET 6 will be available for AMD and Intel (x86_64), ARM (aarch64), and IBM Z and LinuxONE (s390x) architectures.
Please start migrating your applications to .NET 6, especially .NET 5 applications. We've heard from early adopters that upgrading from .NET Core 3.1 and .NET 5 to .NET 6 is simple.
Visual Studio 2022 and Visual Studio 2022 for Mac support .NET 6. It is not supported by Visual Studio 2019, Visual Studio for Mac 8 or MSBuild 16. If you want to use .NET 6, you will need to upgrade to Visual Studio 2022 (now also 64-bit). The Visual Studio Code C# extension supports .NET 6.
Azure App Service:
Azure Functions now supports running serverless functions in .NET 6.
The App Service .NET 6 GA Announcement provides information and details for ASP.NET Core developers excited to get started with .NET 6 today.
Azure Static Web Apps now supports full-stack .NET 6 applications with a Blazor WebAssembly frontend and Azure Function API.
Note: If your app is already running a .NET 6 preview or RC build on App Service, it will be automatically updated on the first restart once the .NET 6 runtime and SDK have been deployed to your region. If you deployed a self-contained application, you will need to rebuild and redeploy.
Unified and Extended Platform
.NET 6 provides a unified platform for browser, cloud, desktop, IoT, and mobile applications. The underlying platform has been updated to meet the needs of all application types and to facilitate code reuse across all applications. New features and improvements apply to all applications simultaneously, so your code running on the cloud or mobile device behaves the same way and has the same benefits.
The range of .NET developers continues to expand with each release. Machine learning and WebAssembly are two recent additions. For example, with machine learning, you can write applications that find anomalies in streaming data. Using WebAssembly, you can host .NET applications in the browser just like HTML and JavaScript, or mix them with HTML and JavaScript.
One of the most exciting additions is the .NET Multi-platform App UI (.NET MAUI). You can now write code in a single project to deliver a modern client application experience across desktop and mobile operating systems. .NET MAUI will be released a little later than .NET 6. We've invested a lot of time and effort in .NET MAUI and are excited to release it and see .NET MAUI applications go into production.
Of course, .NET applications are also at home on the Windows desktop (using Windows Forms and WPF) and in the cloud using ASP.NET Core. They are our longest-serving application types and remain very popular, and we've improved them in .NET 6.
Targeting .NET 6
Continuing the broad platform theme, it's easy to write .NET code on all of these operating systems.
To target .NET 6, you need to use a .NET 6 target framework, as follows:
<TargetFramework>net6.0</TargetFramework>
The net6.0 Target Framework Moniker (TFM) gives you access to all the cross-platform APIs that .NET provides. This is the best choice if you are writing console applications, ASP.NET Core applications, or reusable cross-platform libraries.
If you're targeting a specific OS (such as writing Windows Forms or iOS apps), there's another set of TFMs (each targeting a self-explanatory OS) for you to use. They give you access to all the net6.0 APIs plus a set of OS-specific APIs.
- net6.0-android
- net6.0-ios
- net6.0-maccatalyst
- net6.0-tvos
- net6.0-windows
Each of these TFMs, when unversioned, is equivalent to targeting the minimum OS version supported by .NET 6. You can specify an OS version if you want to target a specific version or get access to newer APIs.
Both the net6.0 and net6.0-windows TFMs are supported (same as .NET 5). The Android and Apple TFMs are new in .NET 6 and are currently in preview. They will be supported in a later .NET 6 update.
There is no compatibility relationship between the OS-specific TFMs. For example, net6.0-ios is not compatible with net6.0-tvos. If you want to share code, you need to do so with source code using #if statements or with binaries that target net6.0.
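As a minimal sketch of the source-sharing approach, assume a library that multi-targets net6.0-android and net6.0-ios; the SDK defines per-OS preprocessor symbols such as ANDROID and IOS for those TFMs:

// Shared source file in a project that multi-targets net6.0-android and net6.0-ios.
public static class DeviceInfo
{
    public static string Describe()
    {
#if ANDROID
        return "Running on Android";       // compiled only for net6.0-android
#elif IOS
        return "Running on iOS";            // compiled only for net6.0-ios
#else
        return "Running on another .NET 6 platform";
#endif
    }
}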
Performance
The team has focused on performance continuously since we started the .NET Core project, and Stephen Toub has done an excellent job documenting .NET's performance progress with each release. His Performance improvements in .NET 6 post is well worth reading. In this article, I cover the major performance improvements you'll want to know about, including file IO, interface casting, PGO, and System.Text.Json.
Dynamic PGO
Dynamic profile-guided optimization (PGO) can significantly improve steady-state performance. For example, PGO increased requests per second for the TechEmpower JSON "MVC" suite by 26% (510K -> 640K).
Dynamic PGO builds on tiered compilation, which enables methods to first be compiled very quickly ("tier 0") to improve startup performance, and then later recompiled with many more optimizations enabled ("tier 1") once a method has been shown to be impactful. This model makes it possible to instrument methods at tier 0 in order to make observations about how the code executes. When those methods are rejitted at tier 1, the information gathered from the tier 0 executions is used to better optimize the tier 1 code. That's the essence of the mechanism.
Startup with dynamic PGO enabled will be slightly slower than the default runtime behavior because extra code runs in tier 0 methods to observe how they behave.
To enable dynamic PGO, set DOTNET_TieredPGO=1 in the environment where your application runs. You also have to make sure that tiered compilation is enabled (it is by default). Dynamic PGO is opt-in because it is a new and impactful technology. We want to see opt-in usage and get feedback to ensure it is fully stress-tested. We did the same thing with tiered compilation. At least one very large Microsoft service has opted in and is already using dynamic PGO in production. We encourage you to try it out.
You can see more about the benefits of dynamic PGO in the Performance in .NET 6 post, including the following microbenchmarks, which measure the cost of specific LINQ enumerators.
private IEnumerator<int> _source = Enumerable.Range(0, int.MaxValue).GetEnumerator();
[Benchmark]
public void MoveNext() => _source.MoveNext();
Here are the results with and without dynamic PGO.
This is a considerable difference, but also an increase in code size, which may surprise some readers. This is the size of the assembly code generated by the JIT, not the memory allocation (which is a more common focus). The .NET 6 performance post has a good explanation of this.
A common optimization in PGO implementations is "hot/cold separation", where frequently executed method parts ("hot") are close together at the beginning of the method, and infrequently executed method parts ("cold") are moved to the end of the method. This allows for better use of the instruction cache and minimizes potentially unused code load.
For context, interface dispatch is the most expensive type of call in .NET. Non-virtual method calls are faster, and calls that can be eliminated via inlining are faster still. In this case, dynamic PGO provides two (alternative) call sites for MoveNext. The first, hot one is a direct call to Enumerable+RangeIterator.MoveNext; the second, cold one is a virtual call through the IEnumerator<int> interface. It is a big win if the hot one gets called most of the time.
Here's the magic. When the JIT instruments the tier 0 code for this method, it includes instrumenting this interface dispatch to track the concrete type of _source on each call. The JIT finds that every call is on a type called Enumerable+RangeIterator, a private class that implements Enumerable.Range inside the Enumerable implementation. So for tier 1, the JIT emits a check for whether _source is of type Enumerable+RangeIterator: if not, it jumps to the cold section that performs the normal interface dispatch highlighted earlier. But if it is, which the profiling data suggests will be the case most of the time, it can go ahead and call the non-virtualized Enumerable+RangeIterator.MoveNext method directly. Not only that, but the JIT also decides it is profitable to inline that MoveNext method. The net effect is that the generated assembly code is a bit larger, but optimized for the exact scenarios expected to be most common. Those are the kinds of wins we want from dynamic PGO.
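Conceptually, the tier 1 code behaves roughly like the following hand-written guard. This is only a sketch with a made-up ICounter interface and FastCounter class standing in for the observed concrete type; it is not the code the JIT actually emits:

using System;

interface ICounter { bool MoveNext(); }

sealed class FastCounter : ICounter
{
    private int _i;
    public bool MoveNext() => ++_i < 1000;
}

class GuardedDispatchSketch
{
    private ICounter _source = new FastCounter();

    // Hand-written analogue of the check the JIT inserts in tier 1 code.
    public bool MoveNextGuarded()
    {
        if (_source is FastCounter fast)   // type observed during tier 0 profiling
        {
            return fast.MoveNext();        // hot path: direct call, candidate for inlining
        }
        return _source.MoveNext();         // cold path: normal interface dispatch
    }
}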
Dynamic PGO will be discussed again in the RyuJIT section.
File IO improvements
FileStream was almost completely rewritten in .NET 6, with a focus on improving asynchronous file IO performance. On Windows, the implementation no longer uses blocking APIs and can be several times faster! We've also improved memory usage on all platforms: after the first async operation (which typically allocates), async operations are now allocation-free! In addition, we've made Windows and Unix behave the same way for various edge cases (where that is possible).
This rewrite benefits all operating systems. The benefit is greatest on Windows because it was furthest behind. macOS and Linux users should also see significant FileStream performance improvements.
The following benchmark writes 100 MB to a new file.
private byte[] _bytes = new byte[8_000];

[Benchmark]
public async Task Write100MBAsync()
{
    using FileStream fs = new("file.txt", FileMode.Create, FileAccess.Write, FileShare.None, 1, FileOptions.Asynchronous);
    for (int i = 0; i < 100_000_000 / 8_000; i++)
        await fs.WriteAsync(_bytes);
}
On Windows with an SSD drive, we observed a 4x speedup and a more than 1200x allocation drop:
We also recognized the need for higher performance file IO capabilities: concurrent reads and writes, and scatter/gather IO. For these cases, we have introduced new APIs for the System.IO.File and System.IO.RandomAccess classes.
async Task AllOrNothingAsync(string path, IReadOnlyList<ReadOnlyMemory<byte>> buffers)
{
using SafeFileHandle handle = File.OpenHandle(
path, FileMode.Create, FileAccess.Write, FileShare.None, FileOptions.Asynchronous,
preallocationSize: buffers.Sum(buffer => buffer.Length)); // hint for the OS to pre-allocate disk space
await RandomAccess.WriteAsync(handle, buffers, fileOffset: 0); // on Linux it's translated to a single sys-call!
}
This example demonstrates:
- Open file handles using the new File.OpenHandle API.
- Preallocate disk space with the new preallocated size feature.
- Write to files using the new Scatter/Gather IO API.
The preallocation size feature improves performance because write operations don't need to extend the file, and the file is less likely to be fragmented. It also improves reliability, because write operations no longer fail due to lack of space, since the space has already been reserved. The scatter/gather IO APIs reduce the number of system calls required to write the data.
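For illustration, a hypothetical call to the AllOrNothingAsync helper above might look like this (the file name and buffer contents are placeholders):

// Hypothetical usage of the AllOrNothingAsync helper shown above.
ReadOnlyMemory<byte>[] buffers =
{
    new byte[] { 1, 2, 3 },
    new byte[] { 4, 5, 6 },
    new byte[] { 7, 8, 9 },
};
await AllOrNothingAsync("data.bin", buffers);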
Faster interface checking and conversion
Interface casting performance has been boosted by 16% - 38%. This improvement is particularly useful for C# pattern matching to and between interfaces.
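These improvements apply to interface checks and casts like the following trivial example:

using System;
using System.Collections.Generic;

object value = new List<int> { 1, 2, 3 };

// Type check plus cast to an interface (pattern matching).
if (value is IReadOnlyCollection<int> collection)
{
    Console.WriteLine(collection.Count);
}

// A plain cast to an interface benefits from the same improvement.
var enumerable = (IEnumerable<int>)value;
Console.WriteLine(enumerable is ICollection<int>); // interface-to-interface check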
This chart shows the scale of improvement on a representative benchmark.
One of the biggest advantages of migrating parts of the .NET runtime from C++ to managed C# is that it lowers the barrier to contribution. This includes interface casting, which was moved to C# as an early .NET 6 change. More people in the .NET ecosystem know C# than C++ (and the runtime uses challenging C++ patterns). Just being able to read some of the code that makes up the runtime is an important step toward developing the confidence to contribute, in whatever form that takes.
Credit to Ben Adams.
System.Text.Json Source Generator
We've added a source generator for System.Text.Json that avoids the need for reflection and code generation at runtime, and that produces optimal serialization code at build time. Serializers are usually written using very conservative techniques because they have to be. However, if you can read your own serialization source code (which uses the serializer), you can see the obvious choices that make the serializer much more optimal for your particular case. That's exactly what this new source generator does. Besides improving performance and reducing memory, the source generator also produces code that is ideal for assembly trimming. This helps make smaller applications.
Serializing POCOs is a very common scenario. Using the new source generator, serialization is roughly 1.6x faster in our benchmark.
The TechEmpower caching benchmark exercises a platform or framework's in-memory caching of information sourced from a database. The .NET implementation of the benchmark performs JSON serialization of the cached data so it can be sent to the test harness as a response.
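Here's a minimal sketch of what using the source generator looks like (the type and property names are illustrative). The [JsonSerializable] attribute on a partial JsonSerializerContext tells the generator which types to generate serialization metadata for at build time:

using System.Text.Json;
using System.Text.Json.Serialization;

// The generator produces serialization metadata and logic for Person at build time.
[JsonSerializable(typeof(Person))]
internal partial class AppJsonContext : JsonSerializerContext
{
}

public class Person
{
    public string? FirstName { get; set; }
    public string? LastName { get; set; }
}

class Demo
{
    static void Main()
    {
        var person = new Person { FirstName = "Ada", LastName = "Lovelace" };

        // Uses the generated code instead of runtime reflection.
        string json = JsonSerializer.Serialize(person, AppJsonContext.Default.Person);
        System.Console.WriteLine(json);
    }
}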
We observed ~100K RPS gain (~40% increase). When combined with the MemoryCache performance improvements, .NET 6 delivers 50% higher throughput than .NET 5!
C# 10
Welcome to C# 10. A major theme of C# 10 is continuing the simplification journey that started with top-level statements in C# 9. The new features remove even more ceremony from Program.cs, resulting in programs as short as a single line. They were inspired by talking to people with no C# experience (students, professional developers, and others) and seeing what worked best and was most intuitive for them.
Most of the .NET SDK templates have been updated to deliver the simpler, more concise style that is now possible with C# 10. We've received feedback that some people don't like the new templates because they aren't aimed at experts, remove object orientation, remove important concepts that should be learned on the first day of writing C#, or encourage writing an entire program in one file. Objectively speaking, none of these views is correct. The new model is just as appropriate for students as for professional developers. However, it is different from the C-derived model that predates .NET 6.
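With top-level statements and implicit usings, the new console template's Program.cs is essentially a single line:

Console.WriteLine("Hello, World!");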
There are several other features and improvements in C# 10, including record structs.
Global using directives
The global using directive lets you specify a using directive once and have it apply to every file you compile.
The following examples show the breadth of syntax:
- global using System;
- global using static System.Console;
- global using Env = System.Environment;
You can put global using statements in any .cs file, including Program.cs.
Implicit usings is an MSBuild feature that automatically adds a set of global using directives depending on the SDK. For example, console application projects get a different set of implicit usings than ASP.NET Core projects.
Implicit usings are opt-in and are enabled in a PropertyGroup:
<ImplicitUsings>enable</ImplicitUsings>
Implicit usings are opt-in for existing projects but are enabled by default in new C# projects. For more information, see the implicit usings documentation.
File-scoped namespace
File-scoped namespaces enable you to declare the namespace for an entire file without nesting the remaining contents in {...}. Only one is allowed per file, and it must come before any types are declared.
The new syntax is a single line:
namespace MyNamespace;
class MyClass { ... } // Not indented
This new syntax is an alternative to the three-line indentation style:
namespace MyNamespace
{
class MyClass { ... } // Everything is indented
}
The benefit is reduced indentation in the extremely common case where the entire file is in the same namespace.
Record structs
C# 9 introduced records as a special value-oriented form of classes. In C# 10, you can also declare record structs. Structs in C# already have value equality, but record structs add the == operator, an IEquatable<T> implementation, and a value-based ToString implementation:
public record struct Person
{
    public string FirstName { get; init; }
    public string LastName { get; init; }
}
Just like record classes, record structs can be "positional", meaning they have a primary constructor that implicitly declares public members corresponding to the parameters:
public record struct Person(string FirstName, string LastName);
However, unlike record classes, the implicit public members are mutable auto-implemented properties. This way, record structs become a natural growth story for tuples. For example, if you have a return type of (string FirstName, string LastName) and you want to grow it into a named type, you can easily declare the corresponding positional record struct and keep the mutable semantics.
If you want an immutable record with readonly properties, you can declare the whole record struct readonly (just like you can with other structs):
public readonly record struct Person(string FirstName, string LastName);
C# 10 also supports with expressions not only for record structs, but for all structs as well as anonymous types:
var updatedPerson = person with { FirstName = "Mary" };
F# 6
F# 6 aims to make F# simpler and more efficient. This applies to language design, libraries and tools. Our goal with F# 6 (and beyond) is to remove edge cases in the language that surprise users or hinder learning F#. We're excited to partner with the F# community on this ongoing effort.
Make F# faster and more interoperable
The new syntax task {…} directly creates a task and starts it. This is one of the most important features in F# 6, making asynchronous tasks simpler, more performant, and more interoperable with C# and other .NET languages. Previously, creating a .NET task required using async {…} to create the task and calling Async.StartImmediateAsTask.
The task {…} feature is built on a foundation called "resumable code", RFC FS-1087. Resumable code is a core capability that we expect to use in the future to build other high-performance asynchronous and yielding state machines.
F# 6 also adds other performance features for library authors, including InlineIfLambda and unboxed representations for F# active patterns. A particularly dramatic performance improvement is in the compilation of list and array expressions, which are now up to 4x faster, as well as better and easier to debug.
Makes F# easier to learn and more unified
F# 6 enables the expr[idx] index syntax. So far, F# has used expr.[idx] for indexing. The removal of the dot notation is based on repeated feedback from first-time F# users that the use of dots deviates unnecessarily from their expected standard practice. In new code, we recommend systematic use of the new expr[idx] indexing syntax. As a community, we should all switch to this syntax.
The F# community has contributed important improvements that make the language more uniform in F# 6. The most important of these is the removal of some inconsistencies and limitations in F#'s indentation rules. Other design additions that make F# more uniform include adding the as pattern; allowing "overloaded custom operations" in computation expressions (useful for DSLs); allowing _ discards on use bindings; and allowing %B for binary formatting in output. The F# core library adds new functions for copying and updating lists, arrays, and sequences, as well as additional NativePtr intrinsics. Some legacy features of F#, deprecated since 2.0, now produce errors. Many of these changes better align F# with your expectations, reducing surprises.
F# 6 also adds support for additional "implicit" and "type-directed" conversions. This means fewer explicit upcasts and adds first-class support for .NET-style implicit conversions. F# has also been adjusted to better suit an era of numeric libraries that use 64-bit integers, with implicit widening of 32-bit integers.
Improved F# tools
Tool improvements in F# 6 make everyday coding easier. The new "pipeline debugging" allows you to single-step, set breakpoints, and inspect the intermediate values of F# pipeline syntax input |> f1 |> f2. The debug display of shadowed values has been improved, removing a common source of confusion when debugging. F# tooling is also now more efficient, with the F# compiler performing the parsing phase in parallel. F# IDE tooling has been improved as well. F# scripting is now more robust, allowing you to pin the .NET SDK version used via a global.json file.
Hot Reload
Hot Reload is another performance feature focused on developer productivity. It enables you to make various code edits to a running application, reducing the time you need to wait for your application to rebuild, restart, or re-navigate to where you were after making code changes.
Hot Reload is available through the dotnet watch CLI tool and Visual Studio 2022. You can use Hot Reload with several app types such as ASP.NET Core, Blazor, .NET MAUI, Console, Windows Forms (WinForms), WPF, WinUI 3, Azure Functions, and more.
When using the CLI, just start your .NET 6 application with dotnet watch, make any supported edits, and when you save the file (like in Visual Studio Code), the changes will be applied immediately. If changes are not supported, details are logged to the command window.
This image shows dotnet watch in use. I made edits to a .cs file and a .cshtml file (as noted in the log); both were applied to the code and reflected in the browser very quickly, in less than half a second.
When using Visual Studio 2022, simply launch your application, make supported changes, and apply those changes using the new "Hot Reload" button (pictured below). You can also choose to apply changes on save via the drop-down menu on the same button. When using Visual Studio 2022, Hot Reload is available for multiple .NET versions: .NET 5+, .NET Core, and .NET Framework. For example, you can make a code-behind change to a button's OnClick handler. Changing the application's Main method is not supported.
Note: There is a bug in RuntimeInformation.FrameworkDescription that is visible in this image; it will be fixed soon.
Hot Reload also works alongside the existing Edit and Continue functionality (when stopped at a breakpoint) and XAML Hot Reload for live editing of an application's UI. C# and Visual Basic applications are currently supported (not F#).
Security
Security has been significantly improved in .NET 6. It remains an ongoing focus for the team, spanning threat modeling, cryptography, and defense-in-depth mitigations.
On Linux, we rely on OpenSSL for all cryptographic operations, including TLS (required for HTTPS). On macOS and Windows, we rely on the functionality provided by the operating system for the same purpose. With each new version of .NET, we often need to add support for a new version of OpenSSL. .NET 6 adds support for OpenSSL 3.
The biggest changes in OpenSSL 3 are improved FIPS 140-2 modules and simpler licensing.
.NET 6 requires OpenSSL 1.1 or higher, and will prefer the highest installed version of OpenSSL it can find, up to and including v3. In general, you will most likely start using OpenSSL 3 when your Linux distribution switches to OpenSSL 3 by default. Most distributions don't do this yet. For example, if you install .NET 6 on Red Hat 8 or Ubuntu 20.04, you will not (as of this writing) start using OpenSSL 3.
OpenSSL 3, Windows 10 21H1, and Windows Server 2022 all support ChaCha20Poly1305. You can use this new authenticated encryption scheme in .NET 6 (assuming your environment supports it).
Thanks to Kevin Jones for the Linux support of ChaCha20Poly1305.
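Here's a minimal sketch of the new cipher in use. Key, nonce, and tag sizes follow the ChaCha20-Poly1305 specification (32-byte key, 12-byte nonce, 16-byte tag); checking ChaCha20Poly1305.IsSupported first matters because availability depends on the platform:

using System;
using System.Security.Cryptography;
using System.Text;

if (ChaCha20Poly1305.IsSupported)
{
    byte[] key = RandomNumberGenerator.GetBytes(32);   // 256-bit key
    byte[] nonce = RandomNumberGenerator.GetBytes(12); // 96-bit nonce
    byte[] plaintext = Encoding.UTF8.GetBytes("hello");
    byte[] ciphertext = new byte[plaintext.Length];
    byte[] tag = new byte[16];                          // 128-bit authentication tag

    using var cipher = new ChaCha20Poly1305(key);
    cipher.Encrypt(nonce, plaintext, ciphertext, tag);

    byte[] decrypted = new byte[ciphertext.Length];
    cipher.Decrypt(nonce, ciphertext, tag, decrypted);
    Console.WriteLine(Encoding.UTF8.GetString(decrypted));
}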
We also recently published a new runtime security mitigation roadmap. It is important that the runtime you rely on protects you against textbook attack types, and we are delivering on that need. In .NET 6, we built initial implementations of W^X and Intel Control-flow Enforcement Technology (CET). W^X is fully supported, enabled by default for macOS Arm64, and opt-in for other environments. CET is opt-in and in preview for all environments. We expect both technologies to be enabled by default in all environments in .NET 7.
Arm64
There is a lot of excitement about Arm64 these days, for laptops, cloud hardware, and other devices. The .NET team shares that excitement and is doing its best to keep pace with this industry trend. We work directly with engineers at Arm Holdings, Apple, and Microsoft to ensure our implementations are correct and optimized, and that our plans are aligned. These close partnerships have helped us a lot.
- Special thanks to Apple for sending our team a bushel of Arm64 development kits for us to use prior to the release of the M1 chip, and for providing critical technical support.
- Special thanks to Arm Holdings, whose engineers did a code review of our Arm64 changes and made performance improvements.
We added initial support for Arm64 with .NET Core 3.0, and Arm32 support before that. The team has made significant investments in Arm64 over the last few releases, and that will continue for the foreseeable future. In .NET 6, our primary focus was on supporting the new Apple Silicon chips and the x64 emulation scenario on both the macOS and Windows Arm64 operating systems.
You can install Arm64 and x64 versions of .NET on macOS 11+ and Windows 11+ Arm64 operating systems. We had to make multiple design choices and product changes to make sure it worked.
Our strategy is "pro native architecture". We recommend that you always use the SDK that matches the native architecture: the Arm64 SDK on macOS and Windows Arm64. The SDK is a large body of software, and running natively on an Arm64 chip performs much better than under emulation. We've updated the CLI to make this simple. We're never going to focus on optimizing emulated x64.
By default, if you dotnet run a .NET 6 application with the Arm64 SDK, it runs as Arm64. You can easily switch to running as x64 with the -a argument, for example dotnet run -a x64. The same argument works with other CLI verbs. For more information, see .NET 6 RC2 Update for macOS and Windows Arm64.
I want to call out one subtlety. When you use -a x64, the SDK still runs natively as Arm64. There are fixed points in the .NET SDK architecture where process boundaries exist, and for the most part, a process has to be all Arm64 or all x64. I'm simplifying a bit, but the .NET CLI waits for the last process creation in the SDK architecture and launches that one as the chip architecture you requested, such as x64. That's the process your code runs in. That way, you get the benefits of Arm64 as a developer, while your code runs in the process it needs to. This is only relevant if you need to run some code as x64. If you don't, you can run everything as Arm64, which is great.
Arm64 support
For macOS and Windows Arm64, here's what you need to know:
- .NET 6 Arm64 and x64 SDKs are supported and recommended.
- All supported Arm64 and x64 runtimes are supported.
- The .NET Core 3.1 and .NET 5 SDKs work, but offer fewer features and are not fully supported in some cases.
- dotnet test doesn't yet work correctly with x64 emulation. We are working on it. dotnet test will be improved as part of the 6.0.200 release, and possibly earlier.
For more complete information, see .NET Support for macOS and Windows Arm64.
Linux is missing from this discussion because it doesn't support x64 emulation in the way that macOS and Windows do. As a result, these new CLI features and support approaches don't directly apply to Linux, nor does Linux need them.
Windows Arm64
We have a simple tool to demonstrate the environment in which .NET is running.
C:\Users\rich>dotnet tool install -g dotnet-runtimeinfo
You can invoke the tool using the following command: dotnet-runtimeinfo
Tool 'dotnet-runtimeinfo' (version '1.0.5') was successfully installed.
C:\Users\rich>dotnet runtimeinfo
(dotnet-runtimeinfo prints an ASCII-art "dotnet" banner followed by details about the running environment.)
As you can see, the tool runs natively on Windows Arm64. I'll show you what ASP.NET Core looks like.
macOS Arm64
You can see that the experience on macOS Arm64 is similar, including a demonstration of architecture targeting.
rich@MacBook-Air app % dotnet --version
6.0.100
rich@MacBook-Air app % dotnet --info | grep RID
RID: osx-arm64
rich@MacBook-Air app % cat Program.cs
using System.Runtime.InteropServices;
using static System.Console;
WriteLine($"Hello, {RuntimeInformation.OSArchitecture} from {RuntimeInformation.FrameworkDescription}!");
rich@MacBook-Air app % dotnet run
Hello, Arm64 from .NET 6.0.0-rtm.21522.10!
rich@MacBook-Air app % dotnet run -a x64
Hello, X64 from .NET 6.0.0-rtm.21522.10!
rich@MacBook-Air app %
This image shows how Arm64 execution is the default for the Arm64 SDK and how easy it is to switch between targeting Arm64 and x64 using the -a parameter. The exact same experience works on Windows Arm64.
This image demonstrates the same thing, but using ASP.NET Core. I'm using the same .NET 6 Arm64 SDK as you can see in the image above.
Docker on Arm64
Docker supports containers running with the native architecture and with emulation, with native architecture being the default. This may seem obvious, but it can be confusing since most of the Docker Hub catalog is x64-oriented. You can use --platform linux/amd64 to request an x64 image.
We only support running Linux Arm64 .NET container images on Arm64 OS. This is because we never supported running .NET in QEMU, which is what Docker uses for architectural emulation. It appears that this may be due to a limitation of QEMU.
This image demonstrates a console sample we maintain: mcr.microsoft.com/dotnet/samples. It's an interesting sample because it contains some basic logic to print information about the CPU and memory limits you can use. The image I'm showing sets CPU and memory limits.
Try it yourself: docker run --rm mcr.microsoft.com/dotnet/samples
Arm64 Performance
The Apple Silicon and x64 emulation support projects were very important; however, we have also improved Arm64 performance generally.
This image demonstrates the improvement to zeroing out the contents of a stack frame, which is a common operation. The green line is the new behavior, while the orange line is another (less beneficial) experiment; both improve relative to the baseline, represented by the blue line. For this test, lower is better.
Containers
.NET 6 is better for containers, largely based on all the improvements discussed in this post, for both Arm64 and x64. We've also made key changes that help in a variety of scenarios. The Validating container improvements with .NET 6 post demonstrates some of these improvements being tested together.
Windows container improvements and new environment variables are also included in the November .NET Framework 4.8 container update, which will be released on November 9 (tomorrow).
Release notes can be found in our docker repository:
.NET 6 Container Release Notes
.NET Framework 4.8 November 2021 Container Release Notes
Windows container
.NET 6 adds support for Windows Process Isolation Containers. If you're using Windows containers with Azure Kubernetes Service (AKS), you're relying on process-isolated containers. Process isolation containers can be thought of as very similar to Linux containers. Linux containers use cgroups, and Windows process isolation containers use Job Objects. Windows also offers Hyper-V containers, which provide greater isolation through stronger virtualization. There are no changes in .NET 6 for Hyper-V containers.
The main value of this change is that Environment.ProcessorCount now reports the correct value in Windows process-isolated containers. If a 2-core container is created on a 64-core machine, Environment.ProcessorCount returns 2. In previous versions, this property reported the total number of processors on the machine, regardless of the limit specified by the Docker CLI, Kubernetes, or another container orchestrator/runtime. This value is used by various parts of .NET for scaling purposes, including the .NET garbage collector (although it relies on a related lower-level API). Community libraries also rely on this API for scaling.
We recently validated this new capability with a customer running Windows containers in production on AKS with a large number of pods. They were able to run successfully with 50% of their typical memory configuration, a level that previously caused OutOfMemoryException and StackOverflowException exceptions. They didn't take the time to find the minimum memory configuration, but we'd guess it was significantly less than 50% of their typical configuration. As a result of this change, they will save money by moving to cheaper Azure configurations. That's a nice, easy win just from upgrading.
Optimized scaling
We've heard from users that some applications don't scale optimally when Environment.ProcessorCount reports the correct value. If that sounds like the opposite of what you just read about Windows containers, it sort of is. .NET 6 now provides the DOTNET_PROCESSOR_COUNT environment variable to manually control the value of Environment.ProcessorCount. In a typical use case, an application might be configured with 4 cores on a 64-core machine and scale best with 8 or 16 cores. This environment variable can be used to enable that scaling.
This model might look odd, where the Environment.ProcessorCount and --cpus (via the Docker CLI) values can differ. By default, container runtimes target core equivalents, not actual cores. That means when you ask for 4 cores, you get CPU time equivalent to 4 cores, but your application might (in theory) run on more cores, even all 64 cores of a 64-core machine for short periods of time. That might enable your application to scale better with more than 4 threads (continuing the example), and allocating more might be beneficial. This assumes thread allocation is based on the value of Environment.ProcessorCount. If you choose to set a higher value, your application may use more memory. For some workloads, that's an easy tradeoff. At the very least, it's a new option you can test.
Both Linux and Windows containers support this new feature.
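To sketch why the reported value matters, a lot of code sizes its own concurrency from Environment.ProcessorCount, so the number it returns directly drives scaling (a simple illustration):

using System;
using System.Threading.Tasks;

// Typical pattern: size concurrency from Environment.ProcessorCount.
// In a 2-core Windows process-isolated container this now reports 2.
// Setting DOTNET_PROCESSOR_COUNT overrides the value if you want to scale differently.
int degreeOfParallelism = Environment.ProcessorCount;

Parallel.For(0, 1000, new ParallelOptions { MaxDegreeOfParallelism = degreeOfParallelism }, i =>
{
    // do work for item i...
});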
Docker also provides a CPU groups feature, where your application is affinitized to specific cores. That feature isn't recommended in this scenario, because the number of cores the application can access is then concretely defined. We've also seen some issues with using it with Hyper-V containers, and it isn't really intended for that isolation mode.
Debian 11 “bullseye”
We pay close attention to the life cycle and release schedule of Linux distributions and try to make the best choice on your behalf. Debian is the Linux distribution we use for our default Linux image. If you pull the 6.0 tag from one of our container repositories, you will pull a Debian image (assuming you are using Linux containers). With each new .NET release, we consider whether we should adopt a new Debian release.
As a policy, we don't change the Debian version for the convenience tags (such as 6.0) mid-release. If we did, some applications would certainly break. That means it is very important to choose the Debian version carefully at the start of a release. Also, these images get a lot of use, mostly because they're what the convenience tags reference.
Debian and .NET releases are naturally not planned together. When we started .NET 6, we saw that Debian "bullseye" would likely be released in 2021. We decided to bet on bullseye from the start. We began shipping bullseye-based container images with .NET 6 Preview 1 and decided not to look back. The bet was that the .NET 6 release would lose the race with the bullseye release. As of August 8th, we still didn't know when bullseye would ship, and that was three months before our own release on November 8th. We didn't want to release the production .NET 6 on a preview Linux, but we stuck with our plan of losing the race late.
We were pleasantly surprised when Debian 11 "bullseye" was released on August 14th. We lost the race but won the bet. That means .NET 6 users get the best and latest Debian by default, from day one. We believe Debian 11 and .NET 6 will be a great combination for many users. Pardon the pun, but we hit the bullseye.
Newer distributions include newer major versions of various packages in their package feeds and generally get CVE fixes faster, in addition to having newer kernels. A newer distribution serves users better.
Looking further ahead, we'll start planning support for Ubuntu 22.04 soon. Ubuntu is another Debian family of distributions popular with .NET developers. We want to provide same-day support for new Ubuntu LTS releases.
Kudos to Tianon Gravi for maintaining Debian images for the community and helping us when we have questions.
Dotnet Monitor
dotnet monitor is an important diagnostic tool for containers. It has been available as a sidecar container image for some time, but in an unsupported "experimental" state. As part of .NET 6, we are releasing a .NET 6 based dotnet monitor image that is fully supported in production.
dotnet monitor has been used by Azure App Service as an implementation detail for its ASP.NET Core Linux diagnostic experience. This is one of the expected scenarios, building on dotnet monitor to provide a higher level and higher value experience.
You can now pull new images:
docker pull mcr.microsoft.com/dotnet/monitor:6.0
dotnet monitor makes it easier to access diagnostic information (logs, traces, process dumps) from .NET processes. When running on your desktop, it's easy to access all the diagnostic information you need; however, those familiar techniques may not work in a production environment that uses containers. dotnet monitor provides a unified way to collect these diagnostic artifacts, whether running on your desktop machine or in a Kubernetes cluster. There are two different mechanisms for collecting these diagnostic artifacts:
- HTTP API for ad hoc collection of artifacts. You can call these API endpoints when you already know that your application is having a problem and you are interested in gathering more information.
- Rule-based configuration triggers for always-online collection of artifacts. You can configure rules to collect diagnostic data when required conditions are met, for example, collect process dumps when you have persistently high CPU.
dotnet monitor provides a common diagnostic API for .NET applications that works anywhere with any tool. The "common API" is not a .NET API, but a web API that you can call and query. dotnet monitor includes an ASP.NET web server that interacts directly with the diagnostic server in the .NET runtime and exposes data from it. The design of dotnet monitor enables high-performance monitoring in production and secure use to control access to privileged information. dotnet monitor interacts with the runtime, across container boundaries, via a non-internet-addressable Unix domain socket. That communication model is a good fit for this use case.
Structured JSON logs
The JSON formatter is now the default console logger in the aspnet .NET 6 container images. The default in .NET 5 was the simple console formatter. This change was made so that the default configuration works with automated tools that rely on a machine-readable format such as JSON.
The output of the aspnet image now looks like this:
$ docker run --rm -it -p 8000:80 mcr.microsoft.com/dotnet/samples:aspnetapp
{"EventId":60,"LogLevel":"Warning","Category":"Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository","Message":"Storing keys in a directory u0027/root/.aspnet/DataProtection-Keysu0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.","State":{"Message":"Storing keys in a directory u0027/root/.aspnet/DataProtection-Keysu0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.","path":"/root/.aspnet/DataProtection-Keys","{OriginalFormat}":"Storing keys in a directory u0027{path}u0027 that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed."}}
{"EventId":35,"LogLevel":"Warning","Category":"Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager","Message":"No XML encryptor configured. Key {86cafacf-ab57-434a-b09c-66a929ae4fd7} may be persisted to storage in unencrypted form.","State":{"Message":"No XML encryptor configured. Key {86cafacf-ab57-434a-b09c-66a929ae4fd7} may be persisted to storage in unencrypted form.","KeyId":"86cafacf-ab57-434a-b09c-66a929ae4fd7","{OriginalFormat}":"No XML encryptor configured. Key {KeyId:B} may be persisted to storage in unencrypted form."}}
{"EventId":14,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Now listening on: http://[::]:80","State":{"Message":"Now listening on: http://[::]:80","address":"http://[::]:80","{OriginalFormat}":"Now listening on: {address}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Application started. Press Ctrlu002BC to shut down.","State":{"Message":"Application started. Press Ctrlu002BC to shut down.","{OriginalFormat}":"Application started. Press Ctrlu002BC to shut down."}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Hosting environment: Production","State":{"Message":"Hosting environment: Production","envName":"Production","{OriginalFormat}":"Hosting environment: {envName}"}}
{"EventId":0,"LogLevel":"Information","Category":"Microsoft.Hosting.Lifetime","Message":"Content root path: /app","State":{"Message":"Content root path: /app","contentRoot":"/app","{OriginalFormat}":"Content root path: {contentRoot}"}}
The logger format type can be changed by setting or unsetting the Logging__Console__FormatterName environment variable, or via a code change (see Console log formatting for more details).
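For example, in code you can pick the formatter through the standard logging builder. This is a sketch using Microsoft.Extensions.Logging and assumes the console logging provider package is referenced:

using Microsoft.Extensions.Logging;

using ILoggerFactory factory = LoggerFactory.Create(builder =>
{
    // Use the simple (human-readable) console formatter instead of JSON...
    builder.AddSimpleConsole();

    // ...or opt in to the JSON formatter explicitly.
    // builder.AddJsonConsole();
});

ILogger logger = factory.CreateLogger("Demo");
logger.LogInformation("Hello from the console logger");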
After the change, you will see output like this (just like .NET 5):
$ docker run --rm -it -p 8000:80 -e Logging__Console__FormatterName="" mcr.microsoft.com/dotnet/samples:aspnetapp
warn: Microsoft.AspNetCore.DataProtection.Repositories.FileSystemXmlRepository[60]
Storing keys in a directory '/root/.aspnet/DataProtection-Keys' that may not be persisted outside of the container. Protected data will be unavailable when container is destroyed.
warn: Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
No XML encryptor configured. Key {8d4ddd1d-ccfc-4898-9fe1-3e7403bf23a0} may be persisted to storage in unencrypted form.
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
Content root path: /app
Note: This change does not affect .NET SDKs on developer machines, such as dotnet run. This change is specific to aspnet container images.
OpenTelemetry Metrics support
As part of our focus on observability, we've been adding support for OpenTelemetry for the last few .NET releases. In .NET 6, we added support for the OpenTelemetry Metrics API. By adding support for OpenTelemetry, your application can seamlessly interoperate with other OpenTelemetry systems.
System.Diagnostics.Metrics is the .NET implementation of the OpenTelemetry Metrics API specification. The Metrics API is designed specifically to process raw measurements, with the goal of efficiently and concurrently producing continuous summaries of those measurements.
The API includes the Meter class, which is used to create instrument objects. It exposes four instrument classes, Counter, Histogram, ObservableCounter, and ObservableGauge, to support different metering scenarios. In addition, the API exposes the MeterListener class to allow listening to measurements recorded by instruments, for aggregation and grouping purposes.
The OpenTelemetry .NET implementation will be extended to use these new APIs, adding support for metrics-based observability scenarios.
Library measurement record example
Meter meter = new Meter("io.opentelemetry.contrib.mongodb", "v1.0");
Counter<int> counter = meter.CreateCounter<int>("Requests");
counter.Add(1);
counter.Add(1, KeyValuePair.Create<string, object>("request", "read"));
Listening example
MeterListener listener = new MeterListener();
listener.InstrumentPublished = (instrument, meterListener) =>
{
if (instrument.Name == "Requests" && instrument.Meter.Name == "io.opentelemetry.contrib.mongodb")
{
meterListener.EnableMeasurementEvents(instrument, null);
}
};
listener.SetMeasurementEventCallback<int>((instrument, measurement, tags, state) =>
{
Console.WriteLine($"Instrument: {instrument.Name} has recorded the measurement {measurement}");
});
listener.Start();
Windows Forms
We continue to make important improvements in Windows Forms. .NET 6 includes better control accessibility, the ability to set application-wide default fonts, template updates, and more.
Accessibility improvements
In this release, we've added UIA providers for CheckedListBox, LinkLabel, Panel, ScrollBar, TabControl, and TrackBar, which enable tools like Narrator, and test automation, to interact with the elements of an application.
Default font
You can now set a default font for an application with Application.SetDefaultFont:
void Application.SetDefaultFont(Font font)
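For example (the font family and size here are only for illustration):

using System.Drawing;
using System.Windows.Forms;

// Set an application-wide default font; must be called before the first window is created.
Application.SetDefaultFont(new Font(new FontFamily("Microsoft Sans Serif"), 8f));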
Minimal app
Here is a minimal Windows Forms application with .NET 6:
class Program
{
[STAThread]
static void Main()
{
ApplicationConfiguration.Initialize();
Application.Run(new Form1());
}
}
As part of the .NET 6 release, we've been updating most of our templates to be more modern and minimal, including for Windows Forms. We decided to keep the Windows Forms template somewhat more traditional, in part because of the need to apply the [STAThread] attribute to the application entry point. However, there's more to it than is immediately apparent.
ApplicationConfiguration.Initialize() is a source generation API that makes the following calls behind the scenes:
Application.EnableVisualStyles();
Application.SetCompatibleTextRenderingDefault(false);
Application.SetDefaultFont(new Font(...));
Application.SetHighDpiMode(HighDpiMode.SystemAware);
The parameters of these calls are configurable via MSBuild properties in the csproj or props files.
The Windows Forms designer in Visual Studio 2022 also knows about these properties (for now it only reads the default font) and can show you your application as if it were running:
Template update
The Windows Forms templates for C# have been updated to support new application bootstrapping, global using directives, file-scoped namespaces, and nullable reference types.
More runtime designers
You can now build general-purpose designers (for example, report designers), because .NET 6 has all the pieces that were previously missing for designers and designer-related infrastructure. See this blog post for more details.
Single-file applications
In .NET 6, in-memory single-file applications have been enabled for Windows and macOS. In .NET 5, this deployment type was limited to Linux. You can now publish a single-file binary that is deployed and launched as exactly one file, on all supported operating systems. Single-file apps no longer extract any core runtime assemblies to a temporary directory.
This expanded capability is based on a building block we call the "superhost". The "apphost" is the executable that launches your application in the non-single-file case, such as myapp.exe or ./myapp. The apphost contains code to find the runtime, load it, and start your application with it. The superhost still performs some of those tasks, but uses a statically linked copy of all the CoreCLR native binaries. Static linking is the approach we use to deliver the single-file experience. Native dependencies, such as those that come with a NuGet package, are a notable exception to single-file embedding. By default, they are not included in the single file. For example, the WPF native dependencies are not part of the superhost and therefore result in additional files beside the single-file application. You can use the IncludeNativeLibrariesForSelfExtract setting to embed and extract native dependencies.
Static analysis
We improved the single-file analyzer to allow custom warnings. If your API doesn't work in single-file publishing, you can now mark it with the [RequiresAssemblyFiles] attribute, and a warning will appear at call sites when the analyzer is enabled. Adding the attribute also silences all single-file-related warnings inside that method, so you can use it to propagate warnings up to your public API.
When PublishSingleFile is set to true, the single-file analyzer is automatically enabled for exe projects, but you can also enable it for any project by setting EnableSingleFileAnalysis to true. This is helpful if you want to support libraries as part of a single file application.
In .NET 5, we added warnings for Assembly.Location and some other APIs that behave differently in single-file packages.
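As a sketch of what the annotation looks like (the type, method, and message below are made up for illustration):

using System.Diagnostics.CodeAnalysis;
using System.IO;
using System.Reflection;

public static class PluginLoader
{
    // Call sites get a single-file warning when the analyzer is enabled,
    // because Assembly.Location returns an empty string in single-file apps.
    [RequiresAssemblyFiles("Uses Assembly.Location, which is empty in single-file apps; consider AppContext.BaseDirectory instead.")]
    public static string GetPluginDirectory()
    {
        return Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location)!;
    }
}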
Compression
Single-file bundles now support compression, which can be enabled by setting the EnableCompressionInSingleFile property to true. At runtime, files are decompressed into memory as needed. Compression can provide significant space savings for some scenarios.
Let's look at the size of NuGet Package Explorer published as a single file, with and without compression.
Uncompressed: 172 MB
Compressed: 71.6 MB
Compression can significantly increase application startup time, especially on Unix platforms, because they have a copy-free fast-start path that can't be used with compression. You should test your application with compression enabled to see whether the additional startup cost is acceptable.
Single-file debugging
Single-file applications can currently only be debugged using platform debuggers such as WinDbg. We are looking at adding Visual Studio debugging in a later build of Visual Studio 2022.
Single-file signing on macOS
Single-file applications now meet Apple's notarization and signing requirements on macOS. The specific changes relate to the way we build single-file applications based on discrete file layouts.
Apple began enforcing new requirements for signing and notarization with macOS Catalina. We've been working closely with Apple to understand the requirements and to find solutions that enable a development platform like .NET to work well in that environment. We've made product changes and documented user workflows to satisfy Apple's requirements in each of the last few .NET releases. One of the remaining gaps was single-file signing, which is a requirement for distributing a .NET application on macOS, including in the macOS Store.
IL trimming
The team has been working on IL trimming for multiple releases, and .NET 6 is a big step forward on that journey. We've been working to make the more aggressive trim mode safe and predictable enough to feel confident making it the default. TrimMode=link was previously opt-in and is now the default.
We have a three-pronged trimming strategy:
- Improve the trimming capability of the platform.
- Annotate the platform to provide better warnings and enable others to do the same.
- On top of that, make the default trim mode more aggressive in order to make the app smaller.
Trimming had previously been in preview because the results were unreliable for applications that used unannotated reflection. With trim warnings, the experience should now be predictable. Applications without trim warnings should trim correctly and observe no change in behavior at runtime. Currently, only the core .NET libraries are fully annotated for trimming, but we hope to see the ecosystem annotate for trimming and become trim compatible.
Reduce application size
Let's take a look at this trimming improvement using crossgen, one of the SDK tools. Crossgen can be trimmed with only a handful of trim warnings, which the crossgen team was able to address.
First, let's look at publishing crossgen as a self-contained application without trimming. It comes in at 80 MB (which includes the .NET runtime and all the libraries).
Then we can try the (now legacy) .NET 5 default trim mode, copyused. The result is down to 55 MB.
The new .NET 6 default trim mode, link, further reduces the self-contained size to 36 MB.
We hope that the new link trim mode aligns much better with expectations for trimming: significant savings and predictable results.
Warnings enabled by default
Trim warnings tell you about places where trimming might remove code that is used at runtime. These warnings were previously disabled by default because they were very noisy, largely because the .NET platform did not participate in trimming as a first-class scenario.
We annotated large portions of the .NET libraries so that they produce accurate trim warnings, so we felt it was time to enable trim warnings by default. The ASP.NET Core and Windows desktop runtime libraries are not yet annotated. We plan to annotate ASP.NET Core service components next (after .NET 6). We hope to see the community annotate NuGet libraries after .NET 6 is released.
You can disable warnings by setting <SuppressTrimAnalysisWarnings> to true.
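To illustrate what annotating for trimming looks like, here is a hedged sketch using the attributes the trim analyzer understands (the type and members are made up):

using System;
using System.Diagnostics.CodeAnalysis;

public static class Activation
{
    // The analyzer warns at call sites, because reflection over an arbitrary
    // type name can't be made trim safe.
    [RequiresUnreferencedCode("Uses reflection over a type name that the trimmer cannot see.")]
    public static object CreateByName(string typeName)
        => Activator.CreateInstance(Type.GetType(typeName, throwOnError: true)!)!;

    // Trim-safe alternative: the annotation tells the trimmer (and callers)
    // to preserve the public parameterless constructor of the type passed in.
    public static object Create(
        [DynamicallyAccessedMembers(DynamicallyAccessedMemberTypes.PublicParameterlessConstructor)] Type type)
        => Activator.CreateInstance(type)!;
}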
More information:
- Trim warnings
- Introduction to trimming
- Prepare .NET libraries for trimming
Shared with Native AOT
We implemented the same trim warnings for the Native AOT experiment as well, which should improve the Native AOT compilation experience in much the same way.
Math
We have significantly improved the math API. Some in the community are already enjoying these improvements.
Performance-oriented APIs
Performance-oriented math APIs have been added to System.Math. Their implementations are hardware accelerated if the underlying hardware supports it.
New APIs:
- SinCos is used to calculate Sin and Cos simultaneously.
- ReciprocalEstimate is used to calculate an approximation of 1/x.
- ReciprocalSqrtEstimate is used to calculate an approximation of 1 / Sqrt(x).
New overloads:
- Clamp, DivRem, Min and Max support nint and nuint.
- Abs and Sign support nint.
- The DivRem variant returns a tuple.
Performance improvements:
ScaleB was ported to C#, resulting in calls that are 93% faster. Thanks to Alex Covington.
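Here's a quick sketch exercising a few of the APIs listed above:

using System;

// SinCos computes both values in a single call.
(double sin, double cos) = Math.SinCos(Math.PI / 4);

// DivRem now has an overload that returns a tuple.
(int quotient, int remainder) = Math.DivRem(10, 3);

// Hardware-accelerated approximation of 1 / x where supported.
double approxReciprocal = Math.ReciprocalEstimate(3.0);

Console.WriteLine($"{sin:F3} {cos:F3} {quotient} {remainder} {approxReciprocal:F3}");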
Big integer performance
Parsing BigIntegers from both decimal and hexadecimal strings has been improved. We saw improvements of up to 89%, as shown in the chart below (lower is better).
Thanks Joseph da Silva.
Complex APIs now annotated as readonly
Various System.Numerics.Complex APIs are now annotated as readonly to ensure that no copies are made for readonly values or values passed by in.
Credit to hrrrrustic.
BitConverter now supports floating point to unsigned integer bitcasting
BitConverter now supports DoubleToUInt64Bits, HalfToUInt16Bits, SingleToUInt32Bits, UInt16BitsToHalf, UInt32BitsToSingle, and UInt64BitsToDouble. This should make it easier to do bit manipulation on floating-point values.
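For example, a trivial round trip:

using System;

// Reinterpret the bits of a double as an unsigned 64-bit integer and back.
double value = 1.0;
ulong bits = BitConverter.DoubleToUInt64Bits(value);
double roundTripped = BitConverter.UInt64BitsToDouble(bits);

Console.WriteLine($"{value} -> 0x{bits:X16} -> {roundTripped}");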