M2 chip
- The M2 chip is built on an enhanced second-generation 5nm process and packs more than 20 billion transistors, 25% more than the M1.
- It supports 100GB/s of unified memory bandwidth, 50% more than the M1, and up to 24GB of LPDDR5 memory.
- The CPU keeps the 8-core layout (4 performance cores + 4 efficiency cores) and delivers up to 18% higher performance than the M1.
- The GPU offers up to 10 cores; graphics performance is 25% higher than the M1 at the same power level, and up to 35% higher at maximum power.
- A new-generation Neural Engine can process 15.8 trillion operations per second, 40% more than the M1.
- A new-generation media engine supports 8K H.264 and HEVC encode/decode, includes a dedicated ProRes video engine, and supports decoding and playback of multiple streams of 4K and 8K video.
MacBook Air
The newly released MacBook Air is equipped with the latest M2 chip and supports 1080p camera capture. Seven Mac models now support 1080p video capture: besides the new MacBook Air, they are the 2021 14-inch MacBook Pro, the 2021 16-inch MacBook Pro, the 2021 24-inch 2-port iMac, the 2021 24-inch 4-port iMac, the 27-inch iMac, and the iMac Pro.
In terms of performance, applying filters and image effects in Photoshop is 20% faster than on the previous-generation MacBook Air with M1.
When editing video in Final Cut Pro, performance is 38% faster than on the previous-generation MacBook Air with M1.
13-inch MacBook Pro
The newly released 13-inch MacBook Pro is equipped with the latest M2 chip and supports 720p camera capture.
In terms of performance, ProRes video transcoding is 3 times faster than the previous generation, and image processing in Affinity Photo is 39% faster than the previous generation.
Baldur's Gate 3 runs 39% faster than on the previous generation.
macOS
Continuity Camera
The Continuity Camera feature lets an iPhone serve as a Mac's video capture camera over a wireless connection, bringing iPhone camera features such as Portrait mode, background blur, multi-camera capture, and Center Stage to the Mac. Beyond FaceTime, third-party software such as Zoom, Teams, and Webex can also use Continuity Camera. The feature also supports using the iPhone as the Mac's audio input device.
With Continuity Camera, external capture on the Mac becomes more flexible and easier to use, opening up more possibilities for multi-channel audio and video capture on the Mac.
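For reference, a minimal sketch of how an app might pick up a Continuity Camera through AVFoundation (assuming macOS 13; error handling is simplified): an iPhone used this way surfaces as an ordinary AVCaptureDevice, and the new systemPreferredCamera property tracks the camera the system currently recommends, including a freshly connected iPhone.

```swift
import AVFoundation

// A minimal sketch, assuming macOS 13: an iPhone used via Continuity Camera
// shows up as a regular AVCaptureDevice, so the normal capture pipeline applies.
func startPreferredCamera() throws -> AVCaptureSession? {
    // systemPreferredCamera tracks the system's current camera choice,
    // including a newly connected Continuity Camera iPhone.
    guard let camera = AVCaptureDevice.systemPreferredCamera else { return nil }

    let session = AVCaptureSession()
    let input = try AVCaptureDeviceInput(device: camera)
    if session.canAddInput(input) {
        session.addInput(input)
    }
    session.startRunning()
    return session
}
```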
ScreenCaptureKit
ScreenCaptureKit is a screen capture framework for the Mac introduced in macOS 12.3 that enables high-performance, finely controlled screen capture. This year's WWDC highlighted the following ScreenCaptureKit features:
- Customizable capture of screen content: full-screen capture, adding or removing one or more specified windows from the captured content, and capture of a single window.
- Simultaneous capture of an application's video and audio.
- Configurable capture parameters, including output resolution, capture region, color format, pixel format, cursor visibility, frame rate, and buffer queue depth, with support for changing parameters dynamically.
- Captured frames are buffered on the GPU to reduce memory copies.
- Hardware-accelerated capture, scaling, and pixel/color format conversion keep the CPU cost of screen capture low, with excellent overall performance.
Comparing capture in OBS using the CGWindowListCreateImage API versus ScreenCaptureKit: in the same scene, CGWindowListCreateImage captures at 7fps while ScreenCaptureKit reaches 60fps, with 15% lower memory usage and 50% lower CPU consumption.
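A minimal capture sketch tying these pieces together (assuming macOS 13 for the audio option; frame handling is elided):

```swift
import ScreenCaptureKit
import CoreMedia

// Receives captured frames; video arrives as .screen, audio as .audio.
final class CaptureOutput: NSObject, SCStreamOutput {
    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                of type: SCStreamOutputType) {
        // Hand the sample buffer to your processing pipeline here.
    }
}

struct NoDisplayError: Error {}

// A minimal sketch: capture the main display at 1080p60 with audio.
func startCapture() async throws -> SCStream {
    // Enumerate shareable displays and windows.
    let content = try await SCShareableContent.excludingDesktopWindows(
        false, onScreenWindowsOnly: true)
    guard let display = content.displays.first else { throw NoDisplayError() }

    // Capture the whole display, excluding no windows.
    let filter = SCContentFilter(display: display, excludingWindows: [])

    let config = SCStreamConfiguration()
    config.width = 1920
    config.height = 1080
    config.minimumFrameInterval = CMTime(value: 1, timescale: 60) // 60fps
    config.showsCursor = true
    config.queueDepth = 5
    config.capturesAudio = true // macOS 13+

    let output = CaptureOutput()
    let stream = SCStream(filter: filter, configuration: config, delegate: nil)
    try stream.addStreamOutput(output, type: .screen, sampleHandlerQueue: .main)
    try await stream.startCapture()
    return stream
}
```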
iOS
Spatial Audio
Spatial audio, supported since iOS 14, simulates traditional surround sound on AirPods Pro through directional audio filtering and subtle adjustments to the frequencies delivered to each ear. The simulation goes beyond plain surround sound: it models the iOS device in the user's hand as a set of speakers fixed in position in the room.
Now, in iOS 16, the iPhone's TrueDepth camera can be used to create a personalized spatial audio profile for a more accurate and immersive listening experience.
Metal 3
Metal 3 builds on the previous version with extensive performance optimizations and introduces a number of important new features and APIs. Its main features are introduced one by one below.
New features
- MetalFX Upscaling
With MetalFX Upscaling, the GPU first renders frames at a lower resolution, then uses the MetalFX framework to antialias and upsample them to the target resolution, saving overall rendering time compared with rendering the same scene directly at the target resolution. MetalFX offers two upscaling methods: temporal antialiased upscaling and spatial upscaling.
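A minimal sketch of the spatial upscaler (texture setup and the render loop are elided; `device`, `lowResTexture`, `outputTexture`, and `commandBuffer` are assumed to exist elsewhere):

```swift
import Metal
import MetalFX

// A minimal sketch: upscale a 720p render target to 1440p with the
// MetalFX spatial upscaler.
func makeSpatialScaler(device: MTLDevice) -> MTLFXSpatialScaler? {
    let desc = MTLFXSpatialScalerDescriptor()
    desc.inputWidth = 1280
    desc.inputHeight = 720
    desc.outputWidth = 2560
    desc.outputHeight = 1440
    desc.colorTextureFormat = .rgba16Float
    desc.outputTextureFormat = .rgba16Float
    desc.colorProcessingMode = .perceptual
    return desc.makeSpatialScaler(device: device)
}

// Per frame: render the scene into lowResTexture at 720p first, then:
//   scaler.colorTexture = lowResTexture
//   scaler.outputTexture = outputTexture
//   scaler.encode(commandBuffer: commandBuffer)
```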
- Fast Resource Loading
To reduce GPU resource loading time, Metal 3 adds a fast resource loading API that provides a more direct path from the storage device to the GPU, minimizing wait time so the GPU gets faster access to textures and buffers.
Traditionally, to hide resource loading time, a low-quality version of an asset is displayed until the high-quality version finishes loading.
The new fast resource loading API in Metal 3 delivers faster and more consistent performance and speeds up asset loading, leaving more time for high-quality assets to be drawn.
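A minimal sketch using the new Metal IO command queue to stream a file straight into a Metal buffer (`assetURL` and `assetSize` are hypothetical; real code would check per-command-buffer status):

```swift
import Foundation
import Metal

// A minimal sketch: load a raw asset file directly into a MTLBuffer
// through a Metal 3 IO command queue.
func loadAsset(device: MTLDevice, assetURL: URL, assetSize: Int) throws -> MTLBuffer? {
    let queueDesc = MTLIOCommandQueueDescriptor()
    queueDesc.priority = .normal
    let ioQueue = try device.makeIOCommandQueue(descriptor: queueDesc)

    // A file handle onto the source asset on the storage device.
    let fileHandle = try device.makeIOHandle(url: assetURL)

    guard let buffer = device.makeBuffer(length: assetSize,
                                         options: .storageModeShared) else { return nil }

    let ioCommandBuffer = ioQueue.makeCommandBuffer()
    ioCommandBuffer.load(buffer, offset: 0, size: assetSize,
                         sourceHandle: fileHandle, sourceHandleOffset: 0)
    ioCommandBuffer.commit()
    ioCommandBuffer.waitUntilCompleted()
    return buffer
}
```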
- Offline Shader Compilation
Shader compilation often has to happen at runtime, and runtime compilation can hurt performance, causing frame-rate drops and longer load times. Metal 3 supports offline shader compilation, generating shader binaries when the project is built, which reduces load time.
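A hedged sketch of consuming such prebuilt binaries at runtime through a binary archive (the offline build itself happens in the project's build tooling; "PrebuiltShaders.metallib" is a hypothetical bundled file):

```swift
import Metal

// A minimal sketch: point a pipeline descriptor at a binary archive produced
// at build time, so pipeline creation can skip runtime compilation.
// `vertexFn` and `fragmentFn` are assumed MTLFunctions.
func makePipeline(device: MTLDevice,
                  vertexFn: MTLFunction,
                  fragmentFn: MTLFunction) throws -> MTLRenderPipelineState {
    let archiveDesc = MTLBinaryArchiveDescriptor()
    archiveDesc.url = Bundle.main.url(forResource: "PrebuiltShaders",
                                      withExtension: "metallib")
    let archive = try device.makeBinaryArchive(descriptor: archiveDesc)

    let pipelineDesc = MTLRenderPipelineDescriptor()
    pipelineDesc.vertexFunction = vertexFn
    pipelineDesc.fragmentFunction = fragmentFn
    pipelineDesc.colorAttachments[0].pixelFormat = .bgra8Unorm
    pipelineDesc.binaryArchives = [archive] // look up prebuilt binaries first
    return try device.makeRenderPipelineState(descriptor: pipelineDesc)
}
```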
- Mesh Shaders
In the new rendering pipeline, the traditional vertex stage is replaced by new object and mesh shader stages, allowing more flexible occlusion culling and level-of-detail (LOD) selection on the GPU.
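A minimal sketch of creating such a pipeline from the host side (`objectFn`, `meshFn`, and `fragmentFn` are assumed to be MTLFunctions compiled from object, mesh, and fragment shaders):

```swift
import Metal

// A minimal sketch: build a render pipeline whose geometry stages are an
// object shader and a mesh shader instead of a vertex shader.
func makeMeshPipeline(device: MTLDevice,
                      objectFn: MTLFunction,
                      meshFn: MTLFunction,
                      fragmentFn: MTLFunction) throws -> MTLRenderPipelineState {
    let desc = MTLMeshRenderPipelineDescriptor()
    desc.objectFunction = objectFn
    desc.meshFunction = meshFn
    desc.fragmentFunction = fragmentFn
    desc.colorAttachments[0].pixelFormat = .bgra8Unorm
    let (pipeline, _) = try device.makeRenderPipelineState(descriptor: desc, options: [])
    return pipeline
}
```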
- Ray Tracing Optimization
Metal 3 optimizes ray tracing, saving significant CPU and GPU time:
- Improved acceleration structure build speed
- Move some operations from CPU to GPU to reduce CPU overhead
- Optimize intersection and shading operations with direct access to primitive data
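As context for the first item, a minimal sketch of building a primitive acceleration structure, the step whose build speed Metal 3 improves (`device`, `commandBuffer`, and the vertex data are assumed to exist):

```swift
import Metal

// A minimal sketch: build a triangle acceleration structure on the GPU.
// `vertexBuffer` holds float3 positions at a 16-byte stride; counts are hypothetical.
func buildAccelerationStructure(device: MTLDevice,
                                commandBuffer: MTLCommandBuffer,
                                vertexBuffer: MTLBuffer,
                                triangleCount: Int) -> MTLAccelerationStructure? {
    let geometry = MTLAccelerationStructureTriangleGeometryDescriptor()
    geometry.vertexBuffer = vertexBuffer
    geometry.vertexStride = MemoryLayout<SIMD3<Float>>.stride
    geometry.triangleCount = triangleCount

    let desc = MTLPrimitiveAccelerationStructureDescriptor()
    desc.geometryDescriptors = [geometry]

    // Query sizes, then encode the build on the GPU timeline.
    let sizes = device.accelerationStructureSizes(descriptor: desc)
    guard let accel = device.makeAccelerationStructure(size: sizes.accelerationStructureSize),
          let scratch = device.makeBuffer(length: sizes.buildScratchBufferSize,
                                          options: .storageModePrivate),
          let encoder = commandBuffer.makeAccelerationStructureCommandEncoder()
    else { return nil }

    encoder.build(accelerationStructure: accel, descriptor: desc,
                  scratchBuffer: scratch, scratchBufferOffset: 0)
    encoder.endEncoding()
    return accel
}
```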
- Machine Learning Hardware Acceleration
Metal 3 adds many optimizations to support hardware acceleration for machine learning.
Supported models
HLS
HLS Content Steering
HLS Content Steering is a mechanism proposed by Apple to improve the availability of global streaming services. With content steering, a content provider deploys a steering server that maintains a side channel to every HLS client: during playback, the client periodically requests a Steering Manifest from the steering server, and the server returns a pathway priority list (the priority order of CDN services) based on the client's current situation, so that the latest CDN policies are applied to clients.
This year, HLS Content Steering adds support for path cloning, compatible with Content Steering 1.2. With path cloning, a new CDN pathway can be added to the existing pathway list, and when defining the new pathway you do not need to spell out full URIs: filling in only the host and query-parameter fields enables flexible URI replacement rules.
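A hedged sketch of what a steering manifest with a pathway clone might look like (field names per the Content Steering 1.2 draft; hosts and parameters are hypothetical):

```json
{
  "VERSION": 1,
  "TTL": 300,
  "RELOAD-URI": "https://steering.example.com/manifest",
  "PATHWAY-PRIORITY": ["CDN-B", "CDN-A"],
  "PATHWAY-CLONES": [
    {
      "BASE-ID": "CDN-A",
      "ID": "CDN-B",
      "URI-REPLACEMENT": {
        "HOST": "cdn-b.example.com",
        "PARAMS": { "token": "xyz" }
      }
    }
  ]
}
```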
HLS Interstitials
HLS Interstitials is an HLS specification introduced by Apple in 2021 that makes deploying ad content more convenient on both the server and the client, without relying on the special tags used in SSAI.
This year, HLS Interstitials adds the following features:
- Supports the CUE attribute, which can schedule pre-roll ads, post-roll ads, and one-time interstitials.
- Supports the X-SNAP attribute, which aligns the time offset of ad insertion in live-streaming scenarios.
- New request parameters:
  - _HLS_start_offset: reports how much of the ad has already been played.
  - _HLS_primary_id: identifies the playback session and ad, so the same ad is not served repeatedly.
On the client side, the AVFoundation API provides AVPlayerInterstitialEventController and AVPlayerInterstitialEvent to support interstitial ad playback.
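A minimal client-side sketch (assuming iOS 15 or later; `contentURL` and `adURL` are hypothetical media URLs):

```swift
import AVFoundation

// A minimal sketch: schedule a client-side pre-roll interstitial.
func makePlayerWithPreroll(contentURL: URL, adURL: URL) -> AVPlayer {
    let primaryItem = AVPlayerItem(url: contentURL)
    let player = AVPlayer(playerItem: primaryItem)

    // An event describing one ad item inserted at time zero (a pre-roll).
    let event = AVPlayerInterstitialEvent(
        primaryItem: primaryItem,
        identifier: "preroll-1",
        time: .zero,
        templateItems: [AVPlayerItem(url: adURL)],
        restrictions: [],
        resumptionOffset: .zero,
        playoutLimit: .invalid,
        userDefinedAttributes: [:])

    // The controller takes over scheduling and playback of the events.
    let controller = AVPlayerInterstitialEventController(primaryPlayer: player)
    controller.events = [event]
    return player
}
```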
AVQT
In 2021, Apple launched the Advanced Video Quality Tool (AVQT). Built on the AVFoundation framework, AVQT supports a wide range of video formats, codecs, resolutions, and frame rates in both the SDR and HDR domains, enabling simple and efficient workflows, for example with no need to decode to a raw pixel format first. AVQT uses Metal to offload heavy pixel-level computation to the GPU, achieving high processing speed and typically analyzing video faster than real time. With its ease of use and computational efficiency, AVQT makes it practical to weed out low-quality videos from a video catalog before they reach users in an application.
This year, AVQT brings the following updates:
- Supports generating HTML-based visual reports, making it easy to mark issues and share results.
- Supports analyzing the video quality of a clip between a given start and end time.
- Expanded YUV format support to 20 formats, covering 4:4:4, 4:2:2, 4:2:0, 4:1:1, and 4:1:0 subsampling at 8-, 10-, 12-, and 16-bit depths; supports analysis of uncompressed raw video; supports analysis of video compressed and decoded outside the Apple ecosystem.
- Supports Linux, enabling server-side deployment.
DriverKit
DriverKit is a framework for developing device drivers. It currently supports driver development for the following device families: Networking, Block Storage, Serial, Audio, USB, PCI, HID, SCSI Controllers, and SCSI Peripherals.
The DriverKit updates this year mainly include:
AudioDriverKit now supports registering a real-time callback that fires on every I/O operation, which can be used on real-time threads for tasks such as signal processing.
Introduces a new permission setting parameter.
DriverKit now comes to the iPad: USBDriverKit, PCIDriverKit, and AudioDriverKit are available on iPadOS 16, on iPads with the M1 chip.
EDR
EDR (Extended Dynamic Range) is a rendering technology introduced by Apple that lets a device correctly display SDR and HDR content on screen at the same time. EDR does not simply brighten HDR regions: as the overall screen brightness rises, it lowers the white point of non-HDR regions so they do not appear brighter, which makes HDR content stand out.
At this WWDC, Apple again walked through the EDR fundamentals across several sessions: how to display EDR video frames with the Core Image library, and how to decode HDR video with the AVFoundation library so it supports EDR display and playback.
The following new EDR capabilities were introduced:
- iOS and iPadOS now support the EDR API.
- The 12.9-inch iPad Pro adds EDR rendering support in two new contexts: Reference Mode and Sidecar mode (Sidecar is the Apple technology that lets an iPad act as an extended display for a Mac).
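A minimal sketch of opting into EDR on iOS 16 with a CAMetalLayer and querying the available headroom (the layer is assumed to back your view):

```swift
import UIKit
import Metal
import QuartzCore

// A minimal sketch, assuming iOS 16 and a CAMetalLayer-backed view.
func configureEDR(metalLayer: CAMetalLayer) {
    // Opt the layer into extended dynamic range output.
    metalLayer.wantsExtendedDynamicRangeContent = true
    metalLayer.pixelFormat = .rgba16Float
    metalLayer.colorspace = CGColorSpace(name: CGColorSpace.extendedLinearDisplayP3)

    // Headroom above SDR white currently available (1.0 means SDR only);
    // render HDR values up to this multiple of white.
    let current = UIScreen.main.currentEDRHeadroom
    let potential = UIScreen.main.potentialEDRHeadroom
    print("EDR headroom: \(current) of a potential \(potential)")
}
```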
Summary
The NetEase Yunxin Audio and Video Call 2.0 SDK already supports 4K and 8K ultra-high resolutions and is used in a number of conference systems. Paired with 4K/8K cameras or other high-definition video sources, the newly released MacBook Air and 13-inch MacBook Pro can deliver a great ultra-high-resolution experience.
The SDK's GPU-based video preprocessing framework makes full use of GPU computing power to save CPU consumption and delivers excellent preprocessing performance; going forward, the new features of Metal 3 will further extend the SDK's performance advantages.
The SDK already supports dual-camera video capture on PC and Mac to meet users' video needs in different scenarios; the Continuity Camera feature brings even more application scenarios and room to experiment on the Mac.
The SDK also supports high-performance screen capture that grabs only the required windows and content; ScreenCaptureKit adds another strong technical option for screen capture on the new macOS.
Overall, the new audio, video, and multimedia features at this WWDC give developers plenty of room for imagination. With these new capabilities, the NetEase Yunxin Audio and Video Call 2.0 SDK will become even more powerful.