Authors: Zhiqiang, Sunfei, Wangli, software development engineers at Huawei
Introduction to UI frameworks and the industry landscape
UI, the user interface, mainly covers vision (visual content such as images, text, and animation) and interaction (user operations such as button clicks, list scrolling, and image zooming). A UI framework provides the infrastructure for developing a UI, such as view layout, UI components, and event-handling mechanisms.
From the perspective of operating system support, UI frameworks can generally be divided into native UI frameworks and cross-platform UI frameworks.
1. Native UI frameworks. These generally refer to the UI frameworks that come with an operating system; typical examples are UIKit on iOS and the View framework on Android. Such frameworks are deeply bound to their operating system and generally run only on it, but their functionality, performance, and development and debugging experience are well integrated with that system.
2. Cross-platform UI frameworks. These generally refer to independent UI frameworks that can run on different platforms (OSs). Typical examples include HTML5 and front-end frameworks extended from it such as React Native, as well as Google's Flutter. The goal of a cross-platform UI framework is that code needs to be written only once and can then be deployed to different operating systems with little or no modification. Of course, cross-platform support comes at a price: the framework must bridge the differences between platforms, such as differences in UI presentation and APIs.
From a programming point of view, UI frameworks can generally be divided into imperative UI frameworks and declarative UI frameworks:
1. Imperative UI frameworks. These are process oriented: you tell the "machine" the specific steps and instruct it to follow them. For example, Android's native View framework and iOS's UIKit provide a series of APIs for developers to manipulate UI components directly, such as locating a specific component and changing its properties. The advantage of this approach is that developers control the exact execution path, so experienced developers can write highly efficient implementations. However, developers must understand many API details and spell out a concrete execution path, so the barrier to entry is high and the result depends heavily on the developer's own skill. In addition, because the code is tightly bound to a specific implementation, flexibility and scalability are limited in cross-device scenarios.
2. Declarative UI frameworks. These are result oriented: you tell the "machine" what you need, and it is responsible for how to do it. For example, the web front-end framework Vue and iOS's SwiftUI render the corresponding UI from a declarative description and, combined with the corresponding programming model, automatically update the UI when the data changes.
The advantage of this approach is that developers only describe the result, while the implementation and its optimization are handled by the framework. Moreover, because the description of the result is separated from the concrete implementation, the implementation is relatively flexible and easy to extend. On the other hand, the demands on the framework itself are higher: it needs complete, intuitive description capabilities and must process the description information efficiently.
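To make the contrast concrete, here is a minimal sketch in plain JavaScript (the structures and function names are invented for illustration, not any real framework's API): the imperative style mutates a specific component directly, while the declarative style only changes data and derives the UI from it.

```javascript
// Imperative style: locate a component and mutate it step by step.
const label = { type: "text", content: "0" };
function incrementImperative() {
  label.content = String(Number(label.content) + 1); // manual update path
}

// Declarative style: describe the UI as a function of state; the
// "framework" (here, a one-line render function) derives the UI from data.
let state = { count: 0 };
const view = (s) => ({ type: "text", content: String(s.count) });
function incrementDeclarative() {
  state = { ...state, count: state.count + 1 }; // only the data changes
}

incrementImperative();
incrementDeclarative();
const rendered = view(state);
console.log(label.content, rendered.content); // both show "1"
```

Both paths end in the same visible result; the difference is who owns the update steps, the developer or the framework.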
UI frameworks are a core component of application development. Looking across the industry, the main development trends are as follows:
1. From imperative UI to declarative UI
For example, from UIKit to SwiftUI on iOS, and from the View framework to Jetpack Compose on Android.
This makes UI development more intuitive and convenient.
2. Deeper integration between the UI framework and the language runtime
SwiftUI, Jetpack Compose, and Flutter all make use of their language's features. For UI description, Swift in SwiftUI and Kotlin in Jetpack Compose both simplify the UI description syntax; for performance, Swift introduces lightweight structs and other language features that enable fast memory allocation and release, while the Dart runtime used by Flutter is optimized for small-object memory management.
3. Cross-platform (OS) capability
Cross-platform (OS) capability allows one codebase to be reused on different OSs, mainly to improve development efficiency and reduce development cost. It also brings a series of challenges, such as performance on different platforms and the consistency of capabilities and rendering results. The industry keeps evolving here, mainly along several routes:
1. JS/Web solutions, such as HTML5, which use the standardized JS/Web ecosystem and achieve cross-platform goals through a corresponding web engine;
2. JS + native hybrid approaches, such as React Native and Weex, which bridge JS to native UI components so that one set of application code can run on different OSs;
3. Platform-independent self-drawn UI plus a new language, as in Flutter, where the framework layer draws the entire UI on an underlying canvas and combines this with the Dart language into a complete UI framework. From the very beginning of its design, Flutter treated cross-platform capability as a key competitive strength to attract more developers.
Interestingly, some native development frameworks have also begun to evolve toward cross-platform support. For example, Jetpack Compose, Android's native development framework, now targets cross-OS support and plans to expand to desktop platforms such as Windows and macOS.
In addition, with the popularization of smart devices, multi-device scenarios bring new challenges to UI frameworks and application development: differences in device form (screen size, resolution, shape, interaction mode, and so on), differences in device capability (from devices with hundreds of kilobytes of memory to devices with gigabytes), and the need for applications to collaborate across different devices.
What is the ACE UI framework
The full name of ACE is Ability Cross-platform Environment (meta-ability cross-platform execution environment). It is the UI framework Huawei designed for applications on HarmonyOS. Combined with HarmonyOS's basic execution unit, the Ability, the language and runtime, and the various platform (OS) capability APIs, the ACE UI framework forms the basis of HarmonyOS application development and enables cross-device distributed scheduling and installation-free atomic services.
ACE provides two development languages for developers to choose from: Java and JavaScript. Java is supported only on devices with larger memory, such as large screens, phones, and tablets, while JavaScript can be used on devices ranging from hundreds of kilobytes of memory up to gigabytes.
In multi-device scenarios, because device forms and device capabilities differ so widely, no single UI framework in the industry solves all of these problems well.
The overall design ideas of the ACE UI framework are therefore as follows:
1. Establish a layered mechanism with an efficient UI backend that can be decoupled from the OS platform, to deliver a consistent UI experience
2. Expand the application ecosystem through multiple front ends, and keep evolving development efficiency in combination with declarative UI
3. In the framework layer, combine the language and runtime with distributed and componentized design to further enhance the cross-device experience
ACE parses the application's UI description, creates the specific backend UI components, performs layout calculation and resource loading, and generates concrete drawing instructions. These instructions are sent to the rendering engine, which converts them into screen pixels, finally turning the application into a visible interface displayed to the user on the device.
The overall architecture of the ACE UI framework is shown in the figure below, which is mainly composed of the front-end framework layer, the bridge layer, the engine layer and the platform abstraction layer. We will introduce them one by one below.
1. Front-end framework layer
This layer mainly includes the corresponding development paradigms (such as the mainstream Web-like development paradigm), components/APIs, and the MVVM (Model-View-ViewModel) programming model. The Model is the data layer, representing data read from a data source; the View is the UI layer, presenting that data to the user in some form; and the ViewModel is the bridge between data and view. It binds the view and the data bidirectionally, so that data changes are reflected in the view in time and user edits on the view are propagated back to the underlying data, realizing an automatic data-driven UI.
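As an illustration of the MVVM binding just described (a toy sketch, not ACE's actual API), a ViewModel can wrap the model in a proxy so that any data change automatically re-renders the view:

```javascript
// Minimal MVVM sketch: the ViewModel bridges the Model (data) and the
// View (render output). Writing to the ViewModel updates the Model and
// automatically re-renders the View.
function createViewModel(model, render) {
  let output = render(model); // initial view
  const vm = new Proxy(model, {
    set(target, key, value) {
      target[key] = value;     // update the Model
      output = render(target); // the data change drives the View
      return true;
    },
  });
  return { vm, getView: () => output };
}

const { vm, getView } = createViewModel(
  { title: "Hello" },
  (m) => `<text>${m.title}</text>` // View: a pure function of the data
);

vm.title = "HarmonyOS"; // a "user edit" or data-source change
console.log(getView()); // <text>HarmonyOS</text>
```

The same proxy hook could also propagate view-side input back into the model, completing the bidirectional binding.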
The development paradigms can be extended to support the ecosystem, and different paradigms are uniformly adapted to the underlying engine layer.
2. Bridge layer
This layer serves as an intermediate layer connecting the front-end development paradigms to the underlying engine (including the UI backend and the language & runtime).
3. Engine layer
This layer mainly includes two parts: UI backend engine and language execution engine.
1. UI backend engine. Built in C++, it includes UI components, layout and views, animation and events, a self-drawing rendering pipeline, and the rendering engine.
On the rendering side, we try to keep each component as small and flexible as possible. This design provides flexible UI capabilities to different front-end frameworks: the C++ components are combined on demand, layout calculation and rendering are parallelized, and, combined with the upper-layer development paradigm, a partial-update mechanism minimizes view changes, achieving efficient UI rendering.
The engine layer also provides basic capabilities such as the components' rendering pipeline, animation, theming, and event processing. At present, the Flutter engine is reused for basic graphics rendering, font management, text layout, and related capabilities; the bottom layer is implemented with Skia or other graphics libraries, with GPU hardware-accelerated rendering through OpenGL.
For multi-device UI adaptation, the differing forms of different devices are accommodated through a variety of atomic layout capabilities (automatic wrapping, hiding, proportional scaling, and so on), polymorphic UI controls (a unified description with diverse forms of expression), and a unified interaction framework (different interaction methods normalized into unified event processing).
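The atomic layout capabilities mentioned above can be illustrated with a simplified sketch (the rules and item shapes below are invented for illustration and are not ACE's real algorithm): given an available width, low-priority components are hidden and the rest are scaled proportionally.

```javascript
// Simplified "atomic layout" sketch: hide low-priority items when space
// is tight, then scale the remaining items proportionally by weight.
function atomicLayout(items, availableWidth) {
  // Keep higher-priority items; drop items until minimum widths fit.
  let visible = [...items].sort((a, b) => b.priority - a.priority);
  while (visible.reduce((s, i) => s + i.minWidth, 0) > availableWidth) {
    visible.pop(); // hide the lowest-priority item
  }
  // Distribute the available width proportionally by weight.
  const totalWeight = visible.reduce((s, i) => s + i.weight, 0);
  return visible.map((i) => ({
    id: i.id,
    width: (availableWidth * i.weight) / totalWeight,
  }));
}

const items = [
  { id: "icon",  minWidth: 40, weight: 1, priority: 3 },
  { id: "title", minWidth: 80, weight: 2, priority: 2 },
  { id: "extra", minWidth: 60, weight: 1, priority: 1 },
];
console.log(atomicLayout(items, 120)); // "extra" is hidden on a narrow screen
```

On a wider screen, the same description would keep all three items and simply scale them, which is the point of describing layout with such composable rules instead of fixed coordinates.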
In addition, the engine layer includes a capability-extension infrastructure for extending custom components and system APIs.
2. Language & runtime execution engine. It can be switched to different runtime execution engines as needed to match the differing capabilities of different devices.
4. Platform Abstraction Layer
Through platform abstraction, this layer concentrates platform dependencies into a few necessary interfaces, such as the underlying canvas and the general thread and event mechanisms. It provides the infrastructure for cross-platform support and makes a consistent UI rendering experience achievable.
Correspondingly, the companion developer tool (HUAWEI DevEco Studio), combined with ACE UI's cross-platform rendering infrastructure and adaptive rendering, achieves a rendering experience consistent with the device and real-time UI preview across multiple devices.
In addition, the ACE UI framework is designed as a scalable architecture: the front-end framework, language runtime, and UI backend are all decoupled and can each have different implementations. This makes it possible to deploy onto lightweight devices with only hundreds of kilobytes of memory, as follows:
In the lightweight implementation of ACE UI, the core of the front-end framework is moved down into C++ to reduce the memory footprint of the JS part, with stricter memory allocation and management done in C++ and a more lightweight JS engine. The UI part uses a lightweight UIKit combined with a lightweight graphics engine, achieving a very small memory footprint. The guaranteed interface capabilities are a subset of the full capability set, which ensures that applications that run on lightweight devices can run on higher-end devices without redevelopment. This is the advantage of the unified ACE JS development paradigm: once an application is developed with it, developers do not need to care which front-end framework, JS engine, or backend UI components are used at runtime; the framework ensures the best running performance on each platform. Because of resource constraints on lightweight devices, the supported API capabilities are relatively limited, but the APIs in the common subset are fully shared.
In summary, the ACE UI framework has the following characteristics:
1. Supports a mainstream language ecosystem: JavaScript
2. Supports a Web-like development paradigm and the MVVM mechanism, with an architecture that can support multiple front-end development paradigms to further simplify development
3. Achieves high performance and a consistent cross-platform rendering experience through a unified UI backend
4. Further lowers the UI development threshold for different device forms through polymorphic UI, atomic layout, unified interaction, and a scalable runtime design, and allows one set of code to be deployed across devices (covering devices from hundreds of kilobytes of memory up to gigabytes) through a unified development paradigm
ACE UI framework rendering process analysis
Next, we walk through the rendering technology of the ACE UI framework via the complete rendering process of an ACE JS application on a mobile phone.
1) Threading model
When an ACE JS application starts, a series of threads are created to form an independent thread model to achieve a high-performance rendering process.
The process of each ACE JS application contains a unique Platform thread and an asynchronous task pool of several background threads:
• Platform thread: the main thread of the current platform, that is, the application's main thread. It is mainly responsible for interacting with the platform layer, the application life cycle, and creating the window environment.
• Background thread pool: used for low-priority parallel asynchronous tasks, such as network requests and asset loading. In addition, each instance includes a series of dedicated threads:
• JS thread: the execution thread of the JS front-end framework. The application's JS logic and the parsing and construction of the application's UI all run on this thread.
• UI thread: the engine's core thread. The construction of the component tree and the core logic of the whole rendering pipeline run here, including render-tree construction, layout, drawing, and animation scheduling.
• GPU thread: modern rendering engines support GPU hardware acceleration to make full use of hardware performance. On this thread, a GPU-accelerated OpenGL environment is created through the system's window handle; it is responsible for rasterizing the content of the whole render tree, rendering and compositing each frame's content directly onto the window's Surface, and sending it for display.
• IO thread: mainly for asynchronous file IO. This thread creates an off-screen GL environment in the same share group as the GPU thread's GL environment, so resources can be shared; decoded image content can be uploaded on this thread to generate GPU textures directly, enabling more efficient image rendering.
The role of each thread will be further mentioned in the subsequent rendering process.
2) Front-end script parsing
The ACE UI framework supports different development paradigms and can be connected to different front-end frameworks.
Taking the Web-like development paradigm as an example, the application the developer writes is compiled by the development toolchain into a Bundle file that the engine can execute. When the application starts, the Bundle file is loaded on the JS thread and its content is parsed and executed by the JS engine, finally generating a structured description of the front-end components and establishing the data-binding relationships. For example, an application containing a few simple texts generates a tree structure similar to the figure below, where each component node carries that node's attributes and style information.
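As a simplified illustration of this stage's output (the node shape below is hypothetical, not the engine's real data structure), the parsed page can be thought of as a tree whose nodes record a component type, attributes, styles, and children:

```javascript
// Hypothetical structured description produced after parsing the bundle:
// each node records its component type, attributes, styles, and children.
function parsePage(desc) {
  return {
    type: desc.type,
    attrs: desc.attrs || {},
    styles: desc.styles || {},
    children: (desc.children || []).map(parsePage),
  };
}

const page = parsePage({
  type: "div",
  styles: { flexDirection: "column" },
  children: [
    { type: "text", attrs: { value: "Hello" }, styles: { fontSize: 30 } },
    { type: "text", attrs: { value: "World" }, styles: { fontSize: 20 } },
  ],
});
console.log(page.children.length); // 2 text nodes under the root div
```

This structured description, together with the data bindings, is what the front end hands to the rendering pipeline in the next step.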
3) Render pipeline build
As shown in the figure above, after parsing, the front-end framework docking layer requests the creation of the components provided by the ACE rendering engine according to the specific component specification.
The front-end framework docking layer implements the definition of front-end components through the Components provided by the ACE engine layer. A Component is a declarative description of a UI component implemented in C++; it describes the component's properties and styles and is used to generate the component's entity elements. Each front-end component maps to a Composed component, which represents a composite UI component built by combining different sub-components. Each Composed component is the basic update unit between the front end and the back end.
Taking the front-end component tree above as an example, each node is described by a set of Composed components, with the correspondence shown in the figure below. The correspondence is only an example; in real scenarios it may be more complex.
Once each front-end node has its corresponding Component, a complete description structure of the Page is formed, and the rendering pipeline is notified to mount the new page.
Before the Page is mounted, the rendering pipeline has already created two key core structures: the Element tree and the Render tree.
Element tree: an Element is an instance of a Component, representing a concrete component node. The Element tree formed from them maintains the tree structure of the interface throughout the runtime, which makes it convenient to compute partial-update algorithms. For some complex components, logical management of sub-components is also implemented on this data structure.
Render tree: for each displayable Element, a corresponding RenderNode is created, holding that node's display information. The Render tree formed from them maintains all the information needed to render the interface, including position, size, and drawing commands. The subsequent layout and drawing of the interface are all performed on the Render tree.
When the application starts, the initially formed Element tree has only a few basic nodes, generally root, overlay, and stage, with the following functions:
RootElement: the root node of the Element tree, responsible only for drawing the global background color
OverlayElement: a global floating-layer container used to manage global drawing scenes such as pop-ups
StageElement: a Stack container serving as the global "stage". Every loaded page is mounted under this "stage", which also manages the transition effects between the application's pages.
While the Element tree is being created, the Render tree is created synchronously. The initial state is as follows:
When the front-end framework docking layer notifies the rendering pipeline that the page is ready, the page is mounted onto the rendering pipeline when the next frame synchronization signal (VSync) arrives. Concretely, Elements are instantiated from Components, and each successfully created Element synchronously creates its corresponding RenderNode:
As shown in the figure above, the goal is to mount the Component description of the entire Page under the StageElement. If there is no Element node under the current Stage, the Element nodes corresponding to the Components are generated recursively one by one. For a combined-type ComposedElement, a reference to the Element is also recorded in a Composed Map, which allows fast lookup during subsequent updates. For container nodes and renderable nodes, the corresponding RenderNode is created and attached to the Render tree.
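The recursive mount described above can be sketched as follows (a conceptual model with invented shapes; the real engine does this on C++ Element/RenderNode trees): composed nodes are recorded in a map for fast lookup on update, while renderable nodes also join the Render tree.

```javascript
// Recursively inflate Components into Elements. Composed Elements are
// recorded in a map (for fast lookup during updates); renderable nodes
// get a RenderNode attached to the Render tree.
const composedMap = new Map();
function mount(component, renderParent) {
  const element = { component, children: [] };
  if (component.kind === "composed") {
    composedMap.set(component.id, element); // remember for partial updates
  } else {
    element.renderNode = { children: [] };  // visible: join the Render tree
    renderParent.children.push(element.renderNode);
    renderParent = element.renderNode;
  }
  for (const child of component.children || []) {
    element.children.push(mount(child, renderParent));
  }
  return element;
}

const renderRoot = { children: [] };
const pageElement = mount(
  { kind: "composed", id: "page", children: [{ kind: "render", children: [] }] },
  renderRoot
);
console.log(composedMap.has("page"), renderRoot.children.length); // true 1
```

Note how the composed node itself has no RenderNode; only its renderable descendants appear in the Render tree, matching the Element tree / Render tree split described above.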
When the Element tree and Render tree of the current page are generated, the complete process of page rendering construction ends.
4) Layout drawing mechanism
Next comes the layout and drawing stage. Both layout and drawing are carried out on the Render tree; each RenderNode implements its own layout algorithm and drawing method.
Layout
The layout process computes, through various layout algorithms, the real size and position of each RenderNode in its relative space.
As shown in the figure below, when a node's content changes, it marks itself as needLayout and propagates the mark up to the layout boundary (ReLayout Boundary). A layout boundary marks the range for re-layout: in general, if a node's layout parameters (LayoutParam) are strongly constrained, for example its expected maximum and minimum sizes are equal, it can serve as a layout boundary. Layout is a depth-first traversal: starting from the layout boundary, parent nodes pass the LayoutParam down to their children, and the children's sizes and positions are computed from the bottom up.
For each node, layout is divided into three steps:
1. The current node recursively calls the layout method of each child node, passing down the layout parameters (LayoutParam), including the maximum and minimum sizes expected by the layout
2. Each child node computes its own size from the layout parameters using its own layout algorithm
3. The current node obtains the laid-out sizes of its children, computes each child's position according to its own layout algorithm, and stores the relative position on the child
Following this process, after one layout traversal every node's size and position have been calculated, and drawing can proceed.
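The three steps above can be sketched as a recursive traversal. The example below is a deliberately simplified row layout with invented node shapes, assuming leaf nodes have an intrinsic size; real layout algorithms differ per component.

```javascript
// Each node lays out its children first (passing constraints down), then
// computes its own size and assigns each child a relative position.
function layout(node, layoutParam) {
  if (!node.children || node.children.length === 0) {
    // Leaf: clamp the intrinsic size to the constraint (step 2).
    node.size = {
      width: Math.min(node.intrinsicWidth, layoutParam.maxWidth),
      height: node.intrinsicHeight,
    };
    return node.size;
  }
  let x = 0, height = 0;
  for (const child of node.children) {
    // Step 1: recursively lay out the child, passing LayoutParam down.
    const size = layout(child, { maxWidth: layoutParam.maxWidth - x });
    // Step 3: store the child's relative position (simple row layout).
    child.position = { x, y: 0 };
    x += size.width;
    height = Math.max(height, size.height);
  }
  // The container's own size derives from its children's sizes.
  node.size = { width: x, height };
  return node.size;
}

const root = {
  children: [
    { intrinsicWidth: 100, intrinsicHeight: 40 },
    { intrinsicWidth: 60, intrinsicHeight: 30 },
  ],
};
layout(root, { maxWidth: 360 });
console.log(root.size); // { width: 160, height: 40 }
```

A node whose constraint fixes its size exactly would be a layout boundary in the scheme above: re-laying out its subtree can never change its own size, so the traversal need not go higher.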
Draw
Like layout, drawing is also a depth-first traversal, calling the Paint method of each RenderNode in turn. At this stage, drawing only records each node's drawing commands into the current drawing context, based on the size and position computed during layout.
Why record commands instead of drawing directly? To make full use of GPU hardware acceleration, modern rendering engines generally use a DisplayList mechanism: during drawing, only the commands are recorded, and they are converted into OpenGL commands in one batch during GPU rendering, which maximizes graphics-processing efficiency. The drawing context mentioned above therefore provides a canvas (Canvas) that records drawing commands, and each independent drawing context can be thought of as a layer.
To improve performance, the concept of a layer (Layer) is introduced here. Rendering is usually sped up by splitting the rendered content into multiple layers: content that changes frequently gets its own layer, so frequent refreshes of that layer do not force other content to be redrawn. This improves performance, reduces power consumption, and also enables optimizations such as GPU caching. Each RenderNode can decide whether it needs its own layer.
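A recording canvas can be sketched as follows (an illustrative DisplayList, not the engine's actual implementation): drawing calls only append commands, which are replayed later, for example during rasterization.

```javascript
// A canvas that records draw commands instead of executing them now.
class RecordingCanvas {
  constructor() { this.commands = []; }
  drawRect(x, y, w, h, color) {
    this.commands.push({ op: "rect", x, y, w, h, color });
  }
  drawText(x, y, text) {
    this.commands.push({ op: "text", x, y, text });
  }
  // Replay later, e.g. converting each command to GPU calls at raster time.
  replay(backend) {
    for (const cmd of this.commands) backend(cmd);
  }
}

const canvas = new RecordingCanvas();
canvas.drawRect(0, 0, 100, 40, "#fff"); // background
canvas.drawText(10, 20, "Hello");       // content on top

let executed = 0;
canvas.replay(() => { executed += 1; }); // rasterization "executes" commands
console.log(canvas.commands.length, executed); // 2 2
```

Because the commands are just data, a layer's DisplayList can be kept, replayed into a cached texture, and reused across frames when its content has not changed.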
As shown in the figure below, the drawing process starts from the nodes that need drawing, selects the nearest node that requires its own layer, and executes each node's Paint method from top to bottom.
For each node, drawing is divided into four steps:
1. If the current node needs its own layer, create a new drawing context and provide a canvas that records drawing commands
2. Record the background drawing commands on the current canvas
3. Recursively call the drawing methods of the child nodes and record their drawing commands
4. Record the foreground drawing commands on the current canvas
When a complete drawing pass finishes, we obtain a complete Layer tree containing all of this frame's drawing information: each layer's position, transform information, clip information, and the drawing order of each element. The next step, rasterization and composition, displays this frame's content on the interface.
5) Rasterization Synthesis Mechanism
After the above drawing process ends, the GPU thread will be notified to start the compositing process.
As shown in the figure above, in the rendering pipeline the output of the UI thread (UI Thread) is the LayerTree; the UI thread acts as a producer, adding each produced LayerTree to the rendering queue. The compositor on the GPU thread (GPU Thread) acts as the consumer: in each new rendering cycle it takes a LayerTree from the rendering queue and composites it.
For Layers that need caching, rasterization is also performed to generate GPU textures. Rasterization is the process of replaying the commands recorded in a Layer to produce actual pixels, which are stored in the texture's graphics memory.
The compositor obtains the current Surface from the system's window, composites the textures generated for each Layer, and writes the result into the current Surface's graphics memory (Graphic Buffer), which holds the rendering result for the current frame. Finally, the result must be submitted to the system compositor for on-screen composition, as shown in the figure below:
When the GPU thread's compositor finishes a frame, it performs a SwapBuffer operation and submits the generated Graphic Buffer to the frame buffer queue (Buffer Queue) shared with the system compositor. The system compositor takes the latest content from each producer for final composition. In the figure above, the system compositor composites the current application's content together with other system content, such as the SystemUI status bar and navigation bar, and writes the result into the frame buffer (Frame Buffer) corresponding to the screen. The LCD driver reads from this buffer to refresh the screen, and the content is finally displayed.
6) Local update mechanism
Steps 1 through 5 above complete the first full rendering pass. Subsequent operations, such as user input, animation, and data changes, may cause the page to refresh. If only some elements have changed, a global refresh is unnecessary; a partial update is enough. How is a partial update done? The process is as follows.
Taking the figure above as an example, JS code updates the data, the data-binding module automatically triggers updates to the front-end components' properties, and an asynchronous update request is then initiated through the JS engine. The front-end component builds a new set of Composed patches from the changed properties as input for the rendering pipeline update.
As shown in the figure above, when the next VSync arrives, the rendering pipeline starts the update process on the UI thread. Using the Id of the Composed patch, it looks up the position of the corresponding ComposedElement on the Element tree via the ComposedMap, then updates the Element tree with the patches. Starting from the ComposedElement, it compares layer by layer: if the node types are the same, the corresponding attributes and RenderNode are updated directly; if they differ, a new Element and RenderNode are created. The affected RenderNodes are marked needLayout and needRender.
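The compare-and-update rule just described resembles the following sketch (hypothetical structures for illustration; the real engine works on C++ Element/RenderNode trees): nodes of the same type are updated in place and marked dirty, while a type change recreates the node.

```javascript
// Apply a patch to an element: same type -> update attributes in place
// and mark for re-layout/re-render; different type -> rebuild the node.
function applyPatch(element, patch) {
  if (element.type === patch.type) {
    Object.assign(element.attrs, patch.attrs); // in-place property update
    element.needLayout = true;
    element.needRender = true;
    return element;
  }
  // Types differ: create a fresh Element (and, implicitly, RenderNode).
  return {
    type: patch.type,
    attrs: { ...patch.attrs },
    needLayout: true,
    needRender: true,
  };
}

const el = { type: "text", attrs: { value: "old" } };
const updated = applyPatch(el, { type: "text", attrs: { value: "new" } });
console.log(updated === el, updated.attrs.value); // true "new"

const replaced = applyPatch(el, { type: "image", attrs: { src: "a.png" } });
console.log(replaced === el, replaced.type); // false "image"
```

The needLayout/needRender marks set here are what drive the subsequent partial re-layout and redraw, so unchanged subtrees are never touched.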
As shown in the figure above, for the RenderNodes marked as needing re-layout and re-rendering, layout and drawing then proceed from the nearest layout boundary and drawing layer to generate a new Layer tree; only the Layers corresponding to the changed RenderNodes need to be regenerated.
As shown in the figure above, the refreshed Layer tree is then used as input for rasterization and composition on the GPU thread. Layers that are already cached do not need to be rasterized again; the compositor only needs to recomposite the cached Layers with the uncached or updated Layers. Finally, after composition by the system compositor, the new frame is displayed.
That is the rendering and update process of an ACE JS application. Finally, the two flowcharts below review the overall process:
Having seen the rendering and update process of ACE JS applications, if you want to learn more about how the HarmonyOS UI framework addresses the development challenges brought by differences in device form, along with application examples, see our earlier article, Deciphering the HarmonyOS UI Framework:
https://mp.weixin.qq.com/s/0RZL09vKppIZmpqTJUeRSA
Current maturity and evolution of the ACE UI framework
To date, the ACE UI framework has shipped commercially in a series of products, including Huawei sports watches, smart watches, smart screens, phones, and tablets. Usage scenarios include applications such as calendar, travel, fitness, and utility tools, touch-to-touch applications between phones and devices, and the various service cards in the HarmonyOS release from this June, such as Gallery and Camera. On the tooling side, the developer tool (HUAWEI DevEco Studio) integrates the ACE UI framework, supporting development and debugging on PC (macOS, Windows) and real-time preview (including multi-device preview, component-level preview, and bidirectional preview) with a rendering experience consistent between PC and device.
Looking ahead, aiming at minimalist development for developers, a smooth experience for consumers, and efficient deployment across different devices and platforms, the ACE UI framework will continue to evolve along the lines of simplified development and high performance. Combined with the language, it will further simplify the development paradigm, using declarative UI descriptions close to natural language for interface construction, and in the future we will consider evolving the language to TS. Combined with the runtime, it will further improve the performance experience through cross-language interaction and type optimization. Combined with distributed capabilities, the programming model will evolve from MVVM to a distributed MVVM (Distributed Model-View-ViewModel). We will also keep improving the user experience in dynamic effects, layout, and performance.
Of course, the application ecosystem involves much more, such as a flourishing third-party plug-in ecosystem, expansion across OS platforms, and more innovative distributed experiences. The ACE UI framework is still very young. We look forward to working with developers to focus on the emerging super-device scenario composed of multiple devices, to keep polishing and improving, and to build a leading application experience and ecosystem together!
The online development experience of HarmonyOS is now available, and everyone is welcome to try it. The ACE JS framework has entered the open-source community; you are welcome to follow it and build it together with us. If you have questions during development, or good suggestions for HarmonyOS development, please log in to the forum and discuss with us. Forum:
https://developer.huawei.com/consumer/cn/forum/
Open Source Community: https://gitee.com/openharmony/ace_ace_engine/