1. Foreword
As an indispensable part of autonomous driving safety, the high-definition (HD) map effectively strengthens the perception and decision-making capabilities of autonomous vehicles and raises the overall level of autonomy. HD maps are consumed mainly by machines, but during production and analysis they still need to be understood by people. This article briefly introduces the exploration and practice of Web 3D engine technology by AutoNavi's high-precision map team over the past period: how to present complex, abstract geographic data to people and satisfy the business demands of editing and analysis.
High-precision maps are a refined expression of road-traffic-layer objects (such as lane markings, traffic lights, traffic signs, guardrails, and poles), including their geometric positions and attributes. Because the precision requirements are extremely high, production relies mainly on laser point clouds, which are usually massive: the point cloud of a single intersection may contain hundreds of millions of points. This places high demands on the performance of the rendering engine, which must support real-time rendering, picking, and editing of massive point clouds.
We first investigated the mainstream Web 3D map engines on the market: Mapbox, Cesium, the JS API, L7, and so on. In general they do not support point cloud loading and processing, tend toward abstract expression of points, lines, and polygons, lack refined rendering capability, and cannot satisfy the editing of complex topological (Topo) relationships that an editing tool requires. AutoNavi's high-precision data editing and analysis scenarios therefore need more low-level capability building of their own. This article shares the exploration and practice of the self-developed 3D engine eagle.gl in the high-precision map business.
First, let's get a brief sense of the current capabilities of the AutoNavi 3D engine eagle.gl through a video. It implements a unified 2D/3D map data editing and visualization solution with good scalability, and has been applied in AutoNavi's high-precision map data production, data analysis, and new-infrastructure projects.
2. Engine design and implementation
Based on the above analysis, we needed to implement a data production engine that fits AutoNavi's business scenarios. To avoid reinventing the wheel and make the most of existing open source capabilities, we chose Three.js as the rendering-layer framework, built GIS visualization and complex editing capabilities on top of it, and finally delivered a 2D/3D integrated rendering engine. At the same time, looking toward future smart cities and 5G IoT, we are building a data team with GIS 3D visualization capabilities in line with the industry.
Figure 2.1 2/3D integrated digital engine for high-precision maps
The engine work can be divided into three parts (point cloud map, vector map, model map), and the front-end team's overall work revolves around these three points (a brief, hypothetical usage sketch follows the list below).
- Point cloud map: visualizes multiple point cloud data formats (las/laz/bc/xbc/rds/bin), supports real-time rendering and editing of massive point clouds, and satisfies the main production line's demands for consuming point cloud data.
- Vector map: connects to the vectorized map capability of the online production line, and realizes add/delete/modify/query of vectorized 3D data through data snapshots, a command-based editing mode, spatial data indexing, and other capabilities.
- Model map: oriented toward the application layer of the data results, covering data preprocessing and modeling as well as real-time modeling and editing, with the goal of matching the lane-level rendering capability of the AutoNavi client on the same data.
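To make the division more concrete, here is a minimal, hypothetical usage sketch of how these three layer kinds might be wired into one map view. The `eagle.MapView` constructor, its options, and the `'pointcloud'` and `'line'` layer type names are assumptions for illustration, loosely modeled on the `addDataLayer` call shown later in this article.

```js
// Hypothetical usage sketch only: the constructor options and the 'pointcloud' /
// 'line' type names are assumptions, not eagle.gl's confirmed API.
const mapView = new eagle.MapView({
  container: 'map',          // id of the DOM element hosting the canvas
  center: [116.468, 39.991], // initial lon/lat
  zoom: 18,
});

// Point cloud map: massive LOD point cloud tiles (las/laz/bc/xbc/rds/bin).
mapView.addDataLayer({ id: 'cloud', style: { type: 'pointcloud' } });

// Vector map: editable 3D skeleton data expressed as points/lines/polygons.
mapView.addDataLayer({ id: 'vector', style: { type: 'line' } });

// Model map: refined, modeled rendering of the produced data.
mapView.addDataLayer({ id: 'model', style: { type: 'model' } });
```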
3. Technical problem solving
The following briefly introduces the capabilities of each core module.
3.1 Full-frame refined rendering
High-precision data carries more accurate and richer information, supports more refined rendering, and better restores the real world.
Figure 3.1.1 Full-frame refined rendering
The original high-precision output is vectorized skeleton data, a digital abstraction of the real world as points, lines, and polygons. How do we render models from this original vector data? The original vector data is intended for autonomous driving and other machines; to better verify data quality and to investigate and analyze data problems, we need to model and render the point, line, and polygon vector data, achieving a modeled, refined rendering effect that makes it easy for humans to quickly verify problems.
Figure 3.1.2 The original vectorized data results
Figure 3.1.3 Real-time modeling and refined rendering results on the client
To support refined rendering effects, the eagle.gl engine currently provides multiple visualization layers, including the commonly used point, line, polygon, volume, text, and model layers, and drives data rendering through configuration.
The whole modeling-and-rendering pipeline is divided into the following core modules: pull the original point/line/polygon vector data, parse the data according to the rules, perform data modeling, merge the data and apply instancing, and finally render. The bulk of the workload lies in the pre-modeling step, because the original data is quite abstract; to restore the world more realistically, a lot of geometric computation is needed. For example, the zebra crossing and diversion-zone data only describe the outer contour, so to render them realistically the stripes inside the contour must be generated by real-time modeling according to the long-edge/short-edge specification of the data and the traffic-network specification.
Figure 3.1.4 The core process of end-to-end rendering and modeling
Figure 3.1.5 Original geometry of a zebra crossing
Figure 3.1.6 Geometry after zebra-crossing modeling
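As a concrete illustration of this kind of real-time modeling, here is a minimal sketch (not the engine's actual implementation) that expands a rectangular zebra-crossing contour into individual stripe quads. The corner ordering, stripe width, and gap values are assumptions standing in for the real data specification.

```js
// Sketch: expand a rectangular zebra-crossing outline into stripe quads.
// Assumes the corners are ordered so that p0 -> p1 is a long edge and p0 -> p3
// is the short edge; stripeWidth and gap are assumed specification values.
function buildZebraStripes([p0, p1, , p3], stripeWidth = 0.4, gap = 0.6) {
  const longLen = Math.hypot(p1[0] - p0[0], p1[1] - p0[1]);
  const unit = [(p1[0] - p0[0]) / longLen, (p1[1] - p0[1]) / longLen];
  const shortVec = [p3[0] - p0[0], p3[1] - p0[1]]; // full short edge

  const stripes = [];
  for (let s = 0; s + stripeWidth <= longLen; s += stripeWidth + gap) {
    const a = [p0[0] + unit[0] * s, p0[1] + unit[1] * s];
    const b = [p0[0] + unit[0] * (s + stripeWidth), p0[1] + unit[1] * (s + stripeWidth)];
    // One stripe quad: a -> b -> b + shortVec -> a + shortVec
    stripes.push([
      a,
      b,
      [b[0] + shortVec[0], b[1] + shortVec[1]],
      [a[0] + shortVec[0], a[1] + shortVec[1]],
    ]);
  }
  return stripes; // each quad can then be triangulated and merged for rendering
}
```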
With the same approach, we can model hundreds of kinds of high-precision data in a customized way. However, if hundreds of data specifications were modeled in hard-coded form, the whole code base would rot and become unmaintainable. Therefore, at the engine layer we made the display style of the map surface data-configuration-driven.
Figure 3.1.7 Expression of a variety of high-precision data modeling results
We divide style definitions into two kinds. One is the display behavior of the basic feature elements; this layer of style is currently defined as a static style configuration. The other is the styleSchema driven by the tile data source; it defines dynamic attributes and common definitions such as the spatial scene and lights.
The styles corresponding to the basic element layer are mainly divided into the following types:
IBaseStyle | ILineStyle | ITextStyle | IPointStyle | IPolygonStyle | IModelStyle | ICustomStyle
For example, a new model layer can be added with the model style (ModelStyle), which inherits from IBaseStyle:
mapView.addDataLayer({
  id: 'model', // layer id: a unique identifier, must not be repeated
  style: {
    type: 'model',
    resources: {
      type: 'gltf',
      base:
        '//cn-zhangjiakou-d.oss.aliyun-inc.com/fe-zone-daily/eagle.gl/examples/assets/theme/default/model/',
      files: ['ludeng_0.glb'],
    },
    translate: [0, 0, 0],
  },
  features: [
    {
      geometry: {
        type: 'Point',
        coordinates: [[116.46809296274459, 39.991664844795196, 0]],
      },
    },
  ],
});
Attribute tables (omitted here): properties shown in gray are inherited from IBaseStyle, and a separate ModelResource table defines the model resource fields.
Model data loading and rendering can be achieved through the API call above, and the full-element expression of the entire map surface can be described visually in this form.
Figure 3.1.8 Loading a glTF model through the style configuration definition
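For comparison, the other basic style kinds follow the same pattern. The sketch below adds a line layer; the `color` and `width` field names are illustrative assumptions based on the ILineStyle type listed above, not a verbatim copy of the engine's API.

```js
// Hypothetical line-style layer, analogous to the model example above.
mapView.addDataLayer({
  id: 'lane-line',
  style: {
    type: 'line',     // one of the basic style kinds (ILineStyle)
    color: '#00ffcc', // assumed property name
    width: 2,         // assumed property name
  },
  features: [
    {
      geometry: {
        type: 'LineString',
        coordinates: [
          [116.46809, 39.99166, 0],
          [116.46821, 39.99170, 0],
        ],
      },
    },
  ],
});
```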
3.2 Multi-source heterogeneous data integrated rendering
For HD map production, point clouds and DEM elevation are the core data on the acquisition side and the corresponding data expression on the application side. As characteristic capabilities, the engine currently implements DEM elevation rendering and multi-dimensional point cloud coloring. In the future it will also integrate multi-source model capabilities such as BIM and oblique photography, to achieve a macro/micro integrated display on a single map.
3.2.1 DEM elevation rendering
Figure 3.2.1.1 DEM rendering demo
High-precision data is expressed as three-dimensional XYZ data, but much of our traditional road-network data is two-dimensional. How do we achieve hybrid overlay of multi-source vector data? We use a DEM elevation scheme to project the traditional, elevation-free background and road-network data onto the terrain altitude. The original input is a TIFF image, which data preprocessing turns into QMesh rendering tiles. The main data processing flow is as follows:
Figure 3.2.1.2 DEM data processing and rendering flow
QMesh (quantized mesh) is the terrain tile format recommended by Cesium now and for the future; compared with the heightmap format it performs better and stores less data. Based on this better-performing QMesh geometry pipeline, we achieve real-time loading of nationwide mountain terrain, with real-time terrain subdivision in a worker; by dynamically computing triangles from the low-level tiles, terrain can be rendered down to level 21.
Figure 3.2.1.3 Multi-source heterogeneous data fusion rendering: DEM elevation scheme
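The projection idea behind the DEM scheme can be sketched with plain Three.js (this is not the engine's QMesh pipeline): given an already-built terrain mesh, each elevation-free road-network vertex is draped onto the terrain by casting a vertical ray and taking the hit elevation. The `terrainMesh` argument and the z-up convention are assumptions.

```js
import * as THREE from 'three';

// Drape flat (z = 0) road-network vertices onto a terrain mesh by raycasting
// straight down from above. terrainMesh is assumed to be a THREE.Mesh of the
// DEM tile in the same local, z-up coordinate system as the vertices.
function drapeOnTerrain(points, terrainMesh) {
  const raycaster = new THREE.Raycaster();
  const down = new THREE.Vector3(0, 0, -1);

  return points.map(([x, y]) => {
    raycaster.set(new THREE.Vector3(x, y, 10000), down); // start well above the terrain
    const hit = raycaster.intersectObject(terrainMesh, true)[0];
    return [x, y, hit ? hit.point.z : 0]; // fall back to 0 if the ray misses
  });
}
```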
3.2.2 Point cloud map
As high-precision acquisition data, point clouds restore real-world three-dimensional position information far more accurately than traditional photos. To achieve a highly refined description of the real world, loading and rendering massive point clouds on the Web is a very important part of our daily work.
The biggest problems in point cloud rendering are real-time loading and dynamic picking of massive data. eagle.gl loads point clouds from the network in real time based on a global ECEF unified coordinate index; it can keep about 8 million LOD points in memory at the same time, color them by reflectance, height, or a mix of the two, and maintain a rendering frame rate of 60 fps. Real-time data picking is achieved through a hybrid GpuPicker + raycast scheme.
Figure 3.2.2.1 Real-time snapping and editing based on the LOD point cloud
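The height-based coloring mentioned above can be illustrated with a small Three.js sketch, a heavy simplification of the engine's LOD pipeline: a `THREE.Points` object whose shader maps each point's height onto a color ramp. The decoded `positions` buffer and the 2-pixel point size are assumptions.

```js
import * as THREE from 'three';

// Minimal height-colored point cloud. positions is a Float32Array of xyz
// triples (e.g. decoded from one point cloud tile); minZ/maxZ bound the tile.
function createHeightColoredCloud(positions, minZ, maxZ) {
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

  const material = new THREE.ShaderMaterial({
    uniforms: { minZ: { value: minZ }, maxZ: { value: maxZ } },
    vertexShader: `
      uniform float minZ;
      uniform float maxZ;
      varying float vT;
      void main() {
        vT = clamp((position.z - minZ) / (maxZ - minZ), 0.0, 1.0);
        gl_PointSize = 2.0;
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      }
    `,
    fragmentShader: `
      varying float vT;
      void main() {
        // simple blue-to-red ramp by height
        gl_FragColor = vec4(vT, 0.2, 1.0 - vT, 1.0);
      }
    `,
  });

  return new THREE.Points(geometry, material);
}
```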
3.3 Interaction and expansion capabilities
For map data production, data visualization is only the first step. Building complex GIS editing capabilities on top of it is an important part of the added business value.
3.3.1 Interaction capabilities
To meet the complex data editing requirements of the high-precision production line, picking is the lowest-level editing capability. The fusion of ray picking and GPU picking lets us pick accurately from massive data within 20 ms.
Figure 3.3.1.1 Massive data picking
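GPU picking in general works as follows (a standard Three.js technique, sketched here rather than eagle.gl's exact implementation): render the pickable objects into a 1x1 render target with each object drawn in a unique ID color at the cursor position, then read that pixel back to recover the ID. The `idScene` argument, in which every object uses a flat ID-colored material, is an assumption.

```js
import * as THREE from 'three';

// Generic GPU-picking sketch: read back one ID-colored pixel under the cursor.
// mouseX / mouseY are expected in drawing-buffer pixels.
function gpuPick(renderer, idScene, camera, mouseX, mouseY) {
  const target = new THREE.WebGLRenderTarget(1, 1);

  // Restrict rendering to the single pixel under the cursor.
  camera.setViewOffset(
    renderer.domElement.width, renderer.domElement.height,
    mouseX, mouseY, 1, 1
  );

  renderer.setRenderTarget(target);
  renderer.render(idScene, camera);
  renderer.setRenderTarget(null);
  camera.clearViewOffset();

  const pixel = new Uint8Array(4);
  renderer.readRenderTargetPixels(target, 0, 0, 1, 1, pixel);
  target.dispose();

  // Decode the 24-bit object id that was encoded as the material color.
  return (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
}
```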
Data clipping capability: in an interchange scene, mutual occlusion between data affects analysis. We provide a data clipping capability by default to meet the editing and analysis needs of complex scenes.
Figure 3.3.1.2 Real-time data clipping
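The clipping capability can be illustrated with Three.js local clipping planes (a standard technique; the z-up convention and the material list passed in are assumptions):

```js
import * as THREE from 'three';

// Clip away everything above a chosen elevation, e.g. to expose the lower
// ramps of an interchange. With the normal pointing down, fragments whose
// z exceeds clipHeight fall on the negative side of the plane and are cut.
function applyHeightClipping(renderer, materials, clipHeight) {
  const plane = new THREE.Plane(new THREE.Vector3(0, 0, -1), clipHeight);

  renderer.localClippingEnabled = true;
  materials.forEach((material) => {
    material.clippingPlanes = [plane];
    material.needsUpdate = true;
  });

  return plane; // adjust plane.constant later to move the clip height interactively
}
```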
3.3.2 Plug-in system
The engine supports plug-in development, so users can customize plug-ins according to their own needs. At present we provide three plug-ins, for distance measurement, editing, and box selection, for reference. The engine also supports customized development of layers and controllers to meet diverse business scenarios.
Figure 3.3.2.1 Distance-measurement plug-in
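Conceptually, a plug-in is an object with a small lifecycle contract registered on the map view. The sketch below is hypothetical: the `install`/`uninstall` method names, the `mapView.on`/`off`/`use` calls, and the event payload are assumptions, since the article does not document the exact interface.

```js
// Hypothetical plug-in shape: a lifecycle object registered on the map view.
const measurePlugin = {
  name: 'measure',
  install(mapView) {
    // Subscribe to map events and add helper layers when the plug-in is enabled.
    this.onClick = (e) => console.log('measure point:', e.lngLat);
    mapView.on('click', this.onClick);
  },
  uninstall(mapView) {
    // Undo everything done in install().
    mapView.off('click', this.onClick);
  },
};

// Hypothetical registration call.
mapView.use(measurePlugin);
```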
3.4 Other capabilities: visuals and motion
3.4.1 Flight
Smooth flight effects, plus the ability to change the background with filters.
Figure 3.4.1.1 Smooth flight effect / background skinning
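A smooth fly-to can be sketched as an eased camera interpolation in Three.js (illustrative only; a production flight controller would typically also animate orientation and zoom):

```js
import * as THREE from 'three';

// Ease the camera from its current position to targetPos over duration ms.
function flyTo(camera, targetPos, duration = 1500) {
  const start = camera.position.clone();
  const end = new THREE.Vector3(...targetPos);
  const t0 = performance.now();

  function step(now) {
    const t = Math.min((now - t0) / duration, 1);
    const eased = 1 - Math.pow(1 - t, 3); // ease-out cubic
    camera.position.lerpVectors(start, end, eased);
    if (t < 1) requestAnimationFrame(step);
  }
  requestAnimationFrame(step);
}
```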
3.4.2 Light and Shadow
Light and shadow effects are realized by simulating the time of day.
Figure 3.4.2.1 Light and shadow
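Time-of-day lighting can be approximated by moving a shadow-casting directional light along a crude sun arc (a rough sketch; the 6:00 sunrise and 18:00 sunset are assumptions, and a real implementation would use a proper solar-position model):

```js
import * as THREE from 'three';

// Place a shadow-casting directional light according to the hour of day.
function updateSunLight(light, hour, radius = 1000) {
  const t = (hour - 6) / 12;               // 0 at assumed sunrise, 1 at sunset
  const elevation = Math.sin(Math.PI * t); // 0 -> 1 at noon -> 0
  const azimuth = Math.PI * t;             // sweep from east to west

  light.position.set(
    radius * Math.cos(azimuth),
    radius * Math.sin(azimuth) * 0.3,      // small north/south offset
    radius * Math.max(elevation, 0.01)
  );
  light.castShadow = true;
  light.intensity = Math.max(elevation, 0.05); // dim near sunrise/sunset
}

// Usage: updateSunLight(new THREE.DirectionalLight(0xffffff), 15);
```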
3.4.3 Post-processing
We deeply customize and extend Three.js with an extended post-processing rendering pipeline, and abstract a general third-party post-processing base library. This increases the engine's post-processing capabilities, improves the visualization effect, and prepares for future large-screen projects.
Figure 3.4.3.1 Post-processing
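In stock Three.js, a post-processing chain is normally assembled with EffectComposer; the sketch below shows the kind of pipeline being extended (bloom is chosen only as an illustration, not as one of the engine's actual passes):

```js
import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { UnrealBloomPass } from 'three/examples/jsm/postprocessing/UnrealBloomPass.js';

// Build a simple render -> bloom pipeline on top of an existing renderer/scene.
function createComposer(renderer, scene, camera, width, height) {
  const composer = new EffectComposer(renderer);
  composer.addPass(new RenderPass(scene, camera));
  composer.addPass(new UnrealBloomPass(new THREE.Vector2(width, height), 1.2, 0.4, 0.85));
  return composer;
}

// In the render loop, call composer.render() instead of renderer.render().
```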
3.4.4 3D Earth
With 2D/3D integration, the engine supports free switching between the Mercator projection and the spherical (globe) projection, enhancing the visualization effect.
Figure 3.4.4.1 2D/3D integrated globe display
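The 2D/3D switch comes down to mapping the same longitude/latitude through two different projections. Here is a minimal sketch of the two mappings, treating the Earth as a perfect sphere for simplicity:

```js
const R = 6378137; // Earth radius in meters (spherical approximation)

// Web Mercator: the flat, 2D map projection.
function toMercator(lon, lat) {
  const x = (R * lon * Math.PI) / 180;
  const y = R * Math.log(Math.tan(Math.PI / 4 + (lat * Math.PI) / 360));
  return [x, y];
}

// Spherical (globe) projection: the same point placed on a 3D sphere.
function toSphere(lon, lat) {
  const phi = (lat * Math.PI) / 180;
  const theta = (lon * Math.PI) / 180;
  return [
    R * Math.cos(phi) * Math.cos(theta),
    R * Math.cos(phi) * Math.sin(theta),
    R * Math.sin(phi),
  ];
}
```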
4. Summary and planning
At present, the eagle.gl engine is widely used in the AutoNavi high-precision team's data production, data analysis, and capability building for new-infrastructure projects. In the future, as the engine's capabilities mature, on the one hand we will build a simulation platform for autonomous driving based on the high-precision business; on the other hand, we will realize fused rendering of multi-source heterogeneous data (DEM, BIM, oblique photography, and other industry data) through real-world digitization, and ultimately achieve digital twin capability.
There is still a long way to go before all of this is realized. You are welcome to contact us and build the digital world together. Friends interested in general platform front-end, online editing IDEs, or 3D visualization are welcome to join us and do something meaningful. Send your resume to gdtech@alibaba-inc.com with the email subject "Name - Technical Direction - From Gaode Technology"; self-recommendations and referrals are both welcome.