1. Introduction
1.1 What else can front-end engineers do besides writing web pages?
Over the past 20 years, front-end development has moved through the "small front-end" era, the "big front-end" era, and the current all-round front-end era. With this accumulation, the front-end field has become increasingly specialized.
The industry generally divides the front end into several vertical sub-fields: NodeJS, which enables front/back-end separation and mature engineering; front-end work focused on user interface display; one-stop solutions for middle- and back-office systems; data visualization (2D, 3D); and future-oriented rich interactive experiences for users, such as AR, VR and 3D.
As the front-end field has become more specialized, front-end engineers are no longer limited to assembling web pages and implementing interactions; they can also build impressive effects in the visualization field. The picture below shows 3D data visualization in action on vivo's official website.
Data visualization, as the name implies, presents data to users through visual forms such as graphs and charts, making the data more intuitive, objective and persuasive. The example above uses a rendering engine to parse and render model data and present it on a mobile device. Because the rendered images are three-dimensional and interactive, it falls into the category of 3D data visualization.
Today we take a closer look at one specific branch of the front end: 3D data visualization. This article is divided into five parts:
- Preface
- 2D data visualization
- 3D (2D+1D) data visualization
- 3D application practice on the vivo official website
- Summary
I hope that through these five chapters, everyone can gain a clearer understanding of data visualization and 3D data visualization.
2. 2D data visualization
2.1 What is 2D data visualization?
2D data visualization refers to organizing, processing and presenting data using two-dimensional graphics. When it comes to charts, the first things that come to mind are probably familiar forms such as bar charts and line charts. For example:
In fact, beyond these basic forms there are more elaborate chart types such as bubble charts, area charts, regional maps, word clouds, waterfall charts, funnel charts, heat maps, and GIS maps.
2.2 2D data visualization application scenarios
2D data visualization is widely used in work and life. The simplest examples are Excel charts; tools such as XMind and Visio are also specific applications of data visualization. Slightly more complex examples include large data-visualization dashboards, back-office data reports, and maps.
As the application scenarios of data visualization broaden, data can be presented in richer visual forms, enabling users to obtain and understand the information it conveys more easily and conveniently.
3. 3D (2D+1D) data visualization
3.1 What is 3D data visualization?
3D data visualization adds a Z-axis dimension on top of 2D data visualization, expanding the presentation from a flat plane to a three-dimensional structure. It is a new way of managing, analyzing and interacting with data, and it supports high-quality, realistic rendering effects such as real-time reflection, real-time refraction and dynamic shadows.
The main difference between 3D data visualization and 2D (general) data visualization is that it is more three-dimensional, more realistic and more immersive. The picture below gives a feel for the difference:
3.2 3D data visualization application scenarios
Because it conveys information quickly, displays data and information more intuitively, and presents spatial knowledge naturally, 3D data visualization makes it easier for users to understand data.
Currently visible 3D data visualization application areas include smart cities, automobiles, and mobile phone model display.
As browser support for WebGL broadens and 5G becomes widespread, the application areas of front-end 3D visualization will only grow.
3.3 3D data visualization solution
After understanding the concept and application scenarios of 3D data visualization, let's get to know the current mainstream solution for 3D data visualization in the industry: WebGL.
The following figure shows the rendering process of WebGL:
WebGL (Web Graphics Library) is a browser implementation based on the OpenGL ES specification. The WebGL rendering process in the above figure can be understood as:
1) JavaScript: prepares the vertex coordinates, normal vectors, colors, textures and other data the shaders need, and passes them to the vertex shader
2) Vertex shader: receives the vertex data passed from JavaScript and places each vertex at its corresponding coordinate
3) Rasterization: determines which pixels fall inside the primitive, producing fragments that do not yet have color
4) Fragment shader: fills in the color information for the pixels inside the primitive
5) Rendering: draws the result to the Canvas
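The five stages above can be illustrated with a toy software sketch (purely illustrative JavaScript, not real WebGL; the 8x8 "canvas" and the function names are assumptions made for this example):

```javascript
// 1) JavaScript side: prepare vertex data (x, y in clip space, plus a color).
const vertices = [
  { pos: [0.0, 0.8], color: [255, 0, 0] },
  { pos: [-0.8, -0.8], color: [0, 255, 0] },
  { pos: [0.8, -0.8], color: [0, 0, 255] },
];

// 2) Vertex shader: map clip-space [-1, 1] coordinates onto an 8x8 pixel grid.
const SIZE = 8;
function vertexShader(v) {
  return {
    x: Math.round(((v.pos[0] + 1) / 2) * (SIZE - 1)),
    y: Math.round(((1 - v.pos[1]) / 2) * (SIZE - 1)),
    color: v.color,
  };
}
const screen = vertices.map(vertexShader);

// 3) Rasterization: find the pixels inside the triangle (edge-function test).
function edge(a, b, p) {
  return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}
const fragments = [];
for (let y = 0; y < SIZE; y++) {
  for (let x = 0; x < SIZE; x++) {
    const p = { x, y };
    const [a, b, c] = screen;
    const w0 = edge(b, c, p), w1 = edge(c, a, p), w2 = edge(a, b, p);
    if ((w0 >= 0 && w1 >= 0 && w2 >= 0) || (w0 <= 0 && w1 <= 0 && w2 <= 0)) {
      fragments.push(p);
    }
  }
}

// 4) Fragment shader: give every covered pixel a color (flat red here).
const pixels = fragments.map((f) => ({ ...f, color: [255, 0, 0] }));

// 5) "Render": real WebGL writes the pixels to the canvas; here we just count them.
console.log(`rasterized ${pixels.length} of ${SIZE * SIZE} pixels`);
```

The real pipeline runs the vertex and fragment shaders on the GPU, but the division of labor between the stages is the same.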
WebGL can draw not only 2D charts and graphics for data visualization; it is also a 3D drawing standard. The standard combines JavaScript with OpenGL ES 2.0, and through this binding WebGL gives the HTML5 Canvas hardware-accelerated 3D rendering, so we can use the system graphics card to display 3D scenes and models smoothly in the browser.
4. 3D application practice on the vivo official website
For users, the biggest pain point of online shopping is that you cannot see the real product before you receive it. Mainstream online stores generally present products through pictures or videos, and these two-dimensional formats do not let users fully understand the product. With a 3D display of the phone model, users can explore the product's details and characteristics more intuitively and clearly, which strengthens the desire to buy.
Let's take a look at the technical selection and implementation plan of vivo's official website in realizing 3D display.
4.1 Introduction to visualization tools and technology selection
The industry now offers many capable 3D visualization development tools that make it easier to build 3D visualization features. 3D data visualization mainly involves two aspects: the rendering library and the model. Below we look at the available tools and the official website's technical choices from these two angles.
4.1.1 Rendering library selection
The current mainstream solution for 3D data visualization is based on WebGL. Since we already have WebGL, why do we need a rendering library?
Because WebGL has a relatively high barrier to entry and requires considerable mathematical background. Although WebGL exposes a front-end API, it is in essence a completely different discipline from front-end development, with little overlapping knowledge.
Using a rendering library to render the model greatly reduces the learning cost, while still covering almost everything WebGL itself can do. Commonly used 3D rendering libraries include ThreeJs, BabylonJS, SceneJS and CesiumJs.
The table below compares several 3D rendering libraries:
The comparison shows that each library has its own strengths. For 3D rendering of phone models, lighting, shadows and reflections matter a great deal, while features such as collision detection are not needed. Based on this comparison, we chose ThreeJs as the underlying library for rendering the 3D phone model.
4.1.2 Model selection
After understanding the rendering library, let's talk about the commonly used 3D model formats: OBJ, FBX, GLTF.
A model file is really a data collection containing vertex coordinates, indices, UVs, normals, node relationships, materials, textures, animations and so on. Whatever the format, its essence is how this information is arranged and organized; the formats differ mainly in encoding: plain text (OBJ), JSON (GLTF), or binary (FBX).
The table below compares several model file formats:
Comparing them, we find that the formats suit different scenarios:
1) OBJ has limited animation support, while the phone display requires disassembly animations of the model.
2) FBX is parsed to different specifications by different engines, so rendering results vary considerably between engines.
3) GLTF (GLB) is highly extensible and is well supported by WebGL rendering engines such as ThreeJs and BabylonJS.
4.2 3D scene construction and program implementation
To present objects in a 3D scene realistically, the camera and the lighting are the two essential elements. On top of that, the actual business scenario requires logic such as model color switching, model rotation, zooming, and a panoramic scene.
4.2.1 Scene camera
First, the camera. The camera in a 3D scene plays the role of the human eye in real life. Like a real camera it has a position and an angle, and a virtual camera additionally needs a projection mode. Position and angle are intuitive, so let's focus on projection. There are two projection modes: orthographic projection and perspective projection.
4.2.1.1 Orthographic projection
Orthographic projection is also called parallel projection. Its viewing volume is a rectangular parallelepiped, i.e. a cuboid, as shown in the figure. Its defining feature is that no matter how far an object is from the camera, its projected size does not change.
Orthographic projection is typically used for two-dimensional drawings such as architectural blueprints and computer-aided design, where projected sizes and angles must stay unchanged so that proportions are correct during construction or manufacturing.
4.2.1.2 Perspective projection
Perspective projection matches human visual habits: objects near the viewpoint appear large, objects far away appear small, and parallel lines converge toward a vanishing point at the extreme. Its viewing volume is a pyramid with the top and bottom cut off, i.e. a frustum.
Perspective projection is typically used in animation, visual simulation, and other applications meant to mirror reality. Since it is closer to our visual perception, we chose perspective projection to compute the camera's projection matrix for the 3D phone display on the official website.
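The difference between the two projection modes can be sketched with some toy math (the functions below are illustrative, not the ThreeJs camera API):

```javascript
// Orthographic: project (x, y, z) by dropping z and scaling by the viewing
// volume's half-width. The projected size is independent of distance.
function orthographicProject(x, y, z, halfWidth) {
  return { x: x / halfWidth, y: y / halfWidth };
}

// Perspective: divide by distance from the camera (larger z, smaller image),
// matching the "near objects look big, far objects look small" rule.
function perspectiveProject(x, y, z, focalLength) {
  return { x: (focalLength * x) / z, y: (focalLength * y) / z };
}

// The same point at depth z = 2 versus z = 10:
const orthoNear = orthographicProject(1, 0, 2, 5);
const orthoFar = orthographicProject(1, 0, 10, 5);
const perspNear = perspectiveProject(1, 0, 2, 1);
const perspFar = perspectiveProject(1, 0, 10, 1);

console.log(orthoNear.x === orthoFar.x); // true: same size at any depth
console.log(perspNear.x > perspFar.x);   // true: farther looks smaller
```

Real projection matrices also map depth into clip space and apply the aspect ratio, but this is the geometric intuition behind both camera types.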
4.2.2 Scene lighting
In order to make our rendered 3D objects look more natural and lifelike, it is very important to simulate the effects of various lighting.
The lighting of an object in a 3D scene is determined by the light source, the medium (the object's material) and the reflection type, and the reflection type is determined by the material's characteristics. By their characteristics, light sources can be divided into three types: ambient light, directional (parallel) light, and point light.
Let's look at each of them in turn.
We can see from the figure:
Parallel light shines in a single direction, with every ray traveling parallel to the others. Sunlight, for example, can be treated as parallel light; it only illuminates the side of an object that faces it.
Besides color, parallel light has a direction property, so it belongs to directional light. When directed light hits an object, the object's material determines two kinds of reflection: diffuse and specular. The final effect in a 3D scene is the superposition of ambient light, parallel light, diffuse reflection and specular reflection.
A point light emits from a single point outward in all directions. It is the most common light in real life: a light bulb, for instance, radiates in every direction and can be treated as a point light.
A point light has a position property as well as a direction. To compute its illumination, we first derive the light direction from the light's position and the position on the object's surface, and then, just as with parallel light, compute the angle between that direction and the surface normal.
Ambient light is the natural light filling the three-dimensional space in which the object sits, with the same intensity everywhere. It has no direction, so the ambient light reflected from a surface depends only on the ambient intensity and the material's reflectivity.
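The three light types can be sketched with minimal Lambert-style formulas (illustrative math, not the ThreeJs lighting implementation):

```javascript
function dot(a, b) { return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]; }
function normalize(v) {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
}

// Ambient light: no direction, the same contribution everywhere.
function ambient(intensity, reflectivity) {
  return intensity * reflectivity;
}

// Directional (parallel) light: the diffuse term depends only on the angle
// between the light direction and the surface normal (Lambert's law).
function directionalDiffuse(lightDir, normal, intensity) {
  return intensity * Math.max(0, dot(normalize(lightDir), normalize(normal)));
}

// Point light: first derive the direction from the light position and the
// surface point, then apply the same Lambert term as for parallel light.
function pointDiffuse(lightPos, surfacePos, normal, intensity) {
  const dir = [
    lightPos[0] - surfacePos[0],
    lightPos[1] - surfacePos[1],
    lightPos[2] - surfacePos[2],
  ];
  return directionalDiffuse(dir, normal, intensity);
}

// A surface facing straight up, lit from directly above:
const up = [0, 1, 0];
console.log(ambient(0.2, 0.5));                           // 0.1
console.log(directionalDiffuse([0, 1, 0], up, 1.0));      // 1 (full brightness)
console.log(pointDiffuse([0, 5, 0], [0, 0, 0], up, 1.0)); // 1
```

The final pixel color in a scene is the superposition of these contributions, plus a specular term for shiny materials.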
4.2.3 Implementation of model rotation
With the camera and the lighting in place, the model can be presented realistically, but some interactive operations on the model itself still need to be handled, such as rotation and color switching. There are two ways to implement model rotation in a 3D scene:
(1) The camera in the 3D scene does not move, and the rotating 3D entity is the 3D model
(2) Rotate the camera, that is, the 3D model does not move, and the camera rotates around the model
In real life we usually move the camera to shoot an object rather than moving the object into the field of view. So we chose to move the camera, i.e. implementation method (2), to realize the rotation interaction with the 3D model.
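A minimal sketch of method (2), orbiting the camera on a circle around a fixed model (the `camera` object below mimics the ThreeJs shape but the code runs standalone; in a real scene you would also call `camera.lookAt(target)` each frame):

```javascript
const camera = { position: { x: 0, y: 0, z: 10 } };
const target = { x: 0, y: 0, z: 0 }; // the phone model sits at the origin
const radius = 10;                   // distance from camera to model
let angle = 0;

// Called on each drag event: move the camera along the orbit around the
// model while the model itself stays put.
function orbit(deltaAngle) {
  angle += deltaAngle;
  camera.position.x = target.x + radius * Math.sin(angle);
  camera.position.z = target.z + radius * Math.cos(angle);
}

orbit(Math.PI / 2); // a quarter turn around the model
console.log(camera.position.x.toFixed(2), camera.position.z.toFixed(2)); // 10.00 0.00
```

Mapping the drag distance in pixels to `deltaAngle` is what gives the user the feeling of spinning the phone.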
4.2.4 Model color switching
The model format we use is GLB (which is convenient for later asset management and uploading), so each color corresponds to its own GLB file.
Each color switch requires parsing a model file again, but because textures and other materials are shared between models of different colors, reloading and parsing on a color switch is much faster than the initial load. Weighing maintenance cost against reusability, reloading the model file on each color switch is a reasonably elegant approach.
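The reload idea can be sketched as follows, with a cache so that returning to a color already seen is instant (the loader, cache and path scheme below are hypothetical stand-ins, not the official-website code):

```javascript
const modelCache = new Map();

// Stand-in loader: in the real page this would be a GLTFLoader.loadAsync call
// that fetches and parses the GLB file.
async function loadGlb(url) {
  return { url, scene: `parsed scene for ${url}` };
}

// Each color maps to its own GLB file; parsed results are cached so that
// switching back to a previously shown color skips the reload entirely.
async function switchColor(color) {
  const url = `/models/phone-${color}.glb`; // hypothetical path scheme
  if (!modelCache.has(url)) {
    modelCache.set(url, await loadGlb(url));
  }
  return modelCache.get(url);
}

switchColor('blue').then((model) => console.log(model.url));
```

In the real implementation the shared textures are what make the repeated parses cheap; the cache above simply avoids re-parsing altogether.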
4.2.5 Panorama scene construction
To give users a stronger sense of immersion when browsing the product's 3D page, we adopted a panorama mode. When the user rotates or zooms the phone in panorama mode, the background elements rotate and zoom along with the camera, making the browsing experience feel far more interactive.
The panoramic mode in ThreeJs can be implemented by loading texture maps:
// load the panorama image as a texture
let texture = await Loader.loadImg(panoramicImg)
texture.encoding = THREE.sRGBEncoding
// a large sphere that wraps the whole scene
let sphereGeometry = new THREE.SphereGeometry(3000, 160, 160)
// invert the x axis so the sphere's faces point inward, toward the camera
sphereGeometry.scale(-1, 1, 1)
// set the texture map on the material
let sphereMaterial = new THREE.MeshBasicMaterial({ map: texture })
let sphere = new THREE.Mesh(sphereGeometry, sphereMaterial)
this.bgMap = sphere
this.stage.scene.add(this.bgMap)
The code above first creates a spherical geometry (SphereGeometry) and inverts its x axis with sphereGeometry.scale(-1, 1, 1) so that all surface faces point inward. It then loads the image, creates the material with the texture map (new THREE.MeshBasicMaterial({ map: texture })), builds the mesh with new THREE.Mesh(sphereGeometry, sphereMaterial), and adds it to the scene to produce the panorama effect.
4.3 Performance optimization
4.3.1 Model compression
Once the model is finished, we need to compress it to reduce its size, which improves both the initial page load and the parsing speed when switching color models.
Google's Draco 3D compression library for GLB models can shrink a model's size without affecting its visual quality, and the gltf-pipeline command-line tool can be used to process GLTF models.
Compression steps:
1. Install gltf-pipeline:
npm install -g gltf-pipeline
2. Convert the glTF file to a glb file:
gltf-pipeline -i model.gltf -o model.glb
gltf-pipeline -i model.gltf -b
(gltf-pipeline can also apply Draco mesh compression via its -d option.)
After compression, the glb file is roughly 80% smaller, so it loads and renders noticeably faster than the original GLTF file.
4.3.2 Model decompression
ThreeJs provides a matching decompression scheme for Draco-compressed models:
// Instantiate a loader
const loader = new GLTFLoader();
// Optional: Provide a DRACOLoader instance to decode compressed mesh data
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath( '/examples/js/libs/draco/' );
loader.setDRACOLoader( dracoLoader );
First construct a GLTFLoader, then set the decoder path on a DRACOLoader and attach it to the loader; during model loading, the DRACOLoader decodes the compressed mesh data and hands the parsed result back for rendering.
5. Summary
This article first introduced 2D data visualization, then extended the flat chart form into a three-dimensional structure to arrive at 3D data visualization, and finally walked through the official website's ThreeJs-based 3D application development.
But WebGL's 3D rendering knowledge goes far beyond this. We only covered a few commonly used rendering elements of 3D models, such as lights and cameras. There are also reflection types tied to object materials (diffuse and specular reflection), and other camera types (orthographic cameras, cube cameras, stereo cameras, and so on). Space does not allow a detailed introduction here; interested readers can consult the WebGL documentation to learn more.
Author: vivo official website store front-end team -Ni Huaifa