
This article is based on Div Xia's technical sharing at AOTU Labs in 2022. It briefly introduces the process of drawing a basic shape in WebGL, in the hope that once you understand it you will be less confused when using 3D rendering libraries.

Four commonly used page drawing tools

When it comes to drawing graphics on H5 pages, we usually talk about four tools: HTML+CSS, SVG, Canvas2D, and WebGL.


HTML+CSS is the most common drawing tool. Drawing with CSS is just like writing a page layout: when building charts, we can use CSS to define the chart's style and then add elements with different properties according to the data. This kind of development is very friendly for scenarios with simple chart elements and few data nodes: the amount of development work is small and no extra libraries need to be pulled in. However, as more and more graphics need to be drawn, the CSS becomes more and more complicated, and because CSS has no logical semantics, the code gradually becomes hard to read and maintain.
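
As a rough illustration of this data-driven-styles approach (the data, sizes, and colors below are made up for this sketch and are not from the original sharing), a tiny bar chart can be produced by generating elements whose height is driven by the data:

const data = [30, 80, 45, 60]; // hypothetical bar heights, in pixels
const chart = document.createElement('div');
chart.style.cssText = 'display:flex;align-items:flex-end;gap:4px;height:100px;';
data.forEach((value) => {
  const bar = document.createElement('div');
  // each bar differs only in its height, which comes from the data
  bar.style.cssText = `width:20px;height:${value}px;background:#4a90d9;`;
  chart.appendChild(bar);
});
document.body.appendChild(chart);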

SVG stands for Scalable Vector Graphics. It integrates closely with HTML and CSS: an SVG file can be used as the src of an img element, and CSS can be used to manipulate the properties of SVG elements. Like HTML, SVG is a text markup language, but it adds support for curved shapes such as arcs and Bézier curves. SVG also provides reuse syntax such as <g> and <defs>, so even when a lot of graphics are drawn the code keeps a certain readability. SVG has disadvantages too: because every shape is an element node, the layout and rendering cost of a page refresh becomes very large when there is a lot of data. Moreover, a complete SVG document mixes structure, style, and reuse logic together, which is a little less tidy than the separation of HTML + CSS + JS.
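
For example (a minimal sketch, not taken from the original sharing), <defs> together with <use> lets one shape definition be reused several times while the markup stays readable:

const svg = `
  <svg width="200" height="100" xmlns="http://www.w3.org/2000/svg">
    <defs>
      <!-- the shape is defined once -->
      <circle id="dot" r="8" fill="#4a90d9" />
    </defs>
    <!-- ...and reused at different positions -->
    <use href="#dot" x="30" y="50" />
    <use href="#dot" x="100" y="50" />
    <use href="#dot" x="170" y="50" />
  </svg>
`;
document.body.insertAdjacentHTML('beforeend', svg);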

Canvas2D is the 2D drawing context of canvas. It provides a series of methods for drawing and modifying images inside the canvas area. Compared with the out-of-the-box nature of the first two tools, many shapes and styles in Canvas2D have to be implemented and encapsulated by yourself, which makes it harder to get started. But once that groundwork is done, you have a drawing tool that fully covers the previous two and is easy to extend.
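
As a small sketch (again not from the original sharing), drawing a triangle with Canvas2D looks like this, which is worth comparing with the WebGL version later in the article:

const canvas = document.createElement('canvas');
document.body.appendChild(canvas);
const ctx = canvas.getContext('2d');
ctx.beginPath();
ctx.moveTo(0, canvas.height);            // bottom-left corner
ctx.lineTo(canvas.width / 2, 0);         // top-center
ctx.lineTo(canvas.width, canvas.height); // bottom-right corner
ctx.closePath();
ctx.fillStyle = 'red';
ctx.fill();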

WebGL is also a drawing context of canvas, and it is the web implementation of OpenGL ES. Its biggest feature is that the underlying layer can directly use the parallel computing power of the GPU, which gives it a clear performance advantage when handling large numbers of graphics, pixel-level processing, and 3D objects.

Choosing among the four tools


When we receive a drawing requirement, we should first check whether the graphics involved are few and simple. If so, CSS can be chosen directly for rapid development. If the graphics are simple but numerous, or require curves, SVG can cope quickly. If the structure of the graphics is complex and their number is large, choose Canvas2D. And when the number of graphics reaches a certain order of magnitude, when every pixel needs to be processed, or when a large amount of 3D content has to be displayed, we have to use WebGL.


The hello world of WebGL

Unlike the other tools, whose hello world can be written in one or two lines of code, WebGL's hello world takes more than 40 lines. Every 3D rendering library encapsulates this code, so we basically never have to write it ourselves, but studying it gives us a basic understanding of the WebGL drawing process.

Drawing with WebGL takes five steps:

  1. Create a WebGL drawing context
  2. Create the shader program and link it to the gl context (in parallel with step 3)
  3. Create the data, put it into a buffer, and associate the buffer with the gl context (in parallel with step 2)
  4. Have the GPU read the data from the buffer
  5. Draw the graphics

Create a WebGL context

const canvas = document.createElement('canvas');
const gl = canvas.getContext('webgl');
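
Note that canvas.getContext('webgl') returns null when WebGL is unavailable, so real code usually guards against that; a minimal sketch of such a guard (not part of the original 40-line example):

if (!gl) {
  // WebGL is not available; fall back to another tool or report the problem
  throw new Error('WebGL is not supported in this environment');
}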

Create a shader program

const program = gl.createProgram();
gl.attachShader(program, /* a shader (the vertexShader defined below) */);
gl.linkProgram(program);
gl.useProgram(program);

A shader is a program that runs on the GPU. We use gl.createProgram to create an empty program object, and then use gl.attachShader to fill that program object with compiled shader code. What a shader is and how it is compiled will be discussed below; for now, think of it as the compiled code of some function. After several such compiled functions have been attached to the program object, when the GPU executes the program it takes the pixel information as input and runs the functions in the program object in turn.

After the shader code has been attached, call gl.linkProgram to link the program to the gl context, and gl.useProgram to enable it.

Next, let's see where the shader code comes from.

const vertex = `
      attribute vec2 position;
      void main() {
        gl_Position = vec4(position, 1.0, 1.0);
      }
    `;
const vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, vertex);
gl.compileShader(vertexShader);

First we define a variable vertex and assign it a string of code written in another language. That string is GLSL code, which looks very similar to C. The code receives an incoming two-dimensional vector position and sets the global variable gl_Position of its execution environment to a four-dimensional vector whose first two components are the incoming two-dimensional vector.

Next, gl.createShader is used to create a shader. The gl.VERTEX_SHADER constant indicates that this shader is a vertex shader; its counterpart is the fragment shader. The vertex shader determines the positions of the vertices, while the fragment shader processes, one by one, all the positions inside the shape formed by those vertices. For example, to draw a straight line between two points, the two endpoints are determined by the vertex shader, and once their positions are fixed, the fragment shader draws the line between them.

After creating the empty vertex shader object vertexShader, we use gl.shaderSource to put the GLSL string into the shader object, and then gl.compileShader to compile that code into an executable form. The process is similar to compiling a C program.
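
Compilation errors do not throw by themselves, so it is common to check the compile status and print the shader log (a small sketch, not part of the original 40-line example):

if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) {
  // print the GLSL compiler's error message for debugging
  console.error(gl.getShaderInfoLog(vertexShader));
}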

// the placeholder from the earlier snippet...
gl.attachShader(program, /* a shader (the vertexShader defined below) */);
// ...is now filled in with the compiled vertex shader:
gl.attachShader(program, vertexShader);

After completing this step, we can go back to the comment in the earlier snippet and attach the shader object to the program object. Of course, a fragment shader still has to be written and attached to the program object with exactly the same steps, as sketched below.
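
The fragment shader side is not spelled out above, so here is a minimal sketch of what it might look like; the solid red color is an assumption for illustration, and the original demo may use something else:

const fragment = `
      precision mediump float;
      void main() {
        gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); // fill every fragment with red
      }
    `;
const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, fragment);
gl.compileShader(fragmentShader);
gl.attachShader(program, fragmentShader);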


Store the data in a buffer

const points = new Float32Array([-1, -1, 0, 1, 1, -1]);
const bufferId = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, bufferId);
gl.bufferData(gl.ARRAY_BUFFER, points, gl.STATIC_DRAW);

After the operations above, we have a program object loaded with shader code, and that object is enabled in the gl drawing context. Next, we need to define the data this program will consume.

The vertex shader code accepts an incoming two-dimensional vector, and that is the data we now need to define. First we define a typed array and fill it with six numbers at initialization. The drawing pipeline will split these six numbers into three pairs, one pair per vertex shader invocation. A typed array is used for performance: with large amounts of data it takes up far less space than an ordinary array.
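
Written out per vertex (this is purely an annotated restatement of the same array, in WebGL clip-space coordinates where x and y both range from -1 to 1), the six numbers are:

new Float32Array([
  -1, -1, // vertex 1: bottom-left corner
   0,  1, // vertex 2: top-center
   1, -1, // vertex 3: bottom-right corner
]);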

Once the data is ready, call gl.createBuffer to create a buffer object, use gl.bindBuffer to bind that object to the gl drawing context, and finally call gl.bufferData to copy the points data into the buffer.

The GPU reads the data from the buffer

const vPosition = gl.getAttribLocation(program, "position");
gl.vertexAttribPointer(vPosition, 2, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(vPosition);

In this step, we first call gl.getAttribLocation to get the location of the position variable in the program object, then call gl.vertexAttribPointer to declare that this attribute reads two components of type gl.FLOAT per vertex from the bound buffer, and finally call gl.enableVertexAttribArray to enable the attribute.
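
For reference (an annotated restatement of the call above, not new behavior), the six parameters of vertexAttribPointer mean the following:

gl.vertexAttribPointer(
  vPosition, // index: the attribute location fetched above
  2,         // size: two components (x, y) per vertex
  gl.FLOAT,  // type: each component is a 32-bit float
  false,     // normalized: leave the values as they are
  0,         // stride: 0 means the vertex data is tightly packed
  0          // offset: start reading from the beginning of the buffer
);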

Draw the graphics

gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLES, 0, points.length / 2);

In the last step, we simply use gl.clear to clear the color buffer and then call gl.drawArrays to draw. The first argument, gl.TRIANGLES, determines how the vertices are assembled for the fragment shader: gl.LINES connects the vertices in pairs into line segments, while gl.TRIANGLES groups every three vertices into a triangle.
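
For comparison (an addition here, not part of the original example), drawing the same three vertices with gl.LINE_LOOP would produce only the triangle's outline:

gl.drawArrays(gl.LINE_LOOP, 0, points.length / 2); // outline instead of a filled triangle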

[Figure: the triangle rendered by the example code]

With that, the hello world of WebGL is complete, and the triangle above is the image these roughly 40 lines of code produce.
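
For convenience, here is the whole example assembled into one runnable sketch based on the snippets above. The red fragment color is an assumption, and both shaders are attached before gl.linkProgram is called, which is the order the link step requires:

// 1. create the WebGL drawing context
const canvas = document.createElement('canvas');
document.body.appendChild(canvas); // append so the result is visible
const gl = canvas.getContext('webgl');

// 2. compile the vertex and fragment shaders
const vertex = `
  attribute vec2 position;
  void main() {
    gl_Position = vec4(position, 1.0, 1.0);
  }
`;
const fragment = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
  }
`;
const vertexShader = gl.createShader(gl.VERTEX_SHADER);
gl.shaderSource(vertexShader, vertex);
gl.compileShader(vertexShader);
const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragmentShader, fragment);
gl.compileShader(fragmentShader);

// create the program, attach both shaders, then link and use it
const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
gl.useProgram(program);

// 3. put the vertex data into a buffer
const points = new Float32Array([-1, -1, 0, 1, 1, -1]);
const bufferId = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, bufferId);
gl.bufferData(gl.ARRAY_BUFFER, points, gl.STATIC_DRAW);

// 4. tell the GPU how to read the buffer into the position attribute
const vPosition = gl.getAttribLocation(program, 'position');
gl.vertexAttribPointer(vPosition, 2, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(vPosition);

// 5. clear and draw
gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.TRIANGLES, 0, points.length / 2);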

Summary

These operations are encapsulated to some degree in three.js and other 3D frameworks and tool libraries, so drawing with WebGL through those libraries is fairly convenient. But if you don't know the fundamental operations underneath them, it is easy to get stuck when something goes wrong. I hope this article has deepened your understanding of the lower levels of web 3D and will help you when learning these 3D tool libraries.

References

GPU and rendering pipeline: how to draw the simplest geometry with WebGL?

Welcome to the AOTU Labs blog: aotu.io

You can also follow the AOTULabs WeChat official account (AOTULabs), where articles are pushed from time to time.

