
For front-end developers building animations, canvas is arguably the freest and most fully controllable medium. Not only can you draw with javascript and fill with image resources; you can also change input parameters to make animations interactive, and fully control the trajectory, speed, easing and every other element of the motion.

But if you have built slightly more complex animations with canvas, you may have found that rendering purely with the javascript API handles some effects poorly (this article only discusses 2D): image blur, lighting, water droplets and the like. These can be achieved by processing the image pixel by pixel, but javascript is not good at that kind of bulk computation; the time to draw each frame becomes painfully long, and using it for real-time animation is unrealistic.

However, besides the most commonly used javascript drawing API ( getContext('2d') ), canvas also offers a WebGL mode ( getContext('webgl') ), and for the data-heavy computation scenarios just described it is exactly the right tool. Many developers think of WebGL as something for 3D scenes, but it has great uses for 2D drawing as well.

Why WebGL is more powerful

Let's take a look at how the javascript API and WebGL each draw:

If you use javascript to process the canvas pixel by pixel, all of that work runs in the javascript environment. We know javascript is single-threaded, so it can only compute and draw one pixel after another, like a slender funnel dripping drop by drop.

[figure /img/bVcSj37: pixels queuing through the single javascript thread]

WebGL, by contrast, is driven by the GPU. Each pixel is processed on the GPU, which has many rendering pipelines, and the work can be executed in parallel across those pipelines. This is why WebGL is so good at scenarios with large amounts of data to compute.

[figure /img/bVcSj38: pixels processed in parallel across GPU pipelines]

Since WebGL is so powerful, should we draw everything with it?

Although WebGL has the advantages above, it also has a fatal drawback: it is hard to learn, and even drawing a simple line takes considerable effort.

The GPU's parallel pipelines know nothing about each other's output; each pipeline knows only its own input and the program it has to execute. No state is retained either: a pipeline has no idea which programs ran before the current task or what their inputs and outputs were, much like the now-popular concept of pure functions. These conceptual differences raise the threshold for drawing with WebGL.

In addition, the programs that run on the GPU are written not in javascript but in a C-like language, which front-end developers also have to learn separately.

Hello, world

However high the threshold, there comes a day when it has to be crossed. Next we will take control of WebGL and draw some small patterns, and along the way get a feel for when this technology is the right fit.

Basic environment: the big screen

In order to get to the GLSL shader part as quickly as possible, the basic WebGL environment here is built with Three.js . You can study how to build such an environment yourself; even without a third-party library, it does not actually take much code.

The following code sets up the basic environment:

function init(canvas) {
  const renderer = new THREE.WebGLRenderer({canvas});
  renderer.autoClearColor = false;

  // An orthographic camera whose view volume is the 2x2x2 cube around
  // the origin, so the 2x2 plane below fills the whole viewport
  const camera = new THREE.OrthographicCamera(
    -1, // left
     1, // right
     1, // top
    -1, // bottom
    -1, // near
     1, // far
  );
  const scene = new THREE.Scene();
  const plane = new THREE.PlaneGeometry(2, 2);

  const fragmentShader = '............' // the GLSL source discussed below
  // Uniforms are the values we pass from javascript into the shader
  const uniforms = {
    u_resolution:  { value: new THREE.Vector2(canvas.width, canvas.height) },
    u_time: { value: 0 }
  };
  const material = new THREE.ShaderMaterial({
    fragmentShader,
    uniforms,
  });
  scene.add(new THREE.Mesh(plane, material));

  function render() {
    material.uniforms.u_time.value++; // advance time once per frame
    renderer.render(scene, camera);
    requestAnimationFrame(render);
  }

  render();
}

What does the above code do? It creates a 3D scene (weren't we doing 2D?) and pastes a rectangular plane right in front of the camera, filling the camera's entire field of view. It is like watching IMAX from the front row: all you can see is the screen in front of you, and the picture on that screen is your whole world. Our drawing happens on this screen.

Shaders come in two kinds: the vertex shader ( VERTEX_SHADER ) and the fragment shader ( FRAGMENT_SHADER ).

The vertex shader computes values for each vertex of an object in the 3D scene, such as color and normal vectors. Since we are only discussing 2D pictures here, the vertex shader part is handled by Three.js ; its effect is to fix the positions of the camera and the screen in the scene.

The fragment shader computes the color output for each fragment on the plane (in this case, each pixel on the screen), and it is the subject of this article.

A fragment shader has two kinds of input: varying and uniform . Simply put, a varying is passed in from the vertex shader; the value each fragment receives is linearly interpolated from the related vertices, so every fragment gets a different value. I won't discuss this part for now (or I would never finish writing). A uniform is, as the name says, a uniform value: it is passed in from outside the shader, and every fragment receives the same value. This is the entry point from javascript . In the setup code above, we passed the fragment shader a u_resolution uniform containing the width and height of the canvas.
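To make the two input types concrete, here is a minimal raw-WebGL shader pair (an illustrative sketch of mine, not the Three.js setup above; the names v_uv and u_color are made up):

// Vertex shader: runs once per vertex
attribute vec3 position;
attribute vec2 uv;
varying vec2 v_uv; // interpolated per fragment on its way down

void main() {
  v_uv = uv;
  gl_Position = vec4(position, 1.0);
}

// Fragment shader: v_uv differs per fragment, u_color is the same everywhere
precision mediump float;
varying vec2 v_uv;
uniform vec3 u_color;

void main() {
  gl_FragColor = vec4(u_color * v_uv.x, 1.0);
}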

The first shader

fragmentShader is the shader's program code; its general structure is:

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;

void main() {
  gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}

The first 3 lines check whether GL_ES is defined, which is usually the case on mobile devices and in browsers; line 2 then sets the default precision of float to medium. It can also be set to low precision ( lowp ) or high precision ( highp ). The lower the precision, the faster the execution, but the quality drops. It is worth mentioning that the same setting may behave differently in different environments: some mobile browser environments need to be set to high precision to achieve what medium precision achieves in a PC browser. Besides the global default, precision can also be given per declaration, as shown in the small sketch after this walkthrough.

Line 5 declares which input parameters the shader accepts. Here there is only one: u_resolution , of type vec2 .

The last 3 lines are the shader's main program. It can process the input parameters and other information, and finally outputs a color to gl_FragColor , which is the color this fragment displays. Its 4 components represent RGBA (red, green, blue, alpha), each ranging from 0.0 to 1.0 .
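As mentioned above, precision can also be specified per declaration rather than only globally; here is a small illustrative sketch (the names u_tint and v_pos are made up):

precision mediump float;  // default precision for every float

uniform lowp vec4 u_tint; // low precision is plenty for a color
varying highp vec2 v_pos; // coordinates often need high precision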

Why write 0.0 instead of 0 ? Because GLSL, unlike javascript , splits numbers into integer ( int ) and floating point ( float ) types, and a float literal must contain a decimal point. The 0 before the decimal point may be omitted, so writing .0 also works.
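A few illustrative declarations (GLSL ES does not implicitly convert int to float ):

float a = 1.0;      // ok
float b = .5;       // ok: the leading 0 is omitted
int   n = 2;        // ok: an integer
// float c = 1;     // error: 1 is an int, no implicit conversion
float d = float(n); // an explicit conversion is required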

After this explanation, you can probably guess what the shader above outputs: that's right, a full screen of red.

This is the most basic fragment shader.

Use uniform

You may have noticed that the example above never uses the uniform values we passed in, so let's talk about how to use them.

From the javascript code that built the basic environment, we can see that u_resolution stores the width and height of the canvas. What use is this value inside the shader?

This brings in another built-in value: gl_FragCoord , whose x and y components are the coordinates of the current fragment (pixel). With these two values and u_resolution , the shader knows which position on the canvas it is currently computing a color for. For example:

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution; // normalize coordinates to 0.0 ~ 1.0
  gl_FragColor = vec4(st, 0.0, 1.0);        // x -> red channel, y -> green channel
}
}

You can see this image:

[figure /img/bVcSj39: a gradient, black at the lower left, red increasing to the right, green increasing upward]

The shader above writes the normalized x and y coordinates into the red and green channels of gl_FragColor .

As the figure shows, the (0, 0) origin of gl_FragCoord is in the lower left corner, with the x-axis pointing right and the y-axis pointing up.
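If you would rather have the top-left origin used by the 2D canvas API, a common trick (my own illustration, not part of the original) is to flip the normalized y coordinate:

vec2 st = gl_FragCoord.xy / u_resolution;
st.y = 1.0 - st.y; // (0, 0) is now the top-left corner, as in canvas 2D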

The other uniform, u_time , is a value that keeps increasing over time. It can be used to make the image change with time, producing animation.

Let's rewrite the shader above:

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;
uniform float u_time;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution;
  gl_FragColor = vec4(st, sin(u_time / 100.0), 1.0); // the blue channel oscillates over time
}
}

You can see the effect in the video below:

http://storage.360buyimg.com/element-video/QQ20210330-195823.mp4

The shader uses sin to make the blue channel of the output change periodically. Note that sin actually ranges from -1 to 1; negative values are clamped to 0 in the color output, so what you see is blue cycling between 0 and 1.
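If you want the blue channel to sweep smoothly through the whole 0.0 ~ 1.0 range instead of sitting clamped at 0 for half of each period, a common idiom (my addition, not from the original) is to remap the output of sin :

float blue = 0.5 + 0.5 * sin(u_time / 100.0); // maps -1 ~ 1 to 0 ~ 1
gl_FragColor = vec4(st, blue, 1.0);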

What else can I do?

Once you have mastered the basic principles, you can start learning from the masters' works. shadertoy is a shader playground, similar to codepen; the shaders there use the basic tools above plus some shaping functions to create all kinds of dazzling effects and animations.
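As a small taste of such shaping functions, here is an illustrative sketch of mine (not taken from shadertoy) that draws a softly pulsing circle using distance and smoothstep , running on the environment built earlier:

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;
uniform float u_time;

void main() {
  vec2 st = gl_FragCoord.xy / u_resolution;
  float d = distance(st, vec2(0.5));                // distance from the center
  float radius = 0.25 + 0.05 * sin(u_time / 100.0); // radius pulses over time
  float circle = 1.0 - smoothstep(radius, radius + 0.01, d); // soft edge
  gl_FragColor = vec4(vec3(circle), 1.0);           // white circle on black
}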

Those are the basic tools for developing GLSL shaders. You can now start writing your own; the rest is a matter of applying your mathematical skills.

Welcome to follow the Aotu Lab blog: aotu.io

Or follow the AOTULabs official account (AOTULabs), which publishes articles from time to time.

