Introduction

After understanding the wind field data, let's see how to draw the particles.

Draw Map Particles

Looking at the source library, I found that a dedicated Canvas is used to draw the map, based on world coastline coordinates. The data format is roughly as follows:

{
  "type": "FeatureCollection",
  "features": [
    {
      "type": "Feature",
      "properties": {
        "scalerank": 1,
        "featureclass": "Coastline"
      },
      "geometry": {
        "type": "LineString",
        "coordinates": [
          [
              -163.7128956777287,
              -78.59566741324154
          ],
          // data omitted
        ]
      }
    },
    // data omitted
  ]
}

The points corresponding to these coordinates can be connected to form an overall outline. The main logic is as follows:

  // omitted
  for (let i = 0; i < len; i++) {
    const coordinates = data[i].geometry.coordinates || [];
    const coordinatesNum = coordinates.length;
    for (let j = 0; j < coordinatesNum; j++) {
      // start a new subpath at the first point, then connect the rest
      context[j ? "lineTo" : "moveTo"](
        ((coordinates[j][0] + 180) * node.width) / 360,
        ((-coordinates[j][1] + 90) * node.height) / 180
      );
    }
  }
  // omitted

Each coordinate is scaled in proportion to the actual width and height of the Canvas, which matches the proportions of the generated wind field image.
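This mapping can be pulled out into a small helper to make the projection explicit. Below is a minimal sketch assuming an equirectangular projection; the name project is mine, not the library's:

// Equirectangular projection: longitude [-180, 180] maps to [0, width],
// latitude [90, -90] maps to [0, height] (canvas y grows downward).
function project(lon, lat, width, height) {
  const x = ((lon + 180) * width) / 360;
  const y = ((-lat + 90) * height) / 180;
  return [x, y];
}

// e.g. the first coordinate in the GeoJSON above, on a 720 x 360 canvas:
// project(-163.7128956777287, -78.59566741324154, 720, 360)
// => [~32.57, ~337.19]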

See here for an example of separate logic for drawing a map.

Draw Wind Particles

In the source library, a separate Canvas is used to draw the wind particles. Reading the source code, I found that the logic involves quite a lot of state, so I decided to first work out the logic for drawing static particles on its own.

See the example for the static wind particle effect.

First, let's look at the main idea of the implementation:

  • Wind speed is encoded into the R and G components of pixel colors, producing the image W.
  • Create color data for display and store it in texture T1.
  • Based on the number of particles, create particle index data and upload it to a buffer; also create per-particle state data and store it in texture T2.
  • Load image W and store its image data into texture T3.
  • In the vertex shader, the particle index is used to fetch the corresponding data from texture T2, which is decoded into a position P and passed to the fragment shader.
  • The fragment shader samples the wind texture T3 at position P, linearly mixes to obtain a value N, and uses N to look up the corresponding color in the color texture T1.

Let's take a look at the specific implementation below.

Color Data

The main logic for generating color data:

function getColorRamp(colors) {
  const canvas = document.createElement("canvas");
  const ctx = canvas.getContext("2d");

  canvas.width = 256;
  canvas.height = 1;
  // createLinearGradient usage: https://developer.mozilla.org/en-US/docs/Web/API/CanvasRenderingContext2D/createLinearGradient
  const gradient = ctx.createLinearGradient(0, 0, 256, 0);
  for (const stop in colors) {
    gradient.addColorStop(+stop, colors[stop]);
  }

  ctx.fillStyle = gradient;
  ctx.fillRect(0, 0, 256, 1);

  return new Uint8Array(ctx.getImageData(0, 0, 256, 1).data);
}

Here the data is obtained by drawing a gradient onto a Canvas. Since each color component is stored as 8 bits, there are 256 possible values, hence the width of 256.
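As a usage sketch, the colors argument maps gradient offsets in [0, 1] to CSS color strings; the stops below are illustrative, not necessarily the ones the library uses:

// Illustrative color stops: offset in [0, 1] => CSS color string.
const colors = {
  0.0: "#3288bd",
  0.5: "#fee08b",
  1.0: "#d53e4f"
};
// ramp is a Uint8Array of 256 * 4 RGBA bytes sampled along the gradient.
const ramp = getColorRamp(colors);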

The Canvas data is then put into a texture, which needs a sufficient size: 16 * 16 = 256. This width and height are used again later in the fragment shader, and the two places must stay consistent to get the expected result.

this.colorRampTexture = util.createTexture(
  this.gl,
  this.gl.LINEAR,
  getColorRamp(colors),
  16,
  16
);

Vertex Data and State Data

Main logic:

set numParticles(numParticles) {
  const gl = this.gl;

  const particleRes = (this.particleStateResolution = Math.ceil(
    Math.sqrt(numParticles)
  ));
  // total number of particles
  this._numParticles = particleRes * particleRes;
  // color information for all particles
  const particleState = new Uint8Array(this._numParticles * 4);
  for (let i = 0; i < particleState.length; i++) {
    // generate random colors; they map to positions in the wind image
    particleState[i] = Math.floor(Math.random() * 256);
  }
  // create the texture storing the color information of all particles
  this.particleStateTexture = util.createTexture(
    gl,
    gl.NEAREST,
    particleState,
    particleRes,
    particleRes
  );
  // particle indices
  const particleIndices = new Float32Array(this._numParticles);
  for (let i = 0; i < this._numParticles; i++) particleIndices[i] = i;
  this.particleIndexBuffer = util.createBuffer(gl, particleIndices);
}

The particles' color information is stored in a texture with equal width and height. Each particle color has 4 RGBA components, and each component is 8 bits, so the randomly generated component values fall in the range [0, 256).

From the logic that follows, the vertex data particleIndexBuffer only assists in calculating the final position; the actual position comes from the texture. See the vertex shader implementation below for details.
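The util.createTexture and util.createBuffer helpers are not shown in the excerpts here; a plausible sketch of what they do, using only standard WebGL calls, looks like this:

// Create an RGBA texture from a Uint8Array (or a DOM image source).
function createTexture(gl, filter, data, width, height) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, filter);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, filter);
  if (data instanceof Uint8Array) {
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, data);
  } else {
    // e.g. an HTMLImageElement such as the wind image W
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, data);
  }
  gl.bindTexture(gl.TEXTURE_2D, null);
  return texture;
}

// Upload vertex data (here, the particle indices) into an ARRAY_BUFFER.
function createBuffer(gl, data) {
  const buffer = gl.createBuffer();
  gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
  gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
  return buffer;
}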

Vertex Shader

Vertex shader and corresponding bound variables:

const drawVert = `
  precision mediump float;

  attribute float a_index;

  uniform sampler2D u_particles;
  uniform float u_particles_res;

  varying vec2 v_particle_pos;

  void main() {
      vec4 color = texture2D(u_particles, vec2(
              fract(a_index / u_particles_res),
              floor(a_index / u_particles_res) / u_particles_res));
      // decode the current particle position from the pixel's RGBA value
      v_particle_pos = vec2(
              color.r / 255.0 + color.b,
              color.g / 255.0 + color.a);

      gl_PointSize = 1.0;
      gl_Position = vec4(2.0 * v_particle_pos.x - 1.0, 1.0 - 2.0 * v_particle_pos.y, 0, 1);
  }
`;

// code omitted
util.bindAttribute(gl, this.particleIndexBuffer, program.a_index, 1);
// code omitted
util.bindTexture(gl, this.particleStateTexture, 1);
// code omitted
gl.uniform1i(program.u_particles, 1);
// code omitted
gl.uniform1f(program.u_particles_res, this.particleStateResolution);

From these scattered pieces of logic, we can match each shader variable to its actual value:

  • a_index : The particle index data in particleIndices .
  • u_particles : Texture particleStateTexture for all particle color information.
  • u_particles_res : The value of particleStateResolution , which matches the width and height of the texture particleStateTexture ; it is the square root of the total number of particles, and thus of the length of the particle index data.

According to these corresponding values, let's look at the main processing logic:

vec4 color = texture2D(u_particles, vec2(
        fract(a_index / u_particles_res),
        floor(a_index / u_particles_res) / u_particles_res));

First, two built-in functions:

  • floor(x) : Returns the largest integer value less than or equal to x.
  • fract(x): Returns x - floor(x) , that is, returns the fractional part of x.

Assuming the total number of particles is 4, then particleIndices = [0, 1, 2, 3] and u_particles_res = 2 , so the two-dimensional coordinates are vec2(0, 0) , vec2(0.5, 0) , vec2(0, 0.5) , vec2(0.5, 0.5) . This calculation ensures that the resulting coordinates are between 0 and 1, so the color information can be sampled from the texture particleStateTexture .
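The same mapping can be verified in plain JavaScript; indexToTexCoord is just an illustrative name:

// JS equivalent of the shader's vec2(fract(i / res), floor(i / res) / res).
function indexToTexCoord(index, res) {
  const x = (index / res) % 1;             // fract(a_index / u_particles_res)
  const y = Math.floor(index / res) / res; // floor(...) / u_particles_res
  return [x, y];
}

// With res = 2: [0, 0], [0.5, 0], [0, 0.5], [0.5, 0.5]
[0, 1, 2, 3].map((i) => indexToTexCoord(i, 2));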

Note that the values returned by texture2D sampling are in the range [0, 1]. For the underlying principle, see here.

v_particle_pos = vec2(
        color.r / 255.0 + color.b,
        color.g / 255.0 + color.a);

The source code comment says "decode the current particle position from the pixel's RGBA value". Combined with the data above, the theoretical range of each component from this calculation is [0, 256/255]. The variable v_particle_pos is used later in the fragment shader.
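In other words, each coordinate component is split across two bytes: B/A hold the coarse part and R/G hold the fine part, giving roughly 16 bits of precision per component. A sketch of a matching encode/decode pair (the helper names are mine):

// Encode p in [0, 1) into a (fine, coarse) byte pair, and decode it back
// the way the shader does after texture2D normalizes bytes to [0, 1].
function encode(p) {
  const coarse = Math.floor(p * 255);
  const fine = Math.floor((p * 255 - coarse) * 255);
  return [fine, coarse];
}
function decode(fine, coarse) {
  // matches color.r / 255.0 + color.b with color components in [0, 1]
  return fine / 255 / 255 + coarse / 255;
}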

gl_Position = vec4(2.0 * v_particle_pos.x - 1.0, 1.0 - 2.0 * v_particle_pos.y, 0, 1);

The gl_Position variable holds the vertex coordinates converted into clip space, whose range is [-1.0, +1.0]; anything to be displayed must fall within this range. The calculation here maps the [0, 1] position into it.
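As a quick check of this formula (the helper name is mine):

// Map a [0, 1] position into clip space [-1, 1], flipping y because
// texture coordinates grow downward while clip-space y grows upward.
function toClipSpace(x, y) {
  return [2 * x - 1, 1 - 2 * y];
}
// toClipSpace(0.5, 0.5) => [0, 0]   (center of the screen)
// toClipSpace(0, 0)     => [-1, 1]  (top-left corner)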

Fragment Shader

Fragment shader and corresponding bound variables:

const drawFrag = `
  precision mediump float;

  uniform sampler2D u_wind;
  uniform vec2 u_wind_min;
  uniform vec2 u_wind_max;
  uniform sampler2D u_color_ramp;

  varying vec2 v_particle_pos;

  void main() {
      vec2 velocity = mix(u_wind_min, u_wind_max, texture2D(u_wind, v_particle_pos).rg);
      float speed_t = length(velocity) / length(u_wind_max);

      vec2 ramp_pos = vec2(
          fract(16.0 * speed_t),
          floor(16.0 * speed_t) / 16.0);

      gl_FragColor = texture2D(u_color_ramp, ramp_pos);
  }
`;

// code omitted
util.bindTexture(gl, this.windTexture, 0);
// code omitted
gl.uniform1i(program.u_wind, 0); // wind texture data
// code omitted
util.bindTexture(gl, this.colorRampTexture, 2);
// code omitted
gl.uniform1i(program.u_color_ramp, 2); // color data
// code omitted
gl.uniform2f(program.u_wind_min, this.windData.uMin, this.windData.vMin);
gl.uniform2f(program.u_wind_max, this.windData.uMax, this.windData.vMax);

From these scattered pieces of logic, we can match each shader variable to its actual value:

  • u_wind : Texture windTexture generated from the wind field image.
  • u_wind_min : Minimum values of the wind field data components.
  • u_wind_max : Maximum values of the wind field data components.
  • u_color_ramp : Color texture colorRampTexture .
  • v_particle_pos : The position generated inside the vertex shader.

vec2 velocity = mix(u_wind_min, u_wind_max, texture2D(u_wind, v_particle_pos).rg);
float speed_t = length(velocity) / length(u_wind_max);

First, the built-in function:

  • mix(x, y, a) : returns the linear blend of x and y , computed as x*(1-a) + y*a .

The value of velocity is guaranteed to lie between u_wind_min and u_wind_max , so speed_t is at most 1. From speed_t , the position ramp_pos is derived by a fixed rule, and the color output to the screen is looked up in the color texture colorRampTexture .
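A quick numeric check with made-up bounds:

// Suppose u_wind_min = (-20, -20), u_wind_max = (20, 20), and the sampled
// texel is rg = (0.75, 0.5). mix decodes the velocity per component:
const u = -20 * (1 - 0.75) + 20 * 0.75; // 10
const v = -20 * (1 - 0.5) + 20 * 0.5;   // 0
// speed_t = |velocity| / |u_wind_max| = 10 / 28.28 ≈ 0.354
const speed_t = Math.hypot(u, v) / Math.hypot(20, 20);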

Draw

Once the above logic is in place, drawing can proceed in the normal order.

Although we are only drawing static particles, while extracting the logic separately I found that, for some particle counts, the drawing may not complete if wind.draw() is called just once.
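One workaround is simply to keep drawing frames; a minimal sketch, assuming wind is the renderer instance with the windData and draw members seen above:

// Keep requesting frames so that all particles end up rendered.
function frame() {
  if (wind.windData) {
    wind.draw();
  }
  requestAnimationFrame(frame);
}
frame();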

See the example for the static wind particle effect.

Summary

Having analyzed the code above, let's look back at the main idea from the beginning and restate it:

  • Based on the number of particles to display, randomly initialize each particle's color-encoded information and store it in texture T2; create the color texture T1 used for the final particle display; load the wind-speed-encoded image W and store it in texture T3.
  • The ultimate goal is to fetch a color from the color texture T1 and display it. The process: use texture T2 to find a corresponding wind-speed sampling point in texture T3, then use that point's value to look up the display color in T1.

I feel I understand it a little better than the initial outline, but some questions remain.

Why not map texture T3 to the color texture T1 directly?

So far this only reproduces part of the full wind field visualization logic. Looking back at the complete implementation, the effect is dynamic. To track the movement of each particle, introducing a dedicated record of its state makes the logic clearer, in my opinion. Texture T2 mainly records the particle count and state; I will dig deeper into the related logic later.

What is the basis for how the 2D vector for texture sampling is calculated in the vertex shader?

Correspondingly, why use the following logic:

vec2(
  fract(a_index/u_particles_res),
  floor(a_index/u_particles_res)/u_particles_res
)

In the earlier explanation it was said that this calculation ensures the resulting coordinates are between 0 and 1, but there must be more than one way to produce that range, so the reason for this particular form is not yet clear to me. A similar approach is used later in the fragment shader to calculate the final position ramp_pos .

The fragment shader already receives a position; why compute velocity to derive another position?

In other words, why does the following logic exist:

vec2 velocity = mix(u_wind_min, u_wind_max, texture2D(u_wind, v_particle_pos).rg);
float speed_t = length(velocity) / length(u_wind_max);

The position v_particle_pos from the vertex shader is derived from the randomly generated color texture T2. As mentioned earlier, the theoretical range of each component is [0, 256/255], so there is no guarantee that a corresponding point exists in the wind field image. The mix function is what establishes the association.

Why is the multiplication factor of ramp_pos in the fragment shader 16.0?

That is, the following logic:

vec2 ramp_pos = vec2(
    fract(16.0 * speed_t),
    floor(16.0 * speed_t) / 16.0
  );

Through experimenting, I found that the 16.0 here equals the width and height of the color texture T1 used for the final display; my guess is that keeping them consistent is what produces a uniform result.
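The effect of the 16.0 factor can be reproduced in JavaScript: speed_t in [0, 1] is spread over the 16 × 16 color texture, with one component walking across a row and the other selecting the row:

// fract(16 * t) moves across a row; floor(16 * t) / 16 picks the row,
// so t effectively indexes 16 * 16 = 256 texels of the color ramp.
function rampPos(t) {
  return [(16 * t) % 1, Math.floor(16 * t) / 16];
}
// rampPos(0.0)  => [0, 0]       (first row)
// rampPos(0.53) => [0.48, 0.5]  (middle of the texture)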
