Introduction
After understanding how particles are drawn, let's see how to draw particle trajectories.
- Source library: webgl-wind
- Origin
- My GitHub
Drawing trajectories
The way the original article draws trajectories is to draw the particles into a texture, then on the next frame use that texture as a background (slightly darkened), and swap the input/target textures every frame. Two important WebGL techniques are involved here: drawing into a texture through a framebuffer, and swapping textures between frames.
Building on the particle drawing from before, the main idea of the added logic is (see the sketch after this list):
- During initialization, a background texture B and a screen texture S are added.
- When creating the per-particle state data, two state textures T20 and T21 are kept.
- When drawing, first draw the background texture B, then draw all particles according to texture T20; this result becomes the screen texture S, which is drawn to the canvas and then used as the background texture B of the next frame.
- Finally, a new state is computed from T20 and written into T21, the two state textures are swapped so that T20 holds the new state, and the next frame of drawing begins.
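Summarized as a rough per-frame sketch (my own pseudocode-style JavaScript, using the names from the list above; the real implementation is shown step by step below):
// B = background texture, S = screen texture, T20 / T21 = the two particle state textures
const textures = { B: 'background', S: 'screen', T20: 'state0', T21: 'state1' };
function frameSketch(t) {
  // 1. offscreen pass: draw t.B slightly darkened, then all particles read from t.T20, into t.S
  // 2. onscreen pass: draw t.S to the canvas — this is what the user actually sees
  // 3. swap B and S, so this frame's screen becomes the next frame's background
  [t.B, t.S] = [t.S, t.B];
  // 4. offscreen pass: compute the moved particle state from t.T20 into t.T21, then swap them
  [t.T20, t.T21] = [t.T21, t.T20];
}
frameSketch(textures); // called once per animation frame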
See the example for the particle trajectory effect without randomly generated particles. Let's take a look at the specific implementation.
texture
Added texture related logic:
// code omitted
resize() {
const gl = this.gl;
const emptyPixels = new Uint8Array(gl.canvas.width * gl.canvas.height * 4);
// screen textures to hold the drawn screen for the previous and the current frame
this.backgroundTexture = util.createTexture(gl, gl.NEAREST, emptyPixels, gl.canvas.width, gl.canvas.height);
this.screenTexture = util.createTexture(gl, gl.NEAREST, emptyPixels, gl.canvas.width, gl.canvas.height);
}
// code omitted
The initialized background texture and screen texture are both based on the width and height of the canvas, and each pixel is stored with 4 components (RGBA).
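The helper util.createTexture is not shown in this article; as a minimal sketch, modeled loosely on the webgl-wind util (treat the details as an approximation), it looks roughly like this:
function createTexture(gl, filter, data, width, height) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  // clamp so sampling never wraps around the edges
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, filter);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, filter);
  if (data instanceof Uint8Array) {
    // raw RGBA bytes, e.g. the emptyPixels buffer above
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, data);
  } else {
    // an image element, e.g. the wind field image
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, data);
  }
  gl.bindTexture(gl.TEXTURE_2D, null);
  return texture;
}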
screen shader program
Add a new screen shader program object; it is responsible for drawing the content that finally becomes visible:
this.screenProgram = webglUtil.createProgram(gl, quadVert, screenFrag);
vertex data
Vertex related logic:
// code omitted
this.quadBuffer = util.createBuffer(gl, new Float32Array([0, 0, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1]));
// code omitted
util.bindAttribute(gl, this.quadBuffer, program.a_pos, 2);
// code omitted
gl.drawArrays(gl.TRIANGLES, 0, 6);
// code omitted
Here we can see that the vertex data is interpreted as 2D coordinates, 6 vertices in total, forming two triangles that cover a rectangle: (0,0)-(1,0)-(0,1) and (0,1)-(1,0)-(1,1). Why are the coordinates all 0s and 1s? Look at the shader below.
vertex shader
The added vertex shader and the corresponding bound variables:
const quadVert = `
precision mediump float;
attribute vec2 a_pos;
varying vec2 v_tex_pos;
void main() {
v_tex_pos = a_pos;
gl_Position = vec4(1.0 - 2.0 * a_pos, 0, 1);
}
`;
// code omitted
this.drawTexture(this.backgroundTexture, this.fadeOpacity);
// code omitted
drawTexture(texture, opacity) {
// code omitted
util.bindAttribute(gl, this.quadBuffer, program.a_pos, 2);
// code omitted
gl.drawArrays(gl.TRIANGLES, 0, 6);
}
// code omitted
From these scattered pieces of logic, we can find the actual value behind each variable in the shader:
- a_pos: the 2D data per vertex in quadBuffer.
- v_tex_pos: the same value as a_pos; it will be used in the corresponding fragment shader.
As for the calculation of gl_Position: since the vertex coordinates mentioned above are all 0s and 1s, the result of 1.0 - 2.0 * a_pos falls in the range [-1.0, +1.0], which is exactly the visible range of clip space.
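A quick worked example of that mapping (my own illustration):
// a_pos = (0, 0) -> gl_Position.xy = ( 1,  1), one corner of clip space
// a_pos = (1, 1) -> gl_Position.xy = (-1, -1), the opposite corner
const toClip = ([x, y]) => [1 - 2 * x, 1 - 2 * y];
console.log(toClip([0, 0])); // [1, 1]
console.log(toClip([1, 1])); // [-1, -1]
// so the [0, 1] quad covers the whole [-1, 1] clip-space square, flipped on both axes,
// which is also why the fragment shader below samples with 1.0 - v_tex_pos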
fragment shader
The fragment shader and the corresponding bound variables:
const screenFrag = `
precision mediump float;
uniform sampler2D u_screen;
uniform float u_opacity;
varying vec2 v_tex_pos;
void main() {
vec4 color = texture2D(u_screen, 1.0 - v_tex_pos);
// a hack to guarantee opacity fade out even with a value close to 1.0
gl_FragColor = vec4(floor(255.0 * color * u_opacity) / 255.0);
}
`;
this.fadeOpacity = 0.996;
// code omitted
drawTexture(texture, opacity) {
// code omitted
gl.uniform1i(program.u_screen, 2);
gl.uniform1f(program.u_opacity, opacity);
gl.drawArrays(gl.TRIANGLES, 0, 6);
}
From these scattered pieces of logic, we can find the actual value behind each variable in the shader:
- u_screen: a dynamically changing texture, which needs to be judged from the context.
- u_opacity: transparency, also depending on the context.
- v_tex_pos: passed from the vertex shader; it is the data in quadBuffer.
The range of 1.0 - v_tex_pos is [0, 1], which exactly covers the entire texture. Multiplying the final color by the dynamic u_opacity produces what the original article calls the "slightly darkened" effect.
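The floor(255.0 * ...) / 255.0 part is the "hack to guarantee opacity fade out" from the comment. A small JavaScript simulation (my own illustration) shows why a plain multiply would not be enough once the value is written back into an 8-bit channel:
// simulate one color channel (0..255) being faded and re-stored every frame
let floored = 10, rounded = 10;
for (let i = 0; i < 20; i++) {
  floored = Math.floor(floored * 0.996); // what floor(255.0 * color * u_opacity) / 255.0 amounts to
  rounded = Math.round(rounded * 0.996); // what a plain color * u_opacity would amount to after 8-bit rounding
}
console.log(floored); // 0  -> the trail eventually fades out completely
console.log(rounded); // 10 -> round(10 * 0.996) = 10, the value never decreases and old trails would linger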
update shader program
The new update shader program object is the key to making the particles move:
this.updateProgram = webglUtil.createProgram(gl, quadVert, updateFrag);
vertex data
It shares the same set of vertex data as the screen shader program.
vertex shader
It shares the same vertex shader as the screen shader program.
fragment shader
The update fragment shader and the corresponding bound variables:
const updateFrag = `
precision highp float;
uniform sampler2D u_particles;
uniform sampler2D u_wind;
uniform vec2 u_wind_res;
uniform vec2 u_wind_min;
uniform vec2 u_wind_max;
varying vec2 v_tex_pos;
// wind speed lookup; use manual bilinear filtering based on 4 adjacent pixels for smooth interpolation
vec2 lookup_wind(const vec2 uv) {
// return texture2D(u_wind, uv).rg; // lower-res hardware filtering
vec2 px = 1.0 / u_wind_res;
vec2 vc = (floor(uv * u_wind_res)) * px;
vec2 f = fract(uv * u_wind_res);
vec2 tl = texture2D(u_wind, vc).rg;
vec2 tr = texture2D(u_wind, vc + vec2(px.x, 0)).rg;
vec2 bl = texture2D(u_wind, vc + vec2(0, px.y)).rg;
vec2 br = texture2D(u_wind, vc + px).rg;
return mix(mix(tl, tr, f.x), mix(bl, br, f.x), f.y);
}
void main() {
vec4 color = texture2D(u_particles, v_tex_pos);
vec2 pos = vec2(
color.r / 255.0 + color.b,
color.g / 255.0 + color.a); // decode particle position from pixel RGBA
vec2 velocity = mix(u_wind_min, u_wind_max, lookup_wind(pos));
// take EPSG:4326 distortion into account for calculating where the particle moved
float distortion = cos(radians(pos.y * 180.0 - 90.0));
vec2 offset = vec2(velocity.x / distortion, -velocity.y) * 0.0001 * 0.25;
// update particle position, wrapping around the date line
pos = fract(1.0 + pos + offset);
// encode the new particle position back into RGBA
gl_FragColor = vec4(
fract(pos * 255.0),
floor(pos * 255.0) / 255.0);
}
`;
// code omitted
setWind(windData) {
// source data of the wind field image
this.windData = windData;
}
// code omitted
util.bindTexture(gl, this.windTexture, 0);
util.bindTexture(gl, this.particleStateTexture0, 1);
// code omitted
this.updateParticles();
// code omitted
updateParticles() {
// code omitted
const program = this.updateProgram;
gl.useProgram(program.program);
util.bindAttribute(gl, this.quadBuffer, program.a_pos, 2);
gl.uniform1i(program.u_wind, 0); // wind texture
gl.uniform1i(program.u_particles, 1); // particle texture
gl.uniform2f(program.u_wind_res, this.windData.width, this.windData.height);
gl.uniform2f(program.u_wind_min, this.windData.uMin, this.windData.vMin);
gl.uniform2f(program.u_wind_max, this.windData.uMax, this.windData.vMax);
gl.drawArrays(gl.TRIANGLES, 0, 6);
// code omitted
}
From these scattered pieces of logic, we can find the actual value behind each variable in the shader:
- u_wind: the texture windTexture generated from the wind field image.
- u_particles: the texture particleStateTexture0 holding the color-encoded information of all particles.
- u_wind_res: the width and height of the wind field image.
- u_wind_min: the minimum values of the wind data components.
- u_wind_max: the maximum values of the wind data components.
Based on the vertex data in quadBuffer, the shader reads the pixel at the corresponding position of the texture particleStateTexture0, decodes the particle position from that pixel, obtains a smoothly interpolated wind value from the 4 adjacent pixels via the lookup_wind method, derives the offset from the minimum and maximum values of the wind field, and finally computes the new position and encodes it back into a color for output (a small sketch of this encoding follows the two questions below). During this process, the following key points stand out:
- How to get 4 adjacent pixels?
- In a two-dimensional map, how are polar and equatorial particles different?
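As mentioned, the particle position is decoded from and re-encoded into RGBA. Here is a small JavaScript sketch of that encoding (my own illustration; channel values are in [0, 1] as texture2D returns them, and the real texture additionally quantizes each channel to 8 bits):
function encodePos([x, y]) {
  // fine fraction goes into r/g, coarse part into b/a — mirrors fract(pos * 255.0) and floor(pos * 255.0) / 255.0
  return [(x * 255) % 1, (y * 255) % 1, Math.floor(x * 255) / 255, Math.floor(y * 255) / 255];
}
function decodePos([r, g, b, a]) {
  // mirrors color.r / 255.0 + color.b and color.g / 255.0 + color.a
  return [r / 255 + b, g / 255 + a];
}
console.log(decodePos(encodePos([0.12345, 0.6789]))); // ≈ [0.12345, 0.6789]
// splitting each coordinate across two 8-bit channels gives roughly 16 bits of precision per axis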
How to get 4 adjacent pixels?
Look at the main method:
vec2 lookup_wind(const vec2 uv) {
vec2 px = 1.0 / u_wind_res;
vec2 vc = (floor(uv * u_wind_res)) * px;
vec2 f = fract(uv * u_wind_res);
vec2 tl = texture2D(u_wind, vc).rg;
vec2 tr = texture2D(u_wind, vc + vec2(px.x, 0)).rg;
vec2 bl = texture2D(u_wind, vc + vec2(0, px.y)).rg;
vec2 br = texture2D(u_wind, vc + px).rg;
return mix(mix(tl, tr, f.x), mix(bl, br, f.x), f.y);
}
- Taking the width and height of the generated image as the benchmark, the basic unit px is obtained;
- Under this new unit of measurement, round down to obtain the approximate position vc as the 1st reference point, then move by the single component px.x of the basic unit to get the 2nd reference point;
- Move by the single component px.y of the basic unit to get the 3rd reference point, and move by the full basic unit px to get the 4th reference point (a worked example with concrete numbers follows).
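A quick worked example with concrete numbers (the resolution and uv values are my own, just to make the steps visible):
const res = [360, 180];                                  // u_wind_res
const uv = [0.5013, 0.2547];                             // a particle position in [0, 1]
const px = [1 / res[0], 1 / res[1]];                     // size of one texel in uv space
const vc = [Math.floor(uv[0] * res[0]) / res[0],         // snap down to the texel corner
            Math.floor(uv[1] * res[1]) / res[1]];
const f = [(uv[0] * res[0]) % 1, (uv[1] * res[1]) % 1];  // position inside that texel
console.log(vc); // [0.5, 0.25]    -> the tl sample; tr, bl, br are vc shifted by px.x, px.y and px
console.log(f);  // [~0.47, ~0.85] -> the weights later used by mix() along x and y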
In a two-dimensional map, how are polar and equatorial particles different?
As in the original text:
Near the poles, the particles should move much faster along the X-axis than at the equator, because the same longitude represents a much smaller distance.
The corresponding processing logic:
float distortion = cos(radians(pos.y * 180.0 - 90.0));
vec2 offset = vec2(velocity.x / distortion, -velocity.y) * 0.0001 * u_speed_factor;
The radians function converts degrees to radians, and pos.y * 180.0 - 90.0 is presumably the rule for converting the wind texture coordinate into a latitude in degrees. The cosine approaches 0 towards the poles, so dividing velocity.x by distortion makes the first component of offset larger there, which looks like faster movement along the X axis. The minus sign added to the second component is presumably to stay consistent with the image texture, which is flipped on the Y axis by default.
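A quick check of the factor at a few latitudes (my own numbers):
const distortion = y => Math.cos((y * 180 - 90) * Math.PI / 180); // y is pos.y in [0, 1]
console.log(distortion(0.5).toFixed(3));  // "1.000" at the equator: the x offset is unchanged
console.log(distortion(0.75).toFixed(3)); // "0.707" at 45° latitude: the x offset grows by ~1.4x
console.log(distortion(0.95).toFixed(3)); // "0.156" at 81° latitude: the x offset grows by ~6.4x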
draw
The drawing part changes quite a bit:
draw() {
// code omitted
this.drawScreen();
this.updateParticles();
}
drawScreen() {
const gl = this.gl;
// draw the screen into a temporary framebuffer to retain it as the background on the next frame
util.bindFramebuffer(gl, this.framebuffer, this.screenTexture);
gl.viewport(0, 0, gl.canvas.width, gl.canvas.height);
this.drawTexture(this.backgroundTexture, this.fadeOpacity);
this.drawParticles();
util.bindFramebuffer(gl, null);
// enable blending to support drawing on top of an existing background (e.g. a map)
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
this.drawTexture(this.screenTexture, 1.0);
gl.disable(gl.BLEND);
// save the current screen as the background for the next frame
const temp = this.backgroundTexture;
this.backgroundTexture = this.screenTexture;
this.screenTexture = temp;
}
drawTexture(texture, opacity) {
const gl = this.gl;
const program = this.screenProgram;
gl.useProgram(program.program);
// code omitted
gl.drawArrays(gl.TRIANGLES, 0, 6);
}
drawParticles() {
const gl = this.gl;
const program = this.drawProgram;
gl.useProgram(program.program);
// code omitted
gl.drawArrays(gl.POINTS, 0, this._numParticles);
}
updateParticles() {
const gl = this.gl;
util.bindFramebuffer(gl, this.framebuffer, this.particleStateTexture1);
gl.viewport(
0,
0,
this.particleStateResolution,
this.particleStateResolution
);
const program = this.updateProgram;
gl.useProgram(program.program);
// code omitted
gl.drawArrays(gl.TRIANGLES, 0, 6);
// swap the particle state textures so the new one becomes the current one
const temp = this.particleStateTexture0;
this.particleStateTexture0 = this.particleStateTexture1;
this.particleStateTexture1 = temp;
}
- First switch to the framebuffer with screenTexture as the target texture; note that nothing drawn from here on is visible yet. Draw the whole background texture backgroundTexture, then all the individual particles based on the texture particleStateTexture0, and then unbind the framebuffer. The result of this pass is stored in the texture screenTexture.
- Switch to the default color buffer; note that what is drawn from here on is visible. Turn on alpha blending; with the two parameters set by blendFunc, new fragments are blended over whatever is already there according to their alpha. Then the whole texture screenTexture is drawn, which means the framebuffer's drawing result is displayed on the canvas.
- After the drawing is completed, swap the two via a temporary variable, so the texture backgroundTexture now holds the content just presented and becomes the background of the next frame.
- Then switch to the framebuffer again to update the particle state, with particleStateTexture1 as the target texture; again, nothing drawn here is visible. The offset state is generated based on the texture particleStateTexture0, and the whole drawing result is stored in the texture particleStateTexture1.
- After the drawing is completed, swap the two via a temporary variable, so the texture particleStateTexture0 now holds the moved state and serves as the basis for presenting the particles in the next frame. Drawing frame after frame like this looks like a dynamic effect.
Confusion
It feels like that is how it works, but there are still some things I don't understand.
Why is the offset calculated using the method in lookup_wind?
The original text explains that it finds a smooth interpolation, but what is the mathematical principle behind it? Why does it need to mix again after the first two mixes? I haven't been able to find a better explanation.
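For reference, expanding the nested mix calls shows that this is the standard bilinear interpolation formula, where each of the four texels is weighted by how close uv lies to it (a small check of my own):
// mix(a, b, t) = (1 - t) * a + t * b, so
// mix(mix(tl, tr, fx), mix(bl, br, fx), fy)
//   = (1 - fx) * (1 - fy) * tl + fx * (1 - fy) * tr
//   + (1 - fx) * fy       * bl + fx * fy       * br
// the inner mixes blend along x, the outer mix blends along y
const mix = (a, b, t) => (1 - t) * a + t * b;
const [tl, tr, bl, br] = [1, 2, 3, 4];
const [fx, fy] = [0.468, 0.846];
console.log(mix(mix(tl, tr, fx), mix(bl, br, fx), fy));                                           // ≈ 3.16
console.log((1 - fx) * (1 - fy) * tl + fx * (1 - fy) * tr + (1 - fx) * fy * bl + fx * fy * br);   // ≈ 3.16, same value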