
Introduction

While trying to visualize mathematical functions, I came across an interesting library, Field Play, and made a partial translation of the instructions in its README as a preliminary introduction.

What?

Let's assign the vector (1, 0) to every point on a grid. This means that at every point we have an arrow pointing to the right:

[Figure 2-1]

Suppose these vectors represent velocities. What if we drop a thousand particles onto this grid? How would they move?

[Figure 2-2]

When we assign a vector to every point in space, we create a mathematical structure called a vector field.

Let's create a more interesting vector field:

  • points whose y coordinate is an even number get the vector (1, 0);
  • points whose y coordinate is odd get the opposite vector (-1, 0).

[Figure 2-3]

Let's drop a few thousand particles again and see what happens:

[Figure 2-4]

The above can be expressed in a formula:

v.x = -2.0 * mod(floor(y), 2.0) + 1.0; // +1 on even rows, -1 on odd rows
v.y = 0.0;                             // no vertical movement

mod(floor(y), 2.0) is the remainder of floor(y) divided by 2, which is either 0 or 1. The expression -2.0 * r + 1.0 then maps a remainder of 0 to 1 and a remainder of 1 to -1, so the final vector is (1, 0) or (-1, 0).

So far we've only used one component of the velocity vector, v.x, so the particles move only horizontally. Let's try setting both components and see what happens:

v.x = -2.0 * mod(floor(y), 2.0) + 1.0; // horizontal direction flips on odd rows
v.y = -2.0 * mod(floor(x), 2.0) + 1.0; // vertical direction flips on odd columns

[Figures 2-5, 2-6]

Wow! With two simple operations, the final animation looks like a work of art!

[Figure 2-7]

It turns out that vector fields are a very flexible generative framework.

How does this project work?

This project was inspired by Vladimir Agafonkin's article: How I built a wind map with WebGL . Vladimir demonstrates how to render up to 1 million particles at 60 frames per second entirely on the GPU.

I used almost the same technique with some modifications:

  1. Vector fields are defined in GLSL shader code, so mathematical formulas can be expressed freely.
  2. Particle positions are computed on the GPU with the fourth-order Runge-Kutta method (see the sketch after this list).
  3. Each dimension, X and Y, is computed independently, so we can store positions more accurately.
  4. Pan/zoom functionality is added with the panzoom library.
  5. Vector field definitions are stored in the URL with the query-state library, so you can easily bookmark/share your vector fields.
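
As a rough illustration of step 2, here is a minimal GLSL sketch of a single fourth-order Runge-Kutta step. This is not the project's actual code: get_velocity() stands for whatever function samples the vector field, and dt is the integration time step.

vec2 rk4(vec2 point, float dt) {
  vec2 k1 = get_velocity(point);                 // slope at the start
  vec2 k2 = get_velocity(point + k1 * dt * 0.5); // slope at the midpoint, using k1
  vec2 k3 = get_velocity(point + k2 * dt * 0.5); // slope at the midpoint, using k2
  vec2 k4 = get_velocity(point + k3 * dt);       // slope at the end
  // weighted average of the four slope estimates
  return point + dt * (k1 + 2.0 * (k2 + k3) + k4) / 6.0;
}

Each particle's next position is then simply rk4(currentPosition, dt).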

Float packing

The core idea of WebGL-based computing is very simple.

GPUs can render images very fast. An image is a collection of pixels, and each pixel is just a number representing a color, usually stored in 32 bits (RGBA format).

But who said the 32 bits of a pixel have to represent a color? Why can't we compute some number and store it in those 32 bits? That number could be, for example, the position of a particle moving along some velocity vector...

If we do this, the GPU will still see these numbers as colors:

[Figure 2-8]

Fortunately, we don't have to show users these seemingly random images. WebGL allows content to be rendered to a "virtual" screen called a frame buffer.

These virtual screens are just images (textures) in video memory. With two textures, we can take advantage of the GPU to solve math problems. On each frame, the algorithm works as follows (a shader-side sketch follows the list):

  1. tell the GPU to read data from the "background" texture;
  2. tell the GPU to write data to the "screen" texture through a framebuffer;
  3. swap "background" and "screen".
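
To make this concrete, here is a minimal sketch of the fragment shader side of this ping-pong scheme; the uniform names are hypothetical, not the project's actual code.

precision highp float;
uniform sampler2D u_particles; // the "background" texture holding the current state
uniform vec2 u_resolution;     // texture size in pixels

void main() {
  // this pixel's coordinates in the texture, normalized to [0, 1]
  vec2 uv = gl_FragCoord.xy / u_resolution;
  // read the previous state; the GPU treats it as a color
  vec4 state = texture2D(u_particles, uv);
  // ... decode the particle position, move it along the field, encode it back ...
  // the result lands in the "screen" texture via the bound framebuffer
  gl_FragColor = state;
}

The host program then binds the framebuffer to the other texture and swaps the two roles on every frame.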

In theory, this should just work. In practice, there is a problem: by default, WebGL does not allow floating point numbers to be written to textures. So we need to encode a floating point number into RGBA format, 8 bits per channel.

In Vladimir's article, the following encoding/decoding scheme is used:

 // decode particle position (x, y) from pixel RGBA color
vec2 pos = vec2(
    color.r / 255.0 + color.b,
    color.g / 255.0 + color.a);
... // move the position
// encode the position back into RGBA
gl_FragColor = vec4(
    fract(pos * 255.0),
    floor(pos * 255.0) / 255.0);

Here, each of the particle's X and Y coordinates is stored in 16 bits (two 8-bit channels), so the whole position fits into a 32-bit pixel. I used this method from the beginning, and it worked fine on desktop and on Android phones.

However, when I opened the website on my iPhone, an unpleasant surprise awaited me: serious rendering artifacts appeared for no apparent reason.

Compare the same code running on desktop (left) and iPhone (right):

[Figures 2-9, 2-10]

To make matters worse, particles on the iPhone kept moving even when the vector field was static (zero velocity everywhere):

[Figures 2-11, 2-12]

I checked that the requested floating point precision was set to the highest available (highp). Yet the artifacts persisted.

How can we fix this?

I didn't want to use the simplest workaround of enabling floating point textures; they are not as widely supported as I would have liked. Instead, I did what years of non-GPU programming had taught me not to do.

I decided to solve each ODE not once per frame, but once per dimension. I pass the shader a uniform telling it which dimension should be written to the output of the current draw call:

 if (u_out_coordinate == 0) gl_FragColor = encodeFloatRGBA(pos.x);
else if (u_out_coordinate == 1) gl_FragColor = encodeFloatRGBA(pos.y);

In pseudocode it looks like this:

 Frame 1:
  Step 1: Hey, WebGL, set u_out_coordinate to 0 and render everything into `texture_x`;
  Step 2: Hey, WebGL, set u_out_coordinate to 1 and render everything again, this time into `texture_y`;

We solve the same problem twice: in the first pass we keep only the x component of the solution and throw everything else away, then we repeat for y.

This seemed crazy to me, and I expected it to hurt performance. But even my low-end Android phone had no problems with this approach.

encodeFloatRGBA() packs a floating point number into an RGBA vector, using all 32 bits. I found its implementation somewhere on Stack Overflow, and I'm not sure it's the best way to do this (if you know a better one, please let me know).
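
For reference, a commonly cited Stack Overflow variant of this packing looks roughly like the sketch below. It assumes the value lies in [0, 1) and may differ from the exact code used in the project.

// pack a float in [0, 1) into four 8-bit channels
vec4 encodeFloatRGBA(float v) {
  vec4 enc = vec4(1.0, 255.0, 65025.0, 16581375.0) * v;
  enc = fract(enc);
  // remove each channel's contribution from the coarser channel before it
  enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
  return enc;
}

// the matching decoder
float decodeFloatRGBA(vec4 rgba) {
  return dot(rgba, vec4(1.0, 1.0 / 255.0, 1.0 / 65025.0, 1.0 / 16581375.0));
}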

The good news is that the artifacts are gone:

[Figure 2-13]
