
Introduction

I became interested in wind field visualization and found this article while searching for information about it.


95-1

Check out my WebGL-based wind simulation demo! Let's dive into how it works.

I'll be honest: in my last few years of working at Mapbox, I avoided direct OpenGL/WebGL programming like the plague. One reason: the OpenGL API and terminology terrify me. It always seemed so complicated, messy, ugly and verbose that I could never get into it. Whenever I hear terms like stencil masks, mipmaps, depth culling, blend functions or normal maps, I get a sense of unease.

This year, I finally decided to face my fears and build something meaningful with WebGL. A 2D wind simulation looked like the perfect opportunity: useful, visually stunning and challenging, yet it still felt achievable within my capabilities. I was surprised to find that it was far less scary than it looked!

CPU-based wind field visualization

There are many examples of wind field visualization online, but the most popular and influential is the famous earth.nullschool.net project by Cameron Beccario. It isn't open source itself, but it has an older open-source version on which most other implementations are based. A well-known open-source fork is Esri Wind JS. Popular weather services that use this technique include Windy and VentuSky.

95-2

Typically, browser implementations of this visualization rely on the Canvas 2D API, and the logic goes roughly like this:

  1. Generate a set of random particle positions on the screen and draw the particles.
  2. For each particle, query the wind data for the velocity at its current position, and move the particle accordingly.
  3. Reset a small fraction of the particles to random positions. This ensures that areas the wind blows away from never become completely empty.
  4. Gradually fade out the current screen and draw the newly positioned particles on top.

This approach comes with performance limitations:

  • The number of wind particles has to be kept low (e.g. the earth.nullschool demo uses ~5k).
  • Every update of the data or the view incurs a big delay (e.g. about 2 seconds in the earth demo), because the data processing is expensive and happens on the CPU.

Also, to integrate such a visualization into a WebGL-based interactive map (like Mapbox), you would have to upload the pixel contents of the canvas to the GPU on every frame, which would significantly hurt performance.

I've been looking for a way to reimplement the complete logic on the GPU side with WebGL, so that it's fast, capable of drawing millions of particles, and can be integrated into a Mapbox GL map without a big performance hit. Fortunately, I stumbled upon a great tutorial by Chris Wellons on particle physics in WebGL and realized that the same approach can be used for wind visualization.

OpenGL Basics

Confusing APIs and terminology make OpenGL graphics programming very hard to learn, but the underlying concepts are very simple. Here's a useful definition:

OpenGL provides a 2D API for efficiently drawing triangles.

So basically, what you do with GL is draw triangles. Apart from the awkward API, the difficulty comes from the various math and algorithms required to do that. GL can also draw points and basic lines (without smooth or round joins/caps), but those are rarely used.

95-3

OpenGL provides a special C-like language, GLSL, for writing programs that are executed directly by the GPU. Each program is divided into two parts, called shaders: the vertex shader and the fragment shader.

The vertex shader provides code for transforming coordinates. For example, multiplying the triangle coordinates by 2 makes our triangle appear twice as large. It runs once for every coordinate we pass to OpenGL when drawing. A basic example:

attribute vec2 coord;
void main() {
    gl_Position = vec4(2.0 * coord, 0, 1);
}

The fragment shader provides code for determining the color of each drawn pixel. You can do a lot of cool math in it, but in the end it comes down to something like "draw this pixel of the triangle green". Example:

void main() {
    gl_FragColor = vec4(0, 1, 0, 1);
}
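For context, wiring a vertex/fragment shader pair together from JavaScript goes through the standard WebGL calls. A minimal sketch (error handling kept to the bare minimum; this is the generic pattern, not the demo's exact helper):

```javascript
// Compile a vertex + fragment shader pair into a linked WebGL program.
function createProgram(gl, vertexSource, fragmentSource) {
  function compile(type, source) {
    const shader = gl.createShader(type);
    gl.shaderSource(shader, source);
    gl.compileShader(shader);
    if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
      throw new Error(gl.getShaderInfoLog(shader));
    }
    return shader;
  }
  const program = gl.createProgram();
  gl.attachShader(program, compile(gl.VERTEX_SHADER, vertexSource));
  gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fragmentSource));
  gl.linkProgram(program);
  if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
    throw new Error(gl.getProgramInfoLog(program));
  }
  return program;
}
```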

A cool thing you can do in both vertex shaders and fragment shaders is add an image (called a texture) as a parameter, and then look up the pixel color at any point in that image. We will rely heavily on this in wind field visualization.

Fragment shader code execution is massively parallel and hardware accelerated, so it is often many orders of magnitude faster than the equivalent computation on the CPU.

Get wind field data

The US National Weather Service publishes global weather data known as GFS (Global Forecast System) every 6 hours, with associated values (including wind velocity) on a latitude/longitude grid. The data is encoded in a special binary format called GRIB, which can be parsed into human-readable JSON with a dedicated set of tools.

I wrote a couple of small scripts that download and convert the wind data into a simple PNG image with the wind velocity encoded as RGB colors: each pixel stores the horizontal speed in red and the vertical speed in green. It looks like this:

95-4

You can download higher-resolution versions (2x and 4x), but a 360x180 grid is enough for low-zoom visualizations. PNG compression suits this kind of data well, and the image above is usually only around 80 KB.
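To use such an image, each 0-255 channel value has to be mapped back to a physical wind speed. A sketch of the decoding, assuming the PNG ships with metadata giving the min/max range of each component (the `meta` field names here are illustrative, not the actual file format):

```javascript
// Map a 0-255 channel value back into the [min, max] wind-speed range.
function decodeChannel(byte, min, max) {
  return min + (byte / 255) * (max - min);
}

// Decode one pixel of the wind PNG into a velocity vector.
// `meta` is assumed to carry the encoding ranges: { uMin, uMax, vMin, vMax }.
function decodeWindPixel(r, g, meta) {
  return {
    u: decodeChannel(r, meta.uMin, meta.uMax), // horizontal speed from red
    v: decodeChannel(g, meta.vMin, meta.vMax), // vertical speed from green
  };
}
```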

Moving particles on the GPU

Existing wind field visualizations store particle state in JavaScript arrays. How do we store and manipulate this state on the GPU side instead? A newer GL feature called compute shaders (in OpenGL ES 3.1, with an equivalent WebGL 2.0 proposal) lets you run shader code on arbitrary data without rendering anything. Unfortunately, cross-browser and mobile support for that spec is very limited, which leaves us with one practical option: textures.

OpenGL lets you draw not only to the screen, but also to a texture (through a concept called a framebuffer). So we can encode the particle positions as the RGBA colors of an image, load it onto the GPU, compute the new positions based on wind velocity in a fragment shader, re-encode them into RGBA colors, and draw the result into a new image.

To store the X and Y components with enough precision, we split each across two bytes: RG for X and BA for Y, giving each component a range of 65,536 distinct values.

95-5

A 500x500 image like this holds 250,000 particles, and we'll move each of those particles with a fragment shader. The resulting image looks like this:

95-6

Here's how positions are decoded and encoded from RGBA in the fragment shader:

// lookup particle pixel color
vec4 color = texture2D(u_particles, v_tex_pos);
// decode particle position (x, y) from pixel RGBA color
vec2 pos = vec2(
    color.r / 255.0 + color.b,
    color.g / 255.0 + color.a);
... // move the position
// encode the position back into RGBA
gl_FragColor = vec4(
    fract(pos * 255.0),
    floor(pos * 255.0) / 255.0);
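The same split-byte scheme can be checked outside GLSL. Here is a JavaScript mirror of the shader's fract/floor arithmetic (a sketch for verification; texture channels in GLSL are bytes normalized by 255, which is what the divisions reproduce):

```javascript
// Encode one position component (in [0, 1)) into two bytes:
// a fine (low) byte and a coarse (high) byte.
function encodeComponent(p) {
  const scaled = p * 255;
  const hi = Math.floor(scaled);              // coarse part, like floor(pos * 255.0)
  const lo = Math.round((scaled - hi) * 255); // fine part, like fract(pos * 255.0)
  return [lo, hi];
}

// Decode, mirroring `color.r / 255.0 + color.b` (each channel is byte / 255).
function decodeComponent(lo, hi) {
  return (lo / 255) / 255 + hi / 255;
}
```

The round trip loses at most about 1/65,025 of the range per component, which is invisible at screen resolution.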

On the next frame, we can treat this new image as the current state and draw the next state into the other image, and so on, swapping the two textures every frame. So with the help of two particle state textures, we can move all the wind simulation logic to the GPU.

This method is very fast: instead of updating 5,000 particles 60 times per second in the browser, we can suddenly handle a million.

One thing to keep in mind is that near the poles, particles should move much faster along the X axis than at the equator, because the same number of degrees of longitude represents a much smaller distance. The following shader code handles this:

float distortion = cos(radians(pos.y * 180.0 - 90.0));
// move the particle by (velocity.x / distortion, velocity.y)
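In JavaScript terms (pos.y runs from 0 at one pole to 1 at the other, mapping to latitudes from -90 to 90 degrees), the distortion factor looks like this:

```javascript
// Longitude distortion factor: 1 at the equator, approaching 0 at the poles.
// posY is the particle's Y position in [0, 1].
function distortion(posY) {
  const latitude = posY * 180 - 90; // degrees
  return Math.cos(latitude * Math.PI / 180);
}
```

Dividing the X velocity by this factor speeds particles up in longitude units as they approach the poles, keeping their apparent ground speed consistent.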

Drawing particles

As I mentioned earlier, in addition to triangles, we can also draw basic points. They're rarely used, but perfect for 1-pixel particles like these.

To draw each particle, we look up its pixel color in the particle state texture in the vertex shader to determine its position. Then, in the fragment shader, we determine the particle's color by looking up its current velocity in the wind texture and mapping it to a nice color gradient (I picked the colors from ColorBrewer2). At this point it looks like this:

95-7

It's a start, but it's hard to get a sense of the wind direction from particle motion alone. We need to add particle trajectories.

Drawing particle trajectories

The first way I tried to draw trajectories was to use WebGL's preserveDrawingBuffer option, which keeps the screen contents between frames so that we can repeatedly draw the particles on top of the previous frame as they move. However, this WebGL feature carries a big performance cost, and many articles recommend against using it.

Instead, similar to how the particle state textures work, we can draw the particles into a texture (which is in turn drawn to the screen), then use that texture as the background (slightly darkened) on the next frame, swapping the input and target textures every frame. Aside from better performance, an advantage of this approach is that it ports directly to native code, where there is no preserveDrawingBuffer equivalent.
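The darkening step is just exponential decay: if each frame multiplies the previous frame's brightness by a fade factor slightly below 1, a trail segment that is n frames old has brightness fade^n. A quick check of how fast trails vanish (the 0.96 fade factor in the test is an assumed value for illustration, not the demo's actual setting):

```javascript
// Brightness of a trail segment after n frames of fading.
function trailBrightness(fade, n) {
  return Math.pow(fade, n);
}

// Number of frames until brightness drops below a visibility threshold.
function framesUntilFaded(fade, threshold) {
  return Math.ceil(Math.log(threshold) / Math.log(fade));
}
```

This is the knob that controls trail length: a fade factor closer to 1 leaves longer trails, a smaller one makes them vanish quickly.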

Wind field interpolation lookup

95-8

On the latitude/longitude grid, the wind data has values only at specific points, such as the geographic points (50,30), (51,30), (50,31) and (51,31). How do we get an arbitrary intermediate value, such as (50.123, 30.744)?

OpenGL provides interpolation of its own when looking up texture colors, but it still results in a blocky, pixelated pattern. Here is an example of these artifacts in the wind texture when zoomed in:

95-9

Luckily, we can smooth out the artifacts by looking up the 4 adjacent pixels for every wind lookup and performing manual bilinear interpolation on them in the fragment shader. This is more expensive, but it fixes the artifacts and produces a smoother wind visualization. Here is the same area with this technique applied:

95-10
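The manual bilinear step is the textbook formula: interpolate horizontally between the two pairs of neighboring values, then vertically between the results. A sketch in JavaScript (the `grid` callback standing in for the texture lookup is an assumption for illustration):

```javascript
// Bilinear interpolation between the 4 grid values surrounding (x, y).
// `grid(i, j)` returns the wind value at integer grid point (i, j).
function bilinear(grid, x, y) {
  const x0 = Math.floor(x), y0 = Math.floor(y);
  const fx = x - x0, fy = y - y0; // fractional offsets within the cell
  const top = grid(x0, y0) * (1 - fx) + grid(x0 + 1, y0) * fx;
  const bottom = grid(x0, y0 + 1) * (1 - fx) + grid(x0 + 1, y0 + 1) * fx;
  return top * (1 - fy) + bottom * fy;
}
```

In the shader, the same computation runs on each of the red and green channels (the two velocity components).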

Pseudo-random generator on GPU

There is one more piece of tricky logic to implement on the GPU: randomly resetting particle positions. Without it, even a large number of wind particles degenerates into a few lines on the screen, because areas the wind blows away from empty out over time:

95-11

The problem is that shaders have no random number generator. How do we randomly decide whether a particle needs to be reset?

I found a solution on StackOverflow: a GLSL function for generating pseudo-random numbers that takes a pair of numbers as input:

float rand(const vec2 co) {
    float t = dot(vec2(12.9898, 78.233), co);
    return fract(sin(t) * (4375.85453 + t));
}
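A direct JavaScript port of the same hash behaves the same way, always producing a value in [0, 1). (GPU floats won't match JS doubles bit for bit; this port is just to illustrate the mechanics.)

```javascript
// JS port of the GLSL pseudo-random hash above.
function fract(x) {
  return x - Math.floor(x);
}

function rand(x, y) {
  // dot(vec2(12.9898, 78.233), co)
  const t = 12.9898 * x + 78.233 * y;
  return fract(Math.sin(t) * (4375.85453 + t));
}
```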

This magic function relies on how the result of sin varies with its argument. With it, we can do something like this:

if (rand(some_numbers) > 0.99)
    reset_particle_position();

The challenge here is choosing an input for each particle that is sufficiently "random", so that the resets are spread evenly across the screen and don't form weird patterns.

Using the current particle position as the seed isn't perfect, because the same position always generates the same random number, so some particles would always disappear in the same areas.

Using the particle's position in the state texture alone doesn't work either, because then the same particles would always be the ones to disappear.

What I ended up with depends on both the particle position and its position in the state texture, plus a random value computed on every frame and passed to the shader:

vec2 seed = (pos + v_tex_pos) * u_rand_seed;

But there is another small problem: areas where the particles move very fast look much denser than areas with little wind. We can balance this by increasing the reset rate for faster particles:

float dropRate = u_drop_rate + speed_t * u_drop_rate_bump;

Here speed_t is a relative speed value (from 0 to 1), and u_drop_rate and u_drop_rate_bump are parameters that can be tuned in the final visualization. Here is an example of how they affect the result:

95-12
95-13
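The speed-dependent reset rate is a one-liner; in JavaScript form (the parameter values in the test are plausible defaults for illustration, not the demo's actual settings):

```javascript
// Per-frame reset probability: a base rate plus a speed-dependent bump.
// speedT is the particle's relative speed in [0, 1].
function dropRate(speedT, baseRate, bump) {
  return baseRate + speedT * bump;
}
```

Fast particles are therefore reset more often, thinning out the dense fast-wind regions so the overall particle density stays visually uniform.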

What's next?

The result is a fully GPU-driven wind field visualization that can render a million particles at 60fps. Try playing with the sliders in the demo, and take a look at the final code (about 250 lines in total; I tried to make it as readable as possible).

The next step is to integrate it into a live map that can be explored interactively. I've made some progress on this, but not enough to share a live demo yet. Here is a sneak peek:

95-14

Thanks for reading, and stay tuned for more updates!

Many thanks to my Mapbox teammates kkaefer and ansis for patiently answering all my silly questions about graphics programming, giving me a lot of valuable tips and helping me learn a lot. ❤️
