As mentioned above, the part of the "Your Character Dominant Color" activity that interests me most is the cloud fly-through effect built with Three.js. According to the author, the position of each cloud is random, and the effect is very good. The picture shows the version I implemented.

Online Demo

First, let's talk about the basic idea behind the fly-through-the-clouds effect:

  1. Place a bunch of 64*64 plane geometries evenly spaced along the Z axis, giving each plane random X and Y coordinates (much like the stacked potato chips in the picture below)
  2. Merge all of these planes into one large geometry
  3. Create a mesh from the merged geometry and a textured material (the clouds), and add the mesh to the scene
  4. For the motion, move the camera slowly along the Z axis from far away, which produces the effect of flying through the clouds

First of all, the official documentation provides a create a scene guide; reading it first will make the following content easier to understand.

What follows is an introduction to the basic concepts in Three.js, based only on my beginner's understanding. If you know of a good document or talk on the subject, please point me to it.

Scenes

A scene is the space that holds the content we want to render. In the simplest case, a mesh is added to the scene and then rendered.

// Initialize the scene
var scene = new THREE.Scene();

// ... other code ...
// Add the object (mesh) to the scene
scene.add(mesh);
// Render the scene
renderer.render(scene, camera);

Here are the coordinate rules in the scene: the origin is at the center of the canvas plane, the Z axis is perpendicular to the X and Y axes, and its positive direction points toward us. In the figure below I rotated the line along the Z axis, otherwise it would not be visible:

Code:

const scene = new THREE.Scene();

var camera = new THREE.PerspectiveCamera(70, window.innerWidth / window.innerHeight, 1, 1000);
camera.position.set(0, 0, 100);

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

// Line 1, red, from the origin to 40 on the X axis
const points = [];
points.push(new THREE.Vector3(0, 0, 0));
points.push(new THREE.Vector3(40, 0, 0));
const geometry1 = new THREE.BufferGeometry().setFromPoints(points);
var material1 = new THREE.LineBasicMaterial({ color: 'red' });
var line1 = new THREE.Line(geometry1, material1);

// Line 2, blue, from the origin to 40 on the Y axis
points.length = 0;
points.push(new THREE.Vector3(0, 0, 0));
points.push(new THREE.Vector3(0, 40, 0));
const geometry2 = new THREE.BufferGeometry().setFromPoints(points);
var material2 = new THREE.LineBasicMaterial({ color: 'blue' });
var line2 = new THREE.Line(geometry2, material2);

// Line 3, green, from the origin to 40 on the Z axis
points.length = 0;
points.push(new THREE.Vector3(0, 0, 0));
points.push(new THREE.Vector3(0, 0, 40));
const geometry3 = new THREE.BufferGeometry().setFromPoints(points);
var material3 = new THREE.LineBasicMaterial({ color: 'green' });
var line3 = new THREE.Line(geometry3, material3);
// Rotate it a little, otherwise the line along the Z axis would be invisible
line3.rotateX(Math.PI / 8);
line3.rotateY(-Math.PI / 8);

scene.add(line1, line2, line3);

renderer.render(scene, camera);
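Incidentally, three.js also ships a built-in helper that draws the three axes, which could replace the hand-built line segments above. A minimal sketch; note that its default colors are X red, Y green, Z blue, which differ from the colors I used:

// AxesHelper draws the X, Y and Z axes from the origin with the given length
const axesHelper = new THREE.AxesHelper(40);
scene.add(axesHelper);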

Camera

For the objects in the scene to be visible, they need a camera to "see" them. From the coordinate-system figure above, it is clear that the same object will look different depending on the camera's viewing angle. The most commonly used camera, and the one used here, is the perspective camera; flying it through the clouds produces an outstanding effect.

// Initialize the camera; the arguments are: field of view, aspect ratio, near clipping plane, far clipping plane
camera = new THREE.PerspectiveCamera(70, pageWidth / pageHeight, 1, 1000);

// Finally, render the scene together with the camera, and we can see the objects in it
renderer.render(scene, camera);

Material

The material is easy to understand: a material combined with a geometry produces a mesh, which is what actually gets rendered. In the getting-started example, a MeshBasicMaterial is used to give the cube its color.
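For reference, a minimal sketch of that simple case (the box size and color here are illustrative, not from the activity):

// A solid-color material applied to a cube
const boxGeometry = new THREE.BoxGeometry(10, 10, 10);
const basicMaterial = new THREE.MeshBasicMaterial({ color: 'skyblue' });
const cube = new THREE.Mesh(boxGeometry, basicMaterial);
scene.add(cube);

The cloud effect, however, uses a more complex shader material with a texture map: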

// Textured shader material
const material = new THREE.ShaderMaterial({
  // These uniform values are passed to the shaders
  uniforms: {
    map: {
      type: 't',
      value: texture
    },
    fogColor: {
      type: 'c',
      value: fog.color
    },
    fogNear: {
      type: 'f',
      value: fog.near
    },
    fogFar: {
      type: 'f',
      value: fog.far
    }
  },
  vertexShader: vShader,
  fragmentShader: fShader,
  transparent: true
});
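The texture and fog referenced by these uniforms are created elsewhere. A minimal sketch of how they might be set up; the image path and the fog parameters are my assumptions, not the activity's actual values:

// Assumptions: the texture path and fog parameters are illustrative
const texture = new THREE.TextureLoader().load('./textures/cloud.png');
const fog = new THREE.Fog(0x4584b4, -100, 3000);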

Geometries and meshes

Three.js provides a number of built-in shapes, the various Geometry classes, and their base class is BufferGeometry.

Geometries can be merged. Here, the same plane geometry is cloned many times, each clone is moved to its own position, and merging them all produces the effect of one large cloud.

At first I thought geometries and meshes were the same concept, but later I learned that a material plus a geometry produces a mesh, and it is the mesh that gets placed into the scene.

// Create a mesh from the merged geometry and the material above
mesh = new THREE.Mesh(mergedGeometry, material);
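The mesh is then added to the scene, just like in the earlier example:

scene.add(mesh);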

Rendering

Rendering the scene with the camera into the target element produces a canvas. For a static scene, a single render is enough; for an animated scene, the native requestAnimationFrame function is needed:

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}

The code above is a render loop. On a normal screen it runs at 60 Hz, and on a high-refresh-rate screen the frequency goes up accordingly, giving users a smooth experience without us having to drive it ourselves with setInterval. When the user switches to another tab, the loop pauses, so it does not waste processor resources or drain the battery.
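The render loop is also where the fly-through motion from step 4 of the overview happens: the camera's Z position is advanced a little on every frame. A minimal sketch, assuming the clouds span roughly z = 0 to 8000 and the camera flies toward -Z (the default viewing direction); the speed and range are illustrative, not the activity's actual values:

const startTime = Date.now();

function animate() {
  requestAnimationFrame(animate);
  // Fly from z = 8000 down toward z = 0, then wrap around and repeat
  camera.position.z = 8000 - ((Date.now() - startTime) * 0.03) % 8000;
  renderer.render(scene, camera);
}

animate();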

Demystification process

The process is actually very interesting and tortuous.

I pulled down the front-end code of the "Your Character Dominant Color" activity, but the code related to the cloud effect has been minified and is impossible to read.

What to do? I went looking for an official three.js example. After searching for a long time, I only found one like this:

Later, after a lot of searching, I finally found the fly-through-the-clouds effect on the three.js forum; it was written long ago by the author of the three.js examples.

After getting the source code of the cloud effect and comparing the two, I suspect that imyzf also learned from this example.

However, the three.js in that source code is rather dated: it uses version 55, while the latest is version 131. The gap between the versions is significant, and some of the classes and APIs used there no longer exist. Here are the parts that differ:

THREE.Geometry

The first difference: this class no longer exists in the latest version. It was used to merge many plane shapes into a single geometry. Look at the code below: the version-55 code first creates an empty Geometry, then creates a plane mesh, and on every iteration adjusts the mesh's coordinates and merges it into the Geometry (I don't fully understand this part: how can a geometry be merged with a mesh, and with the same mesh every time? My guess is that a new geometry is generated from the mesh on each merge).

// Initialize an empty base geometry
geometry = new THREE.Geometry();
// Create a 64x64 plane mesh
var plane = new THREE.Mesh(new THREE.PlaneGeometry(64, 64));

for (var i = 0; i < 8000; i++) {
  // Adjust the plane's position, rotation, scale, etc.
  plane.position.x = Math.random() * 1000 - 500;
  plane.position.y = -Math.random() * Math.random() * 200 - 15;
  plane.position.z = i;
  plane.rotation.z = Math.random() * Math.PI;
  plane.scale.x = plane.scale.y = Math.random() * Math.random() * 1.5 + 0.5;
  // Merge the plane into the base geometry
  THREE.GeometryUtils.merge(geometry, plane);
}

After checking the latest documentation, I found that BufferGeometry, the base class of all geometries, provides a clone method, so the plane geometry can simply be cloned.

// A single plane geometry
const geometry = new THREE.PlaneGeometry(64, 64);
const geometries = [];

for (var i = 0; i < CloudCount; i++) {
  const instanceGeometry = geometry.clone();

  // Translate each cloned cloud by random amounts so that the result is a pile of clouds; the pile looks different on every render
  // The X offset is balanced out later by adjusting the camera position
  // The Y offset is always negative, to keep the clouds in the lower part of the scene
  // The Z offset is simply: the cloud's index * the Z length each cloud occupies
  instanceGeometry.translate(Math.random() * RandomPositionX, -Math.random() * RandomPositionY, i * perCloudZ);

  geometries.push(instanceGeometry);
}

// Merge all of these geometries into one
const mergedGeometry = BufferGeometryUtils.mergeBufferGeometries(geometries);
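The constants CloudCount, RandomPositionX, RandomPositionY and perCloudZ above come from my implementation (BufferGeometryUtils is imported from three/examples/jsm/utils/BufferGeometryUtils.js, not from the three core). For reference, values roughly matching the old version-55 loop would look like this (illustrative, not the activity's actual numbers):

// Illustrative values only, roughly mirroring the old version-55 loop above
const CloudCount = 8000;       // number of cloned planes
const perCloudZ = 1;           // Z length each cloud occupies
const RandomPositionX = 1000;  // range of the random X offset
const RandomPositionY = 200;   // range of the random Y offset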

GeometryUtils.merge

The old code relies on this API, and it is an important one: its purpose is to merge the plane meshes into one geometry to build the cloud. It is no longer available in the latest version of three.js.

// Merge every plane mesh into one base geometry
THREE.GeometryUtils.merge(geometry, plane);

By checking the latest documentation, I found that a whole group of geometries can now be merged in one call. I personally think this is better than the old approach, and much clearer semantically: the old code repeatedly merges the same plane into a base geometry, while the new code combines the group of planes into one new geometry.

// Merge all of these geometries into one
const mergedGeometry = BufferGeometryUtils.mergeBufferGeometries(geometries);

Shader

I have not modified the shader logic at all. The shaders are written in GLSL (OpenGL Shading Language), and in the original the shader code sits inside <script> tags, which does not fit how our project is engineered.

// The original version
<script id="vs" type="x-shader/x-vertex">
  varying vec2 vUv;
  void main()
  {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
  }
</script>

<script id="fs" type="x-shader/x-fragment">
   uniform sampler2D map;
   uniform vec3 fogColor;
   uniform float fogNear;
   uniform float fogFar;
   varying vec2 vUv;
   void main()
   {
       float depth = gl_FragCoord.z / gl_FragCoord.w;
       float fogFactor = smoothstep( fogNear, fogFar, depth );
       gl_FragColor = texture2D(map, vUv );
       gl_FragColor.w *= pow( gl_FragCoord.z, 20.0 );
       gl_FragColor = mix( gl_FragColor, vec4( fogColor, gl_FragColor.w ), fogFactor );
  }
</script>

Later, after checking a few sources, I found that we can simply use strings instead:

const vShader = `
    varying vec2 vUv;
    void main()
    {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
  `;
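In the same way, the fragment shader above becomes a string:

const fShader = `
    uniform sampler2D map;
    uniform vec3 fogColor;
    uniform float fogNear;
    uniform float fogFar;
    varying vec2 vUv;
    void main()
    {
        float depth = gl_FragCoord.z / gl_FragCoord.w;
        float fogFactor = smoothstep( fogNear, fogFar, depth );
        gl_FragColor = texture2D( map, vUv );
        gl_FragColor.w *= pow( gl_FragCoord.z, 20.0 );
        gl_FragColor = mix( gl_FragColor, vec4( fogColor, gl_FragColor.w ), fogFactor );
    }
  `;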

I don't really understand the vertex and fragment shader code, so I just copied it verbatim.

Source code

Finally, here is the source code. Interested readers can take a look; stars and suggestions are welcome.

