
In visual development, whether on a two-dimensional canvas or in three dimensions, drawing lines is extremely common: migration maps between cities, motion trajectories, and so on. In both 3D and 2D, every object is ultimately built from points: two points make a line, three points make a surface. So what actually happens behind the scenes when you draw a simple line in ThreeJS? This article unpacks it step by step.

The birth of a line

In ThreeJS, an object is composed of a geometry (Geometry) and a material (Material), and how the object is displayed (as points, lines, or surfaces) depends on how it is rendered (ThreeJS provides a different object constructor for each).

Looking through the ThreeJS API, these are the parts related to lines:

In short, ThreeJS provides two line materials, LineBasicMaterial and LineDashedMaterial, which mainly control the color, width, and so on of the line; the geometry mainly controls the positions of the line's vertices. The basic geometry class BufferGeometry is usually used to build line geometry, and a number of curve generation helpers are also provided to help produce line geometry.

Straight lines

The API provides three line-related objects: Line, LineLoop, and LineSegments.

Line

First use Line to create the simplest line:

// Create the material
const material = new THREE.LineBasicMaterial({ color: 0xff0000 });
// Create an empty geometry
const geometry = new THREE.BufferGeometry();
const points = [];
points.push(new THREE.Vector3(20, 20, 0));
points.push(new THREE.Vector3(20, -20, 0));
points.push(new THREE.Vector3(-20, -20, 0));
points.push(new THREE.Vector3(-20, 20, 0));
// Bind the vertices to the empty geometry
geometry.setFromPoints(points);

const line = new THREE.Line(geometry, material);
scene.add(line);


LineLoop

LineLoop draws a series of points as one continuous line and is almost identical to Line; the only difference is that after all the points are connected, the last point is connected back to the first. In real projects this kind of line is used to outline an area, for example tracing a region on a map. Create an object with LineLoop:

// Create the material
const material = new THREE.LineBasicMaterial({ color: 0xff0000 });
// Create an empty geometry
const geometry = new THREE.BufferGeometry();
const points = [];
points.push(new THREE.Vector3(20, 20, 0));
points.push(new THREE.Vector3(20, -20, 0));
points.push(new THREE.Vector3(-20, -20, 0));
points.push(new THREE.Vector3(-20, 20, 0));
// Bind the vertices to the empty geometry
geometry.setFromPoints(points);

const line = new THREE.LineLoop(geometry, material);
scene.add(line);

The same four points, when created with LineLoop, form a closed area.

LineSegments

LineSegments connects pairs of points into separate lines. It automatically groups the points we pass in two by two and connects each pair. In real projects this kind of line is mainly used to draw many separate lines, for example lines that share the same starting point but have different end points, such as the commonly seen gene maps. Create an object with LineSegments:

// Create the material
const material = new THREE.LineBasicMaterial({ color: 0xff0000 });
// Create an empty geometry
const geometry = new THREE.BufferGeometry();
const points = [];
points.push(new THREE.Vector3(20, 20, 0));
points.push(new THREE.Vector3(20, -20, 0));
points.push(new THREE.Vector3(-20, -20, 0));
points.push(new THREE.Vector3(-20, 20, 0));
// Bind the vertices to the empty geometry
geometry.setFromPoints(points);

const line = new THREE.LineSegments(geometry, material);
scene.add(line);

The difference

The difference between the three line objects above lies in how WebGL renders them under the hood. Assume there are five points p1/p2/p3/p4/p5:

  • Line uses gl.LINE_STRIP, drawing a straight line from each vertex to the next; the final connection is p1->p2->p3->p4->p5
  • LineLoop uses gl.LINE_LOOP, drawing a line from each vertex to the next and then closing the loop from the last vertex back to the first; the final connection is p1->p2->p3->p4->p5->p1
  • LineSegments uses gl.LINES, drawing a separate line between each pair of vertices; the final connection is p1->p2 and p3->p4 (the unpaired p5 is not drawn)

If you are only drawing a single segment between two points, there is no difference between the three approaches; the rendered result is the same.
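To see the difference directly, here is a minimal sketch (assuming the same scene setup as the examples above) that renders the same five points with all three constructors, offset so the results don't overlap:

// Five sample points p1..p5
const points = [
  new THREE.Vector3(0, 0, 0),
  new THREE.Vector3(10, 20, 0),
  new THREE.Vector3(20, 0, 0),
  new THREE.Vector3(30, 20, 0),
  new THREE.Vector3(40, 0, 0),
];
const material = new THREE.LineBasicMaterial({ color: 0xff0000 });

// gl.LINE_STRIP: p1->p2->p3->p4->p5
const strip = new THREE.Line(new THREE.BufferGeometry().setFromPoints(points), material);
// gl.LINE_LOOP: p1->p2->p3->p4->p5->p1
const loop = new THREE.LineLoop(new THREE.BufferGeometry().setFromPoints(points), material);
// gl.LINES: p1->p2 and p3->p4 (the unpaired p5 is not drawn)
const segments = new THREE.LineSegments(new THREE.BufferGeometry().setFromPoints(points), material);

// Offset them vertically so all three are visible at once
loop.position.y = 30;
segments.position.y = 60;
scene.add(strip, loop, segments);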

Dashed lines

In addition to LineBasicMaterial, ThreeJS also provides the LineDashedMaterial material for drawing dashed lines:

// Dashed-line material
const material = new THREE.LineDashedMaterial({
  color: 0xff0000,
  scale: 1,
  dashSize: 3,
  gapSize: 1,
});

const points = [];
points.push(new THREE.Vector3(10, 10, 0));
points.push(new THREE.Vector3(10, -10, 0));
points.push(new THREE.Vector3(-10, -10, 0));
points.push(new THREE.Vector3(-10, 10, 0));
const geometry = new THREE.BufferGeometry().setFromPoints(points);
const line = new THREE.Line(geometry, material);
// Compute the array of distance values required by LineDashedMaterial
line.computeLineDistances();
scene.add(line);

<img src="https://img.alicdn.com/imgextra/i4/O1CN010B12zS1TwlulbyP9Y_!!6000000002447-2-tps-908-574.png" style="zoom:50%;" />

Note that drawing dashed lines requires computing line distances first, otherwise the dashed effect will not appear. line.computeLineDistances() calculates, for each vertex in the geometry, the accumulated length from the start of the line to that vertex.
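Conceptually, the value computed per vertex is the accumulated distance from the first vertex, which the dashed material then uses to decide where dashes and gaps fall. A simplified sketch of that calculation (not the library's actual implementation, which stores the result in a per-vertex lineDistance attribute):

// Accumulated length from the first vertex to each vertex
function cumulativeDistances(points) {
  const distances = [0];
  for (let i = 1; i < points.length; i++) {
    distances[i] = distances[i - 1] + points[i].distanceTo(points[i - 1]);
  }
  return distances;
}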

Cooler lines

Add some width

LineBasicMaterial provides linewidth for setting the line width, linejoin for the shape of the joints between adjacent segments, and linecap for the shape of the line ends, but after setting them you will find they have no effect. The ThreeJS documentation explains why:

Due to limitations of the underlying OpenGL/WebGL renderer, the line width is effectively fixed at 1 on most platforms regardless of the value set, so the width cannot really be changed; and since the width is fixed, the joint and end-cap shapes are meaningless as well, which is why all three options are ineffective.

ThreeJS does officially provide a demo that can set the line width. It uses the material LineMaterial, the geometry LineGeometry and the object Line2, all imported from the examples folder.

import { Line2 } from './jsm/lines/Line2.js';
import { LineMaterial } from './jsm/lines/LineMaterial.js';
import { LineGeometry } from './jsm/lines/LineGeometry.js';

const geometry = new LineGeometry();
geometry.setPositions( positions );

const matLine = new LineMaterial({
  color: 0xffffff,
  linewidth: 5, // in world units with size attenuation, pixels otherwise
  //resolution:  // to be set by renderer, eventually
  dashed: false,
  alphaToCoverage: true,
});

const line = new Line2(geometry, matLine);
line.computeLineDistances();
line.scale.set(1, 1, 1);
scene.add( line );

function animate() {
  renderer.render(scene, camera);
  // renderer will set this eventually
  matLine.resolution.set(window.innerWidth, window.innerHeight); // resolution of the viewport
  requestAnimationFrame(animate);
}

Note that inside the render loop the material's resolution needs to be updated, otherwise the width will not take effect. Line2 has no documentation page, so its parameters have to be worked out by reading the source code.
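In practice it is usually enough to update the resolution whenever the canvas size actually changes rather than on every frame, for example in a resize handler (a sketch assuming the renderer and matLine from the code above):

// Keep the material resolution in sync with the canvas size
window.addEventListener('resize', () => {
  renderer.setSize(window.innerWidth, window.innerHeight);
  matLine.resolution.set(window.innerWidth, window.innerHeight);
});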

Add some color

In the basic demos the color of the whole line is set uniformly through the material's color, so how do we achieve a gradient effect?

In the material settings, the vertexColors parameter controls where the color comes from. If it is set to true, the color is taken from the per-vertex colors and interpolated between vertices, which produces a smooth, continuous color transition.

// Create the material
const material = new THREE.LineMaterial({
  linewidth: 2,
  vertexColors: true,
  resolution: new THREE.Vector2(800, 600),
});

// Create an empty geometry
const geometry = new THREE.LineGeometry();
geometry.setPositions([
  10, 10, 0, 10, -10, 0, -10, -10, 0, -10, 10, 0
]);
// Set the vertex colors
geometry.setColors([
  1, 0, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0
]);

const line = new THREE.Line2(geometry, material);
line.computeLineDistances();
scene.add(line);

The code above creates four points and sets their vertex colors to red (1,0,0), green (0,1,0), blue (0,0,1) and yellow (1,1,0). The resulting render looks like this:

This example only sets the colors of four vertices. If we interpolate colors over smaller intervals, i.e. more vertices with more finely graded colors, we get a much smoother gradient.
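For example, a denser gradient can be produced by interpolating the vertex colors ourselves before handing them to setColors (a sketch; the endpoint colors and segment count are arbitrary, and LineGeometry is used the same way as above):

// Build a dense red-to-blue gradient with many interpolated vertices
const from = new THREE.Color(0xff0000);
const to = new THREE.Color(0x0000ff);
const segments = 50;
const positions = [];
const colors = [];

for (let i = 0; i <= segments; i++) {
  const t = i / segments;
  positions.push(-25 + 50 * t, 0, 0); // vertices along the x axis
  const c = from.clone().lerp(to, t); // interpolated color at t
  colors.push(c.r, c.g, c.b);
}

const geometry = new THREE.LineGeometry();
geometry.setPositions(positions);
geometry.setColors(colors);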

Add some shape

Connecting two points specifies a line. If the distance between points is very small and the points are dense enough, connecting them produces all kinds of curves.

ThreeJS provides a variety of curve generation functions, mainly divided into two-dimensional curves and three-dimensional curves:

<img src="https://img.alicdn.com/imgextra/i3/O1CN01zjHrBJ1cn00O1kmjD_!!6000000003644-2-tps-476-524.png" style="zoom:50%;" />

  • ArcCurve and EllipseCurve draw circles and ellipses respectively; EllipseCurve is the base class of ArcCurve;
  • LineCurve and LineCurve3 draw two-dimensional and three-dimensional straight lines respectively (mathematically a straight line is also a curve); each is defined by a start point and an end point;
  • QuadraticBezierCurve, QuadraticBezierCurve3, CubicBezierCurve and CubicBezierCurve3 are the two-dimensional and three-dimensional quadratic and cubic Bezier curves;
  • SplineCurve and CatmullRomCurve3 are 2D and 3D splines respectively, using the Catmull-Rom algorithm to create a smooth spline from a series of points.

The difference between Bezier curves and Catmull-Rom curves is that a Catmull-Rom curve passes smoothly through all of its points and is generally used for drawing trajectories, whereas a Bezier curve uses its intermediate points as control points that shape the tangents, so the curve does not pass through them.

  • Bezier curve


  • CatmullRom Curve

These constructors generate curves from their parameters. The Curve base class provides a getPoints method to sample points on the curve; its parameter is the number of segments to divide the curve into. The more segments, the denser the sampling, the more points and the smoother the final line. Finally the sampled points are assigned to a geometry. Taking a Bezier curve as an example:

// Create the geometry
const geometry = new THREE.BufferGeometry();
// Create the curve
const curve = new THREE.CubicBezierCurve3(
  new THREE.Vector3(-10, -20, -10),
  new THREE.Vector3(-10, 40, -10),
  new THREE.Vector3(10, 40, 10),
  new THREE.Vector3(10, -20, 10)
);
// getPoints samples points from the curve
const points = curve.getPoints(100);
// Assign the sampled points to the geometry
geometry.setFromPoints(points);
// Create the material
const material = new THREE.LineBasicMaterial({ color: 0xff0000 });
const line = new THREE.Line(geometry, material);
scene.add(line);

<img src="https://img.alicdn.com/imgextra/i3/O1CN01mLGaXQ1WeOsF7cHVJ_!!6000000002813-2-tps-852-859.png" style="zoom:50%;" />

We can also implement a custom curve by extending the Curve base class and overriding its getPoint method, which returns a point (as a vector) for a given position t along the curve.

For example, to implement a curve of a sine function:

class CustomSinCurve extends THREE.Curve {
    constructor( scale = 1 ) {
        super();
        this.scale = scale;
    }

    getPoint( t, optionalTarget = new THREE.Vector3() ) {
        const tx = t * 3 - 1.5;
        const ty = Math.sin( 2 * Math.PI * t );
        const tz = 0;

        return optionalTarget.set( tx, ty, tz ).multiplyScalar( this.scale );
    }
}

Add some stretch

No matter how we vary the line, it still has no thickness; even the three-dimensional curves above are just infinitely thin paths. If we want to simulate something like a pipe, which has a diameter, a plain line cannot meet the requirement, so we need another kind of geometry to achieve the pipe effect.

ThreeJS wraps up many ready-made geometries for us, including TubeGeometry, a tube geometry that extrudes a pipe along a 3D curve. Its constructor:

class TubeGeometry(path : Curve, tubularSegments : Integer, radius : Float, radialSegments : Integer, closed : Boolean)

path is the curve that describes the shape of the tube. Here we reuse the sine curve CustomSinCurve created earlier to generate the path and extrude it with TubeGeometry:

const tubeGeometry = new THREE.TubeGeometry(new CustomSinCurve(10), 20, 2, 8, false);
const tubeMaterial = new THREE.MeshStandardMaterial({ color: 0x156289, emissive: 0x072534, side: THREE.DoubleSide });
const tube = new THREE.Mesh(tubeGeometry, tubeMaterial);
scene.add(tube)

Add some animation

At this point our line has width, color and shape, so it's time to make it move! Movement essentially means changing some property of the object on every rendered frame to create a continuous effect, which gives us two approaches for animating a line: animate its geometry, or animate its material.

Flowing lines

For material animation the most common technique is texture flow: set the texture's repeat property and keep shifting the texture's offset so that the texture appears to flow.

A flat line cannot really show a flowing texture; the effect only makes sense on the extruded three-dimensional tube. So we reuse the tube created above and add a texture to its material:

// Create the texture
const imgUrl = 'xxx'; // image URL
const texture = new THREE.TextureLoader().load(imgUrl);
texture.wrapS = THREE.RepeatWrapping;
texture.wrapT = THREE.RepeatWrapping;
// Control the texture repeat
texture.repeat.x = 10;
texture.repeat.y = 1;
// Apply the texture to the material
const tubeMaterial = new THREE.MeshStandardMaterial({
  color: 0x156289,
  emissive: 0x156289,
  map: texture,
  side: THREE.DoubleSide,
});
const tube = new THREE.Mesh(tubeGeometry, tubeMaterial);
scene.add(tube);

function renderLoop() {
  const delta = clock.getDelta();
  renderer.render(scene, camera);
  // Update the texture offset inside the render loop
  if (texture) {
    texture.offset.x -= 0.01;
  }
  requestAnimationFrame(renderLoop);
}

demo

Growing lines

The idea behind a growing line is simple: first calculate and define the full series of points, i.e. the final shape of the line; then create a line containing only the first two points; then insert the remaining points into the line one by one, updating it each time. The result is the effect of a line growing.

Updates to BufferGeometry

Before that, let's take another look at geometries in ThreeJS. Models in ThreeJS can be divided into points (Points), lines (Line) and meshes (Mesh). Objects created with the Points model are made up of individual points, each with its own position. Objects created with the Line model are continuous lines, which can be understood as connecting all the points in order. Objects created with the Mesh model are made up of small triangles, each determined by three points. Whichever model it is, they all have one thing in common: they are built from points, and every point has x, y and z coordinates.

Geometries such as BoxGeometry and SphereGeometry wrap up the handling of these points for us; we only need to supply a length, width and height, or a radius, and they create a default geometry. BufferGeometry, by contrast, lets us manipulate the point data ourselves: we can set each point's position (position), color (color), normal vector (normal), and so on.

In contrast to the old Geometry class, BufferGeometry stores its information (vertex positions, face indices, normals, colors, uvs and any custom attributes) in buffers, i.e. typed arrays. This generally makes it faster than the standard Geometry, at the cost of being harder to use.

The most important point when updating a BufferGeometry is that the size of a buffer cannot be resized; doing so is expensive and effectively the same as creating a new geometry. The contents of a buffer, however, can be updated. So if you expect an attribute of a BufferGeometry to grow, such as the number of vertices, you must pre-allocate a buffer large enough to hold all the vertices that may ever be created. This also means a BufferGeometry has a fixed maximum size; there is no way to create one that grows efficiently without bound.

For a growing line, then, the real problem is extending the line's vertices while rendering. For example, we first allocate a buffer that can hold 500 vertices for the position attribute of the BufferGeometry, draw only 2 of them at first, and then control how much of the buffer is drawn through the geometry's setDrawRange method.

const MAX_POINTS = 500;
// Create the geometry
const geometry = new THREE.BufferGeometry();

// Set up the geometry's position attribute
const positions = new Float32Array( MAX_POINTS * 3 ); // each vertex needs 3 values (x, y, z)
geometry.setAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );

// Restrict the draw range
const drawCount = 2; // draw only the first two points
geometry.setDrawRange( 0, drawCount );

// Create the material
const material = new THREE.LineBasicMaterial( { color: 0xff0000 } );

// Create the line
const line = new THREE.Line( geometry, material );
scene.add(line);

Then randomly add vertices to the line:

const positions = line.geometry.attributes.position.array;

let x, y, z, index;
x = y = z = index = 0;

for ( let i = 0; i < MAX_POINTS; i ++ ) {
    positions[ index ++ ] = x;
    positions[ index ++ ] = y;
    positions[ index ++ ] = z;

    x += ( Math.random() - 0.5 ) * 30;
    y += ( Math.random() - 0.5 ) * 30;
    z += ( Math.random() - 0.5 ) * 30;

}

If you want to change the number of points rendered after the first render, do the following:

line.geometry.setDrawRange(0, newValue);

If you want to change the position value after the first render, you need to set the needsUpdate flag:

line.geometry.attributes.position.needsUpdate = true; // must be set after the first render
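Putting the pieces together, the growth effect can be driven from the render loop by enlarging the draw range a little each frame (a sketch built on the MAX_POINTS, line and renderer variables above):

let drawCount = 2;

function animateGrow() {
  // reveal one more vertex per frame until the whole buffer is drawn
  drawCount = Math.min(drawCount + 1, MAX_POINTS);
  line.geometry.setDrawRange(0, drawCount);
  // needed because the position buffer was filled after the first render
  line.geometry.attributes.position.needsUpdate = true;

  renderer.render(scene, camera);
  requestAnimationFrame(animateGrow);
}
animateGrow();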

demo

Line drawing

In editors for 3D construction scenes we often need to draw connections between objects, for example drawing pipelines in industrial scenes or shelves in modeling scenes. The process can be abstracted as: click two points on the screen to generate a straight line. In a 2D scene this does not sound hard, but how do we achieve it in a 3D scene?

The first thing to solve is updating the line's vertices: one mouse click fixes one vertex, the next click fixes the next. The second is clicking and interaction in a 3D scene: how do we determine a 3D position from a click on the 2D screen, and how do we make sure the point the user clicks is the point they think they are clicking?

Updates to LineGeometry

When drawing ordinary lines the geometry is BufferGeometry, and we covered how to update it in the previous section. But for lines with width we used the material LineMaterial, the geometry LineGeometry and the object Line2, so how should LineGeometry be updated?

Looking at the source code, you can see that LineGeometry's underlying rendering does not compute positions directly from a positions attribute; instead it uses the instanceStart and instanceEnd attributes. LineGeometry provides the setPositions method to operate on these BufferAttributes for us, so we don't have to manage them ourselves; calling setPositions is enough to update the line's vertices.

class LineSegmentsGeometry {
  // ...
  setPositions( array ) {
    let lineSegments;
    if ( array instanceof Float32Array ) {
      lineSegments = array;
    } else if ( Array.isArray( array ) ) {
      lineSegments = new Float32Array( array );
    }
    const instanceBuffer = new InstancedInterleavedBuffer( lineSegments, 6, 1 ); // xyz, xyz
    this.setAttribute( 'instanceStart', new InterleavedBufferAttribute( instanceBuffer, 3, 0 ) ); // xyz
    this.setAttribute( 'instanceEnd', new InterleavedBufferAttribute( instanceBuffer, 3, 3 ) ); // xyz

    this.computeBoundingBox();
    this.computeBoundingSphere();
    return this;
  }
}

So while drawing we only need to call setPositions to update the line's vertices. As before, we must decide in advance the maximum number of vertices the drawn line can hold and then control the rendered range; the idea is the same as above.

const MaxCount = 10;
const positions = new Float32Array(MaxCount * 3);
const points = [];
let count = 0; // number of vertices added so far

const material = new THREE.LineMaterial({
  linewidth: 2,
  color: 0xffffff,
  resolution: new THREE.Vector2(800, 600)
});
const geometry = new THREE.LineGeometry();
geometry.setPositions(positions);
geometry.instanceCount = 0;
const line = new THREE.Line2(geometry, material);
line.computeLineDistances();
scene.add(line);

// Update the line when the mouse moves or clicks
function updateLine() {
  positions[count * 3 - 3] = mouse.x;
  positions[count * 3 - 2] = mouse.y;
  positions[count * 3 - 1] = mouse.z;
  geometry.setPositions(positions);
  geometry.instanceCount = count - 1;
}

Click interaction

How do we implement click interaction in a 3D scene? The screen the mouse lives on is a two-dimensional world, while the screen displays a three-dimensional world. First, let's clarify the relationship between three coordinate systems: the world (scene) coordinate system, the screen coordinate system, and the viewpoint (camera) coordinate system.

  • Scene coordinate system (world coordinate system)

    The scene built by ThreeJS has a fixed coordinate system (no matter where the camera is), and every object placed in it uses this coordinate system to determine its position; the origin is at (0,0,0). For example, we can create a scene and add arrow helpers to visualize it.

  • Screen coordinate system

    Coordinates on the display are in the screen coordinate system. As shown in the figure below, the maximum values of clientX and clientY are determined by window.innerWidth and window.innerHeight.

  • Viewpoint coordinate system

    The viewpoint coordinate system takes the camera's center as its origin, and the camera's position is itself an offset within the world coordinate system. WebGL first transforms world coordinates into viewpoint coordinates and then clips them; only what lies inside the view volume goes on to the next stage of computation.
    The camera helper is added as shown below.

To get the coordinate of a mouse click, we have to convert from the screen coordinate system to ThreeJS's scene coordinate system. One approach is geometric intersection: cast a ray from the click position along the viewing direction and decide whether an object is picked by testing whether the ray intersects the 3D model. ThreeJS has a built-in Raycaster class that gives us such a ray; we can cast rays in different directions and judge whether an object is hit by whether the ray is blocked. Let's see how to use Raycaster to highlight the object the mouse clicks on.

const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();
renderer.domElement.addEventListener("mousedown", (event) => {
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(mouse, camera);
  const intersects = raycaster.intersectObjects(cubes, true);
  if (intersects.length > 0) {
    const obj = intersects[0].object;
    obj.material.color.set("#ff0000");
    obj.material.needsUpdate = true;
  }
});

We instantiate a Raycaster object and a 2D vector, mouse, that records the mouse position. When the mousedown event fires on the listened DOM node, the event callback gives us the mouse position on that DOM element (event.clientX, event.clientY). We then convert the screen coordinates into the normalized position used by the scene coordinate system; the correspondence is shown in the figure below.

The origin of the screen coordinate system is the top-left corner with the Y axis pointing down, while the origin of the normalized 3D coordinate system is the center of the screen with the Y axis pointing up and values normalized to [-1, 1]. So to convert the mouse x position into that coordinate system:

1. Move the origin to the center of the screen: x - 0.5 * canvasWidth
2. Normalize: (x - 0.5 * canvasWidth) / (0.5 * canvasWidth)
   which simplifies to (event.clientX / window.innerWidth) * 2 - 1

The y-axis calculation is the same, except the sign is also flipped.
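Wrapped up as a small helper (a sketch; the name toNDC is ours, and it works relative to the canvas rather than the whole window), the conversion looks like this:

// Convert a mouse event's client coordinates to normalized device coordinates (-1..1)
function toNDC(event, domElement) {
  const rect = domElement.getBoundingClientRect();
  return new THREE.Vector2(
    ((event.clientX - rect.left) / rect.width) * 2 - 1,
    -((event.clientY - rect.top) / rect.height) * 2 + 1 // the y axis is flipped
  );
}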

We then call the raycaster's setFromCamera method to get a ray that starts from the mouse point and goes in the camera's viewing direction, and call intersectObjects to find where the ray intersects objects.

class Raycaster {
  // ...
  intersectObjects(objects: Object3D[], recursive?: boolean, optionalTarget?: Intersection[]): Intersection[];
}

The first parameter, objects, is the group of objects to test against the ray. The second parameter, recursive, controls whether descendants are checked; by default only the objects at the current level are tested, so if you need to check all descendants you must explicitly set it to true.

  • Interaction limits in line drawing

In the line drawing scene, two clicks determine a straight line. But when viewing a 3D world on a 2D screen, the 3D coordinate the user perceives is not necessarily the actual 3D coordinate. If the drawing interaction needs to be precise, we have to make sure the point clicked by the mouse is the 3D coordinate the user expects, which means adding some constraints.

Since a point's position can be determined precisely within a two-dimensional plane, what if we restrict the ray-picking range to a fixed plane? That is, first fix the plane, then fix the point on it, and allow switching planes before drawing the next point. By limiting the picking range we guarantee that the clicked point is the 3D coordinate the user intends.

For simplicity we create three basic picking planes, XY/XZ/YZ. The picking plane is fixed while a point is being drawn, and auxiliary grid lines are created to help users see which plane they are drawing in.

const planeMaterial = new THREE.MeshBasicMaterial();
const planeGeometry = new THREE.PlaneGeometry(100, 100);
// XY plane, i.e. drawing along the Z direction
const planeXY = new THREE.Mesh(planeGeometry, planeMaterial);
planeXY.visible = false;
planeXY.name = "planeXY";
planeXY.rotation.set(0, 0, 0);
scene.add(planeXY);
// XZ plane, i.e. drawing along the Y direction
const planeXZ = new THREE.Mesh(planeGeometry, planeMaterial);
planeXZ.visible = false;
planeXZ.name = "planeXZ";
planeXZ.rotation.set(-Math.PI / 2, 0, 0);
scene.add(planeXZ);
// YZ plane, i.e. drawing along the X direction
const planeYZ = new THREE.Mesh(planeGeometry, planeMaterial);
planeYZ.visible = false;
planeYZ.name = "planeYZ";
planeYZ.rotation.set(0, Math.PI / 2, 0);
scene.add(planeYZ);

// Helper grid
const grid = new THREE.GridHelper(10, 10);
scene.add(grid);

// Initial setup
mode = "XZ";
grid.rotation.set(0, 0, 0);
activePlane = planeXZ; // the current picking plane

  • Update position when mouse is moved

When the mouse moves, the ray is used to get the intersection of the mouse position with the picking plane, which becomes the next vertex position of the line:

function handleMouseMove(event) {
  if (drawEnabled) {
    const { clientX, clientY } = event;
    const rect = container.getBoundingClientRect();
    mouse.x = ((clientX - rect.left) / rect.width) * 2 - 1;
    mouse.y = -(((clientY - rect.top) / rect.height) * 2) + 1;

    raycaster.setFromCamera(mouse, camera);
    // Compute where the ray intersects the current picking plane
    const intersects = raycaster.intersectObjects([activePlane], true);

    if (intersects.length > 0) {
      const intersect = intersects[0];

      const { x: x0, y: y0, z: z0 } = lastPoint;
      const x = Math.round(intersect.point.x);
      const y = Math.round(intersect.point.y);
      const z = Math.round(intersect.point.z);
      const newPoint = new THREE.Vector3();

      if (mode === "XY") {
        newPoint.set(x, y, z0);
      } else if (mode === "YZ") {
        newPoint.set(x0, y, z);
      } else if (mode === "XZ") {
        newPoint.set(x, y0, z);
      }
      mouse.copy(newPoint);
      updateLine();
    }
  }
}

  • Add point when the mouse is clicked

When the mouse is clicked, the current point is formally added to the line and recorded as the last vertex, and the positions of the picking plane and the helper grid are updated at the same time.

function handleMouseClick() {
  if (drawEnabled) {
    const { x, y, z } = mouse;
    positions[count * 3 + 0] = x;
    positions[count * 3 + 1] = y;
    positions[count * 3 + 2] = z;
    count += 1;
    grid.position.set(x, y, z);
    activePlane.position.set(x, y, z);
    lastPoint = mouse.clone();
  }
}

  • Keyboard mode switching

For convenience we listen for keyboard events to control the mode: X/Y/Z switch between the different picking planes, and D/S toggle line drawing off and on.

function handleKeydown(event) {
  // handle keys even while drawing is disabled, otherwise "s" could never re-enable it
  switch (event.key) {
    case "d":
      drawEnabled = false;
      break;
    case "s":
      drawEnabled = true;
      break;
    case "x":
      mode = "YZ";
      grid.rotation.set(-Math.PI / 2, 0, 0);
      activePlane = planeYZ;
      break;
    case "y":
      mode = "XZ";
      grid.rotation.set(0, 0, 0);
      activePlane = planeXZ;
      break;
    case "z":
      mode = "XY";
      grid.rotation.set(0, 0, Math.PI / 2);
      activePlane = planeXY;
      break;
    default:
  }
}

The final effect

Demo

With a bit more work the interaction could be refined further, the line material's properties could be made editable after the line is created, and there are plenty of other tricks to play.

Summary

Lines have always been an interesting topic in graphics drawing, with many technical threads to pull on: from the basic line-drawing modes in OpenGL, to adding width, color and other effects to a line, to implementing an interactive line-drawing feature in an editor scene. If you have any questions about this summary of lines in ThreeJS, you are welcome to discuss them!

Author: ES2049 | Timeless

The article can be reproduced at will, but please keep the original link .
You are very welcome to join ES2049 Studio with passion. Please send your resume to caijun.hcj@alibaba-inc.com .

