
This article continues from the previous article [G2O and Multi-View Point Cloud Global Registration Optimization].
As we know from the previous article, we now have the registered global point cloud, the transformation matrix of each point cloud fragment, and the RGB image corresponding to each point cloud. The natural next step is to reconstruct a 3D mesh from the global point cloud and texture the reconstructed mesh.
Given: a 3D model (triangle mesh, any format), multi-view RGB images (textures), and transformation matrices. Wanted: a texture map applied to the 3D model.
Readers familiar with the field will recognize that this texturing problem is exactly the final stage of the multi-view stereo pipeline: surface reconstruction and texture generation.
Continuing from the previous article, the first step toward this goal is to reconstruct a triangular mesh from the point cloud. This has nothing to do with cameras or texture images; you can directly use the reconstruction interfaces in PCL, or software such as CGAL or even MeshLab, all of which reconstruct a triangular mesh directly from the 3D point cloud.
Before reconstructing the triangular mesh there is a key step that is very important, though it can be omitted as appropriate: multi-view 3D point cloud fusion! Let's continue with the 8 point cloud fragments I captured earlier as the running example. (Note that "8 point cloud fragments" is deliberate wording; they are not point clouds from 8 viewpoints, which will be explained later.)

Multi-view point cloud fusion

After stitching multiple point cloud fragments, overlap between point clouds from different viewpoints is unavoidable. As shown in the figure below, the point density in overlapping areas is generally higher than in non-overlapping areas. The purpose of point cloud fusion is, first, to de-duplicate the point cloud and reduce the data volume, and second, to further smooth the point cloud, improve its accuracy, and provide high-quality data for subsequent computations (such as measurement).

[Figure: overlap between registered point cloud fragments]

Coincidentally, I have some familiarity with the moving least squares (MLS) algorithm; years ago I did some small tests with it in PCL ([PCL's MLS smoothing] and [PCL's MLS algorithm computes and unifies normal vectors]). My understanding of moving least squares is this: for the input data, a target point and its neighborhood are selected, and this local data forms the compact support domain of the target point. Within the compact support domain, a function is fitted around the target point; the basic rule is that the other points in the support domain are weighted differently according to their distance from the target point, by a so-called compactly supported weighting function. Compact support domain + fitting function + weighting function make up the basic concepts of the moving least squares algorithm, and the "moving" is reflected in the support domain "sliding" across the data space until all the data is covered. Ordinary least squares optimizes globally; moving least squares, thanks to this "mobility" (compact support), retains the global optimization while also optimizing locally. Furthermore, for a 3D point cloud, an isosurface can be extracted and combined with the marching cubes (MC) algorithm to reconstruct a surface triangular mesh.
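To make this concrete, here is the standard MLS local fit written out (the generic textbook form, not anything PCL-specific): at each target point $\bar{x}$, a low-order polynomial $f$ is fitted over the neighbors $x_i$ inside the support radius by minimizing the weighted error

$$\min_{f}\;\sum_{i}\theta\!\left(\lVert x_i-\bar{x}\rVert\right)\,\lVert f(x_i)-y_i\rVert^{2},$$

where $\theta$ is the compactly supported weighting function (zero outside the support radius) and $y_i$ are the values being fitted; for point cloud smoothing, each point is then projected onto its fitted local surface. Sliding $\bar{x}$ across the data is exactly the "moving" part.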

There are many examples of the MLS algorithm in PCL, and my two earlier throwaway articles linked above also give a basic introduction, so I will not record too much here. The following code is used directly:

#include <pcl/point_types.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/mls.h>

// res is the registered global cloud from the previous article
// (a pcl::PointCloud<pcl::PointXYZRGB>::Ptr)
pcl::PointCloud<pcl::PointXYZRGB> mls_points; // output cloud
pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZRGB>);
pcl::MovingLeastSquares<pcl::PointXYZRGB, pcl::PointXYZRGB> mls;
mls.setComputeNormals(false);
mls.setInputCloud(res);
mls.setPolynomialOrder(2); // order of the MLS polynomial fit
mls.setSearchMethod(tree);
mls.setSearchRadius(3);

mls.setUpsamplingMethod(pcl::MovingLeastSquares<pcl::PointXYZRGB, pcl::PointXYZRGB>::UpsamplingMethod::VOXEL_GRID_DILATION);
mls.setDilationIterations(0); // number of iterations for the VOXEL_GRID_DILATION method

mls.process(mls_points);

Note that the above pipeline uses the pcl::MovingLeastSquares VOXEL_GRID_DILATION method. According to PCL's official explanation, this method can not only repair small point cloud holes but also locally optimize the point cloud coordinates, and its output is a point cloud with an approximately uniform global density. How strongly it fills holes and densifies the cloud is tuned mainly through the number of dilation iterations.

Feeding my own data through the above pipeline, the local effect of the point cloud fusion is as follows:

To the naked eye the point cloud is visibly more uniform and smooth, and at the same time the data volume drops from over 1,000,000 points to a bit over 100,000.

The above operations basically meet the requirements of the next step.

Further, since surface reconstruction is not the focus here, the Poisson reconstruction in MeshLab is applied directly to the above result, recorded as follows:

Multi-view texture mapping

Here, let me formally introduce my own understanding of the so-called "multi-view".

First, multi-view versus multi-angle. Usually these two concepts mean basically the same thing: shooting a 3D model from different azimuths. However, two different [modes of operation] are implied here. The first is the handheld 3D scanner, state estimation in SLAM, or even a self-driving car equipped with cameras/radar: in these scenarios the target does not move and the camera does, i.e., the camera moves around the target. The second is the turntable 3D scanner and the like: here the camera does not move and the target does, i.e., the target itself has motion attributes. So a further distinction is needed: multi-view refers to the camera's multiple viewpoints (yes, the camera has a real physical viewpoint, pose, shooting angle, and so on), while multi-angle refers to the different angles of the target object (yes, an object can be viewed from different angles).

Second, extrinsics. Extrinsic parameters are a very important concept (obviously, you say!), but do they really get enough attention from users? Not necessarily! The "extrinsics" we usually talk about actually have a subject, but we are too used to omitting it. Extrinsic parameters generally refer to camera extrinsics, which describe the transformation of the camera. In multi-view 3D point cloud registration, however, each point cloud has a transformation matrix, which can be called the point cloud extrinsics, describing the transformation of the point cloud. So far at least two kinds of extrinsics have appeared; the physical meanings they represent are completely different, yet they are inextricably linked. Since motion is relative, the point cloud extrinsics and the corresponding camera extrinsics are necessarily inverses of each other.

Note: The reason these concepts are so strictly distinguished here is, first, that I did not truly understand them at the beginning, especially the extrinsics, which caused errors in my later calculations; and second, that the libraries and frameworks used below make a strict distinction between them.
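To pin the last point down, a tiny sketch (my own illustration using Eigen; T stands for the per-fragment registration matrix from the previous article):

#include <Eigen/Dense>

// Illustration only: if fragment i was captured in the camera's frame and
// registered into the global frame by p_global = T * p_local, then the
// corresponding camera extrinsic -- the world-to-camera transform -- is
// simply the inverse of T.
Eigen::Matrix4d cameraExtrinsicFromCloudTransform(const Eigen::Matrix4d& T)
{
    // For a rigid transform the inverse has rotation R^T and translation
    // -R^T * t; note the camera center in world coordinates is then just t,
    // the translation part of the point cloud transform.
    return T.inverse();
}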

OpenMVS and texture mapping

Finally, we get to OpenMVS...

It is known that OpenMVS can do dense reconstruction (point cloud), mesh reconstruction (point cloud --> mesh), mesh refinement (cleanup, hole filling, etc.), and mesh texture mapping, each corresponding to one of the APPs in the source code.

Here, the only thing I need is OpenMVS's mesh texture mapping function! Browsing the uses of OpenMVS at home and abroad, especially in China, it is basically a "one-stop" service summarized as colmap/OpenMVG/VisualSFM... + OpenMVS, i.e., the foolproof executable-driven workflow (a search turns up plenty), and the posts basically "quote" (plagiarize) one another over and over. That obviously does not match my requirements and purpose. Rant over...

To use the texture mapping module, the input must be a .mvs format file. Eh... .mvs, I don't have one, so what now?!

Well, dive into the OpenMVS source code and fill the data interfaces OpenMVS requires with your own data. (The Windows 10 + VS2017 OpenMVS build is omitted here, mainly because I did not record it at the time; in any case, CMake projects are generally not too complicated.)

Scene

Reading the source code, I found that the class that must be filled is Scene. For the problem at hand, the main structure of Scene is:

class MVS_API Scene
{
public:
    PlatformArr platforms; // camera platforms, each containing the mounted cameras and all known poses
    ImageArr images; // texture images, each referencing a platform's camera pose
    PointCloud pointcloud; // point cloud (sparse or dense), each point recording the views that see it
    Mesh mesh; // mesh, represented as vertices and triangles, constructed from the input point cloud

    unsigned nCalibratedImages; // number of valid images

    unsigned nMaxThreads; // maximum number of threads used to distribute the work load

    ... // code omitted

    bool TextureMesh(unsigned nResolutionLevel, unsigned nMinResolution, float fOutlierThreshold=0.f, float fRatioDataSmoothness=0.3f, bool bGlobalSeamLeveling=true, bool bLocalSeamLeveling=true, unsigned nTextureSizeMultiple=0, unsigned nRectPackingHeuristic=3, Pixel8U colEmpty=Pixel8U(255,127,39));

    ... // code omitted

};

The main function I need is bool TextureMesh(). As can be seen, it has quite a few parameters (only the first two lack default values); their meanings will be discussed later.

Platforms

First look at platforms: it is an array of Platform. In OpenMVS, Platform is defined as follows:

class MVS_API Platform
{
...

public:
    String name; // platform's name
    CameraArr cameras; // cameras mounted on the platform
    PoseArr poses; // known poses of the platform (see Pose below)

...

};

For us, the two members that matter are the arrays cameras (CameraArr) and poses (PoseArr).

CameraArr is an array of CameraIntern. CameraIntern is the most basic camera parent class. When it comes to cameras, two matrices are unavoidable, intrinsics and extrinsics, and CameraIntern is no exception. The following three members need to be filled. K is the normalized 3x3 camera intrinsic matrix and can be filled from an Eigen matrix; the so-called normalization means dividing the intrinsic entries (focal lengths and principal point) by the larger of the texture image's width and height. R, as the name suggests, represents the camera rotation, and C represents the camera translation (the camera center); together, R and C constitute the camera extrinsics.

    KMatrix K; // the intrinsic camera parameters (3x3), normalized
    RMatrix R; // rotation (3x3) and
    CMatrix C; // translation (3,1), the extrinsic camera parameters
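A small sketch of how I understand the normalization (my own helper using Eigen; the choice of max(width, height) as the scale is an assumption that worked for me, so verify it against your OpenMVS version):

#include <algorithm>
#include <Eigen/Dense>

// Hypothetical helper: turn a pixel-unit intrinsic matrix into the
// normalized form expected here. fx, fy, cx, cy are divided by the larger
// image dimension; the bottom-right element stays 1.
Eigen::Matrix3d normalizeK(const Eigen::Matrix3d& K, int width, int height)
{
    const double scale = static_cast<double>(std::max(width, height));
    Eigen::Matrix3d Kn = Eigen::Matrix3d::Identity();
    Kn(0, 0) = K(0, 0) / scale; // fx
    Kn(1, 1) = K(1, 1) / scale; // fy
    Kn(0, 2) = K(0, 2) / scale; // cx
    Kn(1, 2) = K(1, 2) / scale; // cy
    return Kn;
}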

PoseArr is an array of Pose; Pose is a structure defined inside Platform:

struct Pose {
    RMatrix R; // platform's rotation matrix
    CMatrix C; // platform's translation vector in the global coordinate system
    #ifdef _USE_BOOST
    template <class Archive>
    void serialize(Archive& ar, const unsigned int /*version*/) {
        ar & R;
        ar & C;
    }
    #endif
};
typedef CLISTDEF0IDX(Pose,uint32_t) PoseArr;

As can be seen, Pose also contains two matrices named R and C, but their physical meaning differs from the R and C in CameraIntern. Simply put, R and C in CameraIntern represent the inherent, built-in attributes of the camera itself (its pose relative to the platform), while Pose represents the pose of the Platform (with its mounted cameras) in the world coordinate system. For each texture image (introduced below), these two sets of parameters together constitute the real extrinsics of the corresponding camera (explained later through the source code). [Question 1]
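For reference, this is how I read the composition in the source (a paraphrase of Platform::GetCamera in Platform.cpp; check it against the version you compiled):

// Paraphrased from OpenMVS Platform::GetCamera: the absolute camera pose is
// the camera's pose relative to the platform composed with the platform's
// pose in the world.
Camera GetCamera(uint32_t cameraID, uint32_t poseID) const {
    const CameraIntern& camera = cameras[cameraID];
    const Pose& pose = poses[poseID];
    Camera cam;
    cam.K = camera.K;                       // intrinsics pass through
    cam.R = camera.R * pose.R;              // compose the two rotations
    cam.C = pose.R.t() * camera.C + pose.C; // camera center in world coordinates
    return cam;
}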

Images

images is an ImageArr. As explained in the source code, each Image references a platform's camera pose. The Image class is as follows:

class MVS_API Image
{
public:
    uint32_t platformID; // ID of the associated platform
    uint32_t cameraID; // ID of the associated camera on the associated platform
    uint32_t poseID; // ID of the pose of the associated platform
    uint32_t ID; // global ID of the image
    String name; // image file name (relative path)
    Camera camera; // view's pose
    uint32_t width, height; // image size
    Image8U3 image; // image color pixels (already handled at load time)
    ViewScoreArr neighbors; // scored neighbor images
    float scale; // image scale relative to the original size
    float avgDepth;

    ...

};

As the comments I added indicate, the three members platformID, cameraID, and poseID associate each image with a Platform; for me they are the three most important parameters. Platform itself carries no ID attribute: platforms are indexed by insertion order, with IDs increasing from 0.

In the Image class, the camera member also deserves a mention. As noted in the source code, it represents the view's pose, i.e., which camera this picture corresponds to. At first glance it seems the camera member must be filled as well, but that is not the case; the reason is explained later. [Question 2]

Other members of this class, such as the image width, height, and name, are filled automatically when Image::LoadImage(img_path) is called. The neighbors member represents the adjacency between the image and the 3D points; it is optional for texture mapping and does not affect the texturing result.
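Putting the pieces together, this is roughly how I fill one view into the scene (my own sketch: the cList calls AddEmpty/Insert, the RMatrix::IDENTITY / CMatrix::ZERO constants, and the path are all assumptions based on my reading of the source, so verify them):

// Sketch: one platform carrying one camera, one pose, and one image.
// T is the Eigen 4x4 registration matrix of this fragment (local -> world).
MVS::Platform& platform = scene.platforms.AddEmpty();
platform.name = "platform0";

MVS::CameraIntern cam;
cam.K = KMatrix::IDENTITY;   // placeholder: put the normalized intrinsics here
cam.R = RMatrix::IDENTITY;   // camera rigidly mounted at the platform origin...
cam.C = CMatrix::ZERO;       // ...so its relative pose is the identity
platform.cameras.Insert(cam);

MVS::Platform::Pose pose;    // world pose = inverse of the cloud transform
const Eigen::Matrix4d Tinv = T.inverse();
for (int r = 0; r < 3; ++r)
    for (int c = 0; c < 3; ++c)
        pose.R(r, c) = Tinv(r, c);           // world-to-camera rotation
pose.C = CMatrix(T(0, 3), T(1, 3), T(2, 3)); // camera center = cloud translation
platform.poses.Insert(pose);

MVS::Image& image = scene.images.AddEmpty();
image.platformID = 0;        // indices follow insertion order, starting at 0
image.cameraID = 0;
image.poseID = 0;
image.ID = 0;
image.name = "view0.png";    // hypothetical texture image path
image.LoadImage(image.name); // fills width, height and the pixel data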

PointCloud

This member represents a point cloud, generally the result of OpenMVS's sparse or dense reconstruction. It imposes no constraint on the mesh, so it is simply left unfilled here.

Mesh

This is the triangular mesh storage structure in OpenMVS; all I need to know about it is that it has a Load function.
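Finally, a condensed end-to-end sketch of how everything above is meant to fit together (again my own assembly with hypothetical file names; the Scene constructor argument and the Save call are assumptions to check against your OpenMVS version):

MVS::Scene scene(0); // 0: let OpenMVS pick the thread count (assumption)

// 1. fill scene.platforms and scene.images as sketched above,
//    one platform pose + one image per texture view;
// 2. load the Poisson mesh reconstructed in MeshLab:
scene.mesh.Load("mesh_poisson.ply"); // hypothetical path

// 3. run the texture mapping; only the first two parameters lack defaults
//    (resolution level 0 = full size, 640 px minimum working resolution):
if (scene.TextureMesh(0, 640))
    scene.mesh.Save("mesh_textured.ply"); // textured mesh + texture atlas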

