Point Cloud Registration

namespace: registration

The task of point cloud registration is very similar to visual odometry, except that the input of visual odometry is a pair of frames plus the camera intrinsics, while the input of point cloud registration is two point clouds without any other conditions such as camera parameters. It is therefore impossible to use 2D reprojection for point cloud registration. Generally speaking, the point clouds used for registration are much larger than a single frame: a cloud can be a model fused from dozens of frames, or even a model of two rooms.

For point cloud registration there are also two different approaches, analogous to sparse tracking and dense tracking in visual odometry: ICP, based on dense points, and Global Registration, based on 3D features.

Global Registration

Global Registration does not require an initial pose. The basic steps are:

- Extract 3D features
- Feature matching
- Outlier filtering
- Calculate the transformation matrix

The methods are declared as follows:

std::shared_ptr<RegistrationResult> RansacRegistration(
    const geometry::PointCloud &source_pcd,
    const geometry::PointCloud &target_pcd,
    const RANSACParameter &r_para = RANSACParameter());
std::shared_ptr<RegistrationResult> RansacRegistration(
    const geometry::PointCloud &source_feature_pcd,
    const geometry::PointCloud &target_feature_pcd,
    const FeatureSet &source_features,
    const FeatureSet &target_features,
    const RANSACParameter &r_para);

The difference between the two overloads is that the first registers the two point clouds directly, while the second registers pre-extracted feature points, which means the 3D feature extraction step is skipped. When building a system, caching the 3D features avoids repeated computation and improves efficiency.

Extracting 3D features is generally done with libraries such as PCL or Open3D, but those libraries are quite large, and OpenCV does not implement 3D feature extraction, so OnePiece implements this part itself. FPFH features can be computed with the following function:

std::tuple<geometry::PointCloud, FeatureSet> DownSampleAndExtractFeature(const geometry::PointCloud &pcd, 
    const RANSACParameter &r_para = RANSACParameter()); 

The input of the above function is a point cloud and the related parameters; it returns the positions of the extracted feature points together with their features (33-dimensional vectors). Then we can perform a knn search for the nearest feature to get the matching result. Outlier filtering is quite similar to the methods introduced in SparseTracking, so it is not repeated here. Finally, the transformation matrix and the inliers are computed by RANSAC.

ICP

ICP (iterative closest point) is a very classic algorithm with many variants. OnePiece implements two basic ICPs, one is point-to-point and the other is point-to-plane.

PointToPoint

PointToPoint optimizes the distance from point to point. The objective function is: \(\Vert T \cdot p_s-p_t\Vert^2\). The closest point is found through OpenCV's KD-tree, then the best transformation \(T\) can be obtained directly by geometry::EstimateRigidTransformation (it could also be optimized by computing the Jacobian matrix). Then the nearest points are found again according to the new \(T\), and the process is iterated until convergence.

PointToPlane

PointToPlane optimizes the distance from point to plane, so it needs the normal vectors of the target point cloud. The equation of the plane with normal vector \(n_t\) passing through \(p_t\) is $$ n_t^T (p - p_t) = 0, $$

and \(n_t^T (p - p_t)\) is the signed distance from a point \(p\) to the plane. So the objective function is: \(\Vert n_t^T\cdot(T\cdot p_s - p_t)\Vert^2\). For the linearization of this cost function, see related_paper/lowk_point-to-plane_icp_techrep.pdf. The new \(T\) is obtained by computing the Jacobian matrix and optimizing, then the closest points are found again according to the new \(T\). Iterate this process until convergence.
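For reference, the small-angle linearization in that report can be summarized as follows: writing \(T\) as a rotation \(R\) and translation \(t\), and approximating \(R \approx I + [\omega]_\times\) for a small rotation vector \(\omega\), the per-correspondence residual becomes linear in \((\omega, t)\):

```latex
% point-to-plane residual for one correspondence (p_s, p_t, n_t)
r = n_t^T \left( R\, p_s + t - p_t \right),
\qquad R \approx I + [\omega]_\times
% using n_t^T [\omega]_\times\, p_s = (p_s \times n_t)^T \omega :
r \approx (p_s \times n_t)^T \omega + n_t^T\, t + n_t^T (p_s - p_t)
```

Stacking one such row per correspondence gives a linear least-squares problem in the six unknowns \((\omega, t)\), solvable in closed form at each ICP iteration.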

Both experimental and theoretical analysis show that PointToPlane has a faster convergence rate than PointToPoint. The methods are defined as follows:

std::shared_ptr<RegistrationResult> PointToPoint(const geometry::PointCloud &source, const geometry::PointCloud &target, 
    const geometry::TransformationMatrix &init_T = geometry::TransformationMatrix::Identity(), const ICPParameter &icp_para = ICPParameter() );
std::shared_ptr<RegistrationResult> PointToPlane(const geometry::PointCloud &source, const geometry::PointCloud &target,
    const geometry::TransformationMatrix &init_T = geometry::TransformationMatrix::Identity(), const ICPParameter &icp_para = ICPParameter() );

icp_para holds the parameters of ICP; using the default values is a good choice. RegistrationResult is defined as follows:

class RegistrationResult
{
    public:
    //relative transformation
    geometry::TransformationMatrix T;
    //the id of inliers
    geometry::FMatchSet correspondence_set_index;
    //inliers
    geometry::PointCorrespondenceSet correspondence_set;
    //rmse
    double rmse;
};

In both cases, it is left to the user to determine whether the registration is successful.