Abstract
Simultaneous Localization and Mapping (SLAM) is one of the key problems mobile robots must solve to achieve true autonomy. SLAM can be implemented with a variety of sensors; among them, laser-based SLAM is widely used owing to its high accuracy, even in poor lighting conditions. However, in structure-less environments laser-based methods fail for lack of sufficient geometric features. In addition, motion estimation with a moving LiDAR suffers from point cloud distortion, because range measurements are acquired continuously while the sensor moves. To address these problems, we propose a tightly coupled SLAM method that integrates LiDAR with an integrated navigation system (INS) for unmanned vehicle navigation in campus environments. After feature extraction, a constraint equation on inter-frame point cloud features is constructed, and the pose solution of the INS is added as prior information for inter-frame point cloud registration. The Levenberg-Marquardt nonlinear least-squares method is used to solve the constraint equation and obtain the inter-frame pose. Map matching and loop closure detection are then used to optimize the odometry and obtain the final pose estimate. The proposed SLAM algorithm is evaluated against classic open-source laser SLAM algorithms on a campus dataset. Experimental results demonstrate that the proposed algorithm yields a smaller trajectory error for the unmanned vehicle and better mapping performance.
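To make the registration step concrete, the following is a minimal sketch (not the authors' implementation) of inter-frame registration of matched LiDAR feature points with an INS pose prior, solved by Levenberg-Marquardt via SciPy. All names (`register_frames`, `prior_weight`, the point-to-point residual, the chosen pose parameterization) are illustrative assumptions; the paper's actual constraint equation and feature matching are not reproduced here.

```python
"""Sketch: LM registration of matched feature points with an INS prior."""
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R


def pose_to_matrix(pose):
    """pose = [tx, ty, tz, roll, pitch, yaw] -> 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R.from_euler("xyz", pose[3:]).as_matrix()
    T[:3, 3] = pose[:3]
    return T


def residuals(pose, src_pts, dst_pts, ins_prior, prior_weight):
    """Point-to-point residuals for matched features, plus a weighted
    term keeping the estimate close to the INS prior (an assumption)."""
    T = pose_to_matrix(pose)
    src_h = np.hstack([src_pts, np.ones((len(src_pts), 1))])
    transformed = (T @ src_h.T).T[:, :3]
    geom_res = (transformed - dst_pts).ravel()
    prior_res = prior_weight * (pose - ins_prior)
    return np.concatenate([geom_res, prior_res])


def register_frames(src_pts, dst_pts, ins_prior, prior_weight=0.5):
    """Estimate the inter-frame pose, initialized at the INS prior."""
    result = least_squares(
        residuals, x0=np.asarray(ins_prior, dtype=float),
        args=(src_pts, dst_pts, np.asarray(ins_prior, dtype=float),
              prior_weight),
        method="lm")  # Levenberg-Marquardt
    return result.x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dst = rng.uniform(-10, 10, size=(50, 3))            # features in frame k
    true_pose = np.array([0.4, 0.1, 0.0, 0.0, 0.0, 0.05])
    T_inv = np.linalg.inv(pose_to_matrix(true_pose))
    src = (T_inv[:3, :3] @ dst.T).T + T_inv[:3, 3]      # same features, frame k+1
    ins_prior = true_pose + rng.normal(0.0, 0.02, 6)    # noisy INS prediction
    print(register_frames(src, dst, ins_prior))          # ~ true_pose
```

In this toy setup the correspondences are assumed known; in practice they would come from the feature extraction and matching step, and the prior weight would reflect the INS error statistics.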