
EIL‐SLAM: Depth‐enhanced edge‐based infrared‐LiDAR SLAM

Journal of Field Robotics, 2021
Abstract: Traditional simultaneous localization and mapping (SLAM) approaches that utilize visible cameras or light detection and ranging (LiDAR) frequently fail in dusty, low-textured, or completely dark environments. To address this problem, this study proposes a novel approach by tightly coupling perception data from a thermal infrared camera and a ...
Wenqiang Chen et al.

LoLa-SLAM: Low-Latency LiDAR SLAM Using Continuous Scan Slicing

IEEE Robotics and Automation Letters, 2021
Real-time 6D pose estimation is a key component for autonomous indoor navigation of Unmanned Aerial Vehicles (UAVs). This letter presents a low-latency LiDAR SLAM framework based on LiDAR scan slicing and concurrent matching, called LoLa-SLAM. Our framework uses sliced point cloud data from a rotating LiDAR in a concurrent multi-threaded matching ...
Mojtaba Karimi et al.

Integrating V-SLAM and LiDAR-based SLAM for Map Updating

2021 IEEE 4th International Conference on Knowledge Innovation and Invention (ICKII), 2021
Vehicle positioning generally relies on the global navigation satellite system (GNSS), but receivers of different grades yield significantly different positioning accuracy. GNSS is also strongly affected by weather: excessive cloud cover can cause inaccurate positioning.
Yu-Cheng Chang et al.

Moving target removal for lidar SLAM

Seventh Symposium on Novel Photoelectronic Detection Technology and Applications, 2021
Lidar SLAM is mainly suited to static scenes without moving targets; motion in real-world scenes limits the algorithm's applicability. If moving targets are present in the lidar's field of view, the 3D point cloud map constructed by lidar SLAM will contain spurious points left behind by those targets.
Xuanliang Zhang, Wenguang Wang

DVL-SLAM: sparse depth enhanced direct visual-LiDAR SLAM

Autonomous Robots, 2019
This paper presents a framework for direct visual-LiDAR SLAM that combines the sparse depth measurement of light detection and ranging (LiDAR) with a monocular camera. The exploitation of the depth measurement between two sensor modalities has been reported in the literature but mostly by a keyframe-based approach or by using a dense depth map.
Young-Sik Shin et al.

DL-SLAM: Direct 2.5D LiDAR SLAM for Autonomous Driving

2019 IEEE Intelligent Vehicles Symposium (IV), 2019
Precisely localizing a vehicle in the GNSS-denied urban area is crucial for autonomous driving. The occupancy grid-based 2D LiDAR SLAM methods scale poorly to outdoor road scenarios, while the 3D point cloud-based LiDAR SLAM methods suffer from huge computation and storage costs.
Jun Li et al.

GLO-SLAM: a SLAM system optimally combining GPS and LiDAR odometry

Industrial Robot: the international journal of robotics research and application, 2021
Purpose: Large-scale and precise three-dimensional (3D) maps play an important role in autonomous driving and robot positioning. However, it is difficult to obtain accurate poses for mapping. On one hand, global positioning system (GPS) data are not always reliable owing to multipath effects and poor satellite visibility in many urban environments.
Ruihao Lin, Junzhe Xu, Jianhua Zhang
