Results 11 to 20 of about 852,520

FAST-LIO2: Fast Direct LiDAR-Inertial Odometry [PDF]

open access: yes   IEEE Transactions on Robotics, 2021
This article presents FAST-LIO2: a fast, robust, and versatile LiDAR-inertial odometry framework. Building on a highly efficient tightly coupled iterated Kalman filter, FAST-LIO2 has two key novelties that allow fast, robust, and accurate LiDAR ...
Wei Xu   +4 more
semanticscholar   +1 more source
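The FAST-LIO2 abstract above centres on a tightly coupled iterated Kalman filter. Below is a minimal, generic iterated EKF measurement update in NumPy, for illustration only; it is not FAST-LIO2's actual implementation (which operates on a manifold state and fuses LiDAR plane residuals directly), and the measurement model h, its Jacobian H_jac, and the noise covariance R are assumed placeholders.

```python
# Minimal iterated EKF measurement update (illustrative sketch only; FAST-LIO2's
# filter works on a manifold state, this is the plain vector-space version).
import numpy as np

def iterated_ekf_update(x, P, z, h, H_jac, R, n_iters=5, tol=1e-6):
    """x: prior mean (n,), P: prior covariance (n,n), z: measurement (m,),
    h(x) -> (m,) measurement model, H_jac(x) -> (m,n) Jacobian, R: (m,m) noise."""
    x_i = x.copy()
    for _ in range(n_iters):
        H = H_jac(x_i)
        S = H @ P @ H.T + R                         # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)              # gain at the current linearisation
        # Iterated update: residual is re-linearised at x_i but anchored to the prior x.
        x_next = x + K @ (z - h(x_i) - H @ (x - x_i))
        if np.linalg.norm(x_next - x_i) < tol:
            x_i = x_next
            break
        x_i = x_next
    H = H_jac(x_i)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    P_post = (np.eye(len(x)) - K @ H) @ P           # covariance after the final iterate
    return x_i, P_post
```

The iteration re-linearises the measurement model around the latest estimate while keeping the prior as the anchor, which is what distinguishes an iterated update from a single EKF step.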

TransFusion: Robust LiDAR-Camera Fusion for 3D Object Detection with Transformers [PDF]

open access: yes   Computer Vision and Pattern Recognition, 2022
LiDAR and camera are two important sensors for 3D object detection in autonomous driving. Despite the increasing popularity of sensor fusion in this field, the robustness against inferior image conditions, e.g., bad illumination and sensor misalignment ...
Xuyang Bai   +6 more
semanticscholar   +1 more source

Spherical Transformer for LiDAR-Based 3D Recognition [PDF]

open access: yes   Computer Vision and Pattern Recognition, 2023
LiDAR-based 3D point cloud recognition has benefited various applications. Without specially considering the LiDAR point distribution, most current methods suffer from information disconnection and limited receptive field, especially for the sparse ...
Xin Lai   +4 more
semanticscholar   +1 more source

Rethinking Range View Representation for LiDAR Segmentation [PDF]

open access: yes   IEEE International Conference on Computer Vision, 2023
LiDAR segmentation is crucial for autonomous driving perception. Recent trends favor point- or voxel-based methods as they often yield better performance than the traditional range view representation.
Lingdong Kong   +8 more
semanticscholar   +1 more source
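The range view mentioned in this entry is a spherical projection of a LiDAR sweep onto a 2D image. A generic sketch of that projection follows, for illustration only and not the paper's specific pipeline; the 64x1024 resolution and the vertical field of view are assumed values typical of a spinning LiDAR.

```python
# Spherical (range view) projection of a LiDAR point cloud into a 2D range image.
# Illustrative sketch; image resolution and vertical FOV are assumed values.
import numpy as np

def to_range_image(points, H=64, W=1024, fov_up_deg=3.0, fov_down_deg=-25.0):
    """points: (N, 3) array of x, y, z in the sensor frame."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1) + 1e-8             # range per point

    yaw = np.arctan2(y, x)                                 # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                               # elevation angle

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    u = 0.5 * (1.0 - yaw / np.pi) * W                      # column from azimuth
    v = (1.0 - (pitch - fov_down) / fov) * H               # row from elevation

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    range_img = np.full((H, W), -1.0, dtype=np.float32)    # -1 marks empty pixels
    order = np.argsort(r)[::-1]                            # fill far-to-near so the
    range_img[v[order], u[order]] = r[order]               # closest return wins
    return range_img
```

The resulting dense 2D image is what range-view segmentation networks consume in place of the raw, unordered point list.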

BEVFusion: A Simple and Robust LiDAR-Camera Fusion Framework [PDF]

open access: yes   Neural Information Processing Systems, 2022
Fusing camera and LiDAR information has become a de-facto standard for 3D object detection tasks. Current methods rely on point clouds from the LiDAR sensor as queries to leverage features from the image space. However, people discovered that this ...
Tingting Liang   +8 more
semanticscholar   +1 more source

A Survey on Global LiDAR Localization: Challenges, Advances and Open Problems [PDF]

open access: yes   International Journal of Computer Vision, 2023
Knowledge of the robot's own pose is key for all mobile robot applications; thus, pose estimation is part of the core functionalities of mobile robots. Over the last two decades, LiDAR scanners have become the standard sensor for robot localization and mapping.
Huan Yin   +7 more
semanticscholar   +1 more source

DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection [PDF]

open access: yes   Computer Vision and Pattern Recognition, 2022
Lidars and cameras are critical sensors that provide complementary information for 3D detection in autonomous driving. While prevalent multi-modal methods [34], [36] simply decorate raw lidar point clouds with camera features and feed them directly to ...
Yingwei Li   +12 more
semanticscholar   +1 more source
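The "decorating" of raw lidar points mentioned above amounts to projecting each point into the camera image and appending the pixel values sampled there. The sketch below illustrates that generic step only, not DeepFusion's deep-feature alignment; the intrinsics K and the extrinsic T_cam_lidar are assumed, pre-computed inputs.

```python
# Decorate LiDAR points with values sampled from a camera image (point-painting
# style sketch). K and T_cam_lidar are assumed calibration inputs.
import numpy as np

def decorate_points(points, image, K, T_cam_lidar):
    """points: (N, 3) LiDAR xyz; image: (H, W, C); K: (3, 3); T_cam_lidar: (4, 4)."""
    N = points.shape[0]
    pts_h = np.hstack([points, np.ones((N, 1))])           # homogeneous coordinates
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]              # transform into camera frame

    in_front = pts_cam[:, 2] > 0.1                          # keep points ahead of the camera
    uvw = (K @ pts_cam.T).T
    uv = uvw[:, :2] / np.maximum(uvw[:, 2:3], 1e-6)         # perspective division

    H, W = image.shape[:2]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    valid = in_front & (u >= 0) & (u < W) & (v >= 0) & (v < H)

    C = image.shape[2]
    feats = np.zeros((N, C), dtype=image.dtype)
    feats[valid] = image[v[valid], u[valid]]                # nearest-pixel sampling
    return np.hstack([points, feats])                       # (N, 3 + C) decorated points
```

A downstream detector would then consume the (N, 3 + C) decorated array instead of bare xyz points; replacing the raw pixel values with learned image features is the direction the papers above pursue.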

NeRF-LOAM: Neural Implicit Representation for Large-Scale Incremental LiDAR Odometry and Mapping [PDF]

open access: yes   IEEE International Conference on Computer Vision, 2023
Simultaneous odometry and mapping using LiDAR data is an important task for mobile systems to achieve full autonomy in large-scale environments.
Junyuan Deng   +6 more
semanticscholar   +1 more source

General, Single-shot, Target-less, and Automatic LiDAR-Camera Extrinsic Calibration Toolbox [PDF]

open access: yes   IEEE International Conference on Robotics and Automation, 2023
This paper presents an open source LiDAR-camera calibration toolbox that is general to LiDAR and camera projection models, requires only one pairing of LiDAR and camera data without a calibration target, and is fully automatic.
Kenji Koide   +3 more
semanticscholar   +1 more source

LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping [PDF]

open access: yes   IEEE International Conference on Robotics and Automation, 2021
We propose a framework for tightly-coupled lidar-visual-inertial odometry via smoothing and mapping, LVI-SAM, that achieves real-time state estimation and map-building with high accuracy and robustness.
Tixiao Shan   +3 more
semanticscholar   +1 more source
