A robust, real-time, RGB-colored, LiDAR-inertial-visual tightly-coupled state estimation and mapping package
FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry
A Fast and Tightly-coupled Sparse-Direct LiDAR-Inertial-Visual Odometry (LIVO).
A Collection of LiDAR-Camera-Calibration Papers, Toolboxes and Notes
Xtreme1 is an all-in-one data labeling and annotation platform for multimodal data training, supporting 3D LiDAR point clouds, images, and LLM data.
[CVPR2023] LoGoNet: Towards Accurate 3D Object Detection with Local-to-Global Cross-Modal Fusion
The code, implemented in ROS, projects a point cloud obtained by a Velodyne VLP16 3D LiDAR sensor onto an image from an RGB camera.
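As a rough illustration of what such a projection involves (the intrinsics K and the LiDAR-to-camera extrinsics R, t below are placeholder values, not this repository's calibration), the core step is a rigid transform followed by a pinhole projection:

```python
import numpy as np

# Placeholder calibration values; the actual project loads these from its own config.
K = np.array([[607.0,   0.0, 320.0],   # pinhole intrinsics
              [  0.0, 607.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # LiDAR-to-camera rotation (extrinsic)
t = np.array([0.0, -0.08, -0.05])      # LiDAR-to-camera translation in meters

def project_lidar_to_image(points_lidar, image_shape):
    """Project Nx3 LiDAR points into pixel coordinates, keeping points in front of the camera."""
    pts_cam = points_lidar @ R.T + t                 # rigid transform into the camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]           # drop points behind or too close to the camera
    uv = pts_cam @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                      # perspective division
    h, w = image_shape[:2]
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[inside], pts_cam[inside, 2]            # pixel coordinates and depths
```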
[CVPR 2023] MSMDFusion: Fusing LiDAR and Camera at Multiple Scales with Multi-Depth Seeds for 3D Object Detection
ROS package to calibrate the extrinsic parameters between a LiDAR and a camera.
Auto-calibration of LiDAR and camera based on maximization of intensity mutual information.
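The quantity such intensity-based methods maximize is the mutual information between LiDAR reflectance and the image grayscale values at the projected pixels. A minimal histogram-based estimate (bin count and function names are illustrative, not taken from the repository) looks like this:

```python
import numpy as np

def intensity_mutual_information(lidar_intensity, pixel_gray, bins=64):
    """Histogram-based mutual information between LiDAR reflectance and image grayscale
    values at the projected pixels; a calibration search maximizes this over the extrinsics."""
    joint, _, _ = np.histogram2d(lidar_intensity, pixel_gray, bins=bins)
    pxy = joint / joint.sum()                      # joint distribution
    px = pxy.sum(axis=1, keepdims=True)            # marginal over LiDAR intensity
    py = pxy.sum(axis=0, keepdims=True)            # marginal over grayscale
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
```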
[T-RO 2022] Official Implementation for "LiCaS3: A Simple LiDAR–Camera Self-Supervised Synchronization Method," in IEEE Transactions on Robotics, doi: 10.1109/TRO.2022.3167455.
[IV2024] MultiCorrupt: A benchmark for robust multi-modal 3D object detection, evaluating LiDAR-Camera fusion models in autonomous driving. Includes diverse corruption types (e.g., misalignment, miscalibration, weather) and severity levels. Assess model performance under challenging conditions.
This repository uses a ROS node to subscribe to camera (Hikvision) and LiDAR (Livox) data. After merging the data, the node publishes the colored point cloud and displays it in RViz.
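A minimal sketch of that kind of pipeline, assuming rospy with message_filters for approximate time synchronization (topic names and the coloring step are placeholders, not the repository's actual code):

```python
import rospy
import message_filters
from sensor_msgs.msg import Image, PointCloud2

def callback(image_msg, cloud_msg):
    # A real node would project the cloud into the image (see the projection sketch above),
    # sample the RGB value under each projected point, and rebuild the cloud with color fields.
    colored_pub.publish(cloud_msg)  # placeholder: republish the cloud unchanged

if __name__ == '__main__':
    rospy.init_node('lidar_camera_colorizer')
    image_sub = message_filters.Subscriber('/camera/image_raw', Image)   # assumed topic names
    cloud_sub = message_filters.Subscriber('/livox/lidar', PointCloud2)
    colored_pub = rospy.Publisher('/colored_cloud', PointCloud2, queue_size=1)
    sync = message_filters.ApproximateTimeSynchronizer([image_sub, cloud_sub],
                                                       queue_size=10, slop=0.05)
    sync.registerCallback(callback)
    rospy.spin()
```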
ADAS car with a Collision Avoidance System (CAS) on Indian roads using LiDAR-camera low-level sensor fusion. A DIY gadget built with a Raspberry Pi, RP LIDAR A1, Pi Cam V2, LED SHIM, NCS 2, and accessories such as a speaker and power bank.
Lidar Camera Manual Target-less Calibration Software
Extrinsic calibration of a monocular camera and LiDAR using a planar point-to-plane constraint.
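In a generic form of this kind of constraint (not necessarily the exact formulation used here), LiDAR points that fall on a calibration plane are transformed with candidate extrinsics, and their signed distances to the camera-estimated plane n^T x = d are driven to zero by a least-squares solver:

```python
import numpy as np

def point_to_plane_residuals(R, t, lidar_plane_points, plane_normal, plane_d):
    """Signed distances of LiDAR plane points to the camera-estimated plane n^T x = d
    after applying candidate extrinsics (R, t); stack these over several board poses
    and minimize the sum of squares to recover R and t."""
    pts_cam = lidar_plane_points @ R.T + t
    return pts_cam @ plane_normal - plane_d
```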
This is a simple implementation of V-LOAM.
This package introduces the concept of optimizing target shape to remove pose ambiguity in LiDAR point clouds. Both simulation and experimental results confirm that, by using the optimal shape and the global solver, we achieve centimeter-level error in translation and a few degrees of error in rotation even when a partially illuminated target is placed…
Project: generating an overhead bird's-eye-view occupancy grid map with semantic information from LiDAR and camera data.
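A toy sketch of the rasterization step behind such a map (ranges, cell size, and the single-label-per-cell policy are assumptions, not the project's actual settings), where per-point semantic labels would come from camera segmentation projected onto the cloud:

```python
import numpy as np

def lidar_to_bev_grid(points, labels, x_range=(0.0, 50.0), y_range=(-25.0, 25.0), cell=0.2):
    """Rasterize labeled LiDAR points into a top-down semantic occupancy grid.
    Each cell keeps the label of the last point falling into it (0 = unobserved)."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = np.zeros((nx, ny), dtype=np.uint8)
    ix = ((points[:, 0] - x_range[0]) / cell).astype(int)
    iy = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    grid[ix[valid], iy[valid]] = labels[valid]
    return grid
```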
BIM-based AI-supported LiDAR-Camera Pose Refinement