M-detector is a moving event detection package that determines whether a LiDAR point is moving immediately after its arrival, yielding point-by-point detection with a latency of only several microseconds. M-detector is designed based on occlusion principles and can be used in different environments with various types of LiDAR sensors.
Our related paper, Moving Event Detection from LiDAR Stream Points, has been accepted by Nature Communications.
If our code is used in your project, please cite our paper.
Our accompanying videos are now available on YouTube (click the images below to open) and Bilibili.
The code of this repo was contributed by: Huajie Wu (吴花洁), Yihang Li (李一航), and Wei Xu (徐威).
Ubuntu ≥ 18.04.
ROS ≥ Melodic. Follow [ROS Installation]
PCL ≥ 1.8
sudo apt install libpcl-dev
Eigen ≥ 3.3.4
sudo apt install libeigen3-dev
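If you are not sure which versions are installed, here is a quick check (assuming the apt packages above; adjust for source builds):

```bash
# Check installed PCL and Eigen versions (assumes the apt packages above)
dpkg -s libpcl-dev | grep Version
pkg-config --modversion eigen3
```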
Follow livox_ros_driver Installation.
Remarks:
- Since M-detector was developed for Livox series LiDARs first, the livox_ros_driver must be installed and sourced before running any M-detector launch file.
- How to source? The easiest way is to add the line
source $Livox_ros_driver_dir$/devel/setup.bash
to the end of the file ~/.bashrc, where $Livox_ros_driver_dir$ is the directory of the livox_ros_driver workspace (this should be the ws_livox directory if you completely followed the livox official document).
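For example, assuming the driver workspace is at ~/ws_livox (an assumed path; replace it with your own), the line can be appended like this:

```bash
# Append the livox_ros_driver setup script to ~/.bashrc
# (~/ws_livox is an assumed path; replace it with your workspace)
echo "source ~/ws_livox/devel/setup.bash" >> ~/.bashrc
source ~/.bashrc
```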
Install gcc-9 g++-9
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt update
sudo apt install gcc-9 g++-9
cd /usr/bin
sudo rm gcc g++
sudo ln -s gcc-9 gcc
sudo ln -s g++-9 g++
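If you prefer not to delete the symlinks in /usr/bin, a less invasive alternative (not part of the original instructions) is to register the compilers with update-alternatives:

```bash
# Alternative: make gcc-9/g++-9 the default via update-alternatives
# instead of removing the existing /usr/bin/gcc and /usr/bin/g++ symlinks
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-9 90 \
  --slave /usr/bin/g++ g++ /usr/bin/g++-9
sudo update-alternatives --config gcc
```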
Follow [TBB Installation] (Note: change the gcc-9.1/g++-9.1 to gcc-9/g++-9)
Change the TBB path (lines 51-52) in CMakeLists.txt.
Clone the repository and catkin_make:
cd ~/catkin_ws/src
git clone [email protected]:hku-mars/M-detector.git
catkin_make
source devel/setup.bash
(Note: remember to change the path for TBB in CMakeLists.txt, as mentioned above.)
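To find the exact TBB lines to edit and to confirm the package is visible after sourcing (illustrative commands, assuming the workspace layout above):

```bash
# Locate the hard-coded TBB path in CMakeLists.txt (line numbers may drift)
grep -n "TBB" ~/catkin_ws/src/M-detector/CMakeLists.txt
# Sanity check: the package should be on the ROS package path after sourcing
rospack find m_detector
```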
points_topic: "/cloud_registered_body" #the topic name of local point cloud
odom_topic: "/aft_mapped_to_init" #the topic name of odometry
dataset: 3 #0 for kitti, 1 for nuscenes, 2 for waymo, 3 for avia
buffer_delay: 0.1 #the delay duration between the frame-out output and depth map construction
buffer_size: 100000 #the maximum number of points saved in the buffer, usually larger than the number of points generated during buffer_delay
points_num_perframe: 30000 #the maximum number of points the LiDAR generates per frame
depth_map_dur: 0.2 #the effective duration of every depth map
max_depth_map_num: 5 #the maximum number of depth maps kept in the configuration
max_pixel_points: 5 #the maximum number of points saved in each pixel of the depth map
frame_dur: 0.1 #the frame duration of the point cloud input by points_topic
hor_resolution_max: 0.005 #the horizontal resolution of the depth map (unit: radian), usually 2-4 times the horizontal resolution of the LiDAR
ver_resolution_max: 0.01 #the vertical resolution of the depth map (unit: radian), usually 2-4 times the vertical resolution of the LiDAR
fov_up: 52 #the maximum value of the vertical FOV of LiDAR
fov_down: -7 #the minimum value of the vertical FOV of LiDAR
fov_left: 180.0 #the maximum value of the horizontal FOV of LiDAR
fov_right: -180.0 #the minimum value of the horizontal FOV of LiDAR
occluded_map_thr1: 3 #the minimum occlusion times for test 1
map_cons_hor_thr1: 0.05 #the horizontal neighborhood size for occlusion check in the step of map consistency for test 1
map_cons_ver_thr1: 0.05 #the vertical neighborhood size for occlusion check in the step of map consistency for test 1
cluster_coupled: true #whether to output the frame-out results
cluster_future: true #whether to utilize the frame-out results during depth map construction
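Several of these values (e.g., frame_dur, points_num_perframe, buffer_size) depend on your LiDAR's actual output. A sketch for measuring them from a running bag, assuming the points_topic above:

```bash
# Measure the input frame rate; frame_dur is roughly 1 / (average rate)
rostopic hz /cloud_registered_body
# Inspect one message: points per frame is roughly width * height
rostopic echo -n 1 /cloud_registered_body | grep -E "^(width|height):"
```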
Parameter files for different LiDARs are provided in the "config" folder.
For parameter-tuning methods, please follow Section 8 of the [Supplementary Information].
To save the label files, please pass the output-path parameters via the corresponding launch files.
├── XXX (dataset name)
│ ├── bags
│ │ ├── XXX_0000.bag
│ │ ├── ...
│ ├── sequences
│ │ ├── 0000
│ │ │ ├── labels
│ │ │ ├── predictionsx_origin (results in point-out mode with xth parameter file)
│ │ │ ├── predictionsx (in frame-out mode with xth parameter file)
│ │ │ ├── ...
│ │ ├── ...
├── ...
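A minimal sketch for creating this layout for one sequence (dataset name, sequence number, and parameter file number are placeholders):

```bash
# Create the expected layout for dataset "avia", sequence 0000, parameter file 1
mkdir -p ~/data/avia/bags
mkdir -p ~/data/avia/sequences/0000/{labels,predictions1,predictions1_origin}
```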
The dataset can be downloaded at [this link].
First, please run an odometry node, such as [Fast Lio] (download the Fast Lio package provided in Releases into the same location as M-detector's and compile them).
Then:
roslaunch fast_lio mapping_(dataset).launch
roslaunch m_detector detector_(dataset).launch
rosbag play YOURBAG.bag
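For example, for a Livox Avia bag (the exact launch-file names follow the per-dataset naming above and should be checked against the launch folders):

```bash
# Terminal 1: odometry (FAST-LIO)
roslaunch fast_lio mapping_avia.launch
# Terminal 2: M-detector
roslaunch m_detector detector_avia.launch
# Terminal 3: replay the data
rosbag play YOURBAG.bag
```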
roslaunch m_detector detector_XXX.launch out_path:="your path for frame-out results" out_origin_path:="your path for point-out results"
Note: Following the folder structure introduced above, the out_path should be in the format "(path to dataset folder)/(dataset name)/sequences/(sequence number)/predictionsx/" (x is the parameter file's number), and the out_origin_path should be in the format "(path to dataset folder)/(dataset name)/sequences/(sequence number)/predictionsx_origin/".
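A concrete (hypothetical) example for sequence 0000 with parameter file 1:

```bash
# Output paths follow the folder structure above; adjust dataset name,
# sequence number, and parameter file number to your own setup
roslaunch m_detector detector_avia.launch \
  out_path:="$HOME/data/avia/sequences/0000/predictions1/" \
  out_origin_path:="$HOME/data/avia/sequences/0000/predictions1_origin/"
```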
roslaunch m_detector cal_recall.launch dataset:=(0 for kitti, 1 for nuscenes, 2 for waymo, 3 for avia) dataset_folder:="the path to the dataset_folder" start_se:=(the first sequence number for calculation) end_se:=(the last sequence number for calculation) start_param:=(the first parameter file's number for calculation) end_param:=(the last parameter file's number for calculation) is_origin:=(true for point-out results, false for frame-out results)
Note: Following the folder structure introduced above, the dataset_folder should be the path to the dataset folder. This step calculates the IoU for all designated results in the dataset folder and generates a new folder named "recall" or "recall_origin" containing the results.
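For instance, to evaluate frame-out results on kitti sequences 0000-0010 with parameter files 1-3 (all values illustrative):

```bash
# Calculates IoU for the listed sequences and parameter files, then writes
# a "recall" folder ("recall_origin" when is_origin:=true)
roslaunch m_detector cal_recall.launch dataset:=0 \
  dataset_folder:="$HOME/data/kitti/" start_se:=0 end_se:=10 \
  start_param:=1 end_param:=3 is_origin:=false
```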
Download the embedded version provided in Releases into a new workspace and compile it.
roslaunch fast_lio mapping_(dataset).launch
rosbag play YOURBAG.bag
The bags used in the paper can be downloaded at [this link].
The source code of this package is released under the GPLv2 license. It is free for academic use only. For commercial use, please contact Dr. Fu Zhang at [email protected].
For any technical issues, please contact me via email at [email protected].