Organized lidar point clouds #15

dllu opened this issue Nov 28, 2024 · 1 comment

dllu commented Nov 28, 2024

On pages 120-121, it says:

> However, the organization of the point cloud no longer holds in the case of non-repeated pattern solid-state, flash LiDARs, or mechanically rotating LiDAR under motion distortion

That is not true on three counts.

First, flash lidars produce a depth image that is organized in just the same way as the Kinect's or any other depth camera's.
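A minimal numpy sketch of what "organized" means here, assuming hypothetical pinhole intrinsics (a real flash lidar may use a different projection model): unprojecting the depth image yields an H x W x 3 cloud where every 3D point keeps the pixel index it came from.

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy):
    """Unproject an H x W depth image into an organized H x W x 3 point cloud.

    The output stays pixel-aligned: point (v, u) came from depth pixel (v, u),
    exactly as with Kinect-style depth cameras. Intrinsics here are
    illustrative pinhole parameters, not any particular sensor's.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)  # shape (H, W, 3)

# A 2 x 2 toy depth image: the result keeps the image's grid structure.
cloud = unproject_depth(np.full((2, 2), 2.0), fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```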

Second, even under motion distortion, as long as you carefully keep track of the timestamps, you retain a useful mapping from each point to the original range image. This is not unlike rolling-shutter cameras, whose pixels are clearly organized despite each row having a different timestamp. After all, even though the point cloud as a whole is slightly distorted, locally the structure persists. For example, imagine a spinning lidar that produces depth images in which every column shares a timestamp. Then all you have to do is store the pixel coordinate, depth, and timestamp, together with some continuous-time trajectory from which you can evaluate the sensor's pose at that exact point in time. (The use of "column" here is an oversimplification: Ouster lidars, for example, have a zigzag "staggered" pattern of points corresponding to each timestamp instead of a pure vertical column, and Velodyne/Hesai/Robosense lidars fire each laser at a slightly different time.)
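That bookkeeping can be sketched in a few lines of numpy. This is a hypothetical function, not any real system's API, and it interpolates translation only, where a real system would interpolate the full pose on SE(3); the point is that the deskewed point (i, j) still maps back to pixel (i, j) of the range image.

```python
import numpy as np

def deskew_organized(ranges, azimuths, elevations, col_times,
                     traj_times, traj_pos):
    """Motion-compensate a spinning-lidar sweep while keeping its
    range-image structure.

    ranges:     H x W range image (one sweep)
    azimuths:   length-W azimuth per column (idealized: one timestamp
                per column, as in the text)
    elevations: length-H elevation per row
    col_times:  length-W timestamp per column
    traj_times, traj_pos: sampled continuous-time trajectory; we
                interpolate translation only (see lead-in).
    Returns an H x W x 3 deskewed cloud, still pixel-aligned.
    """
    h, w = ranges.shape
    az = azimuths[None, :]
    el = elevations[:, None]
    # Unit ray directions in the sensor frame.
    dirs = np.stack([
        np.broadcast_to(np.cos(el) * np.cos(az), (h, w)),
        np.broadcast_to(np.cos(el) * np.sin(az), (h, w)),
        np.broadcast_to(np.sin(el), (h, w)),
    ], axis=-1)
    pts = dirs * ranges[..., None]
    # Evaluate the trajectory at each column's timestamp and shift.
    for axis in range(3):
        offset = np.interp(col_times, traj_times, traj_pos[:, axis])
        pts[..., axis] += offset[None, :]
    return pts
```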

I have an awesome animation here.

By exploiting the structure of lidars, you can even extract visual features, such as SuperPoint keypoints, from the lidar intensity images; each keypoint then corresponds directly to a 3D point, giving very good SLAM results!

(example screenshot from Ouster marketing data)

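The lookup step is the part the organization makes trivial. A minimal sketch, with a hypothetical helper name: any 2D detector (SuperPoint in the comment above, or a classical corner detector) returns pixel coordinates, and because the cloud is pixel-aligned with the intensity image, each keypoint indexes its 3D point directly, with no nearest-neighbour search.

```python
import numpy as np

def keypoints_to_3d(keypoints_uv, organized_cloud):
    """Look up the 3D point behind each 2D keypoint.

    organized_cloud has shape H x W x 3 and is pixel-aligned with the
    intensity image the keypoints were detected in, so the lookup is
    plain array indexing.
    """
    uv = np.asarray(keypoints_uv)
    return organized_cloud[uv[:, 1], uv[:, 0]]  # (v = row, u = col)

# Toy example: one keypoint at pixel (u=3, v=2).
cloud = np.zeros((4, 4, 3))
cloud[2, 3] = [1.0, 2.0, 3.0]
pts3d = keypoints_to_3d([(3, 2)], cloud)
```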

Furthermore, retaining the structure allows you to accurately model the lidar's characteristics --- for example, it may have greater noise in the range direction than in the bearing direction, and more noise in azimuth than in elevation. Its range noise may depend on the lidar intensity, which is in turn roughly proportional to reflectivity times the inverse square of range. Modelling this properly allows you to avoid the common tragedy of lidar mapping applications: the map (a point cloud) gets fuzzier the more data you accumulate.
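A sketch of such an anisotropic per-point noise model, with made-up illustrative constants (any real sensor's values would have to be calibrated): range variance grows as intensity drops, angular noise is larger in azimuth than in elevation, and the local covariance is rotated into the sensor's Cartesian frame.

```python
import numpy as np

def point_covariance(r, azimuth, elevation, intensity,
                     k_range=0.02, sigma_az=0.002, sigma_el=0.001):
    """Hypothetical anisotropic noise model for one lidar return.

    Range std grows as intensity drops (low reflectivity or long range);
    angular stds (radians) are scaled by range to become metric, with
    azimuth noisier than elevation. All constants are illustrative.
    """
    sigma_r = k_range / np.sqrt(max(intensity, 1e-6))
    # Covariance in the local (range, azimuth, elevation) frame.
    local = np.diag([sigma_r**2, (r * sigma_az)**2, (r * sigma_el)**2])
    # Orthonormal basis: columns are the range, azimuth, and elevation
    # directions of the beam in the sensor's Cartesian frame.
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    ce, se = np.cos(elevation), np.sin(elevation)
    R = np.array([[ca * ce, -sa, -ca * se],
                  [sa * ce,  ca, -sa * se],
                  [se,       0.0,  ce]])
    return R @ local @ R.T
```

Feeding this covariance to a mapping backend (instead of treating every point as isotropic) is what keeps the accumulated map from getting fuzzier.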

Third, although the sentence wrongly singles out solid-state, flash, and mechanically rotating lidars, it leaves out the kind of lidar that actually produces unorganized point clouds! The most famous examples are probably the Livox lidars, which are mechanically scanning (using Risley prisms).

Livox scanning patterns:

[two images of Livox scanning patterns]

The Blickfeld lidar is also notable for its weird scanning pattern --- it is a MEMS lidar, which uses a small mirror that physically oscillates about two axes. However, many people consider MEMS to be "solid state" despite the literally moving mirror, simply because the mirror is so small that it retains some of the advantages of solid state. Blickfeld lidar scanning pattern:

[image of Blickfeld scanning pattern]

mauricefallon (Contributor) commented
@dllu is indeed correct: "locally the structure persists".

If a module like place recognition is not susceptible to motion distortion, this organisation is useful.
In my own group's work we compute point normals efficiently by creating a grid or range image.
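A minimal sketch of that grid/range-image trick (not the group's actual implementation): because neighbours in the organized grid are neighbours in space, a normal is just the cross product of the two in-image tangent vectors, with no k-d tree search. Real implementations would additionally mask out range discontinuities.

```python
import numpy as np

def range_image_normals(cloud):
    """Estimate per-point normals from an organized H x W x 3 cloud.

    Tangent vectors come from central differences over grid neighbours;
    the normal is their cross product. Only interior pixels are computed,
    so the result has shape (H-2) x (W-2) x 3.
    """
    du = cloud[1:-1, 2:] - cloud[1:-1, :-2]   # horizontal neighbours
    dv = cloud[2:, 1:-1] - cloud[:-2, 1:-1]   # vertical neighbours
    n = np.cross(du, dv)
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.maximum(norm, 1e-12)

# Toy example: an organized cloud sampling the flat plane z = 5
# should yield normals pointing straight along +z.
u, v = np.meshgrid(np.arange(4.0), np.arange(4.0))
plane = np.stack([u, v, np.full((4, 4), 5.0)], axis=-1)
normals = range_image_normals(plane)
```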
