LiDAR and Point Clouds
LiDAR (Light Detection and Ranging) is often called the "radar for robots." It fires laser pulses at the world and measures how long each one takes to bounce back. The result: direct distance measurements, with no depth inference from image pixels, just raw 3D geometry.
How LiDAR Works
The core principle:
- Emit a laser pulse toward a target
- Wait for the reflection to return
- Measure the time (usually nanoseconds)
- Calculate distance using the speed of light:
distance = (speed_of_light * time) / 2
(Divided by 2 because the light travels to the target and back.)
Since light moves at ~300,000 km/s, timing is the hard part: a 1-nanosecond timing error corresponds to ~15 cm of range error, so centimeter- or millimeter-level accuracy requires sub-nanosecond (picosecond-scale) timing electronics.
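The time-of-flight formula above can be sketched in a few lines. The 66.7 ns round-trip time is an illustrative value, chosen because it corresponds to a target roughly 10 m away:

```python
# Time-of-flight ranging: distance = (speed_of_light * time) / 2.
C = 299_792_458.0  # speed of light in m/s

def tof_to_distance(round_trip_time_s: float) -> float:
    """Convert a round-trip pulse time to a one-way distance in meters."""
    # Divide by 2 because the pulse travels to the target and back.
    return C * round_trip_time_s / 2.0

# A pulse returning after ~66.7 nanoseconds traveled ~10 m each way.
d = tof_to_distance(66.7e-9)
```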
LiDAR is an active sensor — it emits its own light. Cameras are passive — they rely on ambient light. This means LiDAR works in total darkness, but struggles with transparent or highly specular surfaces (glass, water) that transmit or deflect the beam instead of reflecting it back to the sensor.
2D vs. 3D LiDAR
2D LiDAR (Laser Scanners)
- Spins in a single plane (usually horizontal)
- Outputs a 1D array of distances — one per angle
- Common range: 270° coverage, 0.25° angular resolution (~1080 points per scan)
- Used for: indoor navigation, obstacle avoidance, 2D mapping
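Since a 2D scan is just one range per angle, turning it into (x, y) points is one line of trigonometry per beam. This sketch assumes the 270°/0.25° geometry above, centered on the sensor's forward axis; the actual start angle and increment come from your driver:

```python
import math

# Assumed scan geometry: 270° coverage at 0.25° resolution (1081 beams),
# centered on the sensor's forward (+x) axis.
ANGLE_MIN = math.radians(-135.0)
ANGLE_INCREMENT = math.radians(0.25)

def scan_to_points(ranges):
    """Convert polar (angle index, range) readings to (x, y) in the sensor frame."""
    points = []
    for i, r in enumerate(ranges):
        angle = ANGLE_MIN + i * ANGLE_INCREMENT
        points.append((r * math.cos(angle), r * math.sin(angle)))
    return points

# A wall 2 m away in every direction: the center beam (index 540, angle 0)
# lands straight ahead at roughly (2, 0).
pts = scan_to_points([2.0] * 1081)
```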
3D LiDAR (Spinning/Solid-State)
- Multiple laser beams in different vertical angles
- Outputs a 2D array or point cloud — (x, y, z) coordinates
- Common configurations: 16, 32, 64, or 128 beams
- Used for: 3D mapping, autonomous driving, aerial surveying
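Each 3D return arrives as (range, azimuth, elevation): the beam's fixed vertical angle plus the scanner's rotation angle. Converting that to (x, y, z) is standard spherical trigonometry. The z-up, x-forward frame here is an assumption; real drivers document their own conventions:

```python
import math

def spherical_to_xyz(r, azimuth_rad, elevation_rad):
    """Convert one LiDAR return to (x, y, z), assuming a z-up, x-forward frame."""
    # Project the range onto the horizontal plane, then split by azimuth.
    horizontal = r * math.cos(elevation_rad)
    return (horizontal * math.cos(azimuth_rad),
            horizontal * math.sin(azimuth_rad),
            r * math.sin(elevation_rad))

# A 10 m return straight ahead on a horizontal beam lies on the x-axis.
p = spherical_to_xyz(10.0, 0.0, 0.0)
```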
Point Clouds: The Data Structure
A point cloud is just a collection of 3D points. Each point has:
- Position: (x, y, z) in meters (or whatever unit)
- Intensity: How much laser light reflected (0-255 or 0-65535)
- Optional: color (if fused with camera), timestamp, beam ID
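One common way to hold these per-point fields in memory is a structured array. The field names and dtypes below are illustrative, not from any particular driver or library:

```python
import numpy as np

# A minimal unorganized point cloud: N points, each with position + intensity.
# Field names here are illustrative, not a standard format.
point_dtype = np.dtype([
    ("x", np.float32), ("y", np.float32), ("z", np.float32),
    ("intensity", np.uint16),  # reflectivity, e.g. 0-65535
])

cloud = np.zeros(3, dtype=point_dtype)   # a flat list of 3 points
cloud["x"] = [1.0, 2.0, 3.0]
cloud["intensity"] = [100, 20000, 65535]
```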
Organized vs. Unorganized
- Organized: Points form a grid (like image pixels) — fast to index by row/column
- Unorganized: Points in a flat list — more flexible but slower to query
Think of an organized point cloud as a "depth image" — each pixel stores (x, y, z) instead of (r, g, b). This makes it easy to find neighboring points, which is critical for surface normal estimation and segmentation.
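That grid layout is the whole payoff: finding a point's neighbors becomes a constant-time array lookup instead of a spatial search. A minimal sketch, with arbitrary grid dimensions:

```python
import numpy as np

# An organized cloud is an H x W grid where each cell stores (x, y, z),
# like a depth image. Dimensions here are arbitrary for illustration.
H, W = 4, 6
organized = np.zeros((H, W, 3), dtype=np.float32)

def neighbors(row, col):
    """Return the 4-connected grid neighbors of a point, found by index."""
    candidates = [(row - 1, col), (row + 1, col), (row, col - 1), (row, col + 1)]
    return [organized[r, c] for r, c in candidates if 0 <= r < H and 0 <= c < W]

# A corner point has only 2 in-grid neighbors; an interior point has 4.
corner_nbrs = neighbors(0, 0)
```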
Range and Intensity
Range (Distance)
- Maximum range: 10m (indoor LiDAR) to 200m+ (automotive LiDAR)
- Invalid readings: Returned as `inf`, `NaN`, or a special value (often `0` or `max_range + 1`)
- Failure cases: Transparent objects, very dark/absorbing surfaces, max range exceeded
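In practice you filter those sentinels out before using a scan. Which sentinel a given driver emits varies, so this sketch treats anything non-finite or outside the sensor's rated range as invalid; the range limits are illustrative:

```python
import math

def valid_ranges(ranges, range_min=0.05, range_max=100.0):
    """Keep only finite readings within the sensor's rated range."""
    # inf and NaN fail isfinite(); 0 and out-of-range sentinels fail the bounds.
    return [r for r in ranges if math.isfinite(r) and range_min <= r <= range_max]

clean = valid_ranges([2.5, float("inf"), float("nan"), 0.0, 150.0, 40.0])
```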
Intensity (Reflectivity)
Measures how much laser light bounces back. High intensity means:
- Bright/reflective surfaces (white walls, metal, retroreflectors)
- Close objects (more photons return)
Low intensity means:
- Dark/absorbing surfaces (black rubber, asphalt)
- Far objects (signal weakens with distance)
- Glancing angles (the laser hits the surface at a shallow angle, so less light scatters back toward the sensor)
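Because retroreflectors (road signs, survey markers) return near-saturated intensity, a simple threshold is a common first pass for picking them out of a scan. The threshold and value range below are illustrative and sensor-dependent:

```python
def is_retroreflector(intensity, threshold=230, max_value=255):
    """Flag near-saturated returns as likely retroreflectors (illustrative threshold)."""
    return threshold <= intensity <= max_value

# Typical scene returns stay low; the sign and the reflector tape saturate.
hits = [i for i in [12, 80, 245, 255] if is_retroreflector(i)]
```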
Common LiDAR Use Cases
| Application | LiDAR Type | Why |
|---|---|---|
| Indoor navigation | 2D (single plane) | Cheap, fast, perfect for flat environments |
| Outdoor mapping | 3D (64+ beams) | Captures terrain, trees, buildings |
| Warehouse robots | 2D | Pallets and obstacles are mostly at one height |
| Self-driving cars | 3D (128 beams) | Need to see pedestrians, cars, road geometry |
| Drones | 3D (lightweight) | Terrain mapping, collision avoidance |
What's Next?
LiDAR gives us 3D points, but cameras give us rich visual information. The next lesson explores depth perception — how we can get 3D distance data from cameras alone, and when to choose LiDAR vs. vision-based depth.