Learn Robotics
Module: Map The Unknown

Occupancy Grid Mapping

Learn how robots represent space as grids, update cell probabilities using sensor data, and use log-odds representation for efficient probabilistic mapping.

10 min read

Dividing the World Into Pixels

Occupancy grid mapping is robotics' answer to "how do we represent a continuous, messy world in a computer?" The answer: divide it into tiny squares and track what's in each one.

It's conceptually similar to how digital images work — a photo is millions of pixels, each with a color value. An occupancy grid is thousands (or millions) of cells, each with a probability: how likely is there an obstacle here?

The Three States of a Cell

Every cell in an occupancy grid has one of three states:

State | Meaning | Typical Value
------|---------|--------------
Free | Robot can drive through here | 0.0 – 0.3 (low probability of obstacle)
Occupied | There's an obstacle here | 0.7 – 1.0 (high probability of obstacle)
Unknown | Never observed | 0.5 (maximum uncertainty)
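A minimal sketch of this representation in Python (the 0.3 and 0.7 thresholds come from the table above; the grid dimensions and the `classify` helper are illustrative, not a standard API):

```python
# A hypothetical 10 m x 10 m map at 5 cm resolution: 200 x 200 cells.
# Every cell starts at 0.5 ("unknown") until a sensor observes it.
grid = [[0.5] * 200 for _ in range(200)]

def classify(p, free_max=0.3, occupied_min=0.7):
    """Map a cell's occupancy probability to one of the three states."""
    if p <= free_max:
        return "free"
    if p >= occupied_min:
        return "occupied"
    return "unknown"

print(classify(grid[0][0]))  # unknown
```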

Notice we use probabilities, not absolutes. Why? Because sensors are noisy. A single LiDAR scan might report a wall at 3.02 meters, but the next scan says 2.98 meters. Instead of flip-flopping between "occupied" and "free," we gradually increase or decrease our confidence.

Tip

The probability threshold matters. Setting the "occupied" threshold at 0.7 instead of 0.9 makes the robot more cautious (it treats uncertain areas as obstacles) but might block valid paths. Setting it too high makes the robot overconfident and prone to collisions.

Ray Casting: From Sensor to Map

When a LiDAR sensor returns a measurement — say, "obstacle detected at 3 meters in direction 45°" — the robot needs to update its map. Here's how:

  1. Draw a ray from the robot's position in the measured direction
  2. Mark cells along the ray as free — if the obstacle is at 3 meters, everything between 0-3 meters must be clear
  3. Mark the endpoint as occupied — that's where the obstacle is
  4. Leave cells beyond as unknown — we can't see past the obstacle

This is called ray casting or ray tracing, and it's done for every single LiDAR beam (often 360+ per scan).

Simplified Ray Casting
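The four steps above can be sketched as follows. This is a simplified version: it uses Bresenham's line algorithm to enumerate the cells a beam crosses, and function names like `ray_cast` are illustrative rather than from any particular library.

```python
def bresenham(x0, y0, x1, y1):
    """Integer grid cells on the line from (x0, y0) to (x1, y1)."""
    cells = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    x, y = x0, y0
    while True:
        cells.append((x, y))
        if (x, y) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    return cells

def ray_cast(updates, robot_cell, hit_cell):
    """Mark cells between robot and hit as free, the hit cell as occupied.

    Cells beyond the hit are simply never touched, so they stay unknown.
    """
    ray = bresenham(*robot_cell, *hit_cell)
    for cell in ray[:-1]:
        updates[cell] = "free"
    updates[ray[-1]] = "occupied"

updates = {}
ray_cast(updates, (0, 0), (3, 4))
```

In a real mapper these "free"/"occupied" labels would be probabilistic nudges rather than hard assignments, which is exactly what the log-odds representation below makes cheap.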

Probabilistic Updates: Log-Odds Representation

Here's a problem: if we store probabilities directly (0.0 to 1.0), updating them gets messy. Multiplying probabilities requires normalizing, dealing with 0 and 1 edge cases, and generally doing a lot of math.

The elegant solution is log-odds representation. Instead of storing probability p, we store:

log_odds = log(p / (1 - p))

This transforms probabilities into a number line from -∞ to +∞, where:

  • 0 = 50% probability (unknown)
  • Positive values = likely occupied
  • Negative values = likely free

The magic trick: updates become simple addition. When a sensor says "occupied," you add a positive constant. When it says "free," you add a negative constant. No multiplication, no normalization.

Note

Converting back to probability when needed is straightforward: p = 1 / (1 + exp(-log_odds)). But most internal operations stay in log-odds space for speed.
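The two conversion formulas and the addition-only update can be demonstrated in a few lines. The 0.7/0.3 increments are illustrative tuning constants (an "inverse sensor model"), not values fixed by the algorithm:

```python
import math

def prob_to_log_odds(p):
    return math.log(p / (1.0 - p))

def log_odds_to_prob(l):
    return 1.0 / (1.0 + math.exp(-l))

# Hypothetical per-observation increments:
L_OCC = prob_to_log_odds(0.7)   # added when a beam ends in the cell
L_FREE = prob_to_log_odds(0.3)  # negative; added when a beam passes through

l = 0.0          # log-odds 0 = probability 0.5 (unknown)
l += L_OCC       # one "occupied" observation
l += L_OCC       # a second consistent observation
print(round(log_odds_to_prob(l), 3))  # confidence grows with each hit
```

Note that two consistent observations push the probability well above a single reading's 0.7, which is the whole point: evidence accumulates by simple addition.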

Handling Sensor Noise

Every sensor reading has uncertainty. A LiDAR might be accurate to ±2 cm, but walls aren't perfectly flat, lighting affects cameras, and dynamic objects move between scans.

Occupancy grids handle this by:

1. Clamping probabilities — never let a cell become fully certain (a probability of exactly 0 or 1). Always leave room for new evidence to change your mind.

2. Weighting updates — a single sensor reading shouldn't drastically change the map. Use small update increments so it takes multiple consistent observations to flip a cell's state.

3. Sensor models — account for known sensor characteristics. LiDAR is more accurate in certain ranges, cameras struggle with bright lights, etc.
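The first two techniques combine naturally in log-odds space: a small additive step plus hard bounds. The specific constants below are illustrative tuning values, not standard ones:

```python
# Clamped, weighted log-odds update. Small steps mean several consistent
# observations are needed to flip a cell; the bounds keep it revisable.
L_MIN, L_MAX = -4.0, 4.0   # roughly 1.8% and 98.2% probability
L_STEP = 0.4               # small per-observation increment

def update_cell(l, hit):
    """Nudge a cell's log-odds toward occupied (hit) or free (miss)."""
    l += L_STEP if hit else -L_STEP
    return max(L_MIN, min(L_MAX, l))

l = 0.0
for _ in range(20):        # many consistent "occupied" readings
    l = update_cell(l, hit=True)
print(l)                   # saturates at L_MAX instead of running to +inf
```

Even after 20 hits the cell sits at the clamp value, so a handful of later "free" observations can still pull it back toward unknown.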

Grid Resolution Trade-offs

Cell size is a critical design choice:

Cell Size | Pros | Cons
----------|------|-----
Large (50+ cm) | Small memory footprint, fast updates | Loses fine detail, might miss narrow passages
Medium (5–10 cm) | Good balance of detail and cost; the standard choice for indoor robots | Moderate memory and compute requirements
Small (1–2 cm) | Captures fine detail | Massive memory use, slow processing

A 10m × 10m room with 5cm cells needs 40,000 cells (200×200). At 1cm resolution, that jumps to 1,000,000 cells. Memory and computation grow with the inverse square of cell size: halving the cell size quadruples the cell count.
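The arithmetic above as a quick sketch (the `cell_count` helper is illustrative):

```python
def cell_count(width_m, height_m, cell_size_m):
    """Number of grid cells needed to cover a width x height area."""
    return round(width_m / cell_size_m) * round(height_m / cell_size_m)

print(cell_count(10, 10, 0.05))  # 40000 cells at 5 cm
print(cell_count(10, 10, 0.01))  # 1000000 cells at 1 cm
```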

What's Next?

Now that you understand how robots build maps, we'll flip the problem around: given a known map, how does a robot figure out where it is on it? That's localization, and it's the focus of the next lesson.
