SLAM in 2026: What Actually Works

Simultaneous Localization and Mapping is the foundation of mobile robotics. The algorithms and stacks that ship in production today.

Mar 25, 2026 · 4 min read

SLAM is a 30-year-old research field with a mature production toolkit in 2026. Here is what to use.

SLAM (Simultaneous Localization and Mapping) is how a mobile robot figures out where it is and what its environment looks like, in real time. Decades of research have produced a clear set of production-ready stacks for different sensor modalities.

2D LIDAR SLAM

For wheeled robots in flat indoor environments, 2D LIDAR is the workhorse. The dominant stacks: SLAM Toolbox (the ROS 2 default), Cartographer (Google), and Hector SLAM (lighter weight, no odometry required). All three are production-grade and integrate cleanly with ROS 2 Nav2.
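As a rough sketch of what that integration looks like: a minimal ROS 2 Python launch file that starts SLAM Toolbox's online async node, assuming a /scan topic and standard odom/base_link frames. The parameter names follow the mapper_params_online_async.yaml that ships with slam_toolbox, but check them against the version you install.

```python
# Minimal ROS 2 launch sketch for slam_toolbox's online async node.
# Assumes a /scan topic and a standard odom -> base_link TF tree;
# parameter names follow slam_toolbox's shipped defaults.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    slam = Node(
        package="slam_toolbox",
        executable="async_slam_toolbox_node",
        name="slam_toolbox",
        output="screen",
        parameters=[{
            "mode": "mapping",          # build a new map this session
            "odom_frame": "odom",
            "map_frame": "map",
            "base_frame": "base_link",
            "scan_topic": "/scan",
            "resolution": 0.05,         # 5 cm occupancy grid cells
            "max_laser_range": 20.0,    # metres; match your LIDAR
        }],
    )
    return LaunchDescription([slam])
```

Launch this alongside Nav2 and drive the robot around; the occupancy grid it publishes feeds directly into Nav2's planners.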

Visual SLAM

For drones, AR devices, and small mobile robots without LIDAR, visual SLAM (using one or more cameras) is the answer. ORB-SLAM3 and its descendants handle monocular, stereo, and RGB-D inputs. stella_vslam (the community continuation of OpenVSLAM) is another solid open-source option. Apple's ARKit and Google's ARCore are essentially closed-source visual SLAM stacks.
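Under the hood these systems share the same front end: detect ORB features, match them between frames, and recover relative camera motion. The toy two-view sketch below uses OpenCV rather than ORB-SLAM3 itself; the image paths and the intrinsics matrix K are placeholders for your own data and calibration.

```python
# Toy two-view sketch of the ORB-feature front end that monocular
# visual SLAM builds on: detect and match features, then recover the
# relative camera motion. Not ORB-SLAM3 itself; paths and K are
# placeholders for your own frames and calibration.
import cv2
import numpy as np

K = np.array([[525.0, 0.0, 320.0],   # fx, 0, cx (example intrinsics)
              [0.0, 525.0, 240.0],   # 0, fy, cy
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching, since ORB descriptors are binary.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix + cheirality check give rotation R and a
# unit-scale translation direction t between the two frames.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())
```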

3D LIDAR SLAM

For autonomous vehicles and outdoor robots, 3D LIDAR systems (Velodyne, Ouster, Hesai) demand purpose-built SLAM. LIO-SAM, FAST-LIO2, and the newer GLIM are the open-source frontrunners. All three tightly fuse IMU data for dead reckoning between LIDAR scans.
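To see what that dead-reckoning step does, here is a toy strapdown integration between two scan timestamps in plain NumPy. Real LIO pipelines do this on-manifold with IMU preintegration and bias estimation; the sample data and rate here are made up for illustration.

```python
# Toy strapdown dead reckoning between two LIDAR scans: integrate gyro
# rates into orientation and accelerometer readings into velocity and
# position. LIO-SAM / FAST-LIO2 do this far more carefully (on-manifold,
# with preintegration and bias estimation); this only shows the idea.
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])

def skew(w):
    return np.array([[0, -w[2], w[1]],
                     [w[2], 0, -w[0]],
                     [-w[1], w[0], 0]])

def propagate(R, v, p, imu_samples, dt):
    """imu_samples: iterable of (gyro [rad/s], accel [m/s^2]) in body frame."""
    for gyro, accel in imu_samples:
        # First-order exponential map for the rotation update.
        R = R @ (np.eye(3) + skew(gyro) * dt)
        a_world = R @ accel + GRAVITY     # rotate specific force, add gravity
        p = p + v * dt + 0.5 * a_world * dt**2
        v = v + a_world * dt
    return R, v, p

# Example: 100 IMU samples at 200 Hz between two scans (hover-like data).
samples = [(np.zeros(3), np.array([0.0, 0.0, 9.81]))] * 100
R, v, p = propagate(np.eye(3), np.zeros(3), np.zeros(3), samples, dt=1 / 200)
print("position after 0.5 s:", p)   # stays near zero, as expected
```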

Visual-inertial SLAM

The most accurate camera-based approach: one or more cameras plus an IMU, fused with extended Kalman filters or factor graphs. VINS-Fusion (HKUST), Kimera (MIT), and ORB-SLAM3 in its visual-inertial configurations are the reference implementations. Used in drones, AR headsets, and humanoid robots.
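A full VIO graph carries IMU preintegration and feature reprojection factors, but the underlying machinery is the same factor-graph optimization shown in this toy 2D pose-graph example using GTSAM's Python bindings: one prior, a chain of odometry factors, and a loop closure.

```python
# Toy pose-graph example with GTSAM's Python bindings, illustrating the
# factor-graph machinery that VINS-Fusion and Kimera build on. A real
# VIO graph would also carry IMU preintegration and reprojection factors.
import numpy as np
import gtsam

graph = gtsam.NonlinearFactorGraph()
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.1, 0.1, 0.05]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor the first pose, then chain odometry factors around a 1 m square.
graph.add(gtsam.PriorFactorPose2(0, gtsam.Pose2(0, 0, 0), prior_noise))
step = gtsam.Pose2(1.0, 0.0, np.pi / 2)      # drive 1 m, turn 90 degrees
for i in range(3):
    graph.add(gtsam.BetweenFactorPose2(i, i + 1, step, odom_noise))
# Loop closure: pose 3 sees pose 0 one more "drive and turn" away.
graph.add(gtsam.BetweenFactorPose2(3, 0, step, odom_noise))

# Deliberately drifted initial guesses; optimization pulls them back.
initial = gtsam.Values()
guesses = [(0.0, 0.0, 0.0), (1.1, 0.1, 1.5), (1.2, 1.1, 3.0), (-0.1, 1.2, -1.7)]
for i, (x, y, th) in enumerate(guesses):
    initial.insert(i, gtsam.Pose2(x, y, th))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
for i in range(4):
    print(i, result.atPose2(i))
```

The loop-closure factor is what separates SLAM from pure odometry: without it, drift in the chained between-factors would never be corrected.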

Multi-session and persistent maps

The newest production-relevant work is on saving and reusing maps across sessions. Google's ARCore Cloud Anchors, Niantic Lightship, and several open-source efforts let a robot or device localize against a previously built map without rebuilding it. This is what unlocks long-term autonomy.
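One concrete open-source version of this workflow is SLAM Toolbox's localization mode, which loads a pose graph serialized in an earlier mapping session instead of starting from scratch. The sketch below assumes the executable and parameter names from slam_toolbox's shipped localization config, and the map path is a made-up example; verify both against your installed version.

```python
# Sketch: swapping the mapping node above for slam_toolbox's localization
# mode, which reuses a previously serialized pose graph rather than
# building a new map. Names follow slam_toolbox's shipped localization
# config; the map path is an illustrative placeholder.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    localization = Node(
        package="slam_toolbox",
        executable="localization_slam_toolbox_node",
        name="slam_toolbox",
        output="screen",
        parameters=[{
            "mode": "localization",
            # Pose graph saved in a previous mapping session via
            # slam_toolbox's serialize_map service, path given
            # without the file extension.
            "map_file_name": "/maps/warehouse_floor_1",
            "map_start_at_dock": True,
        }],
    )
    return LaunchDescription([localization])
```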

What we ship

For warehouse robots: SLAM Toolbox on a 2D LIDAR. For drone clients: VINS-Fusion or ORB-SLAM3. For autonomous vehicles: FAST-LIO2 with map persistence. For AR experiences: ARKit / ARCore directly. The choice is a function of sensors, environment, and accuracy requirements.

The frontier

Neural-radiance-field SLAM (NeRF-SLAM) and Gaussian-splatting SLAM are the research frontier. They produce dense, photorealistic 3D maps from cameras alone. Production-readiness is 2-3 years out, but the demos are impressive.