A robust, targetless calibration system for autonomous vehicle sensor suites.
This project provides an offline software system designed to calibrate the intrinsic and extrinsic parameters of sensors commonly used in autonomous vehicles (cameras, IMU, wheel encoders). The system achieves accurate calibration without relying on predefined calibration targets (e.g., checkerboards), using data collected during normal vehicle operation.
Accurate sensor calibration is critical for reliable environment perception, localization, and sensor fusion in autonomous driving systems.
- Targetless Calibration: Calibrate sensors using data from normal driving, without specialized calibration targets
- Multi-Sensor Support: Integrates cameras, IMU, and wheel encoders in a unified optimization framework
- High Accuracy: Achieves calibration accuracy comparable to target-based methods under suitable conditions
- Robust Optimization: Uses factor graph optimization with GTSAM for joint optimization of all parameters
The system follows a modular pipeline architecture:
- Data Loading & Synchronization: Reads various input formats and aligns measurements based on timestamps
- Initial Ego-Motion Estimation: Fuses IMU and wheel odometry to provide an initial estimate of the vehicle's trajectory
- Visual Initialization / SfM: Detects/matches features, performs triangulation, and establishes initial 3D landmarks
- Factor Graph Construction: Builds the optimization problem using variables (poses, landmarks, intrinsics, extrinsics, biases) and factors (reprojection, IMU, odometry, priors)
- Bundle Adjustment Optimization: Solves the non-linear least-squares problem using Levenberg-Marquardt (see the sketch after this list)
- Results Extraction & Validation: Extracts the final calibrated parameters and computes validation metrics
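To make steps 4 and 5 concrete, below is a minimal, self-contained GTSAM sketch: a toy factor graph with a prior factor and a single between factor over two poses, solved with Levenberg-Marquardt. The real pipeline adds reprojection, IMU, and odometry factors over poses, landmarks, intrinsics, extrinsics, and biases; the keys, measurements, and noise values here are purely illustrative.

```python
import numpy as np
import gtsam

# Toy factor graph: two vehicle poses linked by an odometry-style
# between factor, anchored by a prior on the first pose.
graph = gtsam.NonlinearFactorGraph()

# Prior on pose 0 (GTSAM Pose3 sigmas: rotation in rad, then translation in m).
prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.01] * 3 + [0.05] * 3))
graph.add(gtsam.PriorFactorPose3(0, gtsam.Pose3(), prior_noise))

# Relative-motion measurement between pose 0 and pose 1 (illustrative values).
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.02] * 3 + [0.10] * 3))
measured = gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(1.0, 0.0, 0.0))
graph.add(gtsam.BetweenFactorPose3(0, 1, measured, odom_noise))

# Initial estimates, deliberately perturbed away from the measurement.
initial = gtsam.Values()
initial.insert(0, gtsam.Pose3())
initial.insert(1, gtsam.Pose3(gtsam.Rot3(), gtsam.Point3(0.9, 0.1, 0.0)))

# Step 5: solve the non-linear least-squares problem with Levenberg-Marquardt.
params = gtsam.LevenbergMarquardtParams()
result = gtsam.LevenbergMarquardtOptimizer(graph, initial, params).optimize()
print(result.atPose3(1))
```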
The system requires:
- Python 3.8+
- NumPy, SciPy
- OpenCV (for image processing)
- GTSAM (for factor graph optimization)
- Additional dependencies listed in `requirements.txt`
To install:

```bash
# Clone the repository
git clone https://github.com/etendue/multisensor-calibration.git
cd multisensor-calibration

# Install dependencies
pip install -r requirements.txt

# Install GTSAM (recommended method: conda)
conda install -c conda-forge gtsam

# Alternatively, use our installation script (tries conda, pip, and building from source)
# ./scripts/install_gtsam.sh

# Test the GTSAM installation
./scripts/test_gtsam.py
```
To run a calibration:

- Prepare your dataset in a supported format (ROS bags recommended)
- Configure the calibration parameters in a YAML file (a hypothetical example follows below)
- Run the calibration pipeline:

```bash
python calibrate.py --config config.yaml --data-dir /path/to/your/data --output-dir results
```
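The actual configuration schema is defined by the project; the following `config.yaml` sketch is purely hypothetical, and every key in it is an illustrative assumption rather than the real interface:

```yaml
# Hypothetical example only -- the real schema is defined by the project.
cameras:
  count: 4                 # e.g., a surround-view rig covering 360 degrees
imu:
  rate_hz: 200             # nominal IMU sampling rate
wheel_encoders:
  model: differential      # see the wheel odometry models document
optimization:
  max_iterations: 100      # Levenberg-Marquardt iteration cap
  robust_kernel: huber     # illustrative robust-loss choice
```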
For best results, the input data should include:
- Multiple cameras providing a 360° view
- One 6-axis IMU (angular velocity, linear acceleration)
- Four wheel encoders (wheel speeds or ticks; see the odometry sketch after this list)
- Diverse vehicle motion (translations, rotations, varying speeds)
- Good lighting conditions and feature-rich environments
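As a rough illustration of how the wheel-encoder data feeds the initial ego-motion estimate (pipeline step 2), here is a minimal planar odometry update under a differential-drive assumption. The function name, track-width parameter, and two-wheel simplification are hypothetical; the actual system fuses this kind of estimate with IMU data and supports several wheel odometry models (see the documentation below).

```python
import numpy as np

def integrate_wheel_odometry(x, y, yaw, v_left, v_right, track_width, dt):
    """One planar odometry step from left/right wheel speeds in m/s.

    Differential-drive approximation for illustration only; the real
    pipeline fuses wheel odometry with IMU measurements.
    """
    v = 0.5 * (v_left + v_right)              # forward speed of the body
    omega = (v_right - v_left) / track_width  # yaw rate from speed difference
    x += v * np.cos(yaw) * dt
    y += v * np.sin(yaw) * dt
    yaw += omega * dt
    return x, y, yaw
```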
With such data, the expected accuracy is:

- Reprojection error: < 0.5 pixels (RMS; computed per view as sketched below)
- Extrinsic translation error: < 2 cm (relative to ground truth, if available)
- Extrinsic rotation error: < 0.2 degrees (relative to ground truth, if available)
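For reference, the RMS reprojection metric can be computed per camera view along these lines; this is a minimal OpenCV sketch with an illustrative function name, not the project's actual validation code:

```python
import numpy as np
import cv2

def rms_reprojection_error(object_points, image_points, rvec, tvec, K, dist):
    """RMS reprojection error in pixels for one camera view.

    object_points: (N, 3) landmark positions, image_points: (N, 2) observed
    pixels, rvec/tvec: camera pose, K: 3x3 intrinsics, dist: distortion coeffs.
    """
    projected, _ = cv2.projectPoints(object_points, rvec, tvec, K, dist)
    residuals = projected.reshape(-1, 2) - image_points.reshape(-1, 2)
    return float(np.sqrt(np.mean(np.sum(residuals ** 2, axis=1))))
```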
This project is currently in development. See the task plan for current progress and upcoming tasks.
Project documentation:
- Product Requirements Document
- Technical Design Document
- Requirements Traceability Matrix
- Task Plan
- Task Definitions
- CI Setup
- Wheel Odometry Models
Contributions are welcome! Please feel free to submit a Pull Request.