DDSL (Dense Dispersed Structured Light for Hyperspectral 3D Imaging of Dynamic Scenes) is a method that reconstructs both the spectral and geometric information of dynamic scenes.
Clone the repository and install the dependencies listed in requirements.txt:
git clone https://github.com/shshin1210/DDSL.git
cd DDSL
pip install -r requirements.txt
You should prepare the DDSL imaging system configuration as shown in the figure above.
- You will need a conventional RGB projector (Epson CO-FH02) and conventional RGB stereo cameras (FLIR GS3-U3-32S4C-C), with a diffraction grating film (Edmund 54-509) in front of the projector.
- Calibration between camera-projector, camera-camera, and camera-diffraction grating must be done in advance.
We describe the process of building our data-driven backward mapping model in our paper and Supplementary Document.
All calibrated parameters should be prepared:
- Camera-camera & camera-projector intrinsic and extrinsic parameters
- Camera response function, projector emission function & diffraction grating efficiency
- Dispersive-aware backward model
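Once calibrated, the intrinsic and extrinsic parameters let you map 3D points between the devices. The sketch below shows a standard pinhole projection with illustrative values; the array names, numbers, and storage format are assumptions for this example, not DDSL's actual calibration files.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3D world points X (N, 3) into pixel coordinates (N, 2)."""
    Xc = (R @ X.T + t.reshape(3, 1)).T   # world frame -> camera frame (extrinsics)
    uv = (K @ Xc.T).T                    # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]        # perspective divide

# Illustrative parameters (placeholders, not the calibrated DDSL values)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)

pts = np.array([[0.0, 0.0, 2.0]])        # a point 2 m in front of the camera
print(project(K, R, t, pts))             # lands on the principal point: [[320. 240.]]
```

The same composition (intrinsics after extrinsics) applies to the camera-projector pair, with the projector treated as an inverse camera.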
We provide example calibration parameters in our DDSL Calibration Parameters.
We capture dynamic scenes under a group of M DDSL patterns and a single black pattern at 6.6 fps.
Here we use software synchronization via a Python program that displays images with OpenGL. More details on software synchronization for fast capture are provided in the Supplementary Document.
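The display-then-capture loop behind this synchronization can be sketched as follows. `show_pattern` and `grab_frame` are stand-in stubs for the actual OpenGL display and PySpin capture calls in fast_capture; the names, pattern count, and timing logic are illustrative only.

```python
import time

M = 4                                                 # number of DDSL patterns (illustrative)
PATTERNS = [f"ddsl_{i:02d}" for i in range(M)] + ["black"]
PERIOD = 1.0 / 6.6                                    # target frame period for 6.6 fps

def show_pattern(name):
    pass                                              # stub: full-screen OpenGL display

def grab_frame(name):
    return name                                       # stub: PySpin capture; returns pattern id

def capture_sequence():
    frames = []
    for name in PATTERNS:
        t0 = time.monotonic()
        show_pattern(name)                            # project the pattern
        frames.append(grab_frame(name))               # capture the illuminated scene
        time.sleep(max(0.0, PERIOD - (time.monotonic() - t0)))  # hold the frame period
    return frames

print(len(capture_sequence()))                        # M patterns + 1 black = 5 frames
```

In the real system the camera exposure must fit inside the display period; that is the constraint the software synchronization enforces.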
Please refer to the repository elerac/pyglimshow, which provides code for fast capture, and clone it.
For fast capture, prepare the complete imaging system configuration and run the command below. This file is provided in the fast_capture directory.
python procam_multiple_capture.py
Make sure the Python files provided in fast_capture are placed inside the cloned repository:
|-- cloned files ...
|-- procam_multiple_capture.py
|-- constants.py
|-- cam_pyspin.py
You may change some camera settings in cam_pyspin.py.
We provide examples of captured dynamic scene images from both stereo cameras in the dataset directory.
We reconstruct depth using RAFT-Stereo. We used the code from princeton-vl/RAFT-Stereo and obtained accurate depth maps.
Reconstructed depth results for each of the M DDSL patterns are provided for every dynamic scene (e.g., dynamic00).
We provide an example dynamic scene dataset in Example_of_Dynamic_scene_Dataset.
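RAFT-Stereo outputs a disparity map; converting it to metric depth uses the calibrated focal length and stereo baseline via Z = f·B / d. A minimal sketch with illustrative values (the real f and B come from the stereo calibration described above):

```python
import numpy as np

def disparity_to_depth(disp, focal_px, baseline_m, eps=1e-6):
    """Z = f * B / d, guarding against zero or negative disparity."""
    return focal_px * baseline_m / np.maximum(disp, eps)

disp = np.array([[32.0, 64.0]])                        # disparity in pixels
depth = disparity_to_depth(disp, focal_px=800.0, baseline_m=0.1)
print(depth)                                           # [[2.5  1.25]] (meters)
```

Larger disparity means the point is closer to the cameras, which is why depth falls as disparity grows.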
Please make sure the RAFT, RAFT-Stereo, and DDSL repositories are placed as:
DDSL
|-- ...
RAFT
|-- ...
RAFT-Stereo
|-- ...
You also need to rename the original RAFT/core directory to RAFT/raft_core, since RAFT-Stereo contains a folder with the same name.
For hyperspectral reconstruction of dynamic scenes under a group of M DDSL patterns and a single black pattern, we need optical flow estimation.
We estimate optical flow between the images captured under the black pattern using RAFT. We used the code from princeton-vl/RAFT; please refer to that repository.
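The estimated flow is used to align the black-pattern frames across time before reconstruction. Below is a minimal NumPy sketch of backward warping with a dense flow field; in practice RAFT supplies `flow`, and the function name and nearest-neighbor sampling here are simplifications for illustration.

```python
import numpy as np

def warp_backward(img, flow):
    """Sample img at (x + u, y + v), nearest-neighbor, clamped at the border."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xs2 = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ys2 = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return img[ys2, xs2]

img = np.arange(16, dtype=float).reshape(4, 4)
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0                      # constant 1-pixel motion in x
print(warp_backward(img, flow))         # each row shifted left by one, border clamped
```

Real pipelines typically use bilinear sampling instead of nearest-neighbor, but the indexing structure is the same.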
If you have prepared all datasets and the imaging system configuration, start reconstructing hyperspectral reflectance:
python main.py
Any configuration changes can be made via the ArgumentParser arguments.
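Since the configuration is exposed through Python's ArgumentParser, options can be overridden on the command line. The flag names below are illustrative placeholders, not the actual arguments in main.py:

```python
import argparse

# Hypothetical flags for illustration; see main.py for the real ArgumentParser setup
parser = argparse.ArgumentParser(description="DDSL hyperspectral reconstruction")
parser.add_argument("--dataset_dir", default="dataset", help="captured scene directory")
parser.add_argument("--num_patterns", type=int, default=4, help="number M of DDSL patterns")

# Equivalent to running: python main.py --num_patterns 6
args = parser.parse_args(["--num_patterns", "6"])
print(args.num_patterns)   # 6
print(args.dataset_dir)    # dataset (default kept)
```

Unspecified flags fall back to their defaults, so only the options you change need to appear on the command line.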