This is a final project for CS445 (Computational Photography), Fall 2023, at UIUC.
FieldsFusion is a project focused on advancing the field of 3D object and scene composition, with a particular emphasis on composition of radiance fields like NeRF and 3D Gaussian splatting. Our goal is to seamlessly integrate foreground objects with background scenes, both reconstructed from radiance fields, to facilitate high-fidelity rendering and composition.
Our data, render results, and an example Blender script can be found on Google Drive.
```bash
git clone --recurse-submodules git@github.com:ZiYang-xie/FieldFusion.git
cd FieldFusion

# Install nerfstudio
cd model/nerfstudio
pip install -r requirements.txt
pip install -e .

# Install TensoIR dependencies
cd ../TensoIR
pip install torch==1.10 torchvision
pip install tqdm scikit-image opencv-python configargparse lpips imageio-ffmpeg kornia tensorboard loguru plyfile

# Download Blender 2.93 (used later for shadow rendering)
cd ../..
mkdir external
cd external
wget https://download.blender.org/release/Blender2.93/blender-2.93.0-linux-x64.tar.xz
tar -xvf blender-2.93.0-linux-x64.tar.xz
```
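To confirm the headless Blender binary is usable before the shadow step, a quick check (assuming the archive extracted to its default directory name) is:

```python
import subprocess

# Sanity check: print the version of the Blender binary downloaded above.
# The path assumes the archive's default extraction directory.
blender_bin = "./external/blender-2.93.0-linux-x64/blender"
subprocess.run([blender_bin, "--version"], check=True)
```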
- Following the nerfstudio tutorial, prepare the data for the foreground object and the background scene separately.
- The data should be in the form of a set of images and camera poses.
- Refer to nerfstudio data preparation for more details; a quick pose sanity-check sketch follows this list.
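Before training, it can save time to verify that the prepared poses load cleanly. A minimal sketch, assuming the nerfstudio-style `transforms.json` layout and a hypothetical scene path:

```python
import json
import numpy as np

# Load a nerfstudio-style transforms.json and check each camera pose.
# "data/background/my_scene" is a hypothetical path; use your own.
with open("data/background/my_scene/transforms.json") as f:
    meta = json.load(f)

for frame in meta["frames"]:
    c2w = np.array(frame["transform_matrix"])  # 4x4 camera-to-world
    assert c2w.shape == (4, 4), frame["file_path"]
    R = c2w[:3, :3]
    # The rotation block of a valid pose should be (nearly) orthonormal.
    assert np.allclose(R @ R.T, np.eye(3), atol=1e-4), frame["file_path"]

print(f"{len(meta['frames'])} camera poses look valid")
```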
- We use TensoIR for relighting; please refer to it for more details. Thanks to the authors for their great work!
- First, train on the foreground images with the `train_tensoIR_simple.py` script to get a checkpoint:

```bash
export PYTHONPATH=. && python train_tensoIR_simple.py --config ./configs/single_light/blender.txt
```
- Then render with a pre-trained model under unseen lighting conditions:

```bash
export PYTHONPATH=. && python scripts/relight_importance.py --ckpt "$ckpt_path" --config configs/relighting_test/"$scene".txt --batch_size 800
```
- Run the reconstruction pipeline for the foreground object and the background scene separately:

```bash
MODEL=<nerfacto/gaussian-splatting>
DATA=colmap
SCENE_NAME=<INPUT_YOUR_SCENE_NAME>
DATA_DIR=./data/<background/foreground>/$SCENE_NAME/
EXP_NAME=$MODEL-$SCENE_NAME

ns-train $MODEL \
    --vis wandb \
    --experiment-name $EXP_NAME \
    $DATA \
    --data $DATA_DIR \
    --train-split-fraction 1
```
- Export the point cloud for the background:

```bash
ns-export gaussian-splat --load-config <CONFIG> --output-dir <OUTPUT_DIR>
```
- Crop the foreground object with an oriented bounding box and export the mesh:

```bash
ns-export poisson \
    --load-config <CONFIG> \
    --output-dir <OUTPUT_DIR> \
    --normal-method open3d \
    --remove-outliers True \
    --obb-center <FLOAT FLOAT FLOAT> \
    --obb-rotation <FLOAT FLOAT FLOAT> \
    --obb-scale <FLOAT FLOAT FLOAT>
```
You can crop the foreground object in `ns-viewer` to obtain the `--obb-center`, `--obb-rotation`, and `--obb-scale` values.
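If you prefer numbers over eyeballing the box in the viewer, here is a rough sketch of estimating these parameters from the exported point cloud. The `exports/foreground.ply` path is hypothetical, and you should check whether your nerfstudio version expects the Euler angles in radians or degrees:

```python
import numpy as np
import open3d as o3d
from scipy.spatial.transform import Rotation

# Estimate an oriented bounding box from an exported point cloud.
# "exports/foreground.ply" is a hypothetical path.
pcd = o3d.io.read_point_cloud("exports/foreground.ply")
obb = pcd.get_oriented_bounding_box()

center = np.asarray(obb.center)  # candidate --obb-center
extent = np.asarray(obb.extent)  # candidate --obb-scale
# Convert the 3x3 rotation to roll/pitch/yaw (radians by default);
# confirm the unit your nerfstudio version expects for --obb-rotation.
rpy = Rotation.from_matrix(np.asarray(obb.R)).as_euler("xyz")

print("--obb-center", *center)
print("--obb-rotation", *rpy)
print("--obb-scale", *extent)
```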
- Import the foreground object and background scene into Blender.
- Put the foreground object into the background scene.
- Design a camera trajectory.
- Export the camera poses of the foreground object and the background scene respectively (one way to script this is sketched after this list).
- [OPTIONAL] Add an equirectangular camera at the center of the scene to render the environment map.
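A possible sketch of the pose export, run from Blender's scripting console, is below. The JSON keys mirror nerfstudio's camera-path format but are an assumption here; compare against a path exported from `ns-viewer` before relying on it:

```python
import json
import math
import bpy

# Dump the active camera's animated poses as a nerfstudio-style camera
# path. Key names are assumptions; verify against an ns-viewer export.
scene = bpy.context.scene
cam = scene.camera
frames = []
for f in range(scene.frame_start, scene.frame_end + 1):
    scene.frame_set(f)
    c2w = cam.matrix_world  # Blender cameras look down -Z, matching nerfstudio's OpenGL-style convention
    frames.append({
        "camera_to_world": [v for row in c2w for v in row],  # row-major 4x4
        "fov": math.degrees(cam.data.angle_y),               # vertical FOV in degrees
        "aspect": scene.render.resolution_x / scene.render.resolution_y,
    })

camera_path = {
    "camera_type": "perspective",
    "render_height": scene.render.resolution_y,
    "render_width": scene.render.resolution_x,
    "camera_path": frames,
    "fps": scene.render.fps,
    "seconds": len(frames) / scene.render.fps,
}
with open("/tmp/camera_path.json", "w") as fp:
    json.dump(camera_path, fp, indent=2)
```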
- Given the prepared camera poses, render the foreground object and the background scene:

```bash
ns-render camera-path \
    --load-config <Foreground / Background config> \
    --output-format images \
    --output-path <Your Save Path> \
    --camera-path-filename <Exported Camera Path>
```
Add `--rendered-output-names accumulation` to render the foreground accumulation mask.
- Save the Blender file generated in Step 2.3 as `shadow.blend`.
- Change the file paths in `scripts/shadow_render.py` to your own paths.
- Run the following command to render the shadow:

```bash
./external/blender-2.93.0-linux-x64/blender -b -E CYCLES -P ./scripts/shadow_render.py
```
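For context, the essence of this step is Cycles' shadow catcher: the ground plane receives the object's shadow while the rest of the frame renders transparent. A minimal sketch of that setup (the object name `Ground` is hypothetical; `scripts/shadow_render.py` is the authoritative version):

```python
import bpy

# Shadow-catcher setup in the spirit of scripts/shadow_render.py.
# "Ground" is a hypothetical object name from shadow.blend.
scene = bpy.context.scene
scene.render.engine = "CYCLES"
scene.render.film_transparent = True  # everything except the shadow stays transparent

ground = bpy.data.objects["Ground"]
ground.cycles.is_shadow_catcher = True  # Blender 2.93 API (3.x moved this to obj.is_shadow_catcher)

scene.render.filepath = "/tmp/shadow_"
bpy.ops.render.render(animation=True)  # render the whole camera trajectory
```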
- Given the RGB renders of the foreground object and the background scene, the shadow render, and the foreground accumulation mask, compose the images to get the final results:
```bash
python scripts/compose.py \
    --fg_path <Foreground RGB> \
    --bg_path <Background RGB> \
    --shadow_path <Shadow> \
    --mask_path <Foreground Accumulation Mask> \
    --output <Output Path>
```
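Conceptually, the composition darkens the background by the shadow map and then alpha-blends the foreground over it, using the accumulation mask as a soft alpha. A per-frame sketch of that math (file names are placeholders; `scripts/compose.py` may blend differently):

```python
import cv2
import numpy as np

# Single-frame composition sketch; all paths are placeholders.
fg = cv2.imread("fg/0001.png").astype(np.float32) / 255.0
bg = cv2.imread("bg/0001.png").astype(np.float32) / 255.0
shadow = cv2.imread("shadow/0001.png").astype(np.float32) / 255.0
mask = cv2.imread("mask/0001.png", cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

alpha = mask[..., None]                               # accumulation as soft alpha in [0, 1]
composite = fg * alpha + bg * shadow * (1.0 - alpha)  # shadow darkens the background

cv2.imwrite("out/0001.png", (composite * 255.0).astype(np.uint8))
```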
This project is distributed under the terms of the CC-BY-NC-ND license. Do not use this code for commercial purposes.