OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects (NeurIPS 2023)
This repository contains code for pre-processing data captured in the light stage, including camera parameter restoration, image undistortion, object segmentation, and light calibration for object-centric tasks. To compare your method with existing works such as TensoIR, refer to here.
This repository is used to process the data in the paper "OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects", which introduces a real-world dataset containing over 108K images of 64 objects captured under 72 camera views and a large number of different illuminations. This dataset enables the quantitative evaluation of most inverse rendering and material decomposition methods on real objects. It contains various everyday objects, including decorative sculptures, toys, food, etc., and does not include human subjects.
The dataset can be viewed on the project page.
- Ubuntu 18.04+ with a display for annotating segmentation masks
- Python 3.7+
- NVIDIA GPU
See the LICENSE file for license rights and limitations (MIT).
cd OpenIllumination
conda create -n openillumination python=3.7
conda activate openillumination
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
conda install -c fvcore -c iopath -c conda-forge fvcore iopath -y
conda install pytorch3d -c pytorch3d -y
pip install -r requirements.txt
sh install.sh
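After installation, a quick import check can confirm that PyTorch, CUDA, and PyTorch3D are available. This snippet is only an illustrative sanity check, not part of the repository:

import torch
import pytorch3d

# confirm that the GPU build of PyTorch and PyTorch3D are importable
print("PyTorch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("PyTorch3D:", pytorch3d.__version__)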
Follow the instructions here.
bash scripts/download_segm_models.sh
a) Capture images using the light stage and organize the data as shown below; a quick sanity-check sketch follows the directory tree.
DATASET_ROOT
├─ greenhead # for camera calibration
│ ├─ DSLR_3D__greenhead
│ │ ├─ CA2.JPG
│ │ ├─ CA4.JPG
│ │ ├─ ...
├─ paper_egg # image directory
│ ├─ 001__1__... # the first illumination
│ │ ├─ CA2.JPG
│ │ ├─ CA4.JPG
│ │ ├─ ...
│ ├─ 002__2__... # the second illumination
│ │ ├─ CA2.JPG
│ │ ├─ CA4.JPG
│ │ ├─ ...
│ ├─ ...
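A quick sanity check of this layout can be sketched in Python as below. The object name and the CA*.JPG naming follow the example tree above and are assumptions; adjust them for your own capture. This helper is illustrative and not part of the repository:

import os
from glob import glob

dataset_root = "DATASET_ROOT"   # replace with your capture directory
obj_name = "paper_egg"          # replace with the object you captured

# each illumination directory (001__1__..., 002__2__..., ...) should contain one JPG per camera
for illum_dir in sorted(glob(os.path.join(dataset_root, obj_name, "*__*"))):
    n_views = len(glob(os.path.join(illum_dir, "CA*.JPG")))
    print(f"{os.path.basename(illum_dir)}: {n_views} views")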
b) Create a directory for the processed data.
mkdir OUTPUT_DIR
c) Copy the images for camera calibration.
mkdir -p OUTPUT_DIR/calibration/images
cp DATASET_ROOT/${GREENHEAD_DIR}/*.JPG OUTPUT_DIR/calibration/images/
python ltsg/module/calibration.py --data_dir OUTPUT_DIR/calibration --normalize_camera_poses
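The --normalize_camera_poses flag rescales the recovered poses into a normalized, object-centric coordinate frame. Conceptually, this amounts to something like the sketch below; it is a simplified illustration that assumes camera-to-world matrices, and the actual convention used by ltsg/module/calibration.py may differ:

import numpy as np

def normalize_camera_poses(c2w):
    """Recenter and rescale camera-to-world matrices of shape (N, 4, 4) so that
    all camera centers lie inside a unit sphere around the origin.
    Simplified illustration, not the repository's implementation."""
    c2w = c2w.copy()
    centers = c2w[:, :3, 3]
    offset = centers.mean(axis=0)                       # average camera center
    scale = np.linalg.norm(centers - offset, axis=1).max()
    c2w[:, :3, 3] = (centers - offset) / scale          # translate and rescale
    return c2w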
d) Run the following command to reformat images of other objects.
python tools/reformat_data_dslr.py --input_dir DATASET_ROOT --output_dir OUTPUT_DIR --calibration_dir OUTPUT_DIR/calibration
We have already performed light calibration, so you can use the pre-calibrated results and skip this step. Otherwise, run:
python tools/light_calib/light_calib.py
Taking images captured by the DSLR as an example, run the following command to perform camera calibration, image undistortion, and segmentation. Note that this step requires a display if you use SAM to perform the segmentation.
python tools/data_process_multi_light.py -c configs/dslr/obj.txt
python tools/ps_recon/albedo_from_mvc.py
In addition to the One-Light-At-a-Time (OLAT) pattern, we have carefully designed 13 different light patterns for our dataset. These patterns activate multiple LED lights either randomly or in a regular arrangement.
For the first 6 light patterns (001 to 006), we divide the 142 lights into 6 groups based on their spatial location. Each light pattern corresponds to activating one of these groups.
For the remaining 7 light patterns (007 to 013), the lights are chosen at random, with the total number of activated lights gradually increasing.
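A conceptual sketch of how such patterns can be generated is shown below. The spatial grouping and the random subset sizes here are illustrative assumptions; the dataset itself uses a fixed set of patterns:

import numpy as np

rng = np.random.default_rng(0)
num_lights = 142

# patterns 001-006: six groups covering all lights
# (the real grouping is based on LED positions on the light stage;
#  np.array_split is only a placeholder here)
group_patterns = np.array_split(np.arange(num_lights), 6)

# patterns 007-013: random subsets with gradually increasing size
# (the subset sizes below are illustrative, not the dataset's exact values)
random_patterns = [rng.choice(num_lights, size=k, replace=False)
                   for k in (8, 16, 32, 48, 64, 96, 128)]

for i, pattern in enumerate(group_patterns + random_patterns, start=1):
    print(f"pattern {i:03d}: {len(pattern)} lights")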
Below is an image illustrating the 13 light patterns present in our dataset.
The ground-truth light positions for these patterns are provided in REPO_ROOT/light_pos.npy.
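To use the pre-calibrated lights directly, the file can be loaded as below; the array shape noted in the comment is an assumption, so inspect it for your copy of the file:

import numpy as np

# calibrated light positions for the light stage, presumably shaped (num_lights, 3)
light_pos = np.load("light_pos.npy")
print(light_pos.shape)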