IRS: A Large Synthetic Indoor Robotics Stereo Dataset for Disparity and Surface Normal Estimation
IRS is an open dataset for indoor robotics vision tasks, especially disparity and surface normal estimation. It contains 103,316 samples in total, covering a wide range of indoor scenes such as homes, offices, stores and restaurants.
Each sample consists of a left image, a right image, a disparity map and a surface normal map.
| Rendering Characteristic | Options (number of samples) |
|---|---|
| indoor scene class | home (31145), office (43417), restaurant (22058), store (6696) |
| object class | desk, chair, sofa, glass, mirror, bed, bedside table, lamp, wardrobe, etc. |
| brightness | over-exposure (>1300), darkness (>1700) |
| light behavior | bloom (>1700), lens flare (>1700), glass transmission (>3600), mirror reflection (>3600) |
Some samples of different indoor scene characteristics are shown below, covering different scenes (home, office, restaurant), lighting conditions (normal light, over-exposure, darkness) and materials (glass, mirror, metal).
We design a novel network, DispNormNet, to jointly estimate the disparity map and the surface normal map of an input stereo pair. DispNormNet comprises two modules, DispNetC and NormNetDF. DispNetC is identical to the network of the original DispNet paper and produces the disparity map. NormNetDF produces the normal map and is similar to DispNetS. "DF" stands for disparity feature fusion, which we found important for producing accurate surface normal maps.
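The fusion idea can be illustrated with a minimal PyTorch sketch (hypothetical layer sizes and module names, not the released implementation; the correlation-based matching of DispNetC is omitted): the disparity branch returns both a disparity map and intermediate features, and the normal branch concatenates those disparity features with the left image before decoding the surface normal map.

```python
# Minimal sketch of the DispNormNet idea (hypothetical layer sizes, not the
# released implementation): a disparity branch predicts disparity plus
# intermediate features, and the normal branch fuses those features ("DF")
# with the left image to predict surface normals.
import torch
import torch.nn as nn

class TinyDispBranch(nn.Module):
    """Stand-in for DispNetC: outputs a disparity map and a feature map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.disp_head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, left, right):
        feat = self.encoder(torch.cat([left, right], dim=1))
        return self.disp_head(feat), feat

class TinyNormBranch(nn.Module):
    """Stand-in for NormNetDF: fuses disparity features with the left image."""
    def __init__(self):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Conv2d(32 + 3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1))  # 3-channel surface normal

    def forward(self, left, disp_feat):
        return self.decoder(torch.cat([left, disp_feat], dim=1))

class DispNormNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.disp_branch = TinyDispBranch()
        self.norm_branch = TinyNormBranch()

    def forward(self, left, right):
        disp, disp_feat = self.disp_branch(left, right)
        normal = self.norm_branch(left, disp_feat)
        return disp, normal

left, right = torch.rand(1, 3, 192, 320), torch.rand(1, 3, 192, 320)
disp, normal = DispNormNetSketch()(left, right)
print(disp.shape, normal.shape)  # [1, 1, 192, 320], [1, 3, 192, 320]
```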
Q. Wang*,1, S. Zheng*,1, Q. Yan*,2, F. Deng2, K. Zhao†,1, X. Chu†,1.
IRS: A Large Synthetic Indoor Robotics Stereo Dataset for Disparity and Surface Normal Estimation. [preprint]
* indicates equal contribution. † indicates corresponding authors. 1 Department of Computer Science, Hong Kong Baptist University. 2 School of Geodesy and Geomatics, Wuhan University.
You can use the OneDrive link to download our dataset.
- Python 2.7
- PyTorch 1.2.0
- torchvision 0.2.0 (higher versions may cause issues)
- CUDA 10 (https://developer.nvidia.com/cuda-downloads)
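Before installing the custom layers, it may help to confirm that the environment matches these versions. A minimal check could look like this (a sketch; it only assumes the packages listed above are installed):

```python
# Quick environment sanity check for the versions listed above.
import sys
import torch
import torchvision

print("Python:", sys.version.split()[0])        # expect 2.7.x
print("PyTorch:", torch.__version__)            # expect 1.2.0
print("torchvision:", torchvision.__version__)  # expect 0.2.0
print("CUDA available:", torch.cuda.is_available())
print("CUDA version:", torch.version.cuda)      # expect 10.x
```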
Use the following commands to install the environment on Linux:

```bash
cd layers_package
./install.sh

# install OpenEXR (https://www.openexr.com/)
sudo apt-get update
sudo apt-get install openexr
```
Download the IRS dataset from https://1drv.ms/f/s!AmN7U9URpGVGem0coY8PJMHYg0g?e=nvH5oB (OneDrive).
Check the MD5 checksums of all files against the table below to ensure their correctness (a verification sketch follows the table).
| MD5SUM | File Name |
|---|---|
| e5e2ca49f02e1fea3c7c5c8b29d31683 | Store.tar.gz |
| d62b62c3b6badcef0d348788bdf4f319 | IRS_small.tar.gz |
| ac569053a8dbd76bb82f1c729e77efa4 | Home-1.tar.gz |
| 65aad05ae341750911c3da345d0aabb2 | Home-2.tar.gz |
| de77ab28d9aaec37373a340a58889840 | Office-1.tar.gz |
| 2a5cb91fb2790d92977c8d0909539543 | Office-2.tar.gz |
| d68dd6014c0c8d6ae24b27cc2fce6423 | Restaurant.tar.gz |
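One way to verify the checksums is a short script like the following (a sketch using Python's hashlib; it assumes the downloaded archives are in the current directory):

```python
# Verify the MD5 checksums of the downloaded archives (values from the table above).
import hashlib

EXPECTED = {
    "Store.tar.gz": "e5e2ca49f02e1fea3c7c5c8b29d31683",
    "IRS_small.tar.gz": "d62b62c3b6badcef0d348788bdf4f319",
    "Home-1.tar.gz": "ac569053a8dbd76bb82f1c729e77efa4",
    "Home-2.tar.gz": "65aad05ae341750911c3da345d0aabb2",
    "Office-1.tar.gz": "de77ab28d9aaec37373a340a58889840",
    "Office-2.tar.gz": "2a5cb91fb2790d92977c8d0909539543",
    "Restaurant.tar.gz": "d68dd6014c0c8d6ae24b27cc2fce6423",
}

def md5sum(path, chunk_size=1 << 20):
    """Compute the MD5 digest of a file, reading it in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

for name, expected in EXPECTED.items():
    print(name, "OK" if md5sum(name) == expected else "MISMATCH")
```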
Extract zip files and put them in correct folder:
---- pytorch-dispnet ---- data ---- IRSDataset ---- Home
|-- Office
|-- Restaurant
|-- Store
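A quick sanity check of the extracted layout could look like this (a sketch; paths follow the tree above and assume the script is run from the repository root):

```python
# Check that the expected IRSDataset sub-folders exist after extraction.
import os

root = os.path.join("data", "IRSDataset")
for scene in ["Home", "Office", "Restaurant", "Store"]:
    path = os.path.join(root, scene)
    print(path, "found" if os.path.isdir(path) else "MISSING")
```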
Training configurations are provided in the "exp_configs" folder. You can create your own configuration file following these samples.
As an example, the following configuration can be used to train DispNormNet on the IRS dataset:
/exp_configs/dispnormnet.conf

```
net=dispnormnet
loss=loss_configs/dispnetcres_irs.json
outf_model=models/${net}-irs
logf=logs/${net}-irs.log
lr=1e-4
devices=0,1,2,3
dataset=irs  # sceneflow, irs, sintel
trainlist=lists/IRSDataset_TRAIN.list
vallist=lists/IRSDataset_TEST.list
startR=0
startE=0
endE=10
batchSize=16
maxdisp=-1
model=none
```
Then, the configuration should be specified in "train.sh":
/train.sh

```bash
dnn="${dnn:-dispnormnet}"
source exp_configs/$dnn.conf

python main.py --cuda --net $net --loss $loss --lr $lr \
    --outf $outf_model --logFile $logf \
    --devices $devices --batch_size $batchSize \
    --dataset $dataset --trainlist $trainlist --vallist $vallist \
    --startRound $startR --startEpoch $startE --endEpoch $endE \
    --model $model \
    --maxdisp $maxdisp \
    --manualSeed 1024 \
```
Lastly, use the following command to start training:

```bash
./train.sh
```
A script is provided for evaluation with a trained model:
/detech.sh

```bash
dataset=irs
net=dispnormnet
model=models/dispnormnet-irs/model_best.pth
outf=detect_results/${net}-${dataset}/
filelist=lists/IRSDataset_TEST.list
filepath=data

CUDA_VISIBLE_DEVICES=0 python detecter.py --model $model --rp $outf --filelist $filelist --filepath $filepath --devices 0 --net ${net} --disp-on --norm-on
```
Adapt the script to your configuration and run it; the results are written to the detect_results folder.
Disparity results are saved in PNG format by default.
Normal results are saved in EXR format by default.
If you want to change the output format, modify "detecter.py" and use the save functions as follows:
```python
# png
skimage.io.imsave(filepath, image)
# pfm
save_pfm(filepath, data)
# exr
save_exr(data, filepath)
```
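To load the saved results back into Python for inspection, something like the sketch below can be used (the output file names are placeholders, and the EXR channel names R/G/B are an assumption about how the normal map is stored):

```python
# Read back the default outputs: disparity as PNG, surface normal as EXR.
# NOTE: the file names below are placeholders, and the EXR channel names
# ('R', 'G', 'B') are an assumption.
import numpy as np
import skimage.io
import OpenEXR
import Imath

disp = skimage.io.imread("detect_results/dispnormnet-irs/example_disparity.png")
print("disparity:", disp.shape, disp.dtype)

exr = OpenEXR.InputFile("detect_results/dispnormnet-irs/example_normal.exr")
dw = exr.header()["dataWindow"]
width, height = dw.max.x - dw.min.x + 1, dw.max.y - dw.min.y + 1
pt = Imath.PixelType(Imath.PixelType.FLOAT)
normal = np.stack(
    [np.frombuffer(exr.channel(c, pt), dtype=np.float32).reshape(height, width)
     for c in ("R", "G", "B")], axis=-1)
print("normal:", normal.shape)
```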
For viewing files in EXR format, we recommend using a free EXR viewer.
Please contact us at qiangwang@comp.hkbu.edu.hk if you have any questions.