Our submission to the Under Display Camera Challenge (UDC) at ECCV 2020. We placed 2nd and 5th on the POLED and TOLED tracks respectively!
Project Page | Paper
Official implementation of our ECCVW 2020 paper, "Deep Atrous Guided Filter for Image Restoration in Under Display Cameras", by Varun Sundar*, Sumanth Hegde*, Divya Kothandaraman, and Kaushik Mitra (Indian Institute of Technology Madras). * denotes equal contribution.
If you want to experiment with Deep Atrous Guided Filter (DAGF), we recommend starting with the Colab notebook. It exposes the core aspects of our method while abstracting away minor details and helper functions.
It requires no prior setup, and contains a demo for both POLED and TOLED measurements.
If you're unfamiliar with Under Display Cameras, they are a new imaging system for smartphones, where the camera is mounted right under the display. This makes truly bezel-free displays possible, and opens up a bunch of other applications. You can read more here.
If you would like to reproduce all our experiments presented in the paper, head over to the experiments branch. For a concise version with just our final models, you may continue here.
You'll need to install the following:
- python 3.7+
- pytorch 1.5+
- remaining dependencies, via:
pip install -r utils/requirements.txt
Dataset | Train Folder | Val Folder | Test Folder |
---|---|---|---|
POLED | POLED_train | POLED_val | POLED_test |
TOLED | TOLED_train | TOLED_val | TOLED_test |
Simulated POLED | Sim_train | Sim_val | NA |
Simulated TOLED | Sim_train | Sim_val | NA |
Download the required folder and place it under the data/ directory. The train and val splits contain both low-quality measurements (LQ folder) and high-quality ground-truth (HQ folder). The test sets currently contain only measurements.
We also provide our simulated dataset, generated by training a shallow version of DAGF with the Contextual Bilateral (CoBi) loss. For simulation-specific details (procedure etc.), take a look at the experiments branch.
We use sacred to handle config parsing, with the following command-line invocation:
python train{val}.py with config_name {other flags} -p
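For example, to train DAGF on POLED using one of the config names from the table below:
python train.py with ours_poled -p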
Various configs available:
Model | Dataset | Config Name | Checkpoints |
---|---|---|---|
DAGF | POLED | ours_poled | ours-poled |
DAGF-sim | Simulated POLED | ours_poled_sim | ours-poled-sim |
DAGF-PreTr | POLED (fine-tuned from DAGF-sim) | ours_poled_PreTr | ours-poled-PreTr |
DAGF | TOLED | ours_toled | ours-toled |
DAGF-sim | Simulated TOLED | ours_toled_sim | ours-toled-sim |
DAGF-PreTr | TOLED (fine-tuned from DAGF-sim) | ours_toled_PreTr | ours-toled-PreTr |
Download the required checkpoint folder and place it under the ckpts/ directory.
DAGF-sim networks are first trained on simulated data. To obtain this data, we trained a shallow version of our final model to transform clean images to Glass / POLED / TOLED measurements. You can find the checkpoints and code for these networks in our experiments branch.
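For instance, the two-stage POLED pipeline would be run as follows (assuming the ours_poled_PreTr config picks up the DAGF-sim checkpoint, per the table above; verify the checkpoint path in config.py):
python train.py with ours_poled_sim -p
python train.py with ours_poled_PreTr -p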
Further, see config.py for an exhaustive set of config options. To add a config, create a new function in config.py and add it to named_configs, as sketched below.
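A minimal sketch of what this looks like, assuming config.py follows sacred's named-config pattern (the option names here are hypothetical; see base_config for the real ones):

```python
# In config.py (a sketch; the actual file layout may differ).
def my_config():
    exp_name = "my-exp"  # hypothetical option, for illustration only
    batch_size = 4       # hypothetical option, for illustration only

# Register it alongside the existing configs so that
# `python train.py with my_config` resolves:
named_configs = [ours_poled, my_config]
```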
Create the following symbolic links (assuming path_to_root_folder/ is ~/udc_net); note that each link name must match the folder name the code expects (data, runs, ckpts, outputs):
- Data folder:
ln -s /data_dir/ ~/udc_net/data
- Runs folder:
ln -s /runs_dir/ ~/udc_net/runs
- Ckpts folder:
ln -s /ckpt_dir/ ~/udc_net/ckpts
- Outputs folder:
ln -s /output_dir/ ~/udc_net/outputs
Data folder: Each subfolder contains a data split.
|-- Poled_train
| |-- HQ
| |-- |-- 101.png
| |-- |-- 102.png
| |-- |-- 103.png
| `-- LQ
|-- Poled_val
| `-- LQ
Splits:
- Poled_{train,val}: Poled acquired images, HQ (glass), LQ (Poled) pairs.
- Toled_{train,val}: Toled acquired images, HQ (glass), LQ (Toled) pairs.
- Sim_{train,val}: our simulated set.
- DIV2K: source images displayed on the monitor to acquire the Poled and Toled measurements. Also used to train the simulation networks.
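To make the LQ/HQ pairing explicit, here is a minimal sketch of a paired loader for this layout (a hypothetical helper, not the repo's actual dataloader; it assumes LQ and HQ share filenames, e.g. 101.png):

```python
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

class UDCPairs(torch.utils.data.Dataset):
    """Pairs each low-quality measurement with its ground-truth by filename."""

    def __init__(self, split_dir="data/Poled_train"):
        self.lq_paths = sorted(Path(split_dir, "LQ").glob("*.png"))
        self.hq_dir = Path(split_dir, "HQ")
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.lq_paths)

    def __getitem__(self, idx):
        lq = Image.open(self.lq_paths[idx]).convert("RGB")
        hq = Image.open(self.hq_dir / self.lq_paths[idx].name).convert("RGB")
        return self.to_tensor(lq), self.to_tensor(hq)
```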
Outputs folder: Val, test dumps under various experiment names.
outputs
|-- ours-poled
|   |-- test_latest
|   `-- val_latest
|       |-- 9.png
|       |-- 99.png
|       `-- metrics.txt
Ckpts folder: checkpoints under various experiment names. We store a model snapshot every 64th epoch, as well as every 5 epochs in between. These intervals can be changed in config.py.
ckpts
|-- ours-poled
| `-- model_latest.pth
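To inspect a snapshot outside the training scripts, something like the following works (the exact contents of model_latest.pth are an assumption here, so print the keys before loading into a model):

```python
import torch

ckpt = torch.load("ckpts/ours-poled/model_latest.pth", map_location="cpu")
# We assume the file stores a dict (e.g. with a state_dict-style entry);
# inspect the keys to confirm before calling model.load_state_dict.
print(list(ckpt.keys()) if isinstance(ckpt, dict) else type(ckpt))
```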
Runs folder: Tensorboard event files under various experiment names.
runs
|-- ours-poled
| |-- events.out.tfevents.1592369530.genesis.26208.0
Run as:
python train.py with xyz_config {other flags}
For a multi-GPU version (we use PyTorch's DistributedDataParallel; set --nproc_per_node to the number of GPUs):
python -m torch.distributed.launch --nproc_per_node=3 --use_env train.py with xyz_config distdataparallel=True {other flags}
Run as:
python val.py with xyz_config {other flags}
Useful Flags:
- self_ensemble: use self-ensembling at evaluation time. Ops may be found in utils/self_ensembling.py.
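Self-ensembling here refers to the standard geometric test-time ensemble: average the model's predictions over flipped and rotated copies of the input. A minimal sketch of the idea (the repo's exact ops live in utils/self_ensembling.py and may differ):

```python
import torch

def self_ensemble(model, x):
    """Average model outputs over the 8 flip/rotation variants of x.

    x: batch of shape (N, C, H, W). The 90-degree rotation assumes the
    model is fully convolutional, i.e. it can handle transposed H/W.
    """
    outs = []
    for hflip in (False, True):
        for vflip in (False, True):
            for rot in (False, True):
                t = x
                if hflip:
                    t = torch.flip(t, dims=[-1])
                if vflip:
                    t = torch.flip(t, dims=[-2])
                if rot:
                    t = torch.rot90(t, k=1, dims=[-2, -1])
                y = model(t)
                # Undo the transforms in reverse order.
                if rot:
                    y = torch.rot90(y, k=-1, dims=[-2, -1])
                if vflip:
                    y = torch.flip(y, dims=[-2])
                if hflip:
                    y = torch.flip(y, dims=[-1])
                outs.append(y)
    return torch.stack(outs).mean(dim=0)
```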
See config.py for an exhaustive set of arguments (under base_config).
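For example, evaluating the POLED model with self-ensembling enabled (using sacred's key=value config updates on the command line):
python val.py with ours_poled self_ensemble=True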
If you find our work useful in your research, please cite:
@InProceedings{10.1007/978-3-030-68238-5_29,
author="Sundar, Varun
and Hegde, Sumanth
and Kothandaraman, Divya
and Mitra, Kaushik",
title="Deep Atrous Guided Filter for Image Restoration in Under Display Cameras",
booktitle="Computer Vision -- ECCV 2020 Workshops",
year="2020",
publisher="Springer International Publishing",
pages="379--397",
}
Feel free to mail us if you have any questions!