Exploring the Benefits of Vision Foundation Models for Unsupervised Domain Adaptation (CVPR 2024 Second Workshop on Foundation Models)
Authors: Bruno B. Englert, Fabrizio J. Piva, Tommie Kerssies, Daan de Geus, Gijs Dubbelman
Affiliation: Eindhoven University of Technology
Publication: CVPR 2024 Workshop Proceedings for the Second Workshop on Foundation Models
Paper: arXiv
Code: GitHub
Achieving robust generalization across diverse data domains remains a significant challenge in computer vision. This challenge is especially important in safety-critical applications, where deep-neural-network-based systems must perform reliably under various environmental conditions not seen during training. Our study investigates whether the generalization capabilities of Vision Foundation Models (VFMs) and Unsupervised Domain Adaptation (UDA) methods for the semantic segmentation task are complementary. Results show that combining VFMs with UDA has two main benefits: (a) it allows for better UDA performance while maintaining the out-of-distribution performance of VFMs, and (b) it makes certain time-consuming UDA components redundant, thus enabling significant inference speedups. Specifically, with equivalent model sizes, the resulting VFM-UDA method achieves an 8.4x speed increase over the prior non-VFM state of the art, while also improving performance by +1.2 mIoU in the UDA setting and by +6.1 mIoU in terms of out-of-distribution generalization. Moreover, when we use a VFM with 3.6x more parameters, the VFM-UDA approach maintains a 3.3x speedup, while improving the UDA performance by +3.1 mIoU and the out-of-distribution performance by +10.3 mIoU. These results underscore the significant benefits of combining VFMs with UDA, setting new standards and baselines for Unsupervised Domain Adaptation in semantic segmentation.
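To make the idea concrete, the sketch below shows the general pattern of using a VFM as a dense-feature backbone for semantic segmentation: a DINOv2 ViT-B/14 encoder loaded from `torch.hub` with a simple 1x1-convolution head on its patch features. This is only an illustrative sketch of the backbone-plus-head pattern, not this repository's actual model, decoder, or UDA training loop (which adapts to the unlabeled target domain); those are defined by the code and configs below.

```python
# Minimal, hedged sketch (NOT the repository's exact architecture):
# a DINOv2 ViT-B/14 backbone with a simple 1x1-conv segmentation head.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VFMSegSketch(nn.Module):
    def __init__(self, num_classes: int = 19):
        super().__init__()
        # DINOv2 ViT-B/14 backbone from torch.hub (downloads pre-trained weights).
        self.backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vitb14")
        # ViT-B embedding dimension is 768.
        self.head = nn.Conv2d(768, num_classes, kernel_size=1)

    def forward(self, x):
        # Dense patch features, reshaped to (B, C, H/14, W/14).
        feats = self.backbone.get_intermediate_layers(x, n=1, reshape=True)[0]
        logits = self.head(feats)
        # Upsample logits back to the input resolution.
        return F.interpolate(logits, size=x.shape[-2:], mode="bilinear", align_corners=False)

model = VFMSegSketch(num_classes=19)      # 19 Cityscapes classes
out = model(torch.randn(1, 3, 518, 518))  # input side lengths must be multiples of 14
print(out.shape)                          # torch.Size([1, 19, 518, 518])
```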
- Create a Weights & Biases (W&B) account.
  - The metrics during training are visualized with W&B: https://wandb.ai
- Download datasets.
  - Cityscapes: Download 1 | Download 2
  - GTA V: Download 1 | Download 2 | Download 3 | Download 4 | Download 5 | Download 6 | Download 7 | Download 8 | Download 9 | Download 10 | Download 11 | Download 12 | Download 13 | Download 14 | Download 15 | Download 16 | Download 17 | Download 18 | Download 19 | Download 20
  - Mapillary: Download 1
  - WildDash: Download 1 (Download the "old WD2 beta", not the new "Public GT Package")
  - For WildDash, an extra step is needed to create the train/val split. After "wd_public_02.zip" is downloaded, place the files from "wilddash_trainval_split" in the same directory as the zip file. After that, run:

    ```bash
    chmod +x create_wilddash_ds.sh
    ./create_wilddash_ds.sh
    ```

    This creates a new zip file, which should be used during training.
  - All the zipped data should be placed under one directory. No unzipping is required. (A small, hedged sanity-check sketch for the data directory is given right after this setup list.)
- Environment setup.

  ```bash
  conda create -n fuda python=3.10 && conda activate fuda
  ```
- Install required packages.

  ```bash
  pip install -r requirements.txt
  ```
- Train the VFM-UDA base model (replace `/data` with the folder where you stored the datasets):

  ```bash
  python main.py fit -c uda_vit_vanilla.yaml --root /data --trainer.devices [0]
  ```
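Before launching training, it can help to verify that all dataset archives ended up in the data root. The sketch below is only illustrative: it assumes the layout described in the dataset step above (all archives kept zipped in one directory, here `/data`) and deliberately does not check for specific file names, since those depend on which download mirrors you used.

```python
# Hedged sanity-check sketch (not part of the repository): list the dataset
# archives found under the data root before launching training. Only the
# "all zips in one directory, unextracted" layout from the dataset step is assumed.
from pathlib import Path

def list_dataset_archives(root: str = "/data") -> None:
    archives = sorted(Path(root).glob("*.zip"))
    if not archives:
        raise FileNotFoundError(f"No .zip archives found in {root}")
    print(f"Found {len(archives)} archive(s) in {root}:")
    for archive in archives:
        print(f"  {archive.name:45s} {archive.stat().st_size / 1e9:7.2f} GB")

if __name__ == "__main__":
    list_dataset_archives("/data")
```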
Reproducibility
We note that there are small variations in performance between training runs, due to the stochasticity in the process, particularly for UDA techniques. Therefore, results may differ slightly depending on the random seed.
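If you want to pin the seed for a run, the snippet below is a minimal sketch using Lightning's `seed_everything` (the training command above follows the LightningCLI pattern, so a Lightning-based setup is assumed; whether and how this repository exposes a seed option in its configs is not stated here). Even with a fixed seed, some GPU kernels remain nondeterministic, so small run-to-run differences can persist.

```python
# Minimal, hedged sketch: fixing RNG seeds with Lightning before training.
# Assumption: a Lightning-based setup; depending on the installed version,
# the import may be `pytorch_lightning` instead of `lightning.pytorch`.
from lightning.pytorch import seed_everything

seed_everything(42, workers=True)  # seeds Python's `random`, NumPy, and PyTorch RNGs
```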
| Method  | Backbone | Pre-training | Cityscapes (mIoU) | WildDash2 (mIoU) | Model |
|---------|----------|--------------|-------------------|------------------|-------|
| VFM-UDA | ViT-B    | DINOv2       | 77.1              | 60.8             | model |
| VFM-UDA | ViT-L    | DINOv2       | TBA               | TBA              | TBA   |
Note: these models are re-trained, so the results differ slightly from those reported in the paper.
@inproceedings{englert2024exploring,
author={Englert, Brunó B. and Piva, Fabrizio J. and Kerssies, Tommie and de Geus, Daan and Dubbelman, Gijs},
title={Exploring the Benefits of Vision Foundation Models for Unsupervised Domain Adaptation},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
year={2024},
}
We use some code from:
- DINOv2 (https://github.com/facebookresearch/dinov2): Apache-2.0 License
- Masked Image Consistency for Context-Enhanced Domain Adaptation (https://github.com/lhoyer/MIC): Copyright (c) 2022 ETH Zurich, Lukas Hoyer, Apache-2.0 License
- SegFormer (https://github.com/NVlabs/SegFormer): Copyright (c) 2021, NVIDIA Corporation, NVIDIA Source Code License
- DACS (https://github.com/vikolss/DACS): Copyright (c) 2020, vikolss, MIT License
- MMCV (https://github.com/open-mmlab/mmcv): Copyright (c) OpenMMLab, Apache-2.0 License