PØDA: Prompt-driven Zero-shot Domain Adaptation


Abstract

Domain adaptation has been vastly investigated in computer vision but still requires access to target images at training time, which might be intractable in some conditions, especially for long-tail samples. In this paper, we propose the task of 'Prompt-driven Zero-shot Domain Adaptation', where we adapt a model trained on a source domain using only a general textual description of the target domain, i.e., a prompt. First, we leverage a pretrained contrastive vision-language model (CLIP) to optimize affine transformations of source features, bringing them closer to target text embeddings while preserving their content and semantics. Second, we show that these augmented features can be used to perform zero-shot domain adaptation for semantic segmentation. Experiments demonstrate that our method significantly outperforms CLIP-based style transfer baselines on several datasets for the downstream task at hand. Our prompt-driven approach even outperforms one-shot unsupervised domain adaptation on some datasets and gives comparable results on others.
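
To make the idea concrete, below is a minimal PyTorch sketch, not the authors' implementation, of the feature optimization step: per-channel affine statistics (mu, sigma) of low-level source features are optimized so that the resulting image embedding moves toward the CLIP text embedding of the target prompt. The CLIP calls (clip.load, clip.tokenize, encode_text) are the public OpenAI CLIP API; the low-level features and the encoder "tail" are toy stand-ins for a split image encoder, and the prompt is only an example.

# Minimal sketch (not the authors' code) of prompt-driven feature optimization.
import torch
import torch.nn as nn
import torch.nn.functional as F
import clip  # OpenAI CLIP, https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("RN50", device=device)

prompt = "driving at night"  # example target-domain description
with torch.no_grad():
    text_emb = F.normalize(model.encode_text(clip.tokenize([prompt]).to(device)).float(), dim=-1)

# Stand-in for the low-level features of one source image (B x C x H x W).
low_feat = torch.randn(1, 256, 56, 56, device=device)

# Stand-in for the remaining (frozen) layers of the image encoder,
# mapping feature maps into the joint CLIP embedding space.
tail = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                     nn.Linear(256, text_emb.shape[-1])).to(device)
for p in tail.parameters():
    p.requires_grad_(False)

def stylize(feat, mu, sigma, eps=1e-5):
    """AdaIN-style affine transform: replace the per-channel statistics of
    `feat` with the learnable target statistics (mu, sigma)."""
    f_mu = feat.mean(dim=(2, 3), keepdim=True)
    f_std = feat.std(dim=(2, 3), keepdim=True) + eps
    return sigma.view(1, -1, 1, 1) * (feat - f_mu) / f_std + mu.view(1, -1, 1, 1)

# Initialize the affine parameters from the source statistics, then optimize
# them to minimize the cosine distance to the target text embedding.
mu = low_feat.mean(dim=(2, 3)).squeeze(0).clone().requires_grad_(True)
sigma = low_feat.std(dim=(2, 3)).squeeze(0).clone().requires_grad_(True)
opt = torch.optim.SGD([mu, sigma], lr=1.0)

for _ in range(100):
    img_emb = F.normalize(tail(stylize(low_feat, mu, sigma)), dim=-1)
    loss = (1 - img_emb @ text_emb.t()).mean()  # cosine distance to the prompt
    opt.zero_grad()
    loss.backward()
    opt.step()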

Overview of PODA

(Left) PODA leverages a frozen CLIP image encoder and a single textual prompt describing an unseen target domain to optimize affine transformations of low-level source features. (Middle) Zero-shot domain adaptation is achieved by fine-tuning a segmenter model (M) on the feature-augmented source domain using our optimized transformations. (Right) This enables inference on unseen domains.
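
The fine-tuning step (Middle) can be sketched as follows, under the same assumptions as the sketch above: at each iteration, low-level source features are re-styled with one of the optimized (mu, sigma) pairs before being passed to the trainable part of the segmenter. The names frozen_backbone_stem, segmentation_head, and styles are hypothetical, and stylize() refers to the function defined earlier.

# Minimal sketch of one fine-tuning iteration on the feature-augmented source domain.
import random
import torch
import torch.nn.functional as F

def train_step(image, label, frozen_backbone_stem, segmentation_head, styles, optimizer):
    """styles: list of (mu, sigma) pairs from the prompt-driven optimization."""
    with torch.no_grad():
        low_feat = frozen_backbone_stem(image)          # low-level source features
    mu, sigma = random.choice(styles)                   # pick one optimized style
    styled = stylize(low_feat, mu, sigma)               # stylize() as defined above
    logits = segmentation_head(styled)                  # trainable layers of the segmenter
    logits = F.interpolate(logits, size=label.shape[-2:], mode="bilinear", align_corners=False)
    loss = F.cross_entropy(logits, label, ignore_index=255)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()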

PODA's qualitative results on unseen YouTube videos

Citation

If you find this project useful for your research, please cite:
@article{fahes2022poda,
  title={P{\O}DA: Prompt-driven Zero-shot Domain Adaptation},
  author={Fahes, Mohammad and Vu, Tuan-Hung and Bursuc, Andrei and P{\'e}rez, Patrick and de Charette, Raoul},
  journal={arXiv},
  year={2022}
}

Acknowledgements

This work was partially funded by French project SIGHT (ANR-20-CE23-0016).