[CVT] 1:30PM EST, 20th August | Contrastive Test-Time Adaptation @CVPR22 | Dian Chen @ToyotaResearchInstitute
Shambhavi Mishra
Aug 19, 2022, 10:24:02 PM
to ml-...@googlegroups.com
Hello Everyone!
We are glad to host Dian Chen tomorrow, 20th August 2022, at 1:30 PM Eastern Time / 11:00 PM IST, to discuss her work on Contrastive Test-Time Adaptation, accepted at CVPR 2022.
Dian is a researcher on the Machine Learning team at Toyota Research Institute and was previously a researcher in Prof. Trevor Darrell's lab at the University of California, Berkeley. She received her master's degree in Robotics in 2019 from the University of Pennsylvania, advised by Prof. Kostas Daniilidis. Dian actively works on 3D perception and domain adaptation.
Abstract: Test-time adaptation is a special setting of unsupervised domain adaptation where a model trained on the source domain has to adapt to the target domain without accessing source data. We propose a novel way to leverage self-supervised contrastive learning to facilitate target feature learning, along with an online pseudo-labeling scheme with refinement that significantly denoises pseudo labels. The contrastive learning task is applied jointly with pseudo labeling, contrasting positive and negative pairs constructed similarly to MoCo but with a source-initialized encoder, and excluding same-class negative pairs indicated by pseudo labels. Meanwhile, we produce pseudo labels online and refine them via soft voting among their nearest neighbors in the target feature space, enabled by maintaining a memory queue. Our method, AdaContrast, achieves state-of-the-art performance on major benchmarks while having several desirable properties compared to existing works, including memory efficiency, insensitivity to hyper-parameters, and better model calibration.
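To give a feel for the refinement step the abstract describes, here is a minimal sketch of soft voting among nearest neighbors in a memory queue. It is an illustration only, not the paper's actual implementation: the function name, the queue layout (stored L2-normalized features plus their softmax probabilities), and the choice of k are assumptions.

```python
import numpy as np

def refine_pseudo_label(feat, queue_feats, queue_probs, k=3):
    """Refine a pseudo label by soft voting among nearest neighbors.

    feat:        (D,) L2-normalized feature of the current target sample
    queue_feats: (N, D) L2-normalized features held in the memory queue
    queue_probs: (N, C) softmax probabilities stored alongside each feature
    """
    sims = queue_feats @ feat                    # cosine similarity to queue
    nn_idx = np.argsort(-sims)[:k]               # indices of k nearest neighbors
    avg_prob = queue_probs[nn_idx].mean(axis=0)  # soft vote: average their probabilities
    return int(np.argmax(avg_prob)), avg_prob    # refined label and its distribution
```

A sample near a cluster of queue features therefore inherits the (averaged) class belief of that cluster, which smooths out noisy per-sample predictions.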