Meeting #148: [YSU AI Lab; Friday 15:30] DINO; Emerging Properties in Self-Supervised Vision Transformers


Karen Hambardzumyan

Mar 2, 2023, 10:15:35
to Machine Learning Reading Group Yerevan
This Friday at 3:30 pm, Ani Vanyan from YerevaNN will present the ICCV 2021 paper "Emerging Properties in Self-Supervised Vision Transformers" from FAIR (Meta AI).

The work introduces a self-supervised method called DINO, which can be interpreted as a form of self-distillation with no labels. The method achieves 80.1% top-1 accuracy on ImageNet in linear evaluation with ViT-Base. The authors also highlight the importance of the momentum encoder, multi-crop training, and the use of small patches with ViTs.

Time: Friday, 3:30 pm
Language: Armenian
Venue: YSU AI Lab
Paper: https://arxiv.org/pdf/2104.14294.pdf
Codebase: https://github.com/facebookresearch/dino

Abstract: In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
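For those who want a feel for the method before Friday, the core of DINO's "self-distillation with no labels" is a cross-entropy between a sharpened, centered teacher output and the student's prediction, with the teacher updated as a momentum (EMA) copy of the student. Below is a minimal NumPy sketch of just these two pieces; the temperatures and function names are illustrative, and the real training loop (multi-crop augmentation, projection heads, schedules) lives in the linked codebase.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dino_loss(student_logits, teacher_logits, center, tau_s=0.1, tau_t=0.04):
    """Cross-entropy between teacher targets and student predictions.

    The teacher output is centered (a running mean is subtracted) and
    sharpened with a low temperature tau_t; both tricks together prevent
    the trivial collapse to a constant output. No labels are used.
    """
    target = softmax((teacher_logits - center) / tau_t)       # soft target
    log_pred = np.log(softmax(student_logits / tau_s) + 1e-12)
    return -(target * log_pred).sum(axis=-1).mean()

def ema_update(teacher_params, student_params, m=0.996):
    """Momentum-encoder update: teacher <- m * teacher + (1 - m) * student."""
    return [m * tp + (1 - m) * sp for tp, sp in zip(teacher_params, student_params)]
```

Gradients flow only through the student; the teacher is never trained directly, which is why the paper reads the setup as knowledge distillation where the "teacher" is built on the fly from past versions of the student.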

Best,
Karen