The past five years have seen rapid progress in large-scale pre-trained models across a variety of domains, such as computer vision, natural language processing, robotics, and bioinformatics. Leveraging a huge number of parameters, large-scale pre-trained models are capable of encoding rich knowledge from labeled and/or unlabeled examples. Supervised and self-supervised pre-training have been the two most representative paradigms, through which pre-trained models have demonstrated substantial benefits on a wide spectrum of downstream tasks. For example, convolutional neural networks pre-trained on a large-scale labeled image dataset (e.g., ImageNet) and later fine-tuned on specific vision tasks with a relatively small training set are highly successful. By resorting to carefully designed self-supervised tasks, self-supervised pre-trained models (e.g., MoCo and BERT) enjoy impressive generalization and applicability. There are also other pre-training paradigms, e.g., meta-learning for few-shot learning, where pre-trained models are trained so that they can quickly adapt to new tasks.
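As a minimal illustration of the pre-train-then-fine-tune recipe mentioned above, the sketch below loads an ImageNet-pre-trained backbone and adapts it to a small downstream task. It is only a hedged example: the 10-class task, the frozen-backbone choice, and the hyperparameters are illustrative placeholders, not part of the workshop.

```python
# Sketch of supervised pre-training followed by fine-tuning (assumes torchvision).
# The downstream task (10 classes) and hyperparameters are placeholders.
import torch
import torch.nn as nn
from torchvision import models

# Load a ResNet-50 pre-trained on ImageNet (supervised pre-training).
model = models.resnet50(pretrained=True)

# Replace the classification head to match a hypothetical 10-class downstream task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Freeze the pre-trained backbone and train only the new head,
# a common choice when the downstream training set is small.
for name, param in model.named_parameters():
    if not name.startswith("fc."):
        param.requires_grad = False

optimizer = torch.optim.SGD(
    filter(lambda p: p.requires_grad, model.parameters()), lr=1e-3, momentum=0.9
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```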
However, many challenges remain and new opportunities lie ahead for pre-training. Informed by recent advances in pre-training, this workshop has the following two foci.
We welcome submissions on pre-trained models, few-shot learning, transfer learning, self-supervised learning, meta-learning, and related areas. We also invite submissions from researchers in other application domains such as physics, chemistry, and biology. In summary, the topics include, but are not limited to:
Submission deadline: May 25th, 2022, AOE (extended from May 22nd, 2022, AOE)
Notification to authors: June 13th, 2022, AOE
Video recording deadline (contributed talk only): July 1st, 2022
Final workshop program, camera-ready deadline: July 8th, 2022
The list of accepted papers can be found here.
This is the tentative schedule of the workshop. All times are in Eastern Time (ET).
| Time (ET) | Session |
| --- | --- |
| 8:50 - 9:00 | Introduction and opening remarks |
| 9:00 - 9:30 | Invited talk 1: Nathan C. Frey |
| 9:30 - 10:00 | Invited talk 2: Oriol Vinyals |
| 10:00 - 10:15 | Contributed talk 1: Multimodal Masked Autoencoders Learn Transferable Representations |
| 10:15 - 10:45 | Invited talk 3: Maithra Raghu |
| 10:45 - 11:15 | Invited talk 4: Charles Sutton |
| 11:15 - 12:15 | Panel Discussion |
| 13:30 - 14:00 | Invited talk 5: Hanie Sedghi |
| 14:00 - 14:15 | Contributed talk 2: Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Prior |
| 14:15 - 14:45 | Invited talk 6: Xinlei Chen |
| 14:45 - 15:00 | Contributed talk 3: Plex: Towards Reliability using Pretrained Large Model Extensions |
| 15:00 - 16:30 | Poster Session |
| 16:30 - 17:00 | Invited talk 7: Mohit Bansal |
| 17:00 - 17:30 | Invited talk 8: Sara Beery |