Representation learning with very limited images



The 1st Workshop in Conjunction with ICCV 2023

We organize the 1st Workshop on representation learning with very limited images: the potential of self-, synthetic- and formula-supervision. Settings with very limited data (e.g., self-supervised learning with a single image) and/or synthetic datasets are free of the typical issues found in real datasets, such as societal bias, copyright, and privacy concerns. Efforts to train visual models on very limited data resources have emerged independently from various academic and industry communities around the world. This workshop aims to bring these communities together to form a collaborative effort.

Call for Papers

We call for papers on the following topics:

{Self, Semi, Weakly, Un}-supervised learning with very limited data
Synthetic training with graphics, generated images
Formula-driven supervised learning
Training convnets and/or vision transformers with relatively few resources
Training methods to dramatically improve efficiency
Vision+X and multi-modality to improve learning efficiency
New datasets and benchmarks
Brave new ideas related to the above topics

Invited Talk

Christian Rupprecht
University of Oxford

Title: Unsupervised Learning from Limited Data
Abstract: While current large models are trained on millions or even billions of images, in this talk we will discuss how unsupervised learning can be performed on a limited number of samples. A special focus of the talk will be representation learning, but we will also explore specific applications such as 3D reconstruction, object detection, and tracking. Overall, several strategies have shown promise in this area: naturally, image augmentations play a strong role in combating data scarcity and imposing priors on images. Additionally, synthetic data can often be generated either with very simple methods or through pre-trained large-scale models that have already captured the diversity of the real world and allow the distillation of information into downstream applications.
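The role of augmentations mentioned in the abstract can be illustrated with a minimal sketch (this is not the speaker's method): SimCLR-style contrastive pretraining on heavily augmented crops of a single image, where augmentation stands in for data scale. The file name single_image.jpg, the tiny encoder, and all hyperparameters are illustrative assumptions; PyTorch and torchvision are assumed to be installed.

import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import transforms
from PIL import Image

# Heavy augmentations: each call produces a different "view" of the same image.
augment = transforms.Compose([
    transforms.RandomResizedCrop(64, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

class Encoder(nn.Module):
    # Deliberately small encoder; any backbone could be substituted here.
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def info_nce(z1, z2, tau=0.2):
    # Contrastive loss: the two augmented views of the same crop are positives,
    # all other samples in the batch are negatives.
    z = torch.cat([z1, z2], dim=0)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float("-inf"))  # exclude self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

image = Image.open("single_image.jpg").convert("RGB")  # placeholder file name
model = Encoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(100):
    views1 = torch.stack([augment(image) for _ in range(32)])
    views2 = torch.stack([augment(image) for _ in range(32)])
    loss = info_nce(model(views1), model(views2))
    opt.zero_grad(); loss.backward(); opt.step()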


Manel Baradad Jurjo
MIT

Title: Learning to see by looking at noise
Abstract: Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this talk, I will present two of our recent approaches for training neural networks without data. In the first, we train neural networks using a small set of curated programs that generate simple noise-like images with properties present in natural images. In the second, we take the opposite direction: instead of curating a small set of programs, we collect a large dataset of 21k image-generation programs, which we do not manually tune, making it easier to scale up and increase performance. Both yield competitive results on a range of downstream vision tasks, showing the potential of these approaches to circumvent the use of realistic data.
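As a rough illustration of the "noise-like images with natural-image properties" idea, the sketch below (not the actual image-generation programs from the talk) implements one simple procedural program: sampling images whose power spectrum falls off roughly as 1/f, a statistic shared with natural photographs. The function name and parameters are assumptions for illustration; only NumPy is required. Images produced this way could serve as a label-free synthetic dataset for a standard self-supervised pipeline.

import numpy as np

def one_over_f_image(size=64, alpha=1.0, rng=None):
    """Sample an RGB image whose spatial frequency content falls off as 1/f^alpha."""
    rng = np.random.default_rng() if rng is None else rng
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    freq = np.sqrt(fx ** 2 + fy ** 2)
    freq[0, 0] = 1.0  # avoid division by zero at the DC component
    channels = []
    for _ in range(3):
        phase = rng.uniform(0, 2 * np.pi, (size, size))
        spectrum = (1.0 / freq ** alpha) * np.exp(1j * phase)
        img = np.fft.ifft2(spectrum).real  # take the real part as the image
        img = (img - img.min()) / (img.max() - img.min() + 1e-8)
        channels.append(img)
    return np.stack(channels, axis=-1)  # (H, W, 3) array in [0, 1]

# A small synthetic, label-free dataset of noise-like images.
dataset = [one_over_f_image() for _ in range(1000)]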


Guide for Authors

We invite original research papers. All submissions should be anonymized and formatted according to the ICCV 2023 template.
Research Papers (4-8 pages excluding references) should contain unpublished original research. They will be published in the ICCV workshop proceedings, and will be archived in the IEEE Xplore Digital Library and the CVF.
Please submit papers via the submission system:
https://cmt3.research.microsoft.com/ICCVWLIMIT2023.

Important Dates and Venue

Research Paper Submission: July 4th, 2023 (11:59 PM, Pacific Time)
Notification: July 31st
Camera-ready: August 4th
Workshop: October 2nd (afternoon), at the same venue as ICCV 2023 in Paris, France.

Organizers

Hirokatsu Kataoka
AIST/LINE
Rio Yokota
Tokyo Tech/AIST
Nakamasa Inoue
Tokyo Tech/AIST
Dan Hendrycks
Center for AI Safety

Xavier Boix
MIT
Yue Qiu
AIST
Connor Anderson
BYU
Ryo Nakamura
Fukuoka Univ./AIST

Ryosuke Yamada
Univ. of Tsukuba/AIST
Risa Shinoda
Kyoto Univ./AIST


Contact

E-mail: M-iccv2023ws-limit-ml@aist.go.jp

Special Thanks