Representation Learning with Very Limited Images



The 1st Workshop in Conjunction with ICCV 2023

We organize the 1st Workshop on Representation Learning with Very Limited Images: the potential of self-, synthetic-, and formula-supervision. Settings with very limited data (e.g., self-supervised learning with a single image) and/or synthetic datasets are free of the typical issues found in real datasets, such as societal bias, copyright, and privacy. Efforts to train visual models on very limited data have emerged independently from academic and industry communities around the world, and this workshop aims to bring these communities together in a collaborative effort.
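To give a concrete flavor of the formula-driven supervision mentioned above, the sketch below (our illustration only, not any group's reference implementation) renders labeled images from random iterated function systems (IFS) in the spirit of FractalDB: the class label is the formula's parameter set, so no real photographs and no human annotation are involved.

```python
# Minimal sketch of formula-driven supervised learning: images are rendered
# from a random iterated function system (IFS), and the label is the drawn
# parameter set itself. Illustrative only; real pipelines such as FractalDB
# also filter out degenerate systems and vary rendering per instance.
import numpy as np

def sample_ifs(num_maps=4, rng=None):
    """Draw random affine maps (A, b); the drawn parameters define one class."""
    rng = rng or np.random.default_rng()
    return [(rng.uniform(-0.6, 0.6, (2, 2)), rng.uniform(-1, 1, 2))
            for _ in range(num_maps)]

def render_fractal(ifs, size=64, num_points=20000, rng=None):
    """Chaos game: repeatedly apply a random map and rasterize visited points."""
    rng = rng or np.random.default_rng()
    img = np.zeros((size, size), dtype=np.float32)
    x = np.zeros(2)
    for _ in range(num_points):
        A, b = ifs[rng.integers(len(ifs))]
        x = A @ x + b
        i, j = ((x + 2.0) / 4.0 * (size - 1)).astype(int)  # map [-2, 2] to pixels
        if 0 <= i < size and 0 <= j < size:
            img[i, j] = 1.0
    return img

# Tiny synthetic dataset: each sampled IFS is one formula-defined category.
rng = np.random.default_rng(0)
dataset = []
for label in range(10):            # 10 formula-defined classes
    ifs = sample_ifs(rng=rng)      # one formula per class
    for _ in range(5):             # 5 renders per class
        dataset.append((render_fractal(ifs, rng=rng), label))
```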

Call for Papers

We call for papers on the following topics:

{Self, Semi, Weakly, Un}-supervised learning with very limited data
Synthetic training with graphics and generated images
Formula-driven supervised learning
Training convnets and/or vision transformers with relatively few resources
Training methods to dramatically improve efficiency
Vision+X and multi-modality to improve learning efficiency
New datasets and benchmarks
Brave new ideas related to the above-mentioned topics

Program

October 2nd, in conjunction with ICCV 2023, in Paris.

Room E03 (Poster Room W02)
13:30 - 13:40 Welcome (10 min) [Slides]
13:40 - 14:20 Invited Talk 1 (Christian Rupprecht; 40 min)
14:20 - 14:30 Short Break (10 min)
14:30 - 15:30 Oral Session (10 min x 6 Full Papers)
15:30 - 15:50 Coffee Break (20 min)
15:50 - 16:30 Invited Talk 2 (Manel Baradad; 40 min)
16:30 - 16:40 Spotlight Session (30 sec x 15 posters; 7.5 min)
16:40 - 16:45 Closing (5 min)
16:45 - 17:00 Short Break (10 min)
17:00 - 18:00 Poster Session (60 min)
* In-person only

Oral (10 minutes including questions; 16:9 aspect ratio format, HDMI connection)

  1. Adaptive Self-Training for Object Detection, Renaud Vandeghen (University of Liège), Gilles Louppe (University of Liège), Marc Van Droogenbroeck (University of Liège)
  2. Tensor Factorization for Leveraging Cross-Modal Knowledge in Data-Constrained Infrared Object Detection, Manish Sharma (Rochester Institute of Technology), Moitreya Chatterjee (Mitsubishi Electric Research Laboratories (MERL)), Kuan-Chuan Peng (MERL), Suhas Lohit (MERL), Michael J Jones (MERL)
  3. Learning Universal Semantic Correspondences with No Supervision and Automatic Data Curation, Aleksandar Shtedritski (University of Oxford), Andrea Vedaldi (University of Oxford), Christian Rupprecht (University of Oxford)
  4. Semantic RGB-D Image Synthesis, Shijie Li (University of Bonn), Rong Li (The Hong Kong University of Science and Technology (Guangzhou)), Jürgen Gall (University of Bonn)
  5. JEDI: Joint Expert Distillation in a Semi-Supervised Multi-Dataset Student-Teacher Scenario for Video Action Recognition, Lucian Bicsi (University of Bucharest), Bogdan Alexe (University of Bucharest), Radu Tudor Ionescu (University of Bucharest), Marius Leordeanu (University “Politehnica” of Bucharest)
  6. Frequency-Aware Self-Supervised Long-Tailed Learning, Ci-Siang Lin (National Taiwan University), Min-Hung Chen (NVIDIA), Yu-Chiang Frank Wang (National Taiwan University)

Spotlight (30 seconds) + Poster (60 minutes; 95.4 cm x 138.8 cm, portrait format)

  1. SelectNAdapt: Support Set Selection for Few-Shot Domain Adaptation, Youssef Dawoud (Friedrich-Alexander-Universität Erlangen-Nürnberg), Gustavo Carneiro (University of Surrey), Vasileios Belagiannis (Friedrich-Alexander-Universität Erlangen-Nürnberg)
  2. Self-supervised Hypergraphs for Learning Multiple World Interpretations, Alina E Marcu (University Politehnica of Bucharest), Mihai Cristian Pîrvu (University “Politehnica” of Bucharest), Dragos Costea (University “Politehnica” of Bucharest), Emanuela Haller (Bitdefender), Emil Slusanschi (University “Politehnica” of Bucharest), Nabil Belbachir (NORCE Norwegian Research Centre AS), Rahul Sukthankar (Google), Marius Leordeanu (University “Politehnica” of Bucharest)
  3. MIAD: A Maintenance Inspection Dataset for Unsupervised Anomaly Detection, Tianpeng Bao (SenseTime Research), Jiadong Chen (SenseTime Research), Wei Li (SenseTime Research), Xiang Wang (SenseTime Research), Jingjing Fei (SenseTime Research), Liwei Wu (SenseTime Research), Rui Zhao (SenseTime Group Limited), Ye Zheng (Institute of Computing Technology, Chinese Academy of Sciences)
  4. Self-training and multi-task learning for data exploitation: an evaluation study on object detection, Hoàng-Ân Lê (IRISA, University of South Brittany), Minh-Tan Pham (IRISA-UBS)
  5. Augmenting Features via Contrastive Learning-based Generative Model for Long-Tailed Classification, Minho Park (ETRI), Hyung-Il Kim (ETRI), Hwa Jeon Song (Electronics & Telecommunications Research Institute (ETRI), Daejeon, Korea), Dong-oh Kang (ETRI)
  6. Recognition-Friendly Industrial Image Generation for Defect Recognition Under Limited Data, Younkwan Lee (Samsung Electronics)
  7. Boosting Semi-Supervised Learning by bridging high and low-confidence predictions, Khanh-Binh Nguyen (Sungkyunkwan University), Joon-Sung Yang (Yonsei University)
  8. FedLID: Self-Supervised Federated Learning for Leveraging Limited Image Data, Athanasios Psaltis (ITI-CERTH, University of West Attica), Anestis Kastellos (ITI/CERTH), Charalampos Z Patrikakis (University of West Attica), Petros Daras (ITI-CERTH, Greece)
  9. A Horse with no Labels: Self-Supervised Horse Pose Estimation from Unlabelled Images and Synthetic Prior, Jose A Sosa Martinez (University of Leeds), David C Hogg (University of Leeds)
  10. Enhancing Classification Accuracy on Limited Data via Unconditional GAN, Chunsan Hong (CNAI), ByungHee Cha (Seoul National University), Bohyung Kim (CNAI), Tae-Hyun Oh (POSTECH)
  11. Deep Generative Networks for Heterogeneous Augmentation of Cranial Defects, Kamil Kwarciak (AGH University of Science and Technology), Marek Wodzinski (AGH UST)
  12. 360° from a Single Camera: A Few-Shot Approach for LiDAR Segmentation, Laurenz Reichardt (HS Mannheim), Nikolas Ebert (HS Mannheim), Oliver Wasenmüller (HS Mannheim)
  13. Guiding Video Prediction with Explicit Procedural Knowledge, Patrick Takenaka (Hochschule der Medien), Johannes Maucher (Media University Stuttgart), Marco Huber (University of Stuttgart)
  14. G2L: A High-Dimensional Geometric Approach for Automatic Generation of Highly Accurate Pseudo-labels, John Kender (Columbia University), Parijat Dube (IBM Research), Zhengyang Han (New York University), Bishwaranjan Bhattacharjee (IBM Research)
  15. Image guided inpainting with parameter efficient learning, Sangbeom Lim (Korea University), Seungryong Kim (Korea University)
  16. [Invited poster; ICCV 2023 Main Paper] SegRCDB: Semantic Segmentation via Formula-Driven Supervised Learning, Risa Shinoda (AIST), Ryo Hayamizu (AIST), Kodai Nakashima (AIST), Nakamasa Inoue (AIST, TokyoTech), Rio Yokota (AIST, TokyoTech), Hirokatsu Kataoka (AIST)
  17. [Invited poster; ICCV 2023 Main Paper] Pre-training Vision Transformers with Very Limited Synthesized Images, Ryo Nakamura (AIST), Hirokatsu Kataoka (AIST), Sora Takashima (AIST, TokyoTech), Edgar Josafat Martinez Noriega (AIST, TokyoTech), Rio Yokota (AIST, TokyoTech), Nakamasa Inoue (AIST, TokyoTech)

Invited Talk

Christian Rupprecht
University of Oxford

Title: Unsupervised Learning from Limited Data
Abstract: While current large models are trained on millions or even billions of images, in this talk we will discuss how unsupervised learning can be performed on a limited number of samples. A special focus of this talk will lie on representation learning, but we will also explore specific applications such as 3D reconstruction, object detection, and tracking. Overall, several strategies have shown promise in this area: naturally, image augmentations play a strong role in combating data scarcity and imposing priors on images. Additionally, synthetic data can often be generated with either very simple methods or through pre-trained large-scale models that have already captured the diversity of the real world and allow the distillation of information into downstream applications.
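To make the role of augmentations concrete, here is a minimal sketch (ours, not the speaker's code) of the kind of aggressive augmentation pipeline typically used when self-supervising on very few images: each image is expanded into an effectively unbounded stream of random views, and two views of the same image serve as a positive pair for a contrastive or self-distillation objective.

```python
from torchvision import transforms

# Aggressive augmentations substitute for data volume: one image yields
# many distinct training views.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.08, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4, hue=0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.GaussianBlur(kernel_size=23),
    transforms.ToTensor(),
])

def two_views(pil_image):
    """Two independent augmentations of one image form a positive pair."""
    return augment(pil_image), augment(pil_image)
```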


Manel Baradad Jurjo
MIT

Title: Learning to See by Looking at Noise
Abstract: Current vision systems are trained on huge datasets, and these datasets come with costs: curation is expensive, they inherit human biases, and there are concerns over privacy and usage rights. To counter these costs, interest has surged in learning from cheaper data sources, such as unlabeled images. In this talk, I will present two of our recent approaches for training neural networks without data. In the first, we train neural networks using a small set of curated programs that generate simple noise-like images with properties present in natural images. In the second, we follow the opposite direction: instead of curating a small set of programs, we collect a large dataset of 21k image-generation programs that we do not manually tune, making the approach easier to scale up and improving performance. Both yield competitive results on various downstream vision tasks, showing the potential of these approaches to circumvent the use of real data.
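As a toy version of the "images from noise programs" idea (our sketch, not the released code from these papers), the snippet below samples training images from two simple procedural generators; the actual work curates or collects programs whose outputs share statistical properties with natural images, such as 1/f power spectra.

```python
# Toy illustration of training images sampled from simple noise programs.
import numpy as np

def uniform_noise(size=64, rng=None):
    """The crudest image program: i.i.d. uniform pixel noise."""
    rng = rng or np.random.default_rng()
    return rng.random((size, size, 3)).astype(np.float32)

def spectral_noise(size=64, exponent=1.0, rng=None):
    """Noise with a 1/f^exponent power spectrum, a hallmark of natural images."""
    rng = rng or np.random.default_rng()
    fy = np.fft.fftfreq(size)[:, None]
    fx = np.fft.fftfreq(size)[None, :]
    amp = 1.0 / np.maximum(np.hypot(fx, fy), 1.0 / size) ** exponent
    channels = []
    for _ in range(3):
        phase = np.exp(2j * np.pi * rng.random((size, size)))  # random phases
        img = np.fft.ifft2(amp * phase).real
        img = (img - img.min()) / (np.ptp(img) + 1e-8)  # normalize to [0, 1]
        channels.append(img)
    return np.stack(channels, axis=-1).astype(np.float32)

# Sample a training batch by picking a random generator per image.
rng = np.random.default_rng(0)
programs = [uniform_noise, spectral_noise]
batch = np.stack([programs[rng.integers(len(programs))](rng=rng)
                  for _ in range(16)])
```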


Guide for Authors

We invite original research papers. All submissions should be anonymized and formatted according to the ICCV 2023 template.
Research Papers (4-8 pages excluding references) should contain unpublished original research. They will be published in the ICCV workshop proceedings and archived in the IEEE Xplore Digital Library and the CVF.
Please submit papers via the submission system:
https://cmt3.research.microsoft.com/LIMIT2023.

Important Dates and Venue

Research Paper Submission: July 14th (11:59 PM, Pacific Time), 2023 (extended from July 4th)
Due to many requests, we have decided to extend the deadline for submission.
Notification: August 6th
Camera-ready: August 9th
Workshop: October 2nd (PM), at the same venue as ICCV 2023 in Paris, France.

Organizers

Hirokatsu Kataoka
AIST/LINE
Rio Yokota
Tokyo Tech/AIST
Nakamasa Inoue
Tokyo Tech/AIST
Dan Hendrycks
Center for AI Safety

Xavier Boix
MIT
Yue Qiu
AIST
Connor Anderson
BYU
Ryo Nakamura
Fukuoka Univ./AIST

Ryosuke Yamada
Univ. of Tsukuba/AIST
Risa Shinoda
Kyoto Univ./AIST


Contact

E-mail: M-iccv2023ws-limit-ml@aist.go.jp

Special Thanks