

πŸ•΅οΈ Introduction

Welcome to the CLEAR Challenge for the CVPR 2022 Workshop on Visual Perception and Learning in an Open World!

Continual learning (CL) is widely regarded as a crucial challenge for lifelong AI. However, existing CL benchmarks, e.g. Permuted-MNIST and Split-CIFAR, rely on artificial temporal variation and do not align with or generalize to the real world. In this challenge, we introduce Continual LEArning on Real-World Imagery (CLEAR), the first continual image classification benchmark dataset with a natural temporal evolution of visual concepts in the real world, spanning a decade (2004-2014). This competition is an opportunity for researchers and machine learning enthusiasts to experiment with and explore state-of-the-art CL algorithms on this novel dataset. In addition, submissions will be evaluated with the novel streaming evaluation protocols proposed in our NeurIPS 2021 paper.

📑 Problem Statement

The challenge is an image classification problem on the following two benchmarks, treated as two subtasks:

  • CLEAR-10: The dataset introduced in the NeurIPS 2021 paper, with 10 buckets of training images from 11 classes.
  • CLEAR-100: A new version of CLEAR with more classes (100!) and 10 buckets spanning 2004-2014.

There will be two stages in this challenge:

  • Stage I
    Participants train their models locally on the public dataset, which consists of 10 public trainsets, following the streaming protocol, i.e. train on today's data and test on tomorrow's. Participants upload their models (10 in total, each a checkpoint trained consecutively on the 10 trainsets) along with their training script as one submission to AICrowd for evaluation against our private hold-out testsets. Each of the 10 models is evaluated on the 10 hold-out testsets, yielding a 10x10 accuracy matrix. The evaluation metrics are 4 different summarizations of this matrix: In-Domain Accuracy (mean of the diagonal), Next-Domain Accuracy (mean of the superdiagonal), Forward Transfer (mean of the upper-triangular entries), and Backward Transfer (mean of the lower-triangular entries). Details about these metrics can be found in the paper; a sketch of how they can be computed is shown after this list. On each subtask leaderboard, rankings are determined by a weighted average of the 4 metrics. We provide a visual illustration of how the subtask leaderboard metrics are computed below (for detailed equations, please refer to the NeurIPS 2021 paper). Rankings on the final, aggregate leaderboard are determined by a simple 25-75 weighted average of the two subtask leaderboards.

  • Stage II
    The top 5 teams on the final leaderboard of Stage 1 will be asked to provide a dockerized environment for training their models on our own servers. We will contact the top 5 teams after Stage 1 ends (June 14th) to discuss the algorithm & training details and to verify that the submission is valid. We will invite ~4 teams to present their novel solutions at the CVPR 2022 workshop (schedule is here). We will validate each team's leaderboard submission by training their models within the specified time limit, comparing the accuracy against the baselines, and verifying that they did not use auxiliary information to train the model (e.g., a pre-trained network, additional labeled data, etc.). Teams with invalid submissions will be removed from the leaderboard, and the remaining top-4 teams with valid submissions will be eligible for the awards.
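
For concreteness, here is a minimal sketch of how the four summary metrics could be computed from the accuracy matrix. It assumes `acc[i, j]` holds the accuracy of the checkpoint trained through bucket `i` evaluated on the hold-out testset of bucket `j`, and that the triangular means exclude the diagonal; the authoritative definitions are the equations in the NeurIPS 2021 paper.

```python
import numpy as np

def summarize_accuracy_matrix(acc):
    """Summarize a 10x10 accuracy matrix into the four challenge metrics.

    acc[i, j] = accuracy of the checkpoint trained through bucket i,
    evaluated on the hold-out testset of bucket j (0-indexed).
    Assumes the upper/lower triangles exclude the diagonal; see the
    NeurIPS 2021 paper for the exact definitions.
    """
    acc = np.asarray(acc, dtype=float)
    n = acc.shape[0]
    return {
        "in_domain": float(np.mean(np.diag(acc))),         # mean of diagonal
        "next_domain": float(np.mean(np.diag(acc, k=1))),  # mean of superdiagonal
        "forward_transfer": float(np.mean(acc[np.triu_indices(n, k=1)])),
        "backward_transfer": float(np.mean(acc[np.tril_indices(n, k=-1)])),
    }

# Example with a random matrix standing in for real evaluation results.
metrics = summarize_accuracy_matrix(np.random.rand(10, 10))
```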

🚀 Instructions

  • Each problem has an associated weight. This weight denotes the problem’s contribution to your final score.
  • The final leaderboard is calculated as the weighted mean of your score across both problems in this challenge. If a problem is not attempted by a participant, their score on that problem is automatically zero (see the sketch after this list).
  • To help you get started, we have a starter kit available for both subtasks, in which you can find all the details about how to train, evaluate, and submit your awesome work. If you find any bugs, typos, or improvements, please send us a pull request.
  • In case you have any queries, please reach out to us via the Discussion Forums.
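
As an illustration of the scoring rules above, the sketch below combines the four metrics into a subtask score and the two subtask scores into the final leaderboard score. The per-metric weights and the assignment of the 25/75 split to CLEAR-10 vs. CLEAR-100 are placeholder assumptions, not the organizers' official values.

```python
# Placeholder per-metric weights -- equal weighting assumed for illustration;
# the official weights are set by the organizers.
METRIC_WEIGHTS = {
    "in_domain": 0.25,
    "next_domain": 0.25,
    "forward_transfer": 0.25,
    "backward_transfer": 0.25,
}

def subtask_score(metrics):
    """Weighted average of the four metrics on one subtask leaderboard."""
    return sum(METRIC_WEIGHTS[name] * value for name, value in metrics.items())

def final_score(clear10_score, clear100_score):
    """25-75 weighted average of the two subtask leaderboards.

    Assigning 25% to CLEAR-10 and 75% to CLEAR-100 is an assumption; a
    subtask that is not attempted simply contributes a score of zero.
    """
    return 0.25 * clear10_score + 0.75 * clear100_score
```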

📅 Timeline

  • Start Time: May 10th, 2022
  • Stage 1 End Time: June 14th, 2022
  • Challenge End Time (winners must agree to present by): June 18th, 2022
  • CVPR 2022 VPLOW Workshop Presentation Date (for top-4 teams): June 19th, 2022
  • Duration: 39 Days

πŸ† Prizes

The top 3 teams on the final leaderboard, whose submissions are verified by us, will also have an opportunity to present at the CVPR 2022 Workshop. To qualify for the prizes, a winner must (1) have a verified submission and (2) be prepared to present at the workshop on June 19th, 2022.

  • 1st Place: $1000
  • 2nd Place: $300
  • 3rd, 4th Place: $100

The prizes are offered by the CMU Argo AI Center.

🤖 Team

📱 Contact

🚀 Challenge Starter Kit

👥 Looking for teammates? Or have any questions? Come to our Slack

🌈 Post any feedback or questions to the community board
