Challenge Structure

In the AUC23 challenge, you will develop a universal algorithm that can handle a wide array of 3D medical image classification tasks. The challenge is structured to reward the most universally applicable algorithm, emphasizing consistent performance across all tasks.

When entering the challenge, you may choose to form non-overlapping teams with other participants. Note that the submission limits noted in this overview are limits per team, not per participant.

Rolling submissions

Each team is allowed to make a maximum of 3 submissions on each task-specific development Leaderboard. Your solutions can be submitted in the form of Algorithms. We have prepared a comprehensive tutorial detailing how to create and submit your Algorithms, as well as the expected format of your codebase. You can access this tutorial here.
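
As a rough illustration, an Algorithm is typically packaged as a container that reads a scan from an input directory and writes a prediction to an output directory. The sketch below is not the official template: the /input and /output paths follow a common grand-challenge container convention, and the file pattern and prediction function are hypothetical placeholders; the tutorial above is authoritative.

```python
# Minimal sketch of an Algorithm entry point, NOT the official template.
# The /input and /output paths follow a common grand-challenge container
# convention; the file pattern and the constant prediction are
# hypothetical placeholders -- see the tutorial for the required format.
import json
from pathlib import Path

import SimpleITK as sitk

INPUT_DIR = Path("/input")
OUTPUT_DIR = Path("/output")


def predict_probability(image: sitk.Image) -> float:
    # Placeholder: replace with your model's inference code.
    return 0.5


def main() -> None:
    # Read the first scan found under the input directory.
    scan_path = next(INPUT_DIR.glob("**/*.mha"))
    image = sitk.ReadImage(str(scan_path))

    # Write the classification result as JSON.
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    with open(OUTPUT_DIR / "results.json", "w") as f:
        json.dump({"probability": predict_probability(image)}, f)


if __name__ == "__main__":
    main()
```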

Submitting on the test set

Each team is allowed to make only one submission on each task-specific test set. You are required to use a single training method and a single inference method across all models submitted to the test sets.

All your source code must be publicly released after the challenge under a permissive open-source license. It must be usable as-is and include all files necessary to reproduce training and inference. Any data source may be used for training your models, provided that any pretrained weights are included in your repository and that training does not require internet access.
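
In practice, this means any pretrained weights should be loaded from files shipped with your repository rather than downloaded at runtime. A minimal sketch, assuming PyTorch and a hypothetical weights/encoder.pth checkpoint bundled in the repository:

```python
# Load pretrained weights from a checkpoint bundled in the repository,
# so training needs no internet access. The path and the network below
# are hypothetical placeholders for your actual setup.
from pathlib import Path

import torch
import torch.nn as nn

WEIGHTS_PATH = Path(__file__).parent / "weights" / "encoder.pth"

model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
)

# map_location="cpu" avoids assuming a GPU is available at load time.
model.load_state_dict(torch.load(WEIGHTS_PATH, map_location="cpu"))
```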

Along with your test submission, you should submit a PDF that describes your unified method. The PDF should include the following headings: 

  • Abstract
  • Preprocessing strategy
  • Training strategy
  • Inference strategy
  • Acknowledgments
  • References

Performance metric

For each task, your performance will be evaluated in terms of the Area Under the Receiver Operating Characteristic Curve (AUROC). For multi-class outputs, the AUROC is computed for each foreground class separately in a one-vs-rest fashion, and the overall performance is the mean of these per-class AUROC scores.
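
For concreteness, below is a minimal sketch of this metric using scikit-learn. The labels, probabilities, and the choice of class 0 as background are hypothetical examples; the challenge's own evaluation code is authoritative.

```python
# Mean one-vs-rest AUROC over the foreground classes.
# y_true: integer labels; y_prob: per-class probabilities of shape
# (n_samples, n_classes). Treating class 0 as background is an
# assumption for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 1, 2, 1, 0, 2])
y_prob = np.array([
    [0.8, 0.1, 0.1],
    [0.2, 0.7, 0.1],
    [0.1, 0.2, 0.7],
    [0.3, 0.5, 0.2],
    [0.6, 0.2, 0.2],
    [0.2, 0.2, 0.6],
])

foreground_classes = [1, 2]
per_class_auroc = [
    roc_auc_score((y_true == c).astype(int), y_prob[:, c])
    for c in foreground_classes
]
mean_auroc = float(np.mean(per_class_auroc))
print(mean_auroc)
```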

Data Access

The data for AUC23 covers a broad range of medical imaging tasks, providing you with diverse datasets to train and test your models. Links to the data will be posted here soon.

Additional rules

For running inference:

  • The processing of a single CT scan by a submitted Algorithm should preferably take no more than 5 minutes on an NVIDIA T4 GPU (16 GB) with 8 CPU cores (32 GB RAM); the hard limit is 15 minutes.
  • You are only allowed to use image information from the input, such as the image volume, orientation, voxel spacing, and origin metadata. Other metadata usage is not permitted (see the sketch after this list).
  • Your method should not require internet access.
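
To stay within these rules, restrict yourself to the voxel data and its geometric metadata. A minimal sketch of reading exactly this information, assuming SimpleITK and a hypothetical .mha file name:

```python
# Read only the permitted image information: the voxel data plus the
# geometric metadata (spacing, origin, orientation). Other header
# fields should not be used. The file name is a hypothetical example.
import SimpleITK as sitk

image = sitk.ReadImage("case_001.mha")

volume = sitk.GetArrayFromImage(image)  # voxel intensities, (z, y, x)
spacing = image.GetSpacing()            # voxel spacing in mm
origin = image.GetOrigin()              # physical origin
direction = image.GetDirection()        # orientation matrix (flattened)
```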

For training:

  • You are only allowed to use image information from the input, such as the image volume, orientation, voxel spacing, and origin metadata. Other metadata usage is not permitted.
  • Your method should not require internet access.

Winning AUC23

The winning team will be determined by the product of its ranks across all task-specific test sets; the team with the lowest product of ranks wins.
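
A minimal sketch of this aggregation, using hypothetical team names and per-task ranks:

```python
# Final ranking by product of per-task test-set ranks (lower is better).
# The teams and ranks below are hypothetical examples.
import numpy as np

ranks = {
    "team_a": [1, 3, 2],  # rank on each task-specific test set
    "team_b": [2, 1, 1],
    "team_c": [3, 2, 3],
}

rank_products = {team: int(np.prod(r)) for team, r in ranks.items()}
winner = min(rank_products, key=rank_products.get)
print(rank_products, "winner:", winner)
```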

After the challenge, the winning team's codebases will be made publicly available. The winning Algorithms will also be made publicly available on https://grand-challenge.org/algorithms/.

We plan to publish the findings of AUC23 in a peer-reviewed article. The best-performing teams in the challenge will be invited to collaborate on writing this article. Other teams invited to the Final phase may also be invited to contribute, based on their performance and methodology.