Overview of ImageCLEFmedical 2023 - Caption Prediction and Concept Detection

Fast facts

  • Additional authors

    • Asma Ben Abacha
    • Alba G. Seco de Herrera
    • Louise Bloch
    • Raphael Brüngel
    • Ahmad Idrissi-Yaghir
    • Henning Schäfer
    • Henning Müller
  • Year of publication

    • 2023
  • Anthology

    CLEF 2023 Working Notes

  • Organizational unit

  • Subjects

    • Applied computer science
    • Artificial intelligence
  • Research fields

    • Medical Informatics (MI)
  • Publication format

    Conference paper

Citation

Rückert, Johannes et al. 2023. Overview of ImageCLEFmedical 2023 - Caption Prediction and Concept Detection. CLEF 2023 Working Notes, 1328-1346. https://ceur-ws.org/Vol-3497/paper-108.pdf.

Abstract

The ImageCLEFmedical 2023 Caption task on caption prediction and concept detection follows similar challenges held from 2017 to 2022. The goal is to extract Unified Medical Language System (UMLS) concept annotations and/or generate captions from image data; predictions are compared to the original image captions. Images for both tasks are part of the Radiology Objects in COntext version 2 (ROCOv2) dataset. For concept detection, multi-label predictions are evaluated via the F1-score against UMLS terms extracted from the original captions, supplemented with additional manually curated concepts. For caption prediction, the semantic similarity of the predictions to the original captions is evaluated using BERTScore. The task attracted strong participation: of 27 registered teams, 13 submitted 116 graded runs across the two subtasks. Participants mainly used multi-label classification systems for the concept detection subtask; the winning team, AUEB-NLP-Group, used an ensemble of three CNNs. For the caption prediction subtask, most teams used encoder-decoder architectures; the winning team, CSIRO, used an encoder-decoder framework with an additional reinforcement learning optimization step.
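
As a concrete illustration of the evaluation described above, here is a minimal sketch in Python of how the two metrics could be computed, assuming the scikit-learn and bert-score packages. It is not the official evaluation code, and the UMLS concept identifiers (CUIs) and captions below are purely hypothetical.

    # Unofficial sketch of the two evaluation metrics; all data below is
    # hypothetical and only illustrates the shape of the computation.
    from sklearn.metrics import f1_score
    from sklearn.preprocessing import MultiLabelBinarizer
    from bert_score import score

    # Concept detection: per-image sets of UMLS CUIs (made-up identifiers).
    gold_concepts = [{"C0040405", "C0817096"}, {"C0024485"}]
    pred_concepts = [{"C0040405"}, {"C0024485", "C0006104"}]

    # Binarize the label sets and compute an F1-score per image, then average.
    mlb = MultiLabelBinarizer().fit(gold_concepts + pred_concepts)
    y_true = mlb.transform(gold_concepts)
    y_pred = mlb.transform(pred_concepts)
    print("Concept detection F1:", f1_score(y_true, y_pred, average="samples"))

    # Caption prediction: semantic similarity of candidate vs. reference captions.
    candidates = ["CT scan of the chest showing a small pulmonary nodule."]
    references = ["Computed tomography of the chest demonstrating a pulmonary nodule."]
    P, R, F1 = score(candidates, references, lang="en")
    print("Caption prediction BERTScore F1:", F1.mean().item())

Here, average="samples" computes an F1-score for each image and averages across images, which is one natural reading of the per-image evaluation sketched in the abstract; the official challenge scripts may differ in preprocessing and model choices.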
