Interpreting and summarizing the insights gained from medical images such as radiology scans is a time-consuming task that requires highly trained experts and often represents a bottleneck in clinical diagnosis pipelines.
Consequently, there is considerable need for automatic methods that can approximate this mapping from visual information to condensed textual descriptions. The more image characteristics are known in advance, the more structured the interpretation of radiology scans becomes, and hence the more efficiently radiologists can work. The task builds on a large-scale collection of figures from open access biomedical journal articles (PubMed Central). All images in the training data are accompanied by UMLS concepts extracted from the original image caption.
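As a rough illustration of how such training data might be consumed, the sketch below parses an assumed tab-separated annotation format in which each line pairs an image ID with semicolon-separated UMLS concept IDs (CUIs). The exact file layout, the `load_concepts` helper, and the sample IDs are assumptions for illustration, not the official distribution format.

```python
# Minimal sketch, assuming a tab-separated annotation file where each
# line is: <image_id> TAB <CUI>;<CUI>;... (layout is an assumption).
from io import StringIO


def load_concepts(lines):
    """Map each image ID to its set of UMLS concept IDs (CUIs)."""
    annotations = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        image_id, concepts = line.split("\t")
        annotations[image_id] = set(concepts.split(";"))
    return annotations


# Hypothetical excerpt in the assumed format:
sample = StringIO("img_0001\tC0040398;C0817096\nimg_0002\tC0024109\n")
annotations = load_concepts(sample)
print(sorted(annotations["img_0001"]))  # ['C0040398', 'C0817096']
```

Representing each image's annotations as a set of CUIs makes the downstream concept-detection problem a multi-label classification task over the concept vocabulary.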
The first step towards automatic image captioning and scene understanding is identifying the presence and location of relevant concepts in a large corpus of medical images. Based on the visual image content, this task provides the building blocks for the scene understanding step by identifying the individual components from which captions are composed. The concepts can further be applied for context-based image and information retrieval purposes.
ImageCLEFcaption is a task of the ImageCLEF evaluation campaign. ImageCLEF is part of the Cross Language Evaluation Forum (CLEF). The caption task has been part of ImageCLEF since 2017. Its goal is to create resources for interpreting medical images. The organisers of the ImageCLEFcaption challenge distribute a collection consisting of images and annotations for image concept extraction. Participants then apply their tools and techniques, which are evaluated on a blind test collection.
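The official scoring script is not reproduced here, but concept detection of this kind is typically scored by comparing the predicted concept set against the ground-truth set per image, e.g. with a set-based F1 score averaged over images. The sketch below shows that idea under that assumption; the function name and sample CUIs are illustrative only.

```python
# Hedged sketch of set-based F1 scoring for one image's concept
# predictions (an assumed evaluation style, not the official script).
def f1_per_image(predicted, gold):
    """F1 between a predicted and a ground-truth set of UMLS CUIs."""
    if not predicted and not gold:
        return 1.0  # both empty: perfect agreement by convention
    tp = len(predicted & gold)  # concepts found in both sets
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)


pred = {"C0040398", "C0024109"}
gold = {"C0040398", "C0817096"}
print(round(f1_per_image(pred, gold), 2))  # 0.5
```

A corpus-level score would then average this per-image F1 over all images in the blind test collection.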
Join our mailing list: https://groups.google.com/d/forum/imageclefcaption
- National Library of Medicine – National Institutes of Health
- University of Applied Sciences and Arts Dortmund, Germany
- University of Applied Sciences Western Switzerland, Sierre, Switzerland
- School of Computer Science & Electronic Engineering, University of Essex