The increasing use of structure-from-motion photogrammetry for modelling large-scale environments from action cameras attached to drones has driven the development of next-generation visualisation techniques for augmented and virtual reality headsets. It has also created a need for such models to be labelled, with objects such as people, buildings, vehicles, and terrain identified automatically by machine learning techniques as areas of interest and labelled appropriately. However, the complexity of the images makes it impossible for human annotators to assess the contents of images at a large scale.
Advances in automatically annotating images for complexity and benthic composition have been promising, and we are interested in applying automatic annotation and localisation to the monitoring of coral reefs. Coral reefs are in danger of being lost within the next 30 years, and with them the ecosystems they support. This catastrophe would not only see the extinction of many marine species, but also create a humanitarian crisis on a global scale for the billions of people who rely on reef services. By monitoring the changes and composition of coral reefs, we can help prioritise conservation efforts.
ImageCLEFcoral is a task of the ImageCLEF evaluation campaign, which is part of the Cross Language Evaluation Forum (CLEF). The coral task was added in 2019, with the goal of creating databases to monitor the composition of coral reefs. The organisers of the ImageCLEFcoral challenge distribute a collection consisting of images and annotations for coral reef image annotation and localisation. Participants then apply their tools and techniques, which are evaluated on a blind test collection.
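Localisation challenges of this kind are commonly scored by comparing predicted bounding boxes against ground-truth annotations using intersection-over-union (IoU). The sketch below is illustrative only, not the official ImageCLEFcoral evaluation code, and assumes axis-aligned boxes given as (x1, y1, x2, y2) tuples:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    Illustrative sketch of a common localisation metric; the official
    ImageCLEFcoral evaluation may differ in detail.
    """
    # Coordinates of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A prediction is typically counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.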
- US National Oceanic and Atmospheric Administration (NOAA)
- Wellcome Trust
- School of Computer Science & Electronic Engineering, University of Essex