Testing the transferability of AI models for cold-water coral detection and classification

The proliferation of accessible deep-water imaging platforms has produced vast amounts of image data, creating an analysis bottleneck. Object detection is now being applied to assist the image annotation process, with the potential to reduce analysis time. However, for object detectors to effectively tackle the scale of the challenge, models need to generalise across imaging platforms and geographic locations. This study trains YOLOv5 object detection models to identify six coral morphology groups using annotated imagery collected by ROV ISIS in the UK (JC136). Model performance was tested with independent datasets to inspect different aspects of transferability. Imagery collected on Tropic Seamount near the Canary Islands (JC142) with the same ROV (ISIS) was used to test spatial transferability. Imagery collected with ROV Holland I (SeaRover Project) from the Irish deep sea was used to test the transferability of models between ROVs. Model performance was moderate, recalling 60% of human annotations when evaluated against the validation dataset, with varying performance across morphological groups (Recall = 44–69%). However, when tested on the independent datasets, model performance fell, recalling only 23–34% of human annotations across transfer scenarios. These results suggest that transferred performance was poor because of high shape variability within some morphological groups and poor taxonomic representation across datasets. We discuss how a coordinated community effort could improve model transferability and potentially address the analysis bottleneck.
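The evaluation metric described above, the fraction of human annotations recovered by the detector, can be sketched as a simple per-class recall computation. This is a minimal illustration, not the study's actual pipeline; the group names and counts below are hypothetical placeholders.

```python
# Hypothetical sketch of per-class recall: the proportion of human
# annotations that the detector successfully recovered.

def recall(true_positives: int, false_negatives: int) -> float:
    """Recall = TP / (TP + FN); 0.0 when there are no annotations."""
    total = true_positives + false_negatives
    return true_positives / total if total else 0.0

# Illustrative (invented) counts for two morphology groups: (TP, FN)
counts = {
    "branching": (138, 62),
    "encrusting": (44, 56),
}

per_class = {name: recall(tp, fn) for name, (tp, fn) in counts.items()}
overall = recall(
    sum(tp for tp, _ in counts.values()),
    sum(fn for _, fn in counts.values()),
)
```

With these invented counts, `per_class` would show the kind of spread across groups reported in the abstract, while `overall` pools all annotations into a single recall figure.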