How can concurrent visual and Synthetic Aperture Radar (SAR) satellite imagery be used to detect sea ice extent in the Southern Bellingshausen Sea, Antarctica?
Sea ice forms when the ocean freezes, creating a dynamic and seasonally variable layer of floating ice. This ice has a profound influence on the polar environment, affecting ocean circulation, weather, regional climate, and operations in the polar regions.
Accurate, timely knowledge of sea-ice boundaries in polar seas is essential for ship navigation and for ecosystem and climate monitoring. Since the advent of remote sensing, sea ice extent has largely been mapped from optical imagery and passive microwave instruments. These have provided a useful record of change, but are of limited use for many applications due to cloud cover and low spatial resolution respectively. More recently, active microwave data in the form of Synthetic Aperture Radar (SAR) imagery has become available for wide-scale use through freely accessible satellite missions such as Sentinel-1. While SAR data collection is not affected by cloud cover, SAR imagery is much harder to interpret and requires model-led feature extraction. Convolutional neural networks (CNNs) have become the modelling approach of choice for sea ice prediction, modelling and segmentation tasks using SAR data.
The Sea Ice Extent project, part of the 2022 Artificial Intelligence for Environmental Risk (AI4ER) Guided Team Challenge (GTC), approaches the growing area of remote sensing of sea ice through the task of ice-water boundary detection in the Bellingshausen Sea of the Southern Ocean. The project scope was focused on sea ice boundary detection to support navigation of the new BAS vessel RRS Sir David Attenborough (SDA) during its sea ice trials. Sentinel-1 SAR data and MODIS optical data were selected as the best sources for near-real-time use, and the U-Net architecture was identified as promising and in line with current advances in sea ice detection.
This project trained a CNN using a dataset containing concurrent SAR and MODIS imagery alongside reference images of manually digitised sea ice extent. The initial dataset contained MODIS and SAR imagery spanning 2011 to 2020, although for some years only SAR imagery was available within the dataset.
There were a number of considerations when training a CNN on two concurrent input image datasets. First, images were resampled to ensure the MODIS and SAR imagery had the same spatial resolution. In addition, clouds were labelled in the reference images, even though clouds are not visible in SAR imagery. Initial results suggest the tool can predict sea ice extent from the SAR image alone in locations where clouds were present in the MODIS image.
The data used included:
- Sentinel-1 SAR imagery
- MODIS optical imagery
- Reference images of manually digitised sea ice extent
This project was undertaken by Madeline Lisaius, Jonathan Roberts, and Sophie Turner.