Deep learning-based bathymetry retrieval without in-situ depths using remote sensing imagery and SfM-MVS DSMs with data gaps
This repository contains the code for the paper:
Panagiotis Agrafiotis, Begüm Demir, Deep learning-based bathymetry retrieval without in-situ depths using remote sensing imagery and SfM-MVS DSMs with data gaps, ISPRS Journal of Photogrammetry and Remote Sensing, Volume 225, 2025, Pages 341-361, ISSN 0924-2716, https://doi.org/10.1016/j.isprsjprs.2025.04.020.
Accurate, detailed, and frequently updated bathymetry is crucial for shallow seabed areas facing intense climatological and anthropogenic pressures. Current methods utilizing airborne or satellite optical imagery to derive bathymetry primarily rely on either Structure-from-Motion and Multi-View Stereo (SfM-MVS) with refraction correction or Spectrally Derived Bathymetry (SDB). However, SDB methods often require extensive manual fieldwork or costly reference data, while SfM-MVS approaches face challenges even after refraction correction. These include depth data gaps and noise in environments with homogeneous visual textures, which hinder the creation of accurate and complete Digital Surface Models (DSMs) of the seabed. To address these challenges, this work introduces a methodology that combines the high-fidelity 3D reconstruction capabilities of SfM-MVS methods and state-of-the-art refraction correction techniques with the spectral analysis capabilities of a new deep learning-based method for bathymetry prediction. This integration enables a synergistic approach where SfM-MVS-derived DSMs with data gaps are used as training data to generate complete bathymetric maps. In this context, we propose Swin-BathyUNet, which combines U-Net with Swin Transformer self-attention layers and a cross-attention mechanism, specifically tailored for SDB. Swin-BathyUNet is designed to improve bathymetric accuracy by capturing long-range spatial relationships and can also function as a standalone solution for standard SDB with various training depth data, independent of the SfM-MVS output. Extensive experiments at two very different test sites in the Mediterranean and Baltic Seas demonstrate the effectiveness of the proposed approach, showing improvements in bathymetric accuracy, detail, coverage, and noise reduction in the predicted DSM.
If you find this repository useful, please consider giving a star ⭐.
If you use the code in this repository, please cite:
Panagiotis Agrafiotis, Begüm Demir, Deep learning-based bathymetry retrieval without in-situ depths using remote sensing imagery and SfM-MVS DSMs with data gaps, ISPRS Journal of Photogrammetry and Remote Sensing, Volume 225, 2025, Pages 341-361, ISSN 0924-2716, https://doi.org/10.1016/j.isprsjprs.2025.04.020.
@article{AGRAFIOTIS2025341,
title = {Deep learning-based bathymetry retrieval without in-situ depths using remote sensing imagery and SfM-MVS DSMs with data gaps},
journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
volume = {225},
pages = {341-361},
year = {2025},
issn = {0924-2716},
doi = {https://doi.org/10.1016/j.isprsjprs.2025.04.020},
url = {https://www.sciencedirect.com/science/article/pii/S0924271625001522},
author = {Panagiotis Agrafiotis and Begüm Demir},
}
If you use the dataset, please cite:
P. Agrafiotis, Ł. Janowski, D. Skarlatos and B. Demir, "MAGICBATHYNET: A Multimodal Remote Sensing Dataset for Bathymetry Prediction and Pixel-Based Classification in Shallow Waters," IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium, Athens, Greece, 2024, pp. 249-253, doi: 10.1109/IGARSS53475.2024.10641355.
@INPROCEEDINGS{10641355,
author={Agrafiotis, Panagiotis and Janowski, Łukasz and Skarlatos, Dimitrios and Demir, Begüm},
booktitle={IGARSS 2024 - 2024 IEEE International Geoscience and Remote Sensing Symposium},
title={MAGICBATHYNET: A Multimodal Remote Sensing Dataset for Bathymetry Prediction and Pixel-Based Classification in Shallow Waters},
year={2024},
volume={},
number={},
pages={249-253},
doi={10.1109/IGARSS53475.2024.10641355}}
Swin-BathyUNet combines U-Net with Swin Transformer self-attention layers and a cross-attention mechanism, tailored specifically for SDB.
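The sketch below is a minimal, illustrative PyTorch interpretation of that idea, not the repository's Swin-BathyUNet implementation: a small U-Net whose bottleneck applies window-based self-attention (Swin-style, simplified here without shifted windows or relative position bias) and whose decoder attends to encoder skip features via cross-attention. All module names, channel sizes, and the exact wiring (TinyAttentionUNet, WindowSelfAttention, CrossAttention) are assumptions made for illustration; see the code in this repository for the actual architecture.

```python
# Minimal, illustrative sketch of the general idea (NOT the repository's
# Swin-BathyUNet implementation): a small U-Net whose bottleneck uses
# window-based self-attention (Swin-style, simplified: no shifted windows,
# no relative position bias) and whose decoder attends to encoder skip
# features through cross-attention. Names, channel sizes, and wiring are
# assumptions made for illustration only.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 conv + BN + ReLU layers, the usual U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class WindowSelfAttention(nn.Module):
    """Self-attention inside non-overlapping windows (simplified Swin block)."""

    def __init__(self, dim, window=8, heads=4):
        super().__init__()
        self.window = window
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                                   # x: (B, C, H, W), H and W divisible by window
        B, C, H, W = x.shape
        w = self.window
        t = x.view(B, C, H // w, w, W // w, w).permute(0, 2, 4, 3, 5, 1)
        t = t.reshape(-1, w * w, C)                         # (B * num_windows, w*w, C)
        n = self.norm(t)
        t = t + self.attn(n, n, n, need_weights=False)[0]   # residual window attention
        t = t.view(B, H // w, W // w, w, w, C).permute(0, 5, 1, 3, 2, 4)
        return t.reshape(B, C, H, W)


class CrossAttention(nn.Module):
    """Decoder tokens (queries) attend to encoder skip tokens (keys/values)."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm_q, self.norm_kv = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, dec, enc):                            # both: (B, C, H, W)
        B, C, H, W = dec.shape
        q = self.norm_q(dec.flatten(2).transpose(1, 2))     # (B, H*W, C)
        kv = self.norm_kv(enc.flatten(2).transpose(1, 2))
        out = q + self.attn(q, kv, kv, need_weights=False)[0]
        return out.transpose(1, 2).reshape(B, C, H, W)


class TinyAttentionUNet(nn.Module):
    """Two-level U-Net with an attention bottleneck and one cross-attended skip."""

    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.enc1, self.enc2 = conv_block(in_ch, base), conv_block(base, base * 2)
        self.pool = nn.MaxPool2d(2)
        self.bott = nn.Sequential(conv_block(base * 2, base * 4), WindowSelfAttention(base * 4))
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.cross = CrossAttention(base * 2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv2d(base, 1, 1)                   # single-band depth map

    def forward(self, x):
        e1 = self.enc1(x)                                   # (B,  base, H,   W)
        e2 = self.enc2(self.pool(e1))                       # (B, 2base, H/2, W/2)
        b = self.bott(self.pool(e2))                        # (B, 4base, H/4, W/4)
        d2 = self.up2(b)
        d2 = self.dec2(torch.cat([self.cross(d2, e2), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)


# e.g. a 128x128 RGB patch -> a 128x128 depth map (H/4 = 32 is divisible by the window size 8)
# depths = TinyAttentionUNet(in_ch=3)(torch.randn(1, 3, 128, 128))
```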
To download the dataset, and for a detailed description of it (in case you don't have your own data or wish to compare), please visit the MagicBathy project website at https://www.magicbathy.eu/magicbathynet.html
git clone https://github.com/pagraf/Swin-BathyUNet.git
The requirements are easily installed via Anaconda (recommended):
conda env create -f environment.yml
After the installation is complete, activate the environment:
conda activate magicbathynet
To train and test the model, use Swin-BathyUNet_MVS_SDB_Intergration_pub.ipynb.
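The central training idea, learning bathymetry from SfM-MVS DSM priors that contain data gaps, can be sketched as a masked regression loss: gap pixels are excluded so the network is supervised only where the prior provides a depth. The snippet below is a hypothetical, simplified illustration (the NaN no-data flag, the RMSE loss choice, and the names masked_depth_loss, model, loader are assumptions); the actual training procedure is in the notebook above.

```python
# Hypothetical, simplified illustration of the training idea (not the
# notebook's actual code): the refraction-corrected SfM-MVS DSM acts as the
# regression target, and pixels lying in data gaps (assumed here to be
# flagged with NaN) are masked out of the loss, so the network is supervised
# only where the prior provides a valid depth.
import torch
import torch.nn.functional as F


def masked_depth_loss(pred, sfm_mvs_dsm):
    """RMSE computed over valid (non-gap) pixels only; inputs are (B, 1, H, W)."""
    valid = torch.isfinite(sfm_mvs_dsm)              # False inside the data gaps
    if valid.sum() == 0:                             # patch without any prior depths
        return pred.sum() * 0.0                      # zero loss that keeps the graph intact
    return torch.sqrt(F.mse_loss(pred[valid], sfm_mvs_dsm[valid]))


# Usage inside an ordinary training loop (model, optimizer, loader are placeholders):
# for image, dsm in loader:
#     loss = masked_depth_loss(model(image), dsm)
#     optimizer.zero_grad(); loss.backward(); optimizer.step()
```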
Example aerial patch (GSD=0.25m) of the Agia Napa area (left), refraction-corrected SfM-MVS priors used for training (middle), and predicted bathymetry obtained by the Dual Attention Network (right).
Example Sentinel-2 patch (GSD=10m) of the Agia Napa area (left), refraction-corrected SfM-MVS priors used for training (middle), and predicted bathymetry obtained by the Dual Attention Network (right).
For more information on the results and the accuracy achieved, please read our paper.
Panagiotis Agrafiotis https://www.user.tu-berlin.de/pagraf/
Feel free to give feedback by sending an email to agrafiotis@tu-berlin.de.
This work is part of the MagicBathy project, funded by the European Union's Horizon Europe research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 101063294. The work has been carried out at the Remote Sensing Image Analysis group. For more information about the project, visit https://www.magicbathy.eu/.