Quantifying the Impact of Avian Influenza on the Northern Gannet Colony of Bass Rock Using Ultra-High-Resolution Drone Imagery and Deep Learning
Figure 1. Left: Aerial view of the study site, Bass Rock (56°6′ N, 2°36′ W). Image credit: UK Centre for Ecology and Hydrology. Right: Delimited areas used for counting during the previous decadal censuses. Image taken from Murray et al. (2014) [9].

Figure 2. Basic workflow for a standard deep learning (DL) Convolutional Neural Network (CNN) training process. Image taken from Akçay et al. [18].

Figure 3. Complete workflow for the research project, from data acquisition to final product.

Figure 4. Example of the tiling process with the 2022 dataset. The entire orthomosaic is split into individual tiles measuring 200 × 200 pixels using the ArcGIS 'Split Raster' function. Each 200 × 200 pixel tile (highlighted in red here) equates to 6.4 m on the ground (a scripted equivalent of this step is sketched after this caption list).

Figure 5. Example of the tiling process with the 2023 dataset. The entire orthomosaic is split into individual tiles measuring 500 × 500 pixels using the ArcGIS 'Split Raster' function. Each 500 × 500 pixel tile (highlighted in red here) equates to 6.8 m on the ground.

Figure 6. **Top Left** and **Bottom Left**: 2022 image tile, 200 × 200 pixels; **Top Right**: 2023 image tile, 200 × 200 pixels; **Bottom Right**: 2023 image tile, 500 × 500 pixels.

Figure 7. Example of using the open-source software VGG Image Annotator to classify gannets as either 'Dead' (e.g., box 3), 'Alive' (e.g., box 2), or 'Flying' (e.g., box 1). Nests and man-made structures are also visible.

Figure 8. 200 × 200 pixel image tiles showing examples of the three classes of gannet. Left: live gannets on the ground appear elliptical in shape; Middle: dead gannets have their necks and wings splayed; Right: flying gannets appear larger, with their wings evenly spread out.

Figure 9. 200 × 200 pixel image tile showing an example of ground-truth bounding boxes overlaid by predicted bounding boxes for each given class. Pink = ground truth, red = dead (class 1), blue = alive (class 2), yellow = flying (class 3). Any other background is class 0.

Figure 10. 200 × 200 pixel image tile showing examples of FP predictions made by the model that are rocks or other natural features, indicated by the yellow arrows.

Figure 11. 200 × 200 pixel image tile showing examples of FP predictions made by the model that are real birds missing from the ground truth, indicated by the yellow arrows. The arrows point to red/blue bounding boxes that do not have a pink ground-truth bounding box underneath.

Figure 12. 200 × 200 pixel image tile showing examples of FN predictions made by the model, indicated by the yellow arrows. The arrows point to pink ground-truth bounding boxes that do not have an overlying predicted red/blue bounding box.

Figure 13. Full orthomosaic generated from the 2022 RGB imagery.

Figure 14. Full orthomosaic generated from the 2023 RGB imagery.

Figure 15. 500 × 500 pixel image tile showing examples of live model predictions (blue) from the 2023 dataset. Gannets are clearly detected against the rocky background.

Figure 16. 500 × 500 pixel image tile showing examples of a potential TP dead prediction (left, red) and FP dead model predictions (right, red) from the 2023 dataset.

Figure 17. 500 × 500 pixel image tile showing examples of flying model predictions (yellow) from the 2023 dataset. Two gannets in flight are clearly highlighted, in contrast to live gannets on the ground.

Figure 18. 500 × 500 pixel image tiles showing examples of dead predictions made by the model (left, red boxes) and a potential TP dead prediction (right, red circle).
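The tiling step shown in Figures 4 and 5 was carried out with the ArcGIS 'Split Raster' function. For readers without an ArcGIS licence, a minimal open-source sketch of the same operation using rasterio is given below; the input path, output naming, and tile size are illustrative assumptions, not the authors' workflow.

```python
# Minimal sketch of the orthomosaic tiling step (cf. Figures 4 and 5),
# assuming rasterio; the study itself used the ArcGIS 'Split Raster' tool.
from pathlib import Path

import rasterio
from rasterio.windows import Window

TILE = 200  # pixels per side: 200 for the 2022 data, 500 for 2023
Path("tiles").mkdir(exist_ok=True)

with rasterio.open("orthomosaic_2022.tif") as src:  # hypothetical path
    for row_off in range(0, src.height, TILE):
        for col_off in range(0, src.width, TILE):
            # Clip the window at the raster edge so border tiles stay valid.
            window = Window(col_off, row_off,
                            min(TILE, src.width - col_off),
                            min(TILE, src.height - row_off))
            profile = src.profile.copy()
            profile.update(width=window.width, height=window.height,
                           transform=src.window_transform(window))
            with rasterio.open(f"tiles/tile_{row_off}_{col_off}.tif",
                               "w", **profile) as dst:
                dst.write(src.read(window=window))
```

The ground footprints quoted in the captions follow directly from the GSD: 200 px × 3.22 cm ≈ 6.4 m for the 2022 tiles, and 500 px × 1.36 cm = 6.8 m for the 2023 tiles.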
Abstract
1. Introduction
1.1. Population Census
1.2. Drones and Deep Learning as the Solution
2. Materials and Methods
2.1. Surveys and Data Preparation
2.2. Developing the Deep Learning Model
2.2.1. Model Architecture
2.2.2. Creating the Training Data
2.2.3. Hyperparameters
2.3. Validating the Model
2.3.1. Visualising Model Predictions
2.3.2. Model Performance Metrics
Positive and Negative Predictions
Mean Average Precision (mAP)
Reproducibility
3. Results
3.1. 2022 Dataset
3.1.1. Orthomosaic
3.1.2. 2022 Predicted Count
3.1.3. Model vs. Manual Count
3.2. 2023 Dataset
3.2.1. Orthomosaic
3.2.2. 2023 Predicted Count
3.2.3. Model vs. Manual Count
4. Discussion
4.1. Limitations
4.1.1. Labelling and Classification
4.1.2. Model Performance
4.2. Outcomes
4.2.1. Population Counts
4.2.2. Deep Learning Model
4.2.3. Time Efficiency
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Rush, G.P.; Clarke, L.E.; Stone, M.; Wood, M.J. Can drones count gulls? Minimal disturbance and semiautomated image processing with an unmanned aerial vehicle for colony-nesting seabirds. Ecol. Evol. 2018, 8, 12322–12334.
2. Conservation from the Clouds: The Use of Drones in Conservation. 2022. Available online: https://www.gwct.org.uk/blogs/news/2022/october/conservation-from-the-clouds-the-use-of-drones-in-conservation/ (accessed on 24 February 2023).
3. Raudino, H.C.; Tyne, J.A.; Smith, A.; Ottewell, K.; McArthur, S.; Kopps, A.M.; Chabanne, D.; Harcourt, R.G.; Pirotta, V.; Waples, K. Challenges of collecting blow from small cetaceans. Ecosphere 2019, 10, e02901.
4. Sellés-Ríos, B.; Flatt, E.; Ortiz-García, J.; García-Colomé, J.; Latour, O.; Whitworth, A. Warm beach, warmer turtles: Using drone-mounted thermal infrared sensors to monitor sea turtle nesting activity. Front. Conserv. Sci. 2022, 3, 954791.
5. Lane, J.V.; Jeglinski, J.W.; Avery-Gomm, S.; Ballstaedt, E.; Banyard, A.C.; Barychka, T.; Brown, I.H.; Brugger, B.; Burt, T.V.; Careen, N.; et al. High Pathogenicity Avian Influenza (H5N1) in Northern Gannets (Morus bassanus): Global Spread, Clinical Signs and Demographic Consequences. Ibis. Available online: https://onlinelibrary.wiley.com/doi/pdf/10.1111/ibi.13275 (accessed on 22 September 2023).
6. Avian Flu. 2022. Available online: https://www.rspb.org.uk/birds-and-wildlife/advice/how-you-can-help-birds/disease-and-garden-wildlife/avian-influenza-updates (accessed on 4 April 2023).
7. Murray, S.; Harris, M.P.; Wanless, S. The status of the Gannet in Scotland in 2013–14. Scott. Birds 2015, 35, 3–18.
8. Bass Rock SSSI. 2022. Available online: https://sitelink.nature.scot/site/155 (accessed on 15 February 2023).
9. Murray, S.; Wanless, S.; Harris, M. The Bass Rock—Now the world's largest Northern Gannet colony. Br. Birds 2014, 107, 769–770.
10. Seabirds on the Brink as Avian Flu Rips through Colonies for a Third Year. 2023. Available online: https://www.rspb.org.uk/about-the-rspb/about-us/media-centre/press-releases/seabirds-on-the-brink-as-avian-flu-rips-through-colonies-for-a-third-year/ (accessed on 29 July 2023).
11. Murray, S.; Wanless, S.; Harris, M. The status of the Northern Gannet in Scotland 2003–04. Scott. Birds 2006, 26, 17–29.
12. Sardà-Palomera, F.; Bota, G.; Viñolo, C.; Pallarés, O.; Sazatornil, V.; Brotons, L.; Gomáriz, S.; Sardà, F. Fine-scale bird monitoring from light unmanned aircraft systems. Ibis 2012, 154, 177–183.
13. Hodgson, J.C.; Baylis, S.M.; Mott, R.; Herrod, A.; Clarke, R.H. Precision wildlife monitoring using unmanned aerial vehicles. Sci. Rep. 2016, 6, 22574.
14. Chabot, D.; Craik, S.R.; Bird, D.M. Population Census of a Large Common Tern Colony with a Small Unmanned Aircraft. PLoS ONE 2015, 10, e0122588.
15. What Are Neural Networks? 2023. Available online: https://www.ibm.com/topics/neural-networks (accessed on 1 May 2023).
16. Wäldchen, J.; Mäder, P. Machine learning for image based species identification. Methods Ecol. Evol. 2018, 9, 2216–2225.
17. Kellenberger, B.; Veen, T.; Folmer, E.; Tuia, D. 21,000 birds in 4.5 h: Efficient large-scale seabird detection with machine learning. Remote Sens. Ecol. Conserv. 2021, 7, 445–460.
18. Akçay, H.G.; Kabasakal, B.; Aksu, D.; Demir, N.; Öz, M.; Erdoğan, A. Automated Bird Counting with Deep Learning for Regional Bird Distribution Mapping. Animals 2020, 10, 1207.
19. Hayes, M.C.; Gray, P.C.; Harris, G.; Sedgwick, W.C.; Crawford, V.D.; Chazal, N.; Crofts, S.; Johnston, D.W. Drones and deep learning produce accurate and efficient monitoring of large-scale seabird colonies. Ornithol. Appl. 2021, 123, duab022.
20. Hong, S.J.; Han, Y.; Kim, S.Y.; Lee, A.Y.; Kim, G. Application of Deep-Learning Methods to Bird Detection Using Unmanned Aerial Vehicle Imagery. Sensors 2019, 19, 1651.
21. Dujon, A.M.; Ierodiaconou, D.; Geeson, J.J.; Arnould, J.P.Y.; Allan, B.M.; Katselidis, K.A.; Schofield, G. Machine learning to detect marine animals in UAV imagery: Effect of morphology, spacing, behaviour and habitat. Remote Sens. Ecol. Conserv. 2021, 7, 341–354.
22. Kuru, K.; Clough, S.; Ansell, D.; McCarthy, J.; McGovern, S. WILDetect: An intelligent platform to perform airborne wildlife census automatically in the marine ecosystem using an ensemble of learning techniques and computer vision. Expert Syst. Appl. 2023, 231, 120574.
23. Bist, R.B.; Subedi, S.; Yang, X.; Chai, L. Automatic Detection of Cage-Free Dead Hens with Deep Learning Methods. AgriEngineering 2023, 5, 1020–1038.
24. Geldart, E.A.; Barnas, A.F.; Semeniuk, C.A.; Gilchrist, H.G.; Harris, C.M.; Love, O.P. A colonial-nesting seabird shows no heart-rate response to drone-based population surveys. Sci. Rep. 2022, 12, 18804.
25. Irigoin-Lovera, C.; Luna, D.M.; Acosta, D.A.; Zavalaga, C.B. Response of colonial Peruvian guano birds to flying UAVs: Effects and feasibility for implementing new population monitoring methods. PeerJ 2019, 7, e8129.
26. Brisson-Curadeau, E.; Bird, D.; Burke, C.; Fifield, D.A.; Pace, P.; Sherley, R.B.; Elliott, K.H. Seabird species vary in behavioural response to drone census. Sci. Rep. 2017, 7, 17884.
27. Edney, A.; Hart, T.; Jessopp, M.; Banks, A.; Clarke, L.; Cugniere, L.; Elliot, K.; Juarez Martinez, I.; Kilcoyne, A.; Murphy, M.; et al. Best practices for using drones in seabird monitoring and research. Mar. Ornithol. 2023, 51, 265–280.
28. Orthomosaic & DEM Generation (without GCPs). 2023. Available online: https://agisoft.freshdesk.com/support/solutions/articles/31000157908-orthomosaic-dem-generation-without-gcps (accessed on 20 January 2023).
29. ArcGIS Pro. 2023. Available online: https://www.esri.com/en-us/arcgis/products/arcgis-pro/overview (accessed on 1 January 2023).
30. Faster R-CNN: Model Builders. 2023. Available online: https://pytorch.org/vision/stable/models/faster_rcnn.html (accessed on 2 May 2023).
31. Howard, A.G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; Adam, H. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv 2017, arXiv:1704.04861.
32. Lin, T.Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; Zitnick, C.L. Microsoft COCO: Common Objects in Context. In Computer Vision—ECCV 2014, Proceedings of the 13th European Conference, Zurich, Switzerland, 6–12 September 2014; Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T., Eds.; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2014.
33. Dutta, A.; Gupta, A.; Zisserman, A. VGG Image Annotator (VIA). Version 2.0.8. 2016. Available online: http://www.robots.ox.ac.uk/~vgg/software/via/ (accessed on 24 February 2023).
34. Understanding Learning Rate. 2019. Available online: https://bit.ly/3rB2jzW (accessed on 13 June 2023).
35. The Difference between a Batch and an Epoch in a Neural Network. 2022. Available online: https://machinelearningmastery.com/difference-between-a-batch-and-an-epoch/ (accessed on 13 June 2023).
36. Training and Validation Loss in Deep Learning. 2023. Available online: https://www.baeldung.com/cs/training-validation-loss-deep-learning (accessed on 21 July 2023).
37. Evaluating Object Detection Models: Guide to Performance Metrics. 2019. Available online: https://manalelaidouni.github.io/Evaluating-Object-Detection-Models-Guide-to-Performance-Metrics.html#intersection-over-union-iou (accessed on 23 May 2023).
38. Models and Pre-Trained Weights. 2023. Available online: https://pytorch.org/vision/stable/models.html (accessed on 21 July 2023).
39. Reproducibility. 2023. Available online: https://pytorch.org/docs/stable/notes/randomness.html (accessed on 12 June 2023).
40. DotDotGoose (v.1.6.0). 2023. Available online: https://biodiversityinformatics.amnh.org/open_source/dotdotgoose (accessed on 30 June 2023).
41. A Complete Guide to Data Augmentation. 2023. Available online: https://www.datacamp.com/tutorial/complete-guide-data-augmentation (accessed on 21 July 2023).
42. Bird Species Detection Using Deep Learning and PyTorch. 2023. Available online: https://debuggercafe.com/bird-species-detection-using-deep-learning-and-pytorch/ (accessed on 20 July 2023).
| Year | AOS (Apparently Occupied Sites) Count |
|---|---|
| 2014 | 75,259 |
| 2009 | 60,853 |
| 2004 | 48,098 |
| 1994 | 39,751 |
| 1985 | 21,589 |
| Property | 2022 | 2023 |
|---|---|---|
| Camera | Zenmuse L1 | Zenmuse P1 |
| Band | RGB | RGB |
| Flight height | 100 m | 105 m |
| Flight speed | 4 m/s | 4 m/s |
| Total flight time | 18 min | 15 min |
| GSD (ground sampling distance) | 3.22 cm | 1.36 cm |
| Side lap | 70% | 70% |
| Forward overlap | 80% | 80% |
| Tracks flown | E–W | S–N |
| No. of missions | 3 | 2 |
| Nadir images acquired | 102 | 135 |
| Oblique images acquired | 76 | 15 |
| Feature | 2022 | 2023 |
|---|---|---|
| Model | Zenmuse L1 | Zenmuse P1 |
| Band | RGB | RGB |
| Resolution | 20 MP | 45 MP |
| Image size (3:2) | 5472 × 3648 | 8192 × 5460 |
| Physical focal length | 8.8 mm | 35 mm |
| Full-frame equivalent focal length | 24 mm | 35 mm |
| Aperture | f/4 | f/4 |
| ISO | Auto | Auto |
| Shutter speed (priority mode) | 1/1000 s | 1/1000 s |
| Exposure compensation | −0.3 | −0.7 |
| Focus | N/A | Infinity |
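As a back-of-envelope consistency check (not a calculation reported in the paper), the GSD values in the two tables above follow from the standard relation between pixel pitch $p$, flight height $H$, and physical focal length $f$:

$$\mathrm{GSD} = \frac{p \, H}{f}$$

For the 2023 survey, the P1's full-frame sensor spans roughly 35.9 mm across 8192 pixels, so $p \approx 4.4\ \mu\mathrm{m}$; with $H = 105$ m and $f = 35$ mm this gives $\mathrm{GSD} \approx 1.3$ cm, in line with the reported 1.36 cm.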
| Hyperparameter | Value | Description |
|---|---|---|
| Train–test split ratio | 0.1 | 10% of the labelled training data is held back for validation: 477 of the 530 labelled tiles are used for training, 53 for validation. |
| Batch size | 32 | The training dataset is split into small batches of images so that the model can more efficiently calculate error and update weights accordingly. |
| Learning rate | 0.001 | Step size determining how quickly the model converges to optimal weights. The chosen rate is low enough to allow the network to converge within a reasonable timescale [34]. |
| Momentum | 0.9 | Prevents the optimisation process from becoming stuck in a local minimum and consequently missing the global minimum. Default value. |
| Weight decay | 0.0005 | Factor applied after each update to prevent the weights from growing too large, which would cause over-fitting and excess model complexity. Default value. |
| No. of epochs | 15 | The number of times the learning algorithm works through the entire training dataset, updating weights accordingly [35]. Can vary each time the model is run. |
| Confidence threshold | 0.2 | The score above which a prediction is accepted rather than rejected by the model. Value chosen through trial and error. |
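For orientation, the sketch below shows how these hyperparameter values slot into a torchvision Faster R-CNN training loop. The model builder and predictor swap follow torchvision's documented API [30], while the MobileNetV3 backbone choice and the synthetic one-tile batch are assumptions made for illustration, not the authors' code.

```python
# Minimal sketch: a torchvision Faster R-CNN wired up with the
# hyperparameters from the table above.
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Pre-trained detector with its head reconfigured for 4 classes:
# background (0), dead (1), alive (2), flying (3).
model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(
    weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=4)

# Learning rate, momentum, and weight decay as per the table.
optimizer = torch.optim.SGD(model.parameters(), lr=0.001,
                            momentum=0.9, weight_decay=0.0005)

# One synthetic 200 x 200 RGB tile with a single 'alive' box, standing in
# for the real 477 labelled training tiles (batch size 32 in the study).
images = [torch.rand(3, 200, 200)]
targets = [{"boxes": torch.tensor([[20.0, 20.0, 60.0, 60.0]]),
            "labels": torch.tensor([2])}]

model.train()
for epoch in range(15):  # no. of epochs from the table
    loss_dict = model(images, targets)  # dict of RPN and ROI-head losses
    loss = sum(loss_dict.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

At inference time the model is switched to `model.eval()` and, per the table, only detections with a confidence score of at least 0.2 are kept.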
| Detection | Acronym | Definition |
|---|---|---|
| True Positive | TP | A prediction is made that matches the ground truth |
| False Positive | FP | A prediction is made that does not match the ground truth |
| False Negative | FN | No prediction is made where ground truth exists |
| True Negative | TN | Neither ground truth nor a prediction exists (ignored for object detection) |
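These categories can be tallied by matching each predicted box against at most one ground-truth box using an intersection-over-union (IoU) test [37]. The sketch below illustrates the idea; the greedy matching strategy and the 0.5 IoU threshold are illustrative assumptions rather than the exact procedure used in the study.

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2) tuples."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def tally(preds, truths, thresh=0.5):
    """Greedily match predictions to ground truth; return (TP, FP, FN)."""
    unmatched = list(truths)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda t: iou(p, t), default=None)
        if best is not None and iou(p, best) >= thresh:
            unmatched.remove(best)  # each ground-truth box matches once
            tp += 1
    return tp, len(preds) - tp, len(unmatched)

# Example: one good match (TP), one spurious box (FP), no missed birds (FN).
print(tally(preds=[(10, 10, 30, 30), (50, 50, 70, 70)],
            truths=[(12, 11, 31, 29)]))  # -> (1, 1, 0)
```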
| | Run 1 | Run 2 | Run 3 | Average |
|---|---|---|---|---|
| Epoch | 13 | 16 | 21 | – |
| Validation data—Live | 582 | 560 | 552 | 565 |
| Validation data—Dead | 182 | 189 | 188 | 186 |
| Validation data—Flying | 38 | 37 | 37 | 37 |
| mAP | 0.38 | 0.37 | 0.37 | 0.37 |
| Final count—Live | 19,895 | 19,254 | 19,033 | 19,394 |
| Final count—Dead | 3775 | 4284 | 4243 | 4100 |
| Final count—Flying | 813 | 825 | 812 | 817 |
| | Alive | Dead | Flying |
|---|---|---|---|
| Model Count | 552 | 188 | 37 |
| FP (−) | 117 | 45 | 1 |
| TP | 435 | 143 | 36 |
| FP (+) | 70 | 12 | 2 |
| FN | 25 | 10 | 4 |
| True Count | 530 | 165 | 42 |
| % Change | −3.99 | −12.23 | +13.51 |
| Classification | Model Count | % Adjustment | Adjusted Count |
|---|---|---|---|
| Alive | 18,977 | −3.99 | 18,220 |
| Dead | 4285 | −12.23 | 3761 |
| Flying | 808 | +13.51 | 917 |
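The % Change rows in the validation tables are the relative difference between the validated true count and the raw model count, and the adjusted colony-wide counts apply that same correction factor (note that the true count decomposes as TP + FP (+) + FN, where FP (−) are false positives that are not birds and FP (+) are detections of real birds missing from the ground truth; cf. Figures 10 and 11):

$$\%\,\mathrm{Change} = \frac{\mathrm{True\ Count} - \mathrm{Model\ Count}}{\mathrm{Model\ Count}} \times 100, \qquad \mathrm{Adjusted\ Count} = \mathrm{Model\ Count} \times \left(1 + \frac{\%\,\mathrm{Change}}{100}\right)$$

A worked check against the 2022 'Alive' rows: (530 − 552)/552 × 100 ≈ −3.99%, and 18,977 × (1 − 0.0399) ≈ 18,220, matching the tables above. The same adjustment is applied to the 2023 counts below.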
| | Alive | Dead | Flying |
|---|---|---|---|
| Model Count | 5308 | 177 | 53 |
| FP (−) | 103 | 174 | 1 |
| TP | 5205 | 3 | 52 |
| FP (+) | 122 | 2 | 18 |
| FN | 510 | 0 | 6 |
| True Count | 5837 | 5 | 76 |
| % Change | +9.06 | −97.16 | +43.40 |
| Classification | Model Count | % Adjustment | Adjusted Count |
|---|---|---|---|
| Alive | 44,433 | +9.06 | 48,455 |
| Dead | 1510 | −97.16 | 43 |
| Flying | 339 | +43.40 | 486 |
| Action | Time (GPU) | Time (CPU) | Description |
|---|---|---|---|
| Training | 10 min | N/A | Training the model on 477 labelled image tiles. Not recommended on CPU due to the computational power required. |
| Validation | <1 min | <1 min | Validating the trained model against the 53 held-back labelled image tiles. |
| Implementation | <2 min | <25 min | Running the trained model on the entire 2022/2023 dataset. |