Detect, Consolidate, Delineate: Scalable Mapping of Field Boundaries Using Satellite Images
<p><b>Figure 1.</b> Field boundary mapping with the <tt>DECODE</tt> method. Fields are extracted from a stack of single-date satellite images in three sequential steps: (1) semantic segmentation of fields, their boundaries, and their distance to the closest boundary with <tt>FracTAL ResUNet</tt> (detection); (2) time averaging of the single-date model outputs (consolidation); and (3) instance segmentation of fields by means of hierarchical watershed segmentation (delineation). Here, Sentinel-2 images (blue, green, red, and near-infrared channels) were used as input to the <tt>FracTAL ResUNet</tt> model.</p>
<p><b>Figure 2.</b> Components of the <tt>FracTAL ResUNet</tt> architecture. The network macro-topology shows how feature extraction units (convolution layers, <tt>FracTAL</tt> <tt>ResNet</tt> units and PSP pooling layers) are applied to an image input with <span class="html-italic">C</span> channels, and with <span class="html-italic">H</span> and <span class="html-italic">W</span> pixels in <span class="html-italic">y</span> and <span class="html-italic">x</span> dimensions. Here, <span class="html-italic">d</span> corresponds to the depth of the network and <math display="inline"><semantics> <mrow> <mi>n</mi> <mi>f</mi> </mrow> </semantics></math> corresponds to the initial number of filters. The micro-topology (<tt>FracTAL</tt> <tt>ResNet</tt> unit) shows the fusion process between the <tt>FracTAL</tt> and the sequence of residual blocks. The conditioned multitasking classification head first predicts the distance mask (without <tt>PSP Pooling</tt>), then uses this information to detect boundaries, and finally re-uses both estimates to predict the extent mask.</p>
<p><b>Figure 3.</b> Defining hierarchical segmentation with the dynamics criterion. (<b>a</b>) Dynamics of two points along a path (the path is illustrative and does not relate to a path on the following figure components), (<b>b</b>) boundary mask as produced by the model, (<b>c</b>) corresponding watershed built with the dynamics criterion, (<b>d</b>–<b>f</b>) cuts of the segmentation hierarchy at different dynamics levels, (<b>g</b>) model input image.</p>
<p><b>Figure 4.</b> Study sites. (<b>a</b>) The two study sites in South Africa, the source domain; (<b>b</b>) the number of cloud-free images per tile in Australia, the target domain.</p>
<p><b>Figure 5.</b> Field distribution across Australia obtained with the source-to-target approach.</p>
<p><b>Figure 6.</b> Semantic segmentation outputs in a range of Australian cropping regions (5 km × 5 km). These outputs were obtained for the source-to-target case, that is, a <tt>FracTAL ResUNet</tt> trained on South African data and applied across Australia.</p>
<p><b>Figure 7.</b> Object-based accuracy measures. (<b>a</b>) Probability of exceedance curves by metric; (<b>b</b>) area under the probability of exceedance curves by metric. Interpretation of the area under the probability of exceedance curves differs from the interpretation of the metrics themselves as it conveys information about their distribution for the validation set. As perfectly extracted fields have an accuracy value of 1, a perfect segmentation of the full reference data set would yield a probability of exceedance of 100% at an accuracy of 1.</p>
<p><b>Figure 8.</b> Distribution of median object-based accuracy metrics. Points indicate the centroids of the validation sites.</p>
<p><b>Figure 9.</b> Instance segmentation outputs in a range of Australian cropping regions (5 km × 5 km). These outputs were obtained for the source-to-target case, that is, a <tt>FracTAL ResUNet</tt> model trained on South African data and applied across Australia.</p>
<p><b>Figure 10.</b> Semantic and instance uncertainty across a range of cropping regions.</p>
<p><b>Figure A1.</b> Distribution of the validation sites in the target domain.</p>
<p><b>Figure A2.</b> Object-based accuracy binned scatter plots.</p>
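The three-step workflow described in the caption of Figure 1 can be sketched end to end. This is a deliberately simplified illustration, not the paper's implementation: the semantic model is passed in as a callable, the thresholds `t_ext` and `t_bnd` are invented for the example, and a seeded flood from `scipy.ndimage.watershed_ift` stands in for the hierarchical watershed with the dynamics criterion.

```python
import numpy as np
from scipy import ndimage

def decode(prob_maps, t_ext=0.4, t_bnd=0.2):
    """Illustrative sketch of Detect-Consolidate-Delineate.

    prob_maps: list of (extent, boundary) probability-map pairs, one pair
    per acquisition date (the "detect" outputs of the semantic model).
    The thresholds and the marker-based watershed are simplifications of
    the paper's hierarchical watershed segmentation.
    """
    # Consolidate: average the single-date semantic outputs over time
    extent = np.mean([e for e, _ in prob_maps], axis=0)
    boundary = np.mean([b for _, b in prob_maps], axis=0)
    # Delineate: seed candidate fields where field evidence is strong
    # and boundary evidence is weak...
    seeds, _ = ndimage.label((extent > t_ext) & (boundary < t_bnd))
    # ...then flood the boundary-probability surface from those seeds
    fields = ndimage.watershed_ift((boundary * 255).astype(np.uint8),
                                   seeds.astype(np.int32))
    fields[extent <= t_ext] = 0  # mask out non-field pixels
    return fields
```

On a toy scene with two fields separated by a one-pixel boundary, the flood assigns the two sides to distinct instance labels while the boundary column stays unlabeled.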
Abstract
1. Introduction
- A deep learning network, the FracTAL ResUNet, tailored for the semantic segmentation of satellite images. In particular, our architecture is a multitasking encoder–decoder network with the same backbone structure as in Waldner and Diakogiannis [6]. The main difference is that we replaced the residual building blocks with atrous convolutions by the newer FracTAL ResNet building blocks, which were recently proposed as part of a change detection network that demonstrated state-of-the-art performance [22];
- Two field-level measures of uncertainty to characterize the semantic (assigning labels to pixels) and instance (grouping pixels together) uncertainty;
- A framework to report and compare field-based accuracy metrics.
2. Materials and Methods
2.1. Detect: Multi-Task Semantic Segmentation
2.1.1. Architecture Micro-Topology
2.1.2. Architecture Macro-Topology
2.1.3. Classification Head
2.1.4. Evolving Loss Strategy
2.2. Consolidate: Time Averaging
2.3. Delineate: Hierarchical Watershed Segmentation
2.3.1. Hierarchical Watershed Segmentation
2.3.2. Closing Boundaries with Hierarchical Watershed Segmentation
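The dynamics criterion that ranks merges in the hierarchy (Figure 3) is easiest to see in one dimension: the dynamics of a minimum is the height one must climb along any path before reaching a deeper minimum. The following is my own 1-D union-find illustration of that idea, not the paper's 2-D implementation.

```python
import numpy as np

def minima_dynamics(signal):
    """Dynamics of the minima of a 1-D signal: the height one must climb
    from a minimum before reaching a deeper minimum (1-D analogue of the
    dynamics criterion, computed by flooding from low to high values)."""
    signal = np.asarray(signal, dtype=float)
    order = np.argsort(signal, kind="stable")
    parent, comp_min, dynamics = {}, {}, {}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in order:
        parent[i] = i
        comp_min[i] = (signal[i], i)
        for j in (i - 1, i + 1):          # merge with processed neighbours
            if j not in parent:
                continue
            ri, rj = find(i), find(j)
            if ri == rj:
                continue
            (vi, mi), (vj, mj) = comp_min[ri], comp_min[rj]
            if vi > vj:                    # keep the deeper basin alive
                (vi, mi), (vj, mj), (ri, rj) = (vj, mj), (vi, mi), (rj, ri)
            dynamics[mj] = signal[i] - vj  # shallower minimum dies here
            parent[rj] = ri
            comp_min[ri] = (vi, mi)
    g = int(np.argmin(signal))             # deepest minimum never dies
    dynamics[g] = float(signal.max() - signal.min())
    return {k: v for k, v in dynamics.items() if v > 0}
```

Cutting the hierarchy at a given dynamics level (as in panels d–f of Figure 3) then amounts to keeping only the minima whose dynamics exceed that level.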
2.4. Defining Semantic and Instance Uncertainty
2.5. Experimental Design
- Target to target, where a FracTAL ResUNet model was trained and evaluated on data from Australia (the target domain);
- Source to target, where a FracTAL ResUNet model was trained on data from South Africa (the source domain) and evaluated in Australia.
2.5.1. Data and Study Sites
2.5.2. Model Training
2.5.3. Large-Scale Instance Segmentation
2.5.4. Accuracy Assessment
- The boundary similarity, which compares the boundary of a reference object with that of a classified object [44]. Boundary similarity calculates the percentage of the target field boundary that coincides with the extracted field boundary.
- The location similarity, which evaluates the similarity between the centroid positions of classified and reference objects [44]. Location similarity is evaluated by first calculating the Euclidean distance between the centroids of the target and extracted fields, and then normalizing it by the diameter of the combined area circle, i.e., a circle whose area corresponds to the sum of the areas of the target and extracted fields.
- The oversegmentation rate, which measures incorrect subdivision of larger objects into smaller ones.
- The undersegmentation rate, which measures incorrect consolidation of small adjacent objects into larger ones [45].
- The intersection over union, which evaluates the overlap between reference and classified objects.
- The shape similarity, which compares the geometric form of a reference object with that of the corresponding classified object(s) [44]. Shape similarity is based on the normalized perimeter index (NPI) and the concept of an equal area circle, i.e., a circle with an area equal to that of an object (here, a field). The NPI of an object is the ratio between the perimeter of the equal area circle and the perimeter of the object. Shape similarity thus compares the geometric form of a target field with that of the corresponding extracted field(s).
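The equations accompanying these definitions did not survive extraction. The following restates commonly used forms consistent with the cited definitions [44,45]; the symbols $T$ (target/reference field), $E$ (extracted field), $c_T, c_E$ (their centroids), and $D_{cac}$ (diameter of the combined area circle) are my notation, not necessarily the paper's.

```latex
% T: target (reference) field, E: extracted (classified) field
\begin{align}
\text{Oversegmentation rate} &= 1 - \frac{|T \cap E|}{|T|} \\
\text{Undersegmentation rate} &= 1 - \frac{|T \cap E|}{|E|} \\
\text{Intersection over union} &= \frac{|T \cap E|}{|T \cup E|} \\
\text{Location similarity} &= 1 - \frac{d(c_T, c_E)}{D_{cac}}
\end{align}
```

where $d(c_T, c_E)$ is the Euclidean distance between the centroids and $D_{cac}$ is the diameter of a circle whose area equals $|T| + |E|$.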
3. Results
4. Discussion
4.1. Methodological Advancements and Their Relevance
4.2. Managing, Communicating and Reducing Uncertainty
4.3. Perspectives
5. Conclusions
Author Contributions
Funding
Conflicts of Interest
Appendix A
Tile | Dates | Split |
---|---|---|
T34HBH | 20190327, 20190824 | Train |
T34HBJ | 20190625, 20190903 | Validation |
T34HCH | 20190406, 20190908 | Train |
T34HCJ | 20190506, 20190824 | Train |
T34HDH | 20190615, 20190809 | Train |
T34HEH | 20190418, 20190821 | Test |
T35JLJ | 20190301, 20190425 | Train |
T35JLK | 20190301, 20190510 | Train |
T35JLL | 20190301, 20190430 | Train |
T35JMJ | 20190301, 20190505 | Test |
T35JMK | 20190331, 20190505 | Train |
T35JML | 20190326, 20190510 | Train |
T35JNJ | 20190427, 20190517 | Train |
T35JNK | 20190226, 20190427 | Train |
T35JNL | 20190226, 20190427 | Train |
T35JPJ | 20190226, 20190502 | Train |
T35JPK | 20190226, 20190417 | Validation |
T35JPL | 20190226, 20190328 | Train |
References
1. Kuemmerle, T.; Erb, K.; Meyfroidt, P.; Müller, D.; Verburg, P.H.; Estel, S.; Haberl, H.; Hostert, P.; Jepsen, M.R.; Kastner, T.; et al. Challenges and opportunities in mapping land use intensity globally. Curr. Opin. Environ. Sustain. 2013, 5, 484–493.
2. Geiger, F.; Bengtsson, J.; Berendse, F.; Weisser, W.W.; Emmerson, M.; Morales, M.B.; Ceryngier, P.; Liira, J.; Tscharntke, T.; Winqvist, C.; et al. Persistent negative effects of pesticides on biodiversity and biological control potential on European farmland. Basic Appl. Ecol. 2010, 11, 97–105.
3. Carfagna, E.; Gallego, F.J. Using remote sensing for agricultural statistics. Int. Stat. Rev. 2005, 73, 389–404.
4. Rudel, T.K.; Schneider, L.; Uriarte, M.; Turner, B.L.; DeFries, R.; Lawrence, D.; Geoghegan, J.; Hecht, S.; Ickowitz, A.; Lambin, E.F.; et al. Agricultural intensification and changes in cultivated areas, 1970–2005. Proc. Natl. Acad. Sci. USA 2009, 106, 20675–20680.
5. Johnson, D.M. A 2010 map estimate of annually tilled cropland within the conterminous United States. Agric. Syst. 2013, 114, 95–105.
6. Waldner, F.; Diakogiannis, F.I. Deep learning on edge: Extracting field boundaries from satellite images with a convolutional neural network. Remote Sens. Environ. 2020, 245, 111741.
7. Yan, L.; Roy, D. Conterminous United States crop field size quantification from multi-temporal Landsat data. Remote Sens. Environ. 2016, 172, 67–86.
8. Graesser, J.; Ramankutty, N. Detection of cropland field parcels from Landsat imagery. Remote Sens. Environ. 2017, 201, 165–180.
9. Watkins, B.; van Niekerk, A. A comparison of object-based image analysis approaches for field boundary delineation using multi-temporal Sentinel-2 imagery. Comput. Electron. Agric. 2019, 158, 294–302.
10. Wagner, M.P.; Oppelt, N. Extracting Agricultural Fields from Remote Sensing Imagery Using Graph-Based Growing Contours. Remote Sens. 2020, 12, 1205.
11. Martin, D.R.; Fowlkes, C.C.; Malik, J. Learning to detect natural image boundaries using local brightness, color, and texture cues. IEEE Trans. Pattern Anal. Mach. Intell. 2004, 26, 530–549.
12. Dollár, P.; Zitnick, C.L. Structured forests for fast edge detection. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 1841–1848.
13. Bertasius, G.; Shi, J.; Torresani, L. DeepEdge: A multi-scale bifurcated deep network for top-down contour detection. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 4380–4389.
14. Xie, S.; Tu, Z. Holistically-nested edge detection. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 1395–1403.
15. Badrinarayanan, V.; Kendall, A.; Cipolla, R. Segnet: A deep convolutional encoder–decoder architecture for image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 2481–2495.
16. Masoud, K.M.; Persello, C.; Tolpekin, V.A. Delineation of Agricultural Field Boundaries from Sentinel-2 Images Using a Novel Super-Resolution Contour Detector Based on Fully Convolutional Networks. Remote Sens. 2020, 12, 59.
17. Persello, C.; Tolpekin, V.; Bergado, J.; de By, R. Delineation of agricultural fields in smallholder farms from satellite images using fully convolutional networks and combinatorial grouping. Remote Sens. Environ. 2019, 231, 111253.
18. Crommelinck, S.; Bennett, R.; Gerke, M.; Yang, M.Y.; Vosselman, G. Contour detection for UAV-based cadastral mapping. Remote Sens. 2017, 9, 171.
19. Diakogiannis, F.I.; Waldner, F.; Caccetta, P.; Wu, C. ResUNet-a: A deep learning framework for semantic segmentation of remotely sensed data. ISPRS J. Photogramm. Remote Sens. 2020, 162, 94–114.
20. Arbelaez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour detection and hierarchical image segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 898–916.
21. Wagner, M.P.; Oppelt, N. Deep Learning and Adaptive Graph-Based Growing Contours for Agricultural Field Extraction. Remote Sens. 2020, 12, 1990.
22. Diakogiannis, F.I.; Waldner, F.; Caccetta, P. Looking for change? Roll the Dice and demand Attention. arXiv 2020, arXiv:cs.CV/2009.02062.
23. Brodrick, P.G.; Davies, A.B.; Asner, G.P. Uncovering ecological patterns with convolutional neural networks. Trends Ecol. Evol. 2019, 34, 734–745.
24. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
25. He, K.; Zhang, X.; Ren, S.; Sun, J. Identity mappings in deep residual networks. In European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2016; pp. 630–645.
26. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid scene parsing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2881–2890.
27. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, L.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762.
28. Zhang, H.; Wu, C.; Zhang, Z.; Zhu, Y.; Lin, H.; Zhang, Z.; Sun, Y.; He, T.; Mueller, J.; Manmatha, R.; et al. ResNeSt: Split-Attention Networks. arXiv 2020, arXiv:2004.08955.
29. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385.
30. Zhao, H.; Shi, J.; Qi, X.; Wang, X.; Jia, J. Pyramid Scene Parsing Network. arXiv 2017, arXiv:1612.01105.
31. Meyer, F. The dynamics of minima and contours. In Mathematical Morphology and Its Applications to Image and Signal Processing; Springer: Berlin/Heidelberg, Germany, 1996; pp. 329–336.
32. Cousty, J.; Bertrand, G.; Najman, L.; Couprie, M. Watershed Cuts: Minimum Spanning Forests and the Drop of Water Principle. IEEE Trans. Pattern Anal. Mach. Intell. 2008, 31, 1362–1374.
33. Perret, B.; Cousty, J.; Guimaraes, S.J.F.; Maia, D.S. Evaluation of hierarchical watersheds. IEEE Trans. Image Process. 2017, 27, 1676–1688.
34. Meyer, F. Minimum spanning forests for morphological segmentation. In Mathematical Morphology and Its Applications to Image Processing; Springer: Berlin/Heidelberg, Germany, 1994; pp. 77–84.
35. Grimaud, M. New measure of contrast: The dynamics. In Image Algebra and Morphological Image Processing III; International Society for Optics and Photonics: Washington, DC, USA, 1992; Volume 1769, pp. 292–305.
36. ABARES. The Australian Land Use and Management Classification Version 8; ABARES: Canberra, Australia, 2016.
37. Michel, J.; Youssefi, D.; Grizonnet, M. Stable mean-shift algorithm and its application to the segmentation of arbitrarily large remote sensing images. IEEE Trans. Geosci. Remote Sens. 2014, 53, 952–964.
38. Snoek, J.; Larochelle, H.; Adams, R.P. Practical bayesian optimization of machine learning algorithms. In Proceedings of the 25th International Conference on Neural Information Processing Systems, NIPS’12, Lake Tahoe, NV, USA, 3–6 December 2012; pp. 2951–2959.
39. Glorot, X.; Bengio, Y. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference On Artificial Intelligence And Statistics, JMLR Workshop and Conference Proceedings, Sardinia, Italy, 13–15 May 2010; pp. 249–256.
40. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. arXiv 2014, arXiv:1412.6980.
41. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; MIT Press: Cambridge, MA, USA, 2016.
42. Sokolova, M.; Lapalme, G. A systematic analysis of performance measures for classification tasks. Inf. Process. Manag. 2009, 45, 427–437.
43. Matthews, B.W. Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim. Biophys. Acta (BBA) Protein Struct. 1975, 405, 442–451.
44. Lizarazo, I. Accuracy assessment of object-based image classification: Another STEP. Int. J. Remote Sens. 2014, 35, 6135–6156.
45. Persello, C.; Bruzzone, L. A novel protocol for accuracy assessment in classification of very high resolution images. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1232–1244.
46. Kokkinos, I. Pushing the boundaries of boundary detection using deep learning. arXiv 2015, arXiv:1511.07386.
47. Mathis, A.; Yüksekgönül, M.; Rogers, B.; Bethge, M.; Mathis, M.W. Pretraining boosts out-of-domain robustness for pose estimation. arXiv 2019, arXiv:1909.11229.
48. Wang, Y.; Zhao, X.; Huang, K. Deep crisp boundaries. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 3892–3900.
49. ePaddocks™ Australian Paddock Boundaries. Available online: https://acds.csiro.au/epaddock-australian-paddock-boundaries (accessed on 31 May 2021).
50. ParcelIO: Automatic Parcel Boundary Detection and Delineation. Available online: http://parcelio.sentinel-hub.com/ (accessed on 31 May 2021).
51. Pringle, M.J.; Schmidt, M.; Tindall, D.R. Multi-decade, multi-sensor time-series modelling—Based on geostatistical concepts—To predict broad groups of crops. Remote Sens. Environ. 2018, 216, 183–200.
52. Waldner, F.; Chen, Y.; Lawes, R.; Hochman, Z. Needle in a haystack: Mapping rare and infrequent crops using satellite imagery and data balancing methods. Remote Sens. Environ. 2019, 233, 111375.
53. Defourny, P.; Bontemps, S.; Bellemans, N.; Cara, C.; Dedieu, G.; Guzzonato, E.; Hagolle, O.; Inglada, J.; Nicola, L.; Rabaute, T.; et al. Near real-time agriculture monitoring at national scale at parcel resolution: Performance assessment of the Sen2-Agri automated system in various cropping systems around the world. Remote Sens. Environ. 2019, 221, 551–568.
54. Wang, S.; Azzari, G.; Lobell, D.B. Crop type mapping without field-level labels: Random forest transfer and unsupervised clustering techniques. Remote Sens. Environ. 2019, 222, 303–317.
55. Zhao, L.; Waldner, F.; Scarth, P.; Mack, B.; Hochman, Z. Combining Fractional Cover Images with One-Class Classifiers Enables Near Real-Time Monitoring of Fallows in the Northern Grains Region of Australia. Remote Sens. 2020, 12, 1337.
56. Azzari, G.; Jain, M.; Lobell, D.B. Towards fine resolution global maps of crop yields: Testing multiple methods and satellites in three countries. Remote Sens. Environ. 2017, 202, 129–141.
57. Donohue, R.J.; Lawes, R.A.; Mata, G.; Gobbett, D.; Ouzman, J. Towards a national, remote-sensing-based model for predicting field-scale crop yield. Field Crop. Res. 2018, 227, 79–90.
58. Kamir, E.; Waldner, F.; Hochman, Z. Estimating wheat yields in Australia using climate records, satellite image time series and machine learning methods. ISPRS J. Photogramm. Remote Sens. 2020, 160, 124–135.
59. Chen, Y.; McVicar, T.R.; Donohue, R.J.; Garg, N.; Waldner, F.; Ota, N.; Li, L.; Lawes, R. To Blend or Not to Blend? A Framework for Nationwide Landsat–MODIS Data Selection for Crop Yield Prediction. Remote Sens. 2020, 12, 1653.
60. Waldner, F.; Schucknecht, A.; Lesiv, M.; Gallego, J.; See, L.; Pérez-Hoyos, A.; d’Andrimont, R.; De Maet, T.; Bayas, J.C.L.; Fritz, S.; et al. Conflation of expert and crowd reference data to validate global binary thematic maps. Remote Sens. Environ. 2019, 221, 235–246.
61. Marvaniya, S.; Devi, U.; Hazra, J.; Mujumdar, S.; Gupta, N. Small, Sparse, but Substantial: Techniques for Segmenting Small Agricultural Fields Using Sparse Ground Data. arXiv 2020, arXiv:2005.01947.
62. Ruiz-Santaquiteria, J.; Bueno, G.; Deniz, O.; Vallez, N.; Cristobal, G. Semantic versus instance segmentation in microscopic algae detection. Eng. Appl. Artif. Intell. 2020, 87, 103271.
Number of fields per domain and data split.

Domain | Training | Validation | Testing |
---|---|---|---|
Source—South Africa | 380,034 | 65,522 | 36,499 |
Target—Australia | 53,115 | 13,278 | 5,555 |
(a) Source-to-target

Predicted \ Actual | Field | Non-field | User's Accuracy |
---|---|---|---|
Field | 4,791,026,300 | 815,729,900 | 0.854 |
Non-field | 318,970,725 | 2,833,865,625 | 0.899 |
Producer's Accuracy | 0.938 | 0.776 | OA = 0.870 |

MCC = 0.733

(b) Target-to-target

Predicted \ Actual | Field | Non-field | User's Accuracy |
---|---|---|---|
Field | 4,610,109,200 | 627,107,300 | 0.880 |
Non-field | 499,887,825 | 3,022,488,225 | 0.858 |
Producer's Accuracy | 0.902 | 0.828 | OA = 0.871 |

MCC = 0.734
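The overall accuracy and Matthews correlation coefficient reported with the confusion matrices follow directly from the raw pixel counts. A quick check on the source-to-target counts (the standard binary-classification formulas; the function name is my own):

```python
import math

def oa_mcc(tp, fp, fn, tn):
    """Overall accuracy and Matthews correlation coefficient
    from binary confusion-matrix counts."""
    oa = (tp + tn) / (tp + fp + fn + tn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return oa, mcc

# Source-to-target pixel counts from confusion matrix (a)
oa, mcc = oa_mcc(tp=4_791_026_300, fp=815_729_900,
                 fn=318_970_725, tn=2_833_865_625)
# oa ≈ 0.870, mcc ≈ 0.733, matching the reported values
```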
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Waldner, F.; Diakogiannis, F.I.; Batchelor, K.; Ciccotosto-Camp, M.; Cooper-Williams, E.; Herrmann, C.; Mata, G.; Toovey, A. Detect, Consolidate, Delineate: Scalable Mapping of Field Boundaries Using Satellite Images. Remote Sens. 2021, 13, 2197. https://doi.org/10.3390/rs13112197