
Under-Canopy Navigation for an Agricultural Rover Based on Image Data

  • Regular paper
  • Published in: Journal of Intelligent & Robotic Systems

Abstract

This paper presents an image-data-based autonomous navigation system for TerraSentia, an under-canopy agricultural mini-rover. Under-canopy navigation is particularly challenging because crop leaves and stems attenuate the GNSS signal and produce multi-path errors, degrading positioning accuracy. In such a scenario, reactive navigation techniques that detect crop rows in image data have proved to be an efficient alternative to GNSS. However, they present challenges of their own, mainly owing to leaf occlusion under the canopy and varying weather conditions. Our system addresses these issues by combining different image-based approaches on low-cost hardware. Tests were carried out with multiple robots, in different field conditions, and at different locations. The results show that our system is able to navigate safely, without interventions, in fields whose crop rows have no significant gaps. As future work, we plan not only to compare more recent convolutional neural networks in terms of processing requirements and accuracy, but also to fuse the vision-based approaches previously developed by our group in order to obtain the best of both.
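To make the crop-row detection idea concrete, the sketch below shows one common image-based pipeline: an excess-green (ExG) vegetation index, Otsu thresholding, morphological cleanup, and a probabilistic Hough line fit using OpenCV. It is a minimal illustration of the general technique, not the TerraSentia system's actual implementation; the function name and all parameter values are assumptions.

import cv2
import numpy as np


def detect_crop_rows(bgr_image: np.ndarray):
    """Illustrative crop-row detector (ExG index + Otsu threshold +
    probabilistic Hough transform). All threshold values are assumed."""
    # Excess-green index ExG = 2g - r - b emphasizes vegetation pixels.
    b, g, r = cv2.split(bgr_image.astype(np.float32) / 255.0)
    exg = 2.0 * g - r - b

    # Rescale to 8-bit so Otsu's method can pick a plant/soil threshold.
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Morphological opening suppresses speckle from partial leaf occlusion.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    # Fit candidate row lines; their average heading can drive a reactive
    # steering command that keeps the robot centered between crop rows.
    lines = cv2.HoughLinesP(mask, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=60, maxLineGap=25)
    return mask, lines

In a reactive controller, the fitted row lines would feed the orientation estimator and steering loop, while robustness to occlusion and weather would come from fusing this kind of detector with the complementary modules mentioned later in this article (visual odometry, a leaf/soil/sky classifier, and deep-learning plant identification).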


Data Availability

The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.



Acknowledgements

The work was partially supported by Sao Paulo Research Foundation (FAPESP) grant numbers 2020/13037-3, 2020/12710-6, 2020/11262-0, 2020/11089-6, and 2020/10533-0. The authors thank EarthSense for support with TerraSentia robots and field data.

Funding

The work was partially supported by Sao Paulo Research Foundation (FAPESP) grant numbers 2020/13037-3, 2020/12710-6, 2020/11262-0, 2020/11089-6, and 2020/10533-0.

Author information


Contributions

All authors contributed to the study's conception and design. Field preparation, data collection, and field experimentation were performed by Andres Eduardo Baquero Velasquez, Mateus Valverde Gasparino, Girish Chowdhary, and Vitor Akihiro Hisano Higuti. Each module had a lead developer: the field entrance/exit module was developed by Estevão Serafim Calera, visual odometry by Jorge Id Facuri Filho, orientation estimation and navigation control by Gabriel Lima Araujo, the leaf/soil/sky classifier by Gabriel Correa de Oliveira, and deep-learning plant identification by Lucas Toschi. All modules were supervised by Vitor Akihiro Hisano Higuti, Marcelo Becker, and Andre Carmona Hernandes. All authors contributed to the first draft of the manuscript. Final revisions were performed by Marcelo Becker and Andre Carmona Hernandes. All authors read and approved the final manuscript.

Corresponding authors

Correspondence to Andre Carmona Hernandes or Marcelo Becker.

Ethics declarations

Competing Interests

The authors have no relevant financial or non-financial interests to disclose.

Ethics Approval

Not applicable

Consent to Participate

Not applicable

Consent for Publication

Not applicable

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Calera, E.S., Oliveira, G.C.d., Araujo, G.L. et al. Under-Canopy Navigation for an Agricultural Rover Based on Image Data. J Intell Robot Syst 108, 29 (2023). https://doi.org/10.1007/s10846-023-01849-8


Keywords

Navigation