Abstract
The main objective of this paper is to clarify the importance of explainability in the crop recommendation process and to show how Explainable Artificial Intelligence (XAI) can be successfully incorporated into existing models. The aim is to increase the clarity and transparency of AI-driven recommendations in smart agriculture, leading to a detailed analysis of how crop recommendation systems and XAI can be aligned to inform decisions and support sustainable knowledge and practices in modern agriculture. The paper reviews state-of-the-art XAI techniques such as Local Interpretable Model-agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), Integrated Gradients (IG), and Layer-wise Relevance Propagation (LRP). It focuses on interpretable models and critical feature analysis, and the XAI methods are discussed in terms of their definitions, key properties, and applications. The paper finds that XAI methods such as LIME and SHAP can make AI-driven crop recommendation systems more transparent and reliable. Graphical techniques such as dependence plots, summary plots, waterfall plots, and decision plots effectively convey feature importance. The paper also presents counterfactual explanations generated with DiCE-ML, together with advanced techniques that combine IG and LRP to provide an in-depth account of model behavior. The novelty of this study lies in a detailed investigation of how XAI can be incorporated into crop recommendation systems to address the “black box” nature of AI models, combining XAI techniques and models to make AI-driven recommendations more meaningful and practical for farmers. The proposed systems and techniques are designed for smart agriculture, addressing the specific needs of intelligent farming systems, making this research a significant contribution to agricultural AI.
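To make the described workflow concrete, the minimal sketch below illustrates how SHAP summary plots, LIME local explanations, and DiCE counterfactuals could be attached to a crop recommendation classifier. The dataset path, feature columns (N, P, K, temperature, humidity, ph, rainfall), target class index, and the RandomForest model are illustrative assumptions, not the implementation released with this paper.

```python
# Minimal sketch, not the authors' released code: pairing a crop
# recommendation classifier with SHAP, LIME, and DiCE counterfactuals.
# Dataset path, feature columns, and model choice are assumptions.
import pandas as pd
import shap
import dice_ml
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("crop_recommendation.csv")  # hypothetical dataset file
features = ["N", "P", "K", "temperature", "humidity", "ph", "rainfall"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["label"], test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# SHAP: global feature importance via a summary plot (output is per class
# for multiclass models; the exact array shape depends on the shap version).
tree_explainer = shap.TreeExplainer(model)
shap_values = tree_explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=features)

# LIME: local explanation for a single recommended crop.
lime_explainer = LimeTabularExplainer(
    X_train.values, feature_names=features,
    class_names=[str(c) for c in model.classes_], mode="classification")
lime_exp = lime_explainer.explain_instance(
    X_test.iloc[0].values, model.predict_proba, num_features=5)
print(lime_exp.as_list())

# DiCE: counterfactuals showing how inputs would need to change to obtain
# a different recommendation (the target class index is an assumption).
dice_data = dice_ml.Data(dataframe=df[features + ["label"]],
                         continuous_features=features, outcome_name="label")
dice_model = dice_ml.Model(model=model, backend="sklearn",
                           model_type="classifier")
dice = dice_ml.Dice(dice_data, dice_model, method="random")
counterfactuals = dice.generate_counterfactuals(
    X_test.iloc[[0]], total_CFs=3, desired_class=1)
counterfactuals.visualize_as_dataframe(show_only_changes=True)
```

A dashboard layer such as Streamlit could then render these plots and counterfactual tables interactively for end users.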
Data availability
The data supporting the findings of this study can be obtained from the corresponding author upon request. The code will be made available in repositories upon request.
Ethics declarations
Conflict of interest
The authors declare that they have no known financial or personal conflicts of interest that could have appeared to influence the work reported in this paper.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Akkem, Y., Biswas, S.K. & Varanasi, A. Streamlit-based enhancing crop recommendation systems with advanced explainable artificial intelligence for smart farming. Neural Comput & Applic 36, 20011–20025 (2024). https://doi.org/10.1007/s00521-024-10208-z