
Assessment of Explainable Anomaly Detection for Monitoring of Cold Rolling Process

  • Conference paper
  • First Online:
Computational Science – ICCS 2024 (ICCS 2024)

Abstract

The detection and explanation of anomalies in an industrial context remains a difficult task that requires well-designed methods. In this study, we evaluate the performance of Explainable Anomaly Detection (XAD) algorithms in the context of a complex industrial process, specifically cold rolling. We train several state-of-the-art anomaly detection algorithms on synthetic data from the cold rolling process and optimize their hyperparameters to maximize their predictive capabilities. We then employ various model-agnostic Explainable AI (XAI) methods to generate explanations for the abnormal observations. The explanations are evaluated using a set of XAI metrics specifically selected for the anomaly detection task in an industrial setting. The results provide insights into the impact of the choice of both machine learning and XAI methods on the overall performance of the model, emphasizing the importance of interpretability in industrial applications. For the detection of anomalies in cold rolling, we found that autoencoder-based approaches outperformed other methods, with the SHAP method providing the best explanations according to the evaluation metrics used.
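
The workflow summarized above (fit an unsupervised detector on process data, flag abnormal observations, then explain them with a model-agnostic XAI method) can be illustrated with a minimal sketch. The snippet below is not the authors' pipeline: it substitutes an Isolation Forest for the autoencoder-based detectors studied in the paper, uses randomly generated data in place of the cold-rolling process data, and relies on the scikit-learn and shap packages; all feature dimensions and thresholds are illustrative assumptions.

    import numpy as np
    import shap
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)
    # Stand-in for normal process measurements (5 hypothetical features).
    X_train = rng.normal(size=(1000, 5))
    # Test data: mostly normal points plus a few injected outliers.
    X_test = np.vstack([rng.normal(size=(50, 5)),
                        rng.normal(loc=4.0, size=(5, 5))])

    # Step 1: fit an unsupervised anomaly detector (the paper favours
    # autoencoders; an Isolation Forest keeps this sketch short).
    detector = IsolationForest(n_estimators=200, random_state=0).fit(X_train)

    # Step 2: flag abnormal observations (negative decision scores).
    scores = detector.decision_function(X_test)
    anomalies = X_test[scores < 0]

    # Step 3: explain the anomaly score of the flagged points with a
    # model-agnostic SHAP (KernelExplainer) attribution.
    background = shap.sample(X_train, 100)
    explainer = shap.KernelExplainer(detector.decision_function, background)
    shap_values = explainer.shap_values(anomalies, nsamples=200)
    print(shap_values)  # per-feature contributions to each anomaly score

In the paper, the resulting attributions are additionally scored with XAI quality metrics; here the raw SHAP values are simply printed.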



Acknowledgements

Project XPM is supported by the National Science Centre, Poland (2020/02/Y/ST6/00070), under the CHIST-ERA IV programme, which has received funding from the EU Horizon 2020 Research and Innovation Programme under Grant Agreement no. 857925.

Author information


Corresponding author

Correspondence to Jakub Jakubowski.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Jakubowski, J., Stanisz, P., Bobek, S., Nalepa, G.J. (2024). Assessment of Explainable Anomaly Detection for Monitoring of Cold Rolling Process. In: Franco, L., de Mulatier, C., Paszynski, M., Krzhizhanovskaya, V.V., Dongarra, J.J., Sloot, P.M.A. (eds) Computational Science – ICCS 2024. ICCS 2024. Lecture Notes in Computer Science, vol 14836. Springer, Cham. https://doi.org/10.1007/978-3-031-63775-9_24

  • DOI: https://doi.org/10.1007/978-3-031-63775-9_24

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-63774-2

  • Online ISBN: 978-3-031-63775-9

  • eBook Packages: Computer Science, Computer Science (R0)
