
AutoML to Date and Beyond: Challenges and Opportunities

Published: 04 October 2021

Abstract

As big data becomes ubiquitous across domains, and more and more stakeholders aspire to make the most of their data, demand for machine learning tools has spurred researchers to explore the possibilities of automated machine learning (AutoML). AutoML tools aim to make machine learning accessible for non-machine learning experts (domain experts), to improve the efficiency of machine learning, and to accelerate machine learning research. But although automation and efficiency are among AutoML’s main selling points, the process still requires human involvement at a number of vital steps, including understanding the attributes of domain-specific data, defining prediction problems, creating a suitable training dataset, and selecting a promising machine learning technique. These steps often require a prolonged back-and-forth that makes this process inefficient for domain experts and data scientists alike and keeps so-called AutoML systems from being truly automatic. In this review article, we introduce a new classification system for AutoML systems, using a seven-tiered schematic to distinguish these systems based on their level of autonomy. We begin by describing what an end-to-end machine learning pipeline actually looks like, and which subtasks of the machine learning pipeline have been automated so far. We highlight those subtasks that are still done manually—generally by a data scientist—and explain how this limits domain experts’ access to machine learning. Next, we introduce our novel level-based taxonomy for AutoML systems and define each level according to the scope of automation support provided. Finally, we lay out a roadmap for the future, pinpointing the research required to further automate the end-to-end machine learning pipeline and discussing important challenges that stand in the way of this ambitious goal.
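The abstract contrasts pipeline steps that still need a human in the loop with subtasks that are already well automated, such as model selection and hyperparameter tuning. As a minimal, hedged sketch of that automated subtask (not taken from the article), the Python snippet below searches over two candidate scikit-learn pipelines with GridSearchCV; the dataset, candidate models, and parameter grids are illustrative assumptions.

```python
# Illustrative sketch of one already-automated ML-pipeline subtask:
# model selection plus hyperparameter tuning via cross-validated grid search.
# Dataset, candidate models, and grids are assumptions, not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Hand-written candidate space; an AutoML system would construct and search
# a much larger space of techniques and hyperparameters automatically.
candidates = {
    "logreg": (
        Pipeline([("scale", StandardScaler()),
                  ("clf", LogisticRegression(max_iter=1000))]),
        {"clf__C": [0.01, 0.1, 1.0, 10.0]},
    ),
    "forest": (
        Pipeline([("clf", RandomForestClassifier(random_state=0))]),
        {"clf__n_estimators": [100, 300], "clf__max_depth": [None, 10]},
    ),
}

best_name, best_search = None, None
for name, (pipeline, grid) in candidates.items():
    # 5-fold cross-validated grid search over this candidate's hyperparameters.
    search = GridSearchCV(pipeline, grid, cv=5, scoring="accuracy")
    search.fit(X_train, y_train)
    if best_search is None or search.best_score_ > best_search.best_score_:
        best_name, best_search = name, search

print(best_name, best_search.best_params_, best_search.score(X_test, y_test))
```

A full AutoML system goes further by building the candidate space itself and deciding how to search it, rather than relying on a hand-written dictionary of models and grids.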




Published In

ACM Computing Surveys, Volume 54, Issue 8
November 2022
754 pages
ISSN: 0360-0300
EISSN: 1557-7341
DOI: 10.1145/3481697

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 04 October 2021
Accepted: 01 June 2021
Revised: 01 March 2021
Received: 01 October 2020
Published in CSUR Volume 54, Issue 8


Author Tags

  1. Automated machine learning
  2. interactive data science
  3. democratization of artificial intelligence
  4. predictive analytics

Qualifiers

  • Survey
  • Refereed


Cited By
  • (2024) Using automated machine learning for the upscaling of gross primary productivity. Biogeosciences 21:10 (2447-2472). DOI: 10.5194/bg-21-2447-2024. Online publication date: 24-May-2024.
  • (2024) Enhancing Car Segmentation for Thailand's Expressway Industry With an Automated Hybrid Machine Learning Framework. International Journal of Information Technologies and Systems Approach 17:1 (1-23). DOI: 10.4018/IJITSA.353439. Online publication date: 17-Sep-2024.
  • (2024) From Weeds to Feeds: Exploring the Potential of Wild Plants in Horticulture from a Centuries-Long Journey to an AI-Driven Future. Horticulturae 10:10 (1021). DOI: 10.3390/horticulturae10101021. Online publication date: 25-Sep-2024.
  • (2024) SIBILA: Automated Machine-Learning-Based Development of Interpretable Machine-Learning Models on High-Performance Computing Platforms. AI 5:4 (2353-2374). DOI: 10.3390/ai5040116. Online publication date: 14-Nov-2024.
  • (2024) A Review of Machine Learning Techniques in Agroclimatic Studies. Agriculture 14:3 (481). DOI: 10.3390/agriculture14030481. Online publication date: 16-Mar-2024.
  • (2024) Predictive business process monitoring with AutoML for next activity prediction. Intelligent Decision Technologies 18:3 (1965-1980). DOI: 10.3233/IDT-240632. Online publication date: 16-Sep-2024.
  • (2024) Comparison of automated machine learning (AutoML) libraries for time series forecasting [in Turkish]. Gazi Üniversitesi Mühendislik Mimarlık Fakültesi Dergisi 39:3 (1693-1702). DOI: 10.17341/gazimmfd.1286720. Online publication date: 20-May-2024.
  • (2024) Uneven Usage Battery State of Health Estimation via Fractional-Order Equivalent Circuit Model and AutoML Fusion. Journal of The Electrochemical Society 171:4 (040543). DOI: 10.1149/1945-7111/ad3eb9. Online publication date: 24-Apr-2024.
  • (2024) Unlocking AutoML: Enhancing Data with Deep Learning Algorithms for Medical Imaging. Journal of Data and Information Quality 16:4 (1-17). DOI: 10.1145/3705896. Online publication date: 26-Nov-2024.
  • (2024) AutoDW: Automatic Data Wrangling Leveraging Large Language Models. Proceedings of the 39th IEEE/ACM International Conference on Automated Software Engineering (2041-2052). DOI: 10.1145/3691620.3695267. Online publication date: 27-Oct-2024.
