DOI: 10.5555/3600270.3601076

Hierarchical lattice layer for partially monotone neural networks

Published: 28 November 2022

Abstract

Partially monotone regression is a regression task in which the target value is monotonically increasing with respect to a subset of the input features. The TensorFlow Lattice library is one of the standard machine learning libraries for partially monotone regression. It consists of several neural network layers, and its core component is the lattice layer. One problem with the lattice layer is that training it requires a projected gradient descent algorithm with many constraints. Another is that it cannot accept a high-dimensional input vector, because its memory consumption grows exponentially with the input dimension. We propose a novel neural network layer, the hierarchical lattice layer (HLL), as an extension of the lattice layer: HLL can be trained with a standard stochastic gradient descent algorithm while satisfying the monotonicity constraints, and it can accept high-dimensional input vectors. Our experiments on real datasets demonstrate that HLL achieves prediction performance comparable to that of the lattice layer.
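
For concreteness, the lattice-layer baseline that HLL extends can be instantiated directly from the TensorFlow Lattice library. The sketch below is a minimal illustration of that baseline, not of HLL itself; the three-feature input and the choice of which feature is constrained are hypothetical, chosen only to show the two costs the abstract describes.

```python
# Minimal sketch of the TF Lattice baseline described in the abstract.
# The feature count and constrained feature are illustrative assumptions.
import tensorflow as tf
import tensorflow_lattice as tfl

num_features = 3  # hypothetical; feature 0 is monotonically constrained

# With 2 vertices per dimension the layer stores 2**num_features weights,
# so the parameter count grows exponentially with the input dimension --
# the memory problem the abstract refers to.
lattice = tfl.layers.Lattice(
    lattice_sizes=[2] * num_features,
    monotonicities=['increasing', 'none', 'none'],  # partial monotonicity
    output_min=0.0,
    output_max=1.0,
)

# Inputs are expected to lie in [0, lattice_size - 1] per dimension.
x = tf.constant([[0.2, 0.7, 0.5]])
print(lattice(x))  # output is monotonically increasing in the first coordinate

# During training, TF Lattice keeps the monotonicity constraints satisfied by
# projecting the lattice parameters back onto the feasible set after gradient
# updates -- the projected-gradient requirement the abstract mentions.
```

HLL is designed to avoid both costs: it trains with plain stochastic gradient descent, without a projection step, and it scales to high-dimensional inputs.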

Supplementary Material

Additional material (3600270.3601076_supp.pdf)
Supplemental material.



Published In

NIPS '22: Proceedings of the 36th International Conference on Neural Information Processing Systems
November 2022, 39114 pages

Publisher

Curran Associates Inc., Red Hook, NY, United States

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Contributors

    Other Metrics

    Bibliometrics & Citations

    Bibliometrics

    Article Metrics

    • 0
      Total Citations
    • 0
      Total Downloads
    • Downloads (Last 12 months)0
    • Downloads (Last 6 weeks)0
    Reflects downloads up to 24 Jan 2025

    Other Metrics

    Citations

    View Options

    View options

    Figures

    Tables

    Media

    Share

    Share

    Share this Publication link

    Share on social media