Abstract
This research develops and applies a sensitivity analysis technique for multi-layer perceptron (MLP) neural networks (NN) that eliminates distortions in the sensitivity measures caused by dissimilar input ranges and different units of measure across input features of both continuous and symbolic types, as commonly encountered in practical engineering applications of NNs. The effect of randomly splitting the dataset into training and testing sets on the stability of an MLP network’s sensitivity is also observed and discussed. The IRIS-UCI dataset and a real concreting productivity dataset serve as case studies to demonstrate the validity of the proposed undistorted sensitivity measure. The results of the two case studies lead to the conclusion that sensitivity measures accounting for the relevant input range of each feature are more accurate and effective in revealing the relevance of each input feature and in identifying less significant features for potential feature reduction. An MLP NN model obtained in this way gives not only high prediction accuracy but also valid sensitivity measures on its input features, and can therefore be deployed as a predictive tool to support decision making on new scenarios within the engineering problem domain.
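As a rough illustration of the idea of weighting sensitivity by each feature's relevant input range, the following Python sketch estimates a finite-difference sensitivity score per feature for any trained model exposed through a prediction function. The function name, the perturbation size `eps`, and the use of the observed min-max range are illustrative assumptions only; they are not the exact measure derived in the paper.

```python
import numpy as np

def range_weighted_sensitivity(predict, X, eps=1e-3):
    """Illustrative sensitivity score per input feature, scaled by its observed range.

    predict : callable mapping an (n, d) input array to an (n,) output array
              (e.g. a trained MLP's forward pass).
    X       : (n, d) array of input records on their original scales.

    Each feature is perturbed by a fixed fraction of its own observed range, so
    features measured in different units become directly comparable.
    """
    d = X.shape[1]
    ranges = X.max(axis=0) - X.min(axis=0)   # relevant range of each feature
    base = predict(X)
    scores = np.empty(d)
    for j in range(d):
        Xp = X.copy()
        Xp[:, j] += eps * ranges[j]          # perturb feature j by eps * its range
        scores[j] = np.mean(np.abs(predict(Xp) - base)) / eps
    return scores
```

In this scheme, features whose scores are close to zero would be candidates for the feature reduction discussed above.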
Acknowledgements
The research was substantially supported by a Hong Kong Polytechnic University Research Committee Inter-Faculty Research Grant (A/C: G-YD64).
Appendix
The 58-record concreting productivity dataset is presented in the table below. The first five columns are the five input factors, the penultimate column “M3perHr” holds the placing rate for each pour in cubic metres per hour, and the last column “Status” marks a training record with “1”, a control record with “2”, and a validation record with “3”. A short script for splitting the records by the Status column follows the table.
PourSize | Supply_ratio | AvgSlump | Pour_location | Element_pour | M3perHr | Status |
---|---|---|---|---|---|---|
186.4 | 194.7% | 160.6 | 8 | 5 | 21.9 | 1 |
64.8 | 141.7% | 151.7 | 7 | 6 | 16.5 | 2 |
214.4 | 250.6% | 156.1 | 7 | 6 | 26.8 | 1 |
44.4 | 198.6% | 157.9 | 7 | 6 | 18.4 | 3 |
360.4 | 441.3% | 154.0 | 7 | 6 | 39.9 | 1 |
206.0 | 176.6% | 157.5 | 15 | 6 | 24.7 | 3 |
199.6 | 341.4% | 151.4 | 7 | 6 | 37.3 | 2 |
94.8 | 321.6% | 155.4 | 7 | 5 | 27.9 | 3 |
503.6 | 283.8% | 150.1 | 6 | 6 | 36.2 | 3 |
222.6 | 241.4% | 161.9 | 6 | 6 | 29.0 | 3 |
376.0 | 256.5% | 156.6 | 6 | 6 | 34.1 | 1 |
34.0 | 175.2% | 154.0 | 6 | 5 | 16.3 | 1 |
236.0 | 313.8% | 154.9 | 14 | 6 | 30.1 | 1 |
196.0 | 302.9% | 156.7 | 15 | 6 | 26.1 | 2 |
228.0 | 294.1% | 155.9 | 15 | 6 | 28.0 | 1 |
228.8 | 219.6% | 151.9 | 15 | 6 | 27.5 | 3 |
198.0 | 89.6% | 151.8 | 15 | 6 | 27.5 | 1 |
200.8 | 213.1% | 153.6 | 15 | 6 | 25.4 | 2 |
236.0 | 89.8% | 153.6 | 15 | 6 | 26.2 | 1 |
202.0 | 187.4% | 152.3 | 14 | 6 | 23.9 | 3 |
234.8 | 113.6% | 151.2 | 14 | 6 | 27.3 | 1 |
151.1 | 283.4% | 150.2 | 8 | 5 | 28.9 | 1 |
195.6 | 109.5% | 155.9 | 14 | 6 | 23.1 | 1 |
50.4 | 192.6% | 157.1 | 14 | 6 | 22.2 | 3 |
156.4 | 175.8% | 157.5 | 14 | 6 | 26.4 | 1 |
179.6 | 227.2% | 162.5 | 8 | 5 | 20.2 | 1 |
241.2 | 242.7% | 145.0 | 8 | 5 | 22.6 | 1 |
86.8 | 187.9% | 145.7 | 8 | 5 | 15.8 | 1 |
170.0 | 179.2% | 154.6 | 6 | 5 | 18.6 | 2 |
224.4 | 243.0% | 155.2 | 14 | 6 | 23.6 | 1 |
374.0 | 323.1% | 158.9 | 6 | 6 | 35.0 | 1 |
394.0 | 358.2% | 157.9 | 5 | 6 | 35.7 | 1 |
404.4 | 227.0% | 159.3 | 5 | 5 | 34.1 | 2 |
130.8 | 71.0% | 163.4 | 5 | 5 | 15.1 | 1 |
414.8 | 198.8% | 165.2 | 6 | 5 | 36.6 | 1 |
217.6 | 116.2% | 163.2 | 5 | 6 | 24.0 | 2 |
359.6 | 366.7% | 157.9 | 5 | 5 | 49.6 | 1 |
96.8 | 163.2% | 160.7 | 5 | 6 | 21.8 | 3 |
170.0 | 94.6% | 156.2 | 5 | 5 | 21.3 | 1 |
201.2 | 133.5% | 165.0 | 5 | 5 | 21.7 | 2 |
102.0 | 126.5% | 159.0 | 6 | 5 | 17.8 | 1 |
185.6 | 170.8% | 155.5 | 6 | 5 | 20.0 | 1 |
126.8 | 144.9% | 159.2 | 6 | 5 | 17.8 | 3 |
346.4 | 268.8% | 156.0 | 6 | 5 | 31.6 | 1 |
35.0 | 180.6% | 151.7 | 6 | 5 | 16.3 | 2 |
62.8 | 179.6% | 149.4 | 6 | 6 | 17.0 | 1 |
188.8 | 209.1% | 159.3 | 6 | 6 | 21.9 | 1 |
208.4 | 116.9% | 155.5 | 6 | 5 | 21.3 | 1 |
284.4 | 269.4% | 153.0 | 6 | 5 | 31.1 | 2 |
262.1 | 273.3% | 162.8 | 6 | 5 | 29.7 | 1 |
301.6 | 286.0% | 167.0 | 6 | 6 | 33.3 | 1 |
177.2 | 145.9% | 163.3 | 6 | 5 | 20.3 | 2 |
244.6 | 270.8% | 161.7 | 6 | 5 | 29.6 | 1 |
26.8 | 174.7% | 147.5 | 6 | 5 | 18.5 | 1 |
388.0 | 316.6% | 150.6 | 6 | 5 | 34.2 | 1 |
282.4 | 337.3% | 153.2 | 6 | 6 | 38.3 | 1 |
280.8 | 295.4% | 159.4 | 6 | 6 | 36.9 | 1 |
140.0 | 165.1% | 159.5 | 6 | 6 | 21.4 | 1 |
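The following sketch (a minimal example; the CSV file name is hypothetical) shows one way to split the appendix records into training, control and validation subsets by the Status column and to convert the percentage-formatted Supply_ratio values into numbers.

```python
import pandas as pd

# Assumes the appendix table has been saved as "concreting_productivity.csv"
# with the same seven column names; the file name is hypothetical.
df = pd.read_csv("concreting_productivity.csv")

# Supply_ratio is recorded as a percentage string (e.g. "194.7%"); convert to float.
df["Supply_ratio"] = df["Supply_ratio"].str.rstrip("%").astype(float)

# Status codes per the appendix: 1 = training, 2 = control, 3 = validation.
train = df[df["Status"] == 1]
control = df[df["Status"] == 2]
validation = df[df["Status"] == 3]

features = ["PourSize", "Supply_ratio", "AvgSlump", "Pour_location", "Element_pour"]
X_train, y_train = train[features], train["M3perHr"]
```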
Cite this article
Lu, M., Yeung, D.S. & Ng, W.W.Y. Applying undistorted neural network sensitivity analysis in iris plant classification and construction productivity prediction. Soft Comput 10, 68–77 (2006). https://doi.org/10.1007/s00500-005-0469-9