Abstract
In the domain of high-speed impact between solids, simulating a single trial demands substantial resources and carries a high computational cost. The objective of this research is to find the best neural network for a new ballistic impact problem, maximizing the number of trials available and simplifying the network architecture. To achieve this goal, this paper proposes a performance tuning process based on four stages. These stages combine existing statistical techniques with new proposals to improve performance and to analyze the influence of each input variable. To measure the quality of the candidate networks, two criteria based on information theory are incorporated that weigh goodness of fit against model complexity. The results obtained show that applying an integrated tuning process in this domain improves the performance and efficiency of a neural network in comparison with several machine learning alternatives.
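The abstract does not name the two information-theoretic criteria used. Purely as an illustration, the Akaike information criterion (AIC) and the minimum description length (MDL) principle are the classic measures of this kind, each penalizing model complexity against fit; a minimal sketch of their standard forms, assuming these are the criteria in question:

AIC = 2k - 2\ln\hat{L}, \qquad MDL \approx -\ln\hat{L} + \frac{k}{2}\ln n

where k is the number of free parameters (here, the network weights), \hat{L} is the maximized likelihood of the model on the data, and n is the number of training samples. Lower values indicate a better balance between fit and complexity, so a smaller network that explains the data nearly as well is preferred over a larger one.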
Cite this article
Gonzalez-Carrasco, I., Garcia-Crespo, A., Ruiz-Mezcua, B. et al. Dealing with limited data in ballistic impact scenarios: an empirical comparison of different neural network approaches. Appl Intell 35, 89–109 (2011). https://doi.org/10.1007/s10489-009-0205-8