
Selection of training data for neural networks by a genetic algorithm

  • Conference paper
  • Parallel Problem Solving from Nature — PPSN V (PPSN 1998)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1498)


Abstract

In applications of artificial neural networks (ANNs), it is common to partition the available data into (at least) two sets. One is then used to train the net, while the other is used as a ‘test set’ to measure the generalization capability of the trained net.
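
A minimal sketch of such a partition, assuming the data are held in NumPy arrays (the 80/20 ratio and the uniform random permutation are illustrative choices, not taken from the paper):

```python
import numpy as np

def partition(X, y, train_fraction=0.8, seed=0):
    """Randomly split (X, y) into a training set and a test set."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))          # arbitrary ordering of the patterns
    cut = int(train_fraction * len(X))
    train, test = idx[:cut], idx[cut:]
    return X[train], y[train], X[test], y[test]
```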

The partition is generally made almost arbitrarily, and little research has been done on the question of what constitutes a good training set, or on how one might be constructed. In this paper, we use a genetic algorithm (GA) to identify a training set for fitting radial basis function (RBF) networks, and we test the methodology on two classification problems: one artificial, the other using real-world data on credit applications for mortgage loans.
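
The core of such an approach pairs a GA searching over candidate subsets with an RBF fit and a held-out evaluation. The sketch below shows one plausible fitness evaluation, assuming a 0/1 chromosome mask over the available patterns, Gaussian basis functions centred on the selected patterns, a single global width, and held-out accuracy as the score; these details are illustrative assumptions, not the configuration reported in the paper:

```python
import numpy as np

def rbf_design(X, centres, width):
    """Gaussian RBF activations: one row per pattern, one column per centre."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fitness(mask, X, y, width=1.0):
    """Score one GA individual (a 0/1 mask selecting the training patterns).

    Selected patterns form the training set and, in this sketch, also the
    RBF centres; the unselected patterns act as a held-out set, and the
    fitness is the held-out classification accuracy (y coded as 0/1).
    """
    sel = np.asarray(mask, dtype=bool)
    if sel.sum() < 2 or (~sel).sum() < 2:
        return 0.0                                   # degenerate subset
    H = rbf_design(X[sel], X[sel], width)
    w, *_ = np.linalg.lstsq(H, y[sel], rcond=None)   # linear output weights
    pred = rbf_design(X[~sel], X[sel], width) @ w
    return float(((pred > 0.5) == (y[~sel] > 0.5)).mean())
```

A standard GA loop (selection, recombination, mutation over the masks) would then evolve subsets that maximise this fitness.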

In the process, we also exhibit an interesting application of Radcliffe's RAR operator, and present results that suggest the methodology tested here is a viable means of increasing ANN performance.
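
Radcliffe's RAR (random assorting recombination) operator recombines subsets directly rather than fixed-length strings [9, 10]. The following is only a rough illustration of set recombination, not the RAR operator as published: it keeps the genes the two parents share and randomly assorts genes that appear in exactly one parent until the child reaches an intermediate size.

```python
import random

def set_crossover(parent_a, parent_b, rng=random):
    """Simplified set recombination (a stand-in, NOT Radcliffe's RAR [9, 10])."""
    a, b = set(parent_a), set(parent_b)
    child = set(a & b)                    # genes common to both parents survive
    pool = list(a ^ b)                    # genes present in exactly one parent
    rng.shuffle(pool)
    target = rng.randint(min(len(a), len(b)), max(len(a), len(b)))
    for gene in pool:
        if len(child) >= target:
            break
        child.add(gene)
    return child
```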


References

  1. Reeves, C., Steele, N.: Neural networks for multivariate analysis: results of some cross-validation studies. Proc. of 6th International Symposium on Applied Stochastic Models and Data Analysis, World Scientific Publishing, Singapore, Vol. II (1993), 780–791.

  2. Cook, R., Weisberg, S.: Residuals and Influence in Regression. Chapman and Hall, New York (1982).

  3. Barnett, V., Lewis, T.: Outliers in Statistical Data. Wiley, Chichester (1978).

  4. Dasarathy, B.: Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. IEEE Computer Society Press, Los Alamitos, CA (1991).

  5. Reeves, C.: Training set selection in neural network applications. In: Pearson, D., Albrecht, R., Steele, N. (eds.): Proc. of 2nd International Conference on Artificial Neural Nets and Genetic Algorithms. Springer-Verlag, Vienna (1995), 476–478.

  6. Plutowski, M.: Selecting Training Exemplars for Neural Network Learning. PhD Dissertation, University of California, San Diego (1994).

  7. Röbel, A.: The Dynamic Pattern Selection Algorithm: Effective Training and Controlled Generalization of Backpropagation Neural Networks. Technical Report, Technical University of Berlin (1994).

  8. Tambouratzis, T., Tambouratzis, D.: Optimal training pattern selection using a cluster-generating artificial neural network. In: Pearson, D., Albrecht, R., Steele, N. (eds.): Proc. of 2nd International Conference on Artificial Neural Nets and Genetic Algorithms. Springer-Verlag, Vienna (1995), 472–475.

  9. Radcliffe, N.: Genetic set recombination and its application to neural network topology optimisation. Neural Computing and Applications, 1 (1993), 67–90.

  10. Radcliffe, N., George, F.: A study in set recombination. In: Forrest, S. (ed.): Proceedings of 5th International Conference on Genetic Algorithms. Morgan Kaufmann, San Mateo, CA (1993), 23–30.

  11. Moody, J., Darken, C.: Fast learning in networks of locally-tuned processing units. Neural Computation, 1 (1990), 281–294.

  12. Bishop, C.: Neural Networks for Pattern Recognition. Clarendon Press, Oxford, UK (1995).

  13. Whitley, L.D., Schaffer, J.D. (Eds.): Proceedings of COGANN-92: International Workshop on Combinations of Genetic Algorithms and Neural Networks. IEEE Computer Society Press, Los Alamitos, CA (1992).


Author information

C. R. Reeves, S. J. Taylor

Editor information

Agoston E. Eiben, Thomas Bäck, Marc Schoenauer, Hans-Paul Schwefel


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Reeves, C.R., Taylor, S.J. (1998). Selection of training data for neural networks by a genetic algorithm. In: Eiben, A.E., Bäck, T., Schoenauer, M., Schwefel, HP. (eds) Parallel Problem Solving from Nature — PPSN V. PPSN 1998. Lecture Notes in Computer Science, vol 1498. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0056905

  • DOI: https://doi.org/10.1007/BFb0056905

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65078-2

  • Online ISBN: 978-3-540-49672-4
