
Source-Target-Source Classification Using Stacked Denoising Autoencoders

  • Conference paper
Pattern Recognition and Image Analysis (IbPRIA 2015)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 9117)


Abstract

Deep Transfer Learning (DTL) has emerged as a paradigm in machine learning in which a deep model is trained on a source task and the acquired knowledge is then totally or partially transferred to help solve a target task. Although DTL offers greater flexibility in extracting high-level features and enables feature transference from a source to a target task, the DTL solution may get stuck in local minima, leading to performance degradation (negative transference), much as in the classical machine learning approach. In this paper, we propose the Source-Target-Source (STS) methodology, which reduces the impact of negative transference by iteratively switching between the source and target tasks during training. The results show the effectiveness of this approach.
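As a rough illustration of the iterative source/target switching described above, the sketch below trains a single-layer denoising autoencoder while alternating whole training phases between a source and a target dataset. All names, network sizes, toy data, and the training schedule are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class DenoisingAutoencoder:
    """Single-layer DAE with tied weights (illustrative, not the paper's model)."""

    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0.0, 0.1, (n_vis, n_hid))
        self.b_h = np.zeros(n_hid)
        self.b_v = np.zeros(n_vis)

    def step(self, x, noise=0.3, lr=0.1):
        # Corrupt the input, reconstruct the clean version, and take one
        # gradient step on the squared reconstruction error.
        x_noisy = x * (rng.random(x.shape) > noise)
        h = sigmoid(x_noisy @ self.W + self.b_h)
        r = sigmoid(h @ self.W.T + self.b_v)
        d_r = (r - x) * r * (1.0 - r)          # gradient at the output pre-activation
        d_h = (d_r @ self.W) * h * (1.0 - h)   # back-propagated to the hidden layer
        self.W -= lr * (np.outer(x_noisy, d_h) + np.outer(d_r, h))
        self.b_h -= lr * d_h
        self.b_v -= lr * d_r

def recon_error(dae, X):
    # Mean squared reconstruction error on clean inputs.
    h = sigmoid(X @ dae.W + dae.b_h)
    r = sigmoid(h @ dae.W.T + dae.b_v)
    return float(np.mean((r - X) ** 2))

def sts_train(dae, source, target, switches=3, epochs=20):
    # Source-Target-Source schedule: alternate whole training phases
    # between the two tasks instead of training on the target alone.
    for phase in range(switches):
        data = source if phase % 2 == 0 else target
        for _ in range(epochs):
            for x in data:
                dae.step(x)
    return dae

# Toy binary "tasks" standing in for the real image datasets.
source = (rng.random((20, 8)) > 0.5).astype(float)
target = (rng.random((20, 8)) > 0.5).astype(float)

dae = DenoisingAutoencoder(n_vis=8, n_hid=4)
before = recon_error(dae, target)
sts_train(dae, source, target)
after = recon_error(dae, target)
```

How many switches to use, and whether the final phase should run on the source or the target task, are choices the paper investigates; here they are fixed arbitrarily for the sketch.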

C. Kandaswamy—This work was financed by FEDER funds through the Programa Operacional Factores de Competitividade (COMPETE) and by Portuguese funds through FCT (Fundação para a Ciência e a Tecnologia) in the framework of project PTDC/EIA-EIA/119004/2010. We thank Faculdade de Engenharia, Universidade do Porto.


Notes

  1. The naming ‘source’ and ‘target’ is somewhat misleading in our learning framework.

  2. We would like to acknowledge the researchers who made their datasets available: the Center for Neural Science, New York University, for MNIST; Microsoft Research India for Chars74k; and the LISA lab, University of Montreal, Canada, for BabyAI shapes.


Author information


Corresponding author

Correspondence to Chetak Kandaswamy.

Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Kandaswamy, C., Silva, L.M., Cardoso, J.S. (2015). Source-Target-Source Classification Using Stacked Denoising Autoencoders. In: Paredes, R., Cardoso, J., Pardo, X. (eds.) Pattern Recognition and Image Analysis. IbPRIA 2015. Lecture Notes in Computer Science, vol. 9117. Springer, Cham. https://doi.org/10.1007/978-3-319-19390-8_5

  • DOI: https://doi.org/10.1007/978-3-319-19390-8_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-19389-2

  • Online ISBN: 978-3-319-19390-8

  • eBook Packages: Computer Science (R0)
