
Lifelong Unsupervised Learning

Chapter in: Lifelong Machine Learning
Abstract

This chapter focuses on lifelong unsupervised learning. Much of the work in this area has been done in the contexts of topic modeling and information extraction. These two areas are well suited to lifelong machine learning (LML). In the case of topic modeling, topics learned in the past in related domains can obviously be used to guide the modeling in the new or current domain [Chen and Liu, 2014a,b, Wang et al., 2016]. The knowledge base (KB) (Section 1.3) stores the past topics. Note that in this chapter, we use the terms domain and task interchangeably, as in the existing research; each task is from a different domain. In terms of information extraction (IE), LML is also natural because the goal of IE is to extract and accumulate as much useful information or knowledge as possible. The extraction process is thus by nature continuous and cumulative. The information extracted earlier can be used to help extract more information later with higher quality [Liu et al., 2016]. All of this matches the objectives of LML. In the IE case, the knowledge base stores the extracted information and some other forms of useful knowledge. Even with the current primitive LML techniques, these tasks can already produce significantly better results regardless of whether the data is large or small. These techniques are even more advantageous when the data is small: traditional topic models then produce very poor results, whereas lifelong topic models can still generate very good topics. Ideally, as the knowledge base expands, more extractions will be made and fewer errors will occur. This is similar to human learning: as we become more knowledgeable, it is easier to learn more and to make fewer mistakes. In the following sections, we discuss representative current techniques of lifelong unsupervised learning that are geared toward achieving these properties.
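To make the knowledge-base idea concrete, the short Python sketch below shows one way topics accumulated from past domains could be mined for must-link word pairs, the kind of shared knowledge a lifelong topic model can exploit in a new domain [Chen and Liu, 2014a]. The example data, the function name mine_must_links, and the support threshold are illustrative assumptions rather than the authors' implementation, and the step that injects the must-links into the topic model (e.g., via a Generalized Polya Urn sampler) is only indicated in a comment.

from collections import Counter
from itertools import combinations

# Knowledge base: top words of topics learned in past domains.
# Illustrative toy data, not the authors' datasets.
past_topics = {
    "camera_reviews": [
        ["price", "cost", "cheap", "expensive"],
        ["battery", "life", "charge", "power"],
    ],
    "phone_reviews": [
        ["price", "expensive", "cost", "value"],
        ["screen", "display", "resolution", "size"],
    ],
}

def mine_must_links(knowledge_base, min_support=2):
    """Return word pairs that appear together in the top words of some topic
    in at least `min_support` past domains (candidate must-links)."""
    pair_counts = Counter()
    for domain, topics in knowledge_base.items():
        pairs_in_domain = set()
        for top_words in topics:
            pairs_in_domain.update(combinations(sorted(set(top_words)), 2))
        pair_counts.update(pairs_in_domain)  # count each pair once per domain
    return {pair for pair, count in pair_counts.items() if count >= min_support}

must_links = mine_must_links(past_topics)
print(must_links)
# e.g. {('cost', 'price'), ('cost', 'expensive'), ('expensive', 'price')}

# In a lifelong topic model, these must-links would then bias inference
# (e.g., via a Generalized Polya Urn scheme) so that linked words tend to
# be assigned to the same topic when modeling the new domain.

The mining step is deliberately cheap and domain-level: a pair is counted at most once per past domain, so knowledge that recurs across domains is preferred over knowledge that is idiosyncratic to a single one.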



Copyright information

© 2017 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Chen, Z., Liu, B. (2017). Lifelong Unsupervised Learning. In: Lifelong Machine Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-031-01575-5_4
