Abstract
This chapter focuses on lifelong unsupervised learning. Much of the work in this area has been done in the contexts of topic modeling and information extraction, two areas that are well suited to lifelong machine learning (LML). In the case of topic modeling, topics learned in the past from related domains can naturally be used to guide the modeling of the new or current domain [Chen and Liu, 2014a,b, Wang et al., 2016]. The knowledge base (KB) (Section 1.3) stores the past topics. Note that in this chapter, we use the terms domain and task interchangeably, as in the existing research; each task is from a different domain. In the case of information extraction (IE), LML is also natural because the goal of IE is to extract and accumulate as much useful information or knowledge as possible. The extraction process is thus by nature continuous and cumulative. The information extracted earlier can be used to help extract more information, with higher quality, later [Liu et al., 2016]. All of this matches the objectives of LML. In the IE case, the knowledge base stores the extracted information and some other forms of useful knowledge. Even with the current primitive LML techniques, these tasks can already produce significantly better results regardless of whether the data is large or small. When the data size is small, these techniques are even more advantageous: traditional topic models produce very poor results on small data, but lifelong topic models can still generate very good topics. Ideally, as the knowledge base expands, more extractions will be made and fewer errors will occur. This is similar to human learning: as we become more knowledgeable, it is easier for us to learn more and to make fewer mistakes. In the following sections, we discuss the current representative techniques of lifelong unsupervised learning, which are geared toward achieving these properties.
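To make the idea of a knowledge base of past topics concrete, the following is a minimal sketch, not the authors' actual algorithm. It assumes a hypothetical KB represented simply as lists of top topic words per past domain, and it mines word pairs that co-occur in topics across domains as approximate "must-link" knowledge of the kind lifelong topic models use to guide modeling in a new domain. The function name `mine_must_links` and the support threshold are illustrative choices.

```python
from collections import Counter
from itertools import combinations

def mine_must_links(past_topics, min_support=2):
    """Count how often each word pair appears together in a past topic
    (across all domains) and keep pairs seen at least `min_support`
    times. Such frequently co-occurring pairs approximate must-link
    knowledge: words that should be encouraged to share a topic
    when modeling a new domain."""
    counts = Counter()
    for domain_topics in past_topics:          # one entry per past domain
        for topic in domain_topics:            # topic = list of top words
            for pair in combinations(sorted(set(topic)), 2):
                counts[pair] += 1
    return {pair for pair, c in counts.items() if c >= min_support}

# Hypothetical KB: top words of topics learned in two past review domains.
kb = [
    [["battery", "life", "charge"], ["price", "cheap", "cost"]],       # domain 1
    [["battery", "charge", "power"], ["screen", "display", "bright"]]  # domain 2
]
links = mine_must_links(kb, min_support=2)
print(links)  # {('battery', 'charge')}
```

Only ("battery", "charge") appears in topics of both past domains, so it survives the support threshold; a lifelong topic model would then bias inference in a new domain toward keeping such pairs in the same topic.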
Copyright information
© 2017 Springer Nature Switzerland AG
Cite this chapter
Chen, Z., Liu, B. (2017). Lifelong Unsupervised Learning. In: Lifelong Machine Learning. Synthesis Lectures on Artificial Intelligence and Machine Learning. Springer, Cham. https://doi.org/10.1007/978-3-031-01575-5_4
Online ISBN: 978-3-031-01575-5