Multiplicative multitask feature learning

X Wang, J Bi, S Yu, J Sun, M Song - Journal of Machine Learning Research, 2016 - jmlr.org
We investigate a general framework of multiplicative multitask feature learning which decomposes each task's model parameters into a product of two components: one component is shared across all tasks and the other is task-specific. Several previous methods can be shown to be special cases of our framework. We study the theoretical properties of this framework when different regularization conditions are applied to the two decomposed components. We prove that this framework is mathematically equivalent to the widely used multitask feature learning methods that are based on a joint regularization of all model parameters, but with a more general form of regularizers. Further, an analytical formula is derived that relates the across-task component to the task-specific component for all these regularizers, leading to a better understanding of the shrinkage effects of different regularizers. Study of this framework motivates new multitask learning algorithms. We propose two new learning formulations by varying the parameters in the proposed framework. An efficient blockwise coordinate descent algorithm, suitable for solving the entire family of formulations, is developed along with a rigorous convergence analysis. Simulation studies have identified the statistical properties of data that favor the new formulations. Extensive empirical studies on various classification and regression benchmark data sets have revealed the relative advantages of the two new formulations in comparison with the state of the art, providing instructive insights into the feature learning problem with multiple tasks.
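As a rough illustration (not the authors' implementation), the multiplicative decomposition can be sketched as follows: each task's weight vector is modeled as w_t = c ⊙ v_t, where c is shared across tasks and v_t is task-specific, and the two blocks are updated alternately in a blockwise coordinate descent loop. All names, the squared loss, the ℓ1-style penalties, and the learning rate below are hypothetical choices for this toy sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: T regression tasks that share a sparse feature subset.
T, n, d = 3, 50, 10
c_true = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0], float)  # shared across-task component
Xs = [rng.normal(size=(n, d)) for _ in range(T)]
ys = []
for X in Xs:
    v = rng.normal(size=d)          # task-specific component
    ys.append(X @ (c_true * v))     # task weights w_t = c * v_t (elementwise)

# Initialize the two decomposed blocks.
c = np.ones(d)
Vs = [np.zeros(d) for _ in range(T)]
lam_c, lam_v, lr = 0.1, 0.1, 0.01   # assumed penalty weights and step size

# Blockwise coordinate descent sketch: alternate (sub)gradient steps on the
# task-specific blocks v_t and on the shared block c.
for _ in range(500):
    for t in range(T):
        w = c * Vs[t]
        resid = Xs[t] @ w - ys[t]
        grad_v = c * (Xs[t].T @ resid) / n + lam_v * np.sign(Vs[t])
        Vs[t] -= lr * grad_v
    grad_c = lam_c * np.sign(c)
    for t in range(T):
        w = c * Vs[t]
        resid = Xs[t] @ w - ys[t]
        grad_c += Vs[t] * (Xs[t].T @ resid) / n
    c -= lr * grad_c

# Mean squared error across tasks after fitting.
err = np.mean([np.mean((Xs[t] @ (c * Vs[t]) - ys[t]) ** 2) for t in range(T)])
```

Here a small magnitude of c[j] suppresses feature j in every task at once, which is the shrinkage behavior the abstract attributes to the shared component; the choice of penalties on c and v_t corresponds to the regularization conditions the paper varies.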