KDD Conference Proceedings · Poster
DOI: 10.1145/2020408.2020593

Multi-view transfer learning with a large margin approach

Published: 21 August 2011

Abstract

Transfer learning has been proposed to address the scarcity of labeled data in the target domain by leveraging data from the source domain. In many real-world applications, data is represented from several perspectives, which correspond to multiple views. For example, a web page can be described by both its contents and its associated links. However, most existing transfer learning methods fail to capture this multi-view nature, and might not be best suited for such applications.
To better leverage both the labeled data from the source domain and the features from different views, this paper proposes a general framework: Multi-View Transfer Learning with a Large Margin Approach (MVTL-LM). On one hand, labeled data from the source domain is effectively utilized to construct a large margin classifier; on the other hand, data from both domains is employed to impose consistency among the multiple views. As an instantiation of this framework, we propose an efficient optimization method that is guaranteed to converge to ε precision in O(1/ε) steps. Furthermore, we analyze its error bound, which improves over existing results for related methods. An extensive set of experiments demonstrates the advantages of the proposed method over state-of-the-art techniques.
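The framework described above pairs a large margin (hinge-loss) objective on the labeled source data with a consistency penalty that ties the views together across both domains. The following is a minimal sketch of that idea only, not the paper's MVTL-LM solver: it assumes linear per-view classifiers and uses plain subgradient descent rather than the O(1/ε) optimization method the paper proposes, and every function and parameter name here (`mvtl_sketch`, `lam`, `gamma`, and so on) is invented for illustration.

```python
import numpy as np

def mvtl_sketch(Xs1, Xs2, ys, Xt1, Xt2,
                lam=1.0, gamma=0.1, lr=0.01, epochs=200):
    """Toy two-view transfer objective (illustration only, not MVTL-LM).

    Minimizes: hinge loss of each view's linear classifier on the labeled
    source data, plus lam * squared disagreement between the two views'
    scores over source AND target examples, plus gamma * L2 regularization,
    via plain subgradient descent.
    """
    w1 = np.zeros(Xs1.shape[1])
    w2 = np.zeros(Xs2.shape[1])
    X1 = np.vstack([Xs1, Xt1])  # view 1, both domains
    X2 = np.vstack([Xs2, Xt2])  # view 2, both domains
    for _ in range(epochs):
        # Hinge-loss subgradient on the labeled source domain
        m1 = ys * (Xs1 @ w1)
        m2 = ys * (Xs2 @ w2)
        g1 = -(ys[m1 < 1, None] * Xs1[m1 < 1]).sum(axis=0) / len(ys)
        g2 = -(ys[m2 < 1, None] * Xs2[m2 < 1]).sum(axis=0) / len(ys)
        # Gradient of the cross-view consistency penalty (both domains)
        d = X1 @ w1 - X2 @ w2
        g1 += lam * (X1.T @ d) / len(d)
        g2 -= lam * (X2.T @ d) / len(d)
        # L2 regularization
        g1 += gamma * w1
        g2 += gamma * w2
        w1 -= lr * g1
        w2 -= lr * g2
    return w1, w2

def predict(w1, w2, X1, X2):
    # Final label sums the two view scores
    return np.sign(X1 @ w1 + X2 @ w2)
```

The `lam` term is what lets unlabeled target data influence the learned classifiers: it pushes the two views toward agreement on every example, source or target, while the hinge term uses only the source labels.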




    Published In

    KDD '11: Proceedings of the 17th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
    August 2011, 1446 pages
    ISBN: 9781450308137
    DOI: 10.1145/2020408

    Publisher

    Association for Computing Machinery
    New York, NY, United States

    Author Tags

    1. large margin approach
    2. multi-view learning
    3. transfer learning

    Qualifiers

    • Poster

    Conference

    KDD '11

    Acceptance Rates

    Overall Acceptance Rate 1,133 of 8,635 submissions, 13%

    Cited By

    • (2023) Mutual Supervised Fusion & Transfer Learning with Interpretable Linguistic Meaning for Social Data Analytics. ACM Transactions on Asian and Low-Resource Language Information Processing, 22(5):1-20. DOI: 10.1145/3568675
    • (2022) Deep Multimodal Transfer Learning for Cross-Modal Retrieval. IEEE Transactions on Neural Networks and Learning Systems, 33(2):798-810. DOI: 10.1109/TNNLS.2020.3029181
    • (2022) Clustering Ensemble Based on Hybrid Multiview Clustering. IEEE Transactions on Cybernetics, 52(7):6518-6530. DOI: 10.1109/TCYB.2020.3034157
    • (2022) Learning Shared Mobility-Aware Knowledge for Multiple Urban Travel Demands. IEEE Internet of Things Journal, 9(9):7025-7035. DOI: 10.1109/JIOT.2021.3115174
    • (2022) Double embedding-transfer-based multi-view spectral clustering. Expert Systems with Applications, 210:118374. DOI: 10.1016/j.eswa.2022.118374
    • (2021) Contextual Skill Proficiency via Multi-task Learning at LinkedIn. Proceedings of the 30th ACM International Conference on Information & Knowledge Management, 4273-4282. DOI: 10.1145/3459637.3481904
    • (2021) Adaptive Similarity Embedding for Unsupervised Multi-View Feature Selection. IEEE Transactions on Knowledge and Data Engineering, 33(10):3338-3350. DOI: 10.1109/TKDE.2020.2969860
    • (2021) Cross-Dataset Point Cloud Recognition Using Deep-Shallow Domain Adaptation Network. IEEE Transactions on Image Processing, 30:7364-7377. DOI: 10.1109/TIP.2021.3092818
    • (2021) A Comprehensive Survey on Transfer Learning. Proceedings of the IEEE, 109(1):43-76. DOI: 10.1109/JPROC.2020.3004555
    • (2021) Multi-View Collaborative Learning for Semi-Supervised Domain Adaptation. IEEE Access, 9:166488-166501. DOI: 10.1109/ACCESS.2021.3136567
