
Learning multiple metrics for ranking

  • Research Article
  • Published in: Frontiers of Computer Science in China

Abstract

Directly optimizing an information retrieval (IR) metric has become a hot topic in the field of learning to rank. Conventional wisdom holds that it is best to train on the same metric that will be used for evaluation, but in practice we often observe otherwise. For example, directly optimizing average precision achieves higher performance than directly optimizing precision@3 when the ranking results are evaluated in terms of precision@3. This motivates us to combine multiple metrics when optimizing IR metrics; for simplicity, we study learning with two metrics. Since the learning process is usually conducted in a restricted hypothesis space, e.g., a linear hypothesis space, it is generally difficult to maximize both metrics at the same time. To tackle this problem, we propose a relaxed approach in this paper: we incorporate one metric as a constraint while maximizing the other. By restricting the feasible hypothesis space, we obtain a more robust ranking model. Empirical results on the LETOR benchmark data set show that the relaxed approach is superior to a direct linear combination of the two metrics, and also outperforms other baselines.
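
To make the relaxed formulation concrete, here is a minimal sketch (ours, not the paper's implementation). It evaluates a linear ranking model on the two metrics from the abstract, average precision (AP) and precision@3, and then searches for weights that maximize mean AP subject to the constraint that mean P@3 stays above a threshold. The names relaxed_search and tau, the binary-relevance assumption, and the naive random search are illustrative choices; the paper solves the constrained problem with its own optimization procedure.

```python
import numpy as np

def precision_at_k(labels_sorted, k=3):
    """Precision@k: fraction of relevant documents among the top k.
    Assumes binary relevance labels, already sorted by the model's ranking."""
    return float(np.mean(labels_sorted[:k]))

def average_precision(labels_sorted):
    """Average precision of one ranked list of binary labels."""
    labels_sorted = np.asarray(labels_sorted, dtype=float)
    n_rel = labels_sorted.sum()
    if n_rel == 0:
        return 0.0
    ranks = np.arange(1, len(labels_sorted) + 1)
    prec = np.cumsum(labels_sorted) / ranks      # P@k at every rank position
    return float((prec * labels_sorted).sum() / n_rel)

def evaluate(w, queries):
    """Mean AP and mean P@3 of the linear scorer x -> w.x over
    a list of (feature_matrix, labels) pairs, one pair per query."""
    aps, p3s = [], []
    for X, y in queries:
        order = np.argsort(-(X @ w))             # rank documents by descending score
        y_sorted = np.asarray(y)[order]
        aps.append(average_precision(y_sorted))
        p3s.append(precision_at_k(y_sorted, k=3))
    return float(np.mean(aps)), float(np.mean(p3s))

def relaxed_search(queries, dim, tau=0.5, n_trials=2000, seed=0):
    """Maximize mean AP subject to mean P@3 >= tau by naive random
    search over linear models (a stand-in for a real constrained solver)."""
    rng = np.random.default_rng(seed)
    best_w, best_ap = None, -np.inf
    for _ in range(n_trials):
        w = rng.normal(size=dim)
        ap, p3 = evaluate(w, queries)
        if p3 >= tau and ap > best_ap:           # keep only feasible improvements
            best_w, best_ap = w, ap
    return best_w, best_ap                       # best_w is None if nothing is feasible
```

On LETOR-style data, queries would hold one (feature matrix, relevance labels) pair per query. In practice one would replace the random search with a proper constrained optimizer, but the feasibility test p3 >= tau is exactly where the second metric restricts the hypothesis space, as the abstract describes.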

References

  1. Burges C, Shaked T, Renshaw E, Lazier A, Deeds M, Hamilton N, Hullender G. Learning to rank using gradient descent. In: Proceedings of 22nd International Conference on Machine Learning. 2005, 89–96

  2. Freund Y, Iyer R, Schapire R E, Singer Y. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 2003, 4: 933–969

  3. Joachims T. Optimizing search engines using clickthrough data. In: Proceedings of 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2002, 133–142

  4. Cao Z, Qin T, Liu T Y, Tsai M F, Li H. Learning to rank: from pairwise approach to listwise approach. In: Proceedings of 24th International Conference on Machine Learning. 2007, 129–136

  5. Xia F, Liu T Y, Wang J, Zhang W, Li H. Listwise approach to learning to rank: theory and algorithm. In: Proceedings of 25th International Conference on Machine Learning. 2008, 1192–1199

  6. Robertson S. On the optimisation of evaluation metrics. In: Proceedings of SIGIR 2008 Workshop on Learning to Rank. 2008

  7. Chakrabarti S, Khanna R, Sawant U, Bhattacharyya C. Structured learning for non-smooth ranking losses. In: Proceedings of 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2008, 88–96

  8. Caruana R. Multitask learning. Machine Learning, 1997, 28(1): 41–75

  9. Baxter J. A model of inductive bias learning. Journal of Artificial Intelligence Research, 2000, 12: 149–198

  10. Xu J, Liu T Y, Lu M, Li H, Ma W Y. Directly optimizing evaluation measures in learning to rank. In: Proceedings of 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. 2008, 107–114

  11. Taylor M, Guiver J, Robertson S, Minka T. SoftRank: optimizing non-smooth rank metrics. In: Proceedings of 1st International Conference on Web Search and Web Data Mining. 2008, 77–86

  12. Qin T, Liu T Y, Li H. A general approximation framework for direct optimization of information retrieval measures. Technical Report MSR-TR-2008-164, Microsoft Corporation, 2008

  13. Yue Y, Finley T, Radlinski F, Joachims T. A support vector method for optimizing average precision. In: Proceedings of 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. 2007, 271–278

  14. Xu J, Li H. AdaRank: a boosting algorithm for information retrieval. In: Proceedings of 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. 2007, 391–398

  15. Liu T Y, Xu J, Qin T, Xiong W, Li H. LETOR: benchmark dataset for research on learning to rank for information retrieval. In: Proceedings of the Learning to Rank Workshop in the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. 2007

Author information

Corresponding author

Correspondence to Xiubo Geng.

Additional information

Xiubo Geng is a Ph.D. candidate at the Institute of Computing Technology, Chinese Academy of Sciences. Her research interests include machine learning, information retrieval, and graphical models. She received her Bachelor's degree from the University of Science and Technology of China.

Xue-Qi Cheng is a Professor at the Institute of Computing Technology, Chinese Academy of Sciences (ICT-CAS), and the director of the Key Laboratory of Network Science and Technology at ICT-CAS. His main research interests include network science, Web search and data mining, P2P and distributed systems, and information security. He has published over 100 papers in prestigious journals and international conferences, including New Journal of Physics, Journal of Statistical Mechanics: Theory and Experiment, IEEE Transactions on Information Theory, ACM SIGIR, WWW, ACM CIKM, WSDM, and so on. He currently serves on the editorial boards of Journal of Computer Science and Technology, Journal of Computer Research and Development, and Journal of Computer.

About this article

Cite this article

Geng, X., Cheng, XQ. Learning multiple metrics for ranking. Front. Comput. Sci. China 5, 259–267 (2011). https://doi.org/10.1007/s11704-011-0152-5

  • DOI: https://doi.org/10.1007/s11704-011-0152-5
