DOI: 10.1145/2645710.2645780
Tutorial

REDD 2014 -- international workshop on recommender systems evaluation: dimensions and design

Published: 06 October 2014

Abstract

Evaluation is a cardinal issue in recommender systems: as in any technical discipline, it largely defines the problems the field needs to solve and thus steers algorithmic research and development in the community. Yet considerable disparity remains in evaluation methods, metrics, and experimental designs, along with a significant mismatch between evaluation methods in the lab and what constitutes an effective recommendation for real users and businesses. Even once the relevant quality dimensions have been defined, a clear evaluation protocol needs to be specified in detail and agreed upon, so that results and experiments by different authors can be compared. This would let contributions to the same problem be incremental, building on previous work rather than growing sideways. The REDD 2014 workshop seeks to provide an informal forum to tackle such issues and to move towards better-understood, shared evaluation methodologies, leveraging the efforts of the academic community towards meaningful and relevant directions in real-world developments.
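The protocol disparity the abstract describes is easy to make concrete: even for a single, widely used metric such as precision@k, the experimental-design choice of which candidate items get ranked changes the measured score. The sketch below is illustrative only (the data and the candidate-set protocols are hypothetical, not taken from the workshop):

```python
def precision_at_k(ranked_items, relevant, k):
    """Fraction of the top-k recommended items that are relevant."""
    top_k = ranked_items[:k]
    return sum(1 for item in top_k if item in relevant) / k

# Hypothetical test-set relevance judgements for one user.
relevant = {"a", "c", "f"}

# Protocol 1: the recommender ranks every unseen item in the catalogue.
ranking_full_catalogue = ["a", "b", "c", "d", "e", "f"]

# Protocol 2: the recommender ranks the relevant items against only a
# small sample of unrated items, which tends to inflate the score.
ranking_sampled_candidates = ["a", "c", "f", "x", "y"]

print(precision_at_k(ranking_full_catalogue, relevant, 3))     # 2 of 3
print(precision_at_k(ranking_sampled_candidates, relevant, 3)) # 3 of 3
```

Both runs evaluate the same notional recommender with the same metric, yet report different numbers, which is why results are not comparable across papers unless the full protocol, not just the metric, is specified and agreed upon.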


Cited By

  • (2024) Introduction to Recommendation Systems. Recommender Systems: Algorithms and their Applications, doi:10.1007/978-981-97-0538-2_1, pp. 1-10. Online publication date: 12-Jun-2024.
  • (2019) Online ranking combination. Proceedings of the 13th ACM Conference on Recommender Systems, doi:10.1145/3298689.3346993, pp. 12-19. Online publication date: 10-Sep-2019.
  • (2015) Replicable Evaluation of Recommender Systems. Proceedings of the 9th ACM Conference on Recommender Systems, doi:10.1145/2792838.2792841, pp. 363-364. Online publication date: 16-Sep-2015.


      Published In

      RecSys '14: Proceedings of the 8th ACM Conference on Recommender systems
      October 2014
      458 pages
      ISBN:9781450326681
      DOI:10.1145/2645710

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. evaluation
      2. methodology
      3. metrics
      4. recommender systems
      5. utility

      Qualifiers

      • Tutorial

      Conference

      RecSys'14
      Sponsor:
      RecSys'14: Eighth ACM Conference on Recommender Systems
      October 6 - 10, 2014
Foster City, California, USA

      Acceptance Rates

RecSys '14 Paper Acceptance Rate: 35 of 234 submissions, 15%
Overall Acceptance Rate: 254 of 1,295 submissions, 20%


