The good and the bad system: does the test collection predict users' effectiveness?

Published: 20 July 2008
DOI: 10.1145/1390334.1390347

Abstract

Test collections are extensively used in the evaluation of information retrieval systems. Crucial to their use is the degree to which results obtained from them predict user effectiveness. Early studies failed to substantiate a relationship between system and user effectiveness; more recently, however, correlations have begun to emerge. The results of this paper strengthen and extend those findings. We introduce a novel methodology for investigating the relationship, which succeeds in establishing a significant correlation between system and user effectiveness. We show that users behave differently with, and can discern the difference between, pairs of systems whose test collection effectiveness differs by only a small absolute margin. Our results support the use of test collections in IR evaluation, confirming that users' effectiveness can be predicted successfully.
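
To make the question concrete: a study of this kind scores each system twice, once with a test collection measure (for example, mean average precision) and once with a user measure (for example, task success rate), and then asks whether the two scorings rank the systems alike. The sketch below is not the paper's methodology; it is a minimal illustration, with invented per-system numbers, of how such a system-level rank correlation can be computed.

# Minimal sketch (not the paper's method): correlate per-system effectiveness
# scores from a test collection with per-system effectiveness measured on
# users. All system scores below are invented for illustration.

def ranks(values):
    """Return 1-based ranks for `values`, averaging ranks over ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    result = [0.0] * len(values)
    i = 0
    while i < len(order):
        # Find the block of tied values starting at sorted position i.
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            result[order[k]] = avg_rank
        i = j + 1
    return result

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-system scores: test-collection MAP vs. mean user success
# rate for the same five systems.
map_scores = [0.21, 0.25, 0.28, 0.33, 0.35]
user_success = [0.48, 0.55, 0.52, 0.61, 0.66]

print(f"Spearman rho: {spearman(map_scores, user_success):.2f}")  # rho = 0.90

With these made-up numbers the two measures order the five systems almost identically (rho = 0.90); the paper's question is whether real test collection scores and real user performance agree in this way, and its finding is that they do.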


      Published In

      SIGIR '08: Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
      July 2008
      934 pages
ISBN: 9781605581644
DOI: 10.1145/1390334
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.


      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. effectiveness measures
      2. test collection
      3. user study

      Qualifiers

      • Research-article

      Conference

      SIGIR '08

      Acceptance Rates

      Overall Acceptance Rate 792 of 3,983 submissions, 20%


      Cited By

• (2023) Clustering of Relevant Documents Based on Findability Effort in Information Retrieval. International Journal of Information Retrieval Research 12(1), 1-18. DOI: 10.4018/IJIRR.315764. Online publication date: 6-Jan-2023
• (2023) How Well do Offline Metrics Predict Online Performance of Product Ranking Models? Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, 3415-3420. DOI: 10.1145/3539618.3591865. Online publication date: 19-Jul-2023
• (2022) Proposing a New Combined Indicator for Measuring Search Engine Performance and Evaluating Google, Yahoo, DuckDuckGo, and Bing Search Engines based on Combined Indicator. Journal of Librarianship and Information Science 56(1), 178-197. DOI: 10.1177/09610006221138579. Online publication date: 8-Dec-2022
• (2022) Pilot trial comparing COVID-19 publication database to conventional online search methods. BMJ Health & Care Informatics 29(1), e100616. DOI: 10.1136/bmjhci-2022-100616. Online publication date: 8-Nov-2022
• (2021) DiffIR: Exploring Differences in Ranking Models' Behavior. Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2595-2599. DOI: 10.1145/3404835.3462784. Online publication date: 11-Jul-2021
• (2021) On the Instability of Diminishing Return IR Measures. Advances in Information Retrieval, 572-586. DOI: 10.1007/978-3-030-72113-8_38. Online publication date: 27-Mar-2021
• (2021) The Digital Resources Objects Retrieval: Concepts and Figures. Innovative Systems for Intelligent Health Informatics, 430-438. DOI: 10.1007/978-3-030-70713-2_40. Online publication date: 6-May-2021
• (2020) Retrieval Evaluation Measures that Agree with Users' SERP Preferences. ACM Transactions on Information Systems 39(2), 1-35. DOI: 10.1145/3431813. Online publication date: 31-Dec-2020
• (2019) Which Diversity Evaluation Measures Are "Good"? Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval, 595-604. DOI: 10.1145/3331184.3331215. Online publication date: 18-Jul-2019
• (2019) The Evolution of Cranfield. Information Retrieval Evaluation in a Changing World, 45-69. DOI: 10.1007/978-3-030-22948-1_2. Online publication date: 14-Aug-2019
