
How many results per page?: A Study of SERP Size, Search Behavior and User Experience

Published: 09 August 2015
DOI: 10.1145/2766462.2767732

Abstract

The provision of "ten blue links" has emerged as the standard for the design of search engine result pages (SERPs). While numerous aspects of SERPs have been examined, little attention has been paid to the number of results displayed per page. This paper investigates the relationships among the number of results shown on a SERP, search behavior and user experience. We performed a laboratory experiment with 36 subjects, who were randomly assigned to use one of three search interfaces that varied according to the number of results per SERP (three, six or ten). We found subjects' click distributions differed significantly depending on SERP size. We also found those who interacted with three results per page viewed significantly more SERPs per query; interestingly, the number of SERPs they viewed per query corresponded to about 10 search results. Subjects who interacted with ten results per page viewed and saved significantly more documents. They also reported the greatest difficulty finding relevant documents, rated their skills the lowest and reported greater workload, even though these differences were not significant. This work shows that behavior changes with SERP size, such that more time is spent focused on earlier results when SERP size decreases.
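To make the abstract's back-of-envelope observation concrete, the short Python sketch below shows how SERP size and pages viewed per query combine into the number of results a searcher effectively covers. It is illustrative only and uses the figures quoted in the abstract ("about 10 search results" covered in the three-results-per-page condition), not the paper's underlying data.

```python
# Illustrative sketch only: values are taken from the abstract's summary,
# not from the study's full data set.

def results_covered(serp_size: int, serps_viewed_per_query: float) -> float:
    """Results a searcher scans per query: SERP size x SERPs viewed."""
    return serp_size * serps_viewed_per_query

# The abstract implies ~10 results covered per query at 3 results per page,
# i.e. roughly 10 / 3 ≈ 3.3 SERPs viewed per query in that condition.
implied_serps_per_query = 10 / 3
print(f"Implied SERPs viewed per query (3-result condition): "
      f"{implied_serps_per_query:.1f}")
print(f"Results covered: {results_covered(3, implied_serps_per_query):.0f}")
```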

References

[1]
J. Arguello, W.-C. Wu, D. Kelly, and A. Edwards. Task complexity, vertical display and user interaction in aggregated search. In Proceedings of the 35th International ACM SIGIR Conference, SIGIR '12, pages 435--444, 2012.
[2]
A. Aula, R. M. Khan, Z. Guan, P. Fontes, and P. Hong. A comparison of visual and textual page previews in judging the helpfulness of web pages. In Proceedings of the 19th International WWW Conference, pages 51--60, 2010.
[3]
L. Azzopardi, D. Kelly, and K. Brennan. How query cost affects search behavior. In Proc. of the 36th International ACM SIGIR Conference, pages 23--32, 2013.
[4]
J. Bar-Ilan, K. Keenoy, M. Levene, and E. Yaari. Presentation bias is significant in determining user preference for search results: A user study. J. of the Am. Soc. for Info. Sci. and Tech., 60(1):135--149, 2009.
[5]
F. Chierichetti, R. Kumar, and P. Raghavan. Optimizing two-dimensional search results presentation. In Proc. of the 4th Int. ACM WSDM Conference, pages 257--266, 2011.
[6]
C. L. A. Clarke, E. Agichtein, S. Dumais, and R. W. White. The influence of caption features on clickthrough patterns in web search. In Proc. of the 30th Annual International ACM SIGIR Conference, pages 135--142, 2007.
[7]
N. Craswell, O. Zoeter, M. Taylor, and B. Ramsey. An experimental comparison of click position-bias models. In Proceedings of the 2008 International Conference on Web Search and Data Mining, WSDM '08, pages 87--94, 2008.
[8]
E. Cutrell and Z. Guan. What are you looking for?: an eye-tracking study of information usage in web search. In Proc. of the SIGCHI conference, pages 407--416, 2007.
[9]
S. Debowski, R. Wood, and A. Bandura. The impact of guided exploration & enactive exploration on self-regulatory mechanisms & information acquisition through electronic enquiry. J. of Applied Psych., 86:1129--1141, 2001.
[10]
M. A. Hearst. Tilebars: Visualization of term distribution information in full text information access. In Proceedings of the SIGCHI Conference, pages 59--66, 1995.
[11]
K. Jarvelin and J. Kekalainen. Cumulated gain-based evaluation of ir techniques. ACM Trans. Inf. Syst., 20(4):422--446, Oct. 2002.
[12]
T. Joachims, L. Granka, B. Pan, H. Hembrooke, and G. Gay. Accurately interpreting clickthrough data as implicit feedback. In Proceedings of the 28th International ACM SIGIR Conference, pages 154--161, 2005.
[13]
H. Joho and J. M. Jose. A comparative study of the effectiveness of search result presentation on the web. In Proceedings of the 28th European Conference on Information Retrieval, pages 302--313, 2006.
[14]
M. Jones, G. Marsden, N. Mohd-Nasir, K. Boone, and G. Buchanan. Improving web interaction on small displays. In Proceedings of the Eighth International Conference on World Wide Web, WWW '99, pages 1129--1137, New York, NY, USA, 1999. Elsevier North-Holland, Inc.
[15]
Y. Kammerer and P. Gerjets. How the interface design influences users' spontaneous trustworthiness evaluations of web search results: comparing a list and a grid interface. In Proc. of the Symp. on Eye-Tracking Research & Applications, pages 299--306, 2010.
[16]
M. T. Keane, M. O'Brien, and B. Smyth. Are people biased in their use of search engines? Communications of the ACM, 51(2):49--52, 2008.
[17]
D. Kelly, K. Gyllstrom, and E. W. Bailey. A comparison of query and term suggestion features for interactive searching. In Proceedings of the 32nd ACM SIGIR conference, pages 371--378, 2009.
[18]
J. Kim, P. Thomas, R. Sankaranarayana, T. Gedeon, and H.-J. Yoon. Eye-tracking analysis of user behavior and performance in web search on large and small screens. J. of the Assoc. for Information Science and Technology, 2014.
[19]
G. Linden. Marissa mayer at web 2.0, November 2006. http://glinden.blogspot.com/2006/11/marissa-mayer-at-web-20.html.
[20]
H. Liu, X. Xie, X. Tang, Z.-W. Li, and W.-Y. Ma. Effective browsing of web image search results. In Proceedings of the 6th ACM SIGMM International Workshop on Multimedia Information Retrieval, pages 84--90, 2004.
[21]
L. Lorigo, B. Pan, H. Hembrooke, T. Joachims, L. Granka, and G. Gay. The influence of task and gender on search and evaluation behavior using google. Information Processing & Management, 42(4):1123--1131, 2006.
[22]
A. Moffat and J. Zobel. Rank-biased precision for measurement of retrieval effectiveness. ACM Trans. on Information Systems, 27(1):2:1--2:27, 2008.
[23]
A. Oulasvirta, J. P. Hukkinen, and B. Schwartz. When more is less: The paradox of choice in search engine use. In Proceedings of the 32nd International ACM SIGIR Conference, pages 516--523, 2009.
[24]
T. Paek, S. Dumais, and R. Logan. Wavelens: A new view onto internet search results. In Proceedings of the SIGCHI Conference, pages 727--734, 2004.
[25]
H. Reiterer, G. Tullius, and T. M. Mann. Insyder: a content-based visual-information-seeking system for the web. Int. Journal on Digital Libraries, 5(1):25--41, 2005.
[26]
M. L. Resnick, C. Maldonado, J. M. Santos, and R. Lergier. Modeling on-line search behavior using alternative output structures. In Proc. of the Human Factors and Ergonomics Soc. Annual Meeting, volume 45, pages 1166--1170, 2001.
[27]
S. E. Robertson. The probability ranking principle in ir. Journal of documentation, 33(4):294--304, 1977.
[28]
M. D. Smucker and C. L. Clarke. Time-based calibration of effectiveness measures. In Proceedings of the 35th ACM SIGIR conference, pages 95--104, 2012.
[29]
S. Sushmita, H. Joho, M. Lalmas, and R. Villa. Factors affecting click-through behavior in aggregated search interfaces. In Proceedings of the 19th International ACM CIKM Conference, CIKM '10, pages 519--528, 2010.
[30]
S. Sweeney and F. Crestani. Effective search results summary size and device screen size: Is there a relationship? IPM, 42(4):1056--1074, 2006.
[31]
J. Teevan, E. Cutrell, D. Fisher, S. M. Drucker, G. Ramos, P. André, and C. Hu. Visual snippets: Summarizing web pages for search and revisitation. In Proceedings of the SIGCHI Conference, pages 2023--2032, 2009.
[32]
A. Tombros and M. Sanderson. Advantages of query biased summaries in information retrieval. In Proc. of the 21st International ACM SIGIR Conference, pages 2--10, 1998.
[33]
E. M. Voorhees. Overview of the trec 2005 robust retrieval track. In Proceedings of TREC-14, 2006.
[34]
A. Woodruff, R. Rosenholtz, J. B. Morrison, A. Faulring, and P. Pirolli. A comparison of the use of text summaries, plain thumbnails, and enhanced thumbnails for web search tasks. J. Am. Soc. Inf. Sci. Technol., 53(2):172--185, 2002.
[35]
Y. Yue, R. Patel, and H. Roehrig. Beyond position bias: Examining result attractiveness as a source of presentation bias in clickthrough data. In Proc. of the 19th Int. Conference on World Wide Web, pages 1011--1018, 2010.
[36]
K. Zhou, R. Cummins, M. Lalmas, and J. M. Jose. Evaluating aggregated search pages. In Proceedings of the 35th International ACM SIGIR Conference, SIGIR '12, pages 115--124, 2012.

Published In

SIGIR '15: Proceedings of the 38th International ACM SIGIR Conference on Research and Development in Information Retrieval
August 2015, 1198 pages
ISBN: 9781450336215
DOI: 10.1145/2766462

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. search behavior
2. search interface
3. search result page
4. user studies

Qualifiers

• Research-article

Conference

SIGIR '15

Acceptance Rates

SIGIR '15 Paper Acceptance Rate: 70 of 351 submissions, 20%
Overall Acceptance Rate: 792 of 3,983 submissions, 20%

Article Metrics

• Downloads (last 12 months): 82
• Downloads (last 6 weeks): 10

Reflects downloads up to 03 Jan 2025

Cited By

• (2024) Mediación algorítmica: sesgos en la búsqueda de información en estudiantes universitarios [Algorithmic mediation: biases in university students' information seeking]. European Public & Social Innovation Review, 10.31637/epsir-2024-397, 9 (1-18). Online publication date: 19-Jul-2024.
• (2024) The Influence of Presentation and Performance on User Satisfaction. Proceedings of the 2024 Conference on Human Information Interaction and Retrieval, 10.1145/3627508.3638335 (77-86). Online publication date: 10-Mar-2024.
• (2024) Optimization of Information Retrieval Systems for Learning Contexts. International Journal of Artificial Intelligence in Education, 10.1007/s40593-024-00415-z. Online publication date: 8-Jul-2024.
• (2024) Analyzing Adversarial Attacks on Sequence-to-Sequence Relevance Models. Advances in Information Retrieval, 10.1007/978-3-031-56060-6_19 (286-302). Online publication date: 16-Mar-2024.
• (2023) A Systematic Review of Cost, Effort, and Load Research in Information Search and Retrieval, 1972–2020. ACM Transactions on Information Systems, 10.1145/3583069, 42:1 (1-39). Online publication date: 18-Aug-2023.
• (2023) Users Meet Clarifying Questions: Toward a Better Understanding of User Interactions for Search Clarification. ACM Transactions on Information Systems, 10.1145/3524110, 41:1 (1-25). Online publication date: 9-Jan-2023.
• (2023) Analysis of types of and language used in online information available to patients with periodontitis. British Dental Journal, 10.1038/s41415-023-5525-2, 234:4 (253-258). Online publication date: 24-Feb-2023.
• (2023) Analysis of types of and language used in online information available to patients with periodontitis. BDJ Team, 10.1038/s41407-023-1809-2, 10:4 (22-28). Online publication date: 21-Apr-2023.
• (2023) Asking Clarifying Questions: To benefit or to disturb users in Web search? Information Processing & Management, 10.1016/j.ipm.2022.103176, 60:2 (103176). Online publication date: Mar-2023.
• (2023) In a Hurry: How Time Constraints and the Presentation of Web Search Results Affect User Behaviour and Experience. Web Engineering, 10.1007/978-3-031-34444-2_16 (221-235). Online publication date: 16-Jun-2023.
