DOI: 10.1145/3576840.3578278
Short paper
Open access

How to Make an Outlier? Studying the Effect of Presentational Features on the Outlierness of Items in Product Search Results

Published: 20 March 2023

Abstract

In two-sided marketplaces, items compete for attention from users, since attention translates into revenue for suppliers. Item exposure indicates how much attention items receive from users in a ranking, and it can be influenced by factors such as position bias. Recent work suggests that another phenomenon, related to inter-item dependencies, may also affect item exposure: outlier items in the ranking. A deeper understanding of outlier items is therefore crucial to determining an item's exposure distribution. In this work, we study the impact of different presentational e-commerce features on users' perception of the outlierness of an item in a search result page. Informed by the visual search literature, we design a set of crowdsourcing tasks in which we compare the observability of three main features: price, star rating, and discount tag. We find that several factors affect item outlierness, namely visual complexity (e.g., shape, color), discriminative item features, and value range. In particular, we observe that a distinctive visual feature such as a colored discount tag attracts users' attention far more easily than a large price difference, simply because its visual characteristics are easier to spot. Moreover, the magnitude of the deviation in each feature affects task complexity: as the similarity between outlier and non-outlier items increases, the task becomes more difficult.
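
The abstract does not spell out how outlierness in a single numeric feature might be operationalized. As a minimal, hypothetical sketch (not the paper's method), one could flag an item whose value deviates strongly from the rest of the ranked list using a z-score rule; the `flag_outliers` helper and the 2.0 threshold below are illustrative assumptions, not from the paper:

```python
import statistics

def flag_outliers(values, z_threshold=2.0):
    """Flag items whose feature value deviates strongly from the rest
    of the list, using a simple z-score rule (illustrative only)."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)  # population standard deviation
    if stdev == 0:
        return [False] * len(values)
    return [abs(v - mean) / stdev > z_threshold for v in values]

# A result list where one item's price deviates sharply from its neighbours.
prices = [19.99, 21.50, 18.75, 22.40, 20.25, 18.99, 99.00, 21.10]
print(flag_outliers(prices))
# -> [False, False, False, False, False, False, True, False]
```

A key finding of the paper is that such numeric deviation alone does not predict perceived outlierness: a visually salient cue like a colored discount tag can stand out more than a large price gap.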



Information

      Published In

      CHIIR '23: Proceedings of the 2023 Conference on Human Information Interaction and Retrieval
March 2023, 520 pages
ISBN: 9798400700354
DOI: 10.1145/3576840
Editors: Jacek Gwizdka, Soo Young Rieh
This work is licensed under a Creative Commons Attribution 4.0 International License.


      Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. Fairness
      2. Outliers
      3. Product search

      Qualifiers

      • Short-paper
      • Research
      • Refereed limited

      Conference

      CHIIR '23

      Acceptance Rates

Overall acceptance rate: 55 of 163 submissions (34%)

Article Metrics

• Total citations: 0
• Total downloads: 290
• Downloads (last 12 months): 154
• Downloads (last 6 weeks): 19

Reflects downloads up to 18 Jan 2025.

