DOI: 10.1145/3442188.3445933
Research Article
Open Access

From Optimizing Engagement to Measuring Value

Published: 01 March 2021

Abstract

Most recommendation engines today are based on predicting user engagement, e.g., predicting whether a user will click on an item or not. However, there is potentially a large gap between engagement signals and a desired notion of value that is worth optimizing for. We use the framework of measurement theory to (a) confront the designer with a normative question about what the designer values, (b) provide a general latent variable model approach that can be used to operationalize the target construct and directly optimize for it, and (c) guide the designer in evaluating and revising their operationalization. We implement our approach on the Twitter platform, applying it to millions of users. In line with established approaches to assessing the validity of measurements, we perform a qualitative evaluation of how well our model captures a desired notion of "value".
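As a rough illustration of the latent variable approach in (b), here is a minimal sketch, not the paper's actual model: it assumes a naive-Bayes-style setup in which binary engagement signals (e.g., click, like, reply) are conditionally independent noisy proxies of a binary latent "value" variable, fit by EM on synthetic data. The signal names, parameter values, and data below are hypothetical; the paper's deployed model is more elaborate.

```python
# Minimal illustrative sketch (NOT the paper's model): binary engagement
# signals are modeled as conditionally independent noisy proxies of a
# binary latent "value" V, and the model is fit with EM. All names,
# parameters, and data here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: k binary signals (say click, like, reply) drawn given a hidden V.
n, k = 50_000, 3
true_prior = 0.3                         # P(V = 1)
true_pos = np.array([0.70, 0.60, 0.80])  # P(S_j = 1 | V = 1)
true_neg = np.array([0.20, 0.10, 0.05])  # P(S_j = 1 | V = 0)
v = rng.random(n) < true_prior
s = np.where(v[:, None],
             rng.random((n, k)) < true_pos,
             rng.random((n, k)) < true_neg).astype(float)

# EM: alternate posterior inference over V (E-step) with parameter updates (M-step).
prior, pos, neg = 0.5, np.full(k, 0.6), np.full(k, 0.4)  # init breaks label symmetry
for _ in range(200):
    # E-step: q_i = P(V_i = 1 | s_i) under conditional independence.
    ll1 = np.log(prior) + (s * np.log(pos) + (1 - s) * np.log(1 - pos)).sum(axis=1)
    ll0 = np.log(1 - prior) + (s * np.log(neg) + (1 - s) * np.log(1 - neg)).sum(axis=1)
    q = 1.0 / (1.0 + np.exp(ll0 - ll1))
    # M-step: re-estimate the prior and per-signal conditionals from q.
    prior = q.mean()
    pos = (q[:, None] * s).sum(axis=0) / q.sum()
    neg = ((1 - q)[:, None] * s).sum(axis=0) / (1 - q).sum()

print("estimated P(V=1):        ", round(prior, 3))
print("estimated P(S_j=1 | V=1):", pos.round(3))
print("estimated P(S_j=1 | V=0):", neg.round(3))
# The posterior q can then serve as a training target, so the recommender
# optimizes estimated value rather than any single raw engagement signal.
```

Note that without an anchor, the latent variable's meaning is identified only up to relabeling and initialization; anchoring one signal whose conditional distribution is known or constrained (e.g., an explicit "did you find this valuable?" response) pins down what the latent actually measures, which is the role anchor-and-learn methods play in this literature.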

Supplementary Material

milli (milli.zip)
Supplemental movie, appendix, image, and software files for "From Optimizing Engagement to Measuring Value"




Information

Published In

FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency
March 2021
899 pages
ISBN: 9781450383097
DOI: 10.1145/3442188
This work is licensed under a Creative Commons Attribution 4.0 International License.


Publisher

Association for Computing Machinery

New York, NY, United States


Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

FAccT '21


Article Metrics

  • Downloads (last 12 months): 278
  • Downloads (last 6 weeks): 33
Reflects downloads up to 12 Dec 2024


Cited By

  • (2024) From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4806609. Online publication date: 2024.
  • (2024) Embedding Democratic Values into Social Media AIs via Societal Objective Functions. Proceedings of the ACM on Human-Computer Interaction, 8(CSCW1), 1-36. https://doi.org/10.1145/3641002. Online publication date: 26-Apr-2024.
  • (2024) The Fault in Our Recommendations: On the Perils of Optimizing the Measurable. Proceedings of the 18th ACM Conference on Recommender Systems, 200-208. https://doi.org/10.1145/3640457.3688144. Online publication date: 8-Oct-2024.
  • (2024) System-2 Recommenders: Disentangling Utility and Engagement in Recommendation Systems via Temporal Point-Processes. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1763-1773. https://doi.org/10.1145/3630106.3659004. Online publication date: 3-Jun-2024.
  • (2024) From Model Performance to Claim: How a Change of Focus in Machine Learning Replicability Can Help Bridge the Responsibility Gap. Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 1002-1013. https://doi.org/10.1145/3630106.3658951. Online publication date: 3-Jun-2024.
  • (2024) Clickbait vs. Quality: How Engagement-Based Optimization Shapes the Content Landscape in Online Platforms. Proceedings of the ACM Web Conference 2024, 36-45. https://doi.org/10.1145/3589334.3645353. Online publication date: 13-May-2024.
  • (2024) Evaluating Twitter's algorithmic amplification of low-credibility content: an observational study. EPJ Data Science, 13(1). https://doi.org/10.1140/epjds/s13688-024-00456-3. Online publication date: 7-Mar-2024.
  • (2024) An approach to sociotechnical transparency of social media algorithms using agent-based modelling. AI and Ethics. https://doi.org/10.1007/s43681-024-00527-1. Online publication date: 29-Jul-2024.
  • (2024) Skewed perspectives: examining the influence of engagement maximization on content diversity in social media feeds. Journal of Computational Social Science, 7(1), 721-739. https://doi.org/10.1007/s42001-024-00255-w. Online publication date: 20-Mar-2024.
  • (2023) Automating Automaticity: How the Context of Human Choice Affects the Extent of Algorithmic Bias. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4364729. Online publication date: 2023.
