Local Standards for Sample Size at CHI

Published: 07 May 2016

Abstract

We describe the primary ways researchers can determine the size of a sample of research participants, present the benefits and drawbacks of each of those methods, and focus on improving one method that could be useful to the CHI community: local standards. To determine local standards for sample size within the CHI community, we conducted an analysis of all manuscripts published at CHI 2014. We find that sample size for manuscripts published at CHI ranges from 1 to 916,000 and that the most common sample size is 12. We also find that sample size differs based on factors such as study setting and type of methodology employed. The outcome of this paper is an overview of the various ways sample size may be determined and an analysis of local standards for sample size within the CHI community. These contributions may be useful to researchers planning studies and to reviewers evaluating the validity of results.
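
The local-standards approach summarized above amounts to tallying the sample sizes reported by comparable published studies and using summary statistics of that distribution (its range, median, and most common value) as a planning benchmark. The short Python sketch below illustrates the idea under that assumption; the sample_sizes list is hypothetical and simply stands in for participant counts extracted from a venue's published manuscripts.

# A minimal sketch of deriving local standards for sample size.
# The sample_sizes list is hypothetical; in practice it would hold the
# participant counts reported by each manuscript in the venue under study.
import statistics

sample_sizes = [8, 10, 12, 12, 12, 16, 20, 24, 30, 120, 916]

print("range:", min(sample_sizes), "to", max(sample_sizes))
print("median:", statistics.median(sample_sizes))
print("most common (mode):", statistics.mode(sample_sizes))

For the CHI 2014 corpus analyzed in the paper, the analogous summary gives a range of 1 to 916,000 and a most common sample size of 12.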

Published In

CHI '16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems
May 2016, 6108 pages
ISBN: 9781450333627
DOI: 10.1145/2858036

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

1. N
2. evaluation
3. meta-HCI
4. methodology
5. number of participants
6. research methods
7. sample size

Qualifiers

• Research-article

Funding Sources

• NSF

Conference

CHI '16: CHI Conference on Human Factors in Computing Systems
May 7 - 12, 2016
San Jose, California, USA

Acceptance Rates

CHI '16 Paper Acceptance Rate: 565 of 2,435 submissions, 23%
Overall Acceptance Rate: 6,199 of 26,314 submissions, 24%
