DOI: 10.1145/3544548.3581332 · CHI Conference Proceedings · Research article · Open access

Literature Reviews in HCI: A Review of Reviews

Published: 19 April 2023

Abstract

This paper analyses Human-Computer Interaction (HCI) literature reviews to provide a clear conceptual basis for authors, reviewers, and readers. HCI is multidisciplinary and various types of literature reviews exist, from systematic to critical reviews in the style of essays. Yet, there is insufficient consensus on what to expect of literature reviews in HCI. Thus, a shared understanding of literature reviews and clear terminology is needed to plan, evaluate, and use literature reviews, and to further improve review methodology. We analysed 189 literature reviews published at all SIGCHI conferences and ACM Transactions on Computer-Human Interaction (TOCHI) up until August 2022. We report on the main dimensions of variation: (i) contribution types and topics; and (ii) structure and methodologies applied. We identify gaps and trends to inform future meta work in HCI and provide a starting point on how to move towards a more comprehensive terminology system of literature reviews in HCI.

1 Introduction

The relatively young Human-Computer Interaction (HCI) community is in a period of steady growth, with an ever-increasing number of contributions every year. Given the abundance of information in papers contributing to the HCI field and its diversity, there is a growing need for work that supports scholars in understanding the overall direction of the field. Literature reviews are one way to build a foundation for advancing knowledge effectively [203]. Many authors have contributed reviews to the HCI field, especially in recent years. Reading literature reviews enables researchers to reflect on past research, understand their results in context, and look for new interests. Yet, as literature reviews may form an increasing part of HCI’s knowledge base, there is a need, on the one hand, to assess their quality with respect to methodological approaches and, on the other hand, to develop an understanding of their content and structure. This paper discusses some of the key differences, both conceptual and practical, between different types of literature reviews.
In the HCI community, a wide variety of contributions can be found under the umbrella term “literature reviews”. These range from literature reviews focusing on a set of 17 papers [92] to reviews including a set of 2,494 papers [210], from literature reviews analysing papers from a single year [34] to reviews analysing papers from multiple decades [168], and from reviews focusing on the specific HCI research of a single conference [170] to reviews analysing scientific literature from within and beyond HCI, encompassing multiple journal and conference papers [180]. Furthermore, the HCI community is the intellectual home of scholars with a variety of academic backgrounds, such as Computer Science, Design, and Psychology, amongst others. These different academic backgrounds bring a broad range of notions and understandings of sound methodological approaches. The resulting differences regarding (implicit) quality assessment criteria then impact HCI research, whether in the role of an author conducting a literature review, a reviewer assessing the quality of a literature review, or a reader engaging with and reflecting on its content. Consequently, HCI needs to integrate the variety of academic backgrounds and different methodological notions that contribute to its intellectual diversity to establish a shared terminology and identify the key dimensions of literature reviews in HCI. We aim to address this challenge through an analysis of literature review contributions in the field.
To that end, we conducted a literature review of literature reviews in HCI to explore the topics that literature reviews in HCI address, the contribution types that they offer, and how literature reviews in HCI are conducted. In particular, we reviewed publications in SIGCHI conferences and TOCHI up until August 2022, coding and categorising a final list of 189 publications from 111,459 originally identified records. This selection is based on historical reasons, as the SIGCHI conferences and the way they were shaped over the last 40 years accurately describe the intellectual development of HCI [205]. We classified the reviews into types that describe the contributions that the works offer: empirical, artefact, methodological, theoretical, and opinion. We also analysed papers based on the topic they addressed, resulting in the following review topics: User Experience & Design, HCI Research, Interaction Design and Children, AI & ML, Games & Play, Work & Creativity, Accessibility, Well-being & Health, Human-Robot Interaction, AutoUI, Specific Application Area, and Specific Modality. Additionally, we investigated at which venues the reviews were published, which databases, conferences, and journals were used, whether the PRISMA statement (or another literature review standard) was applied, and whether inter-rater reliability was calculated, in order to build an understanding of lived practice in literature reviews in HCI. Our analysis of these publications demonstrates the following regarding literature reviews in HCI:
The majority of literature reviews within the HCI field can be classified as empirical (68/189), methodological (55/189), and artefact (54/189) review contributions.
Methodological and empirical literature review contributions often employed more rigorous reporting methods than artefact and theoretical reviews, while no papers that were classified as opinion reviews reported on inter-rater reliability, and only one used a PRISMA statement to describe their process.
Databases were the most frequently reported standard across the various review contribution types. To illustrate, approximately 76% of empirical contributions reported the databases they had employed in their review process.
Inter-rater reliability was rarely reported across all review contribution types. In total, only 13% of the 189 literature reviews reported inter-rater reliability.
Only a minority of our corpus applied PRISMA or other flow charts: in total, roughly 23% used PRISMA, QUOROM, or another type of flow chart to describe their review process.
This paper contributes the following: (i) an account of the contribution types offered by literature reviews in HCI; (ii) an overview of review topics that literature reviews in HCI have addressed to date; (iii) information on methodological approaches (e.g. databases used) for literature reviews at all SIGCHI conferences and TOCHI; (iv) current gaps and future opportunities for meta work in HCI; and (v) a set of two practical contributions. First, an online paper library, where the full list of papers in our tagged corpus can be filtered based on specific criteria1. Second, an HCI literature review design document that can support future authors of literature reviews in their research process, available as part of the supplementary material. Our work can serve as a discussion starter that supports building a shared understanding of the growing body of literature reviews in HCI.

2 Motivation & Research Questions

The motivation and research questions of our literature review are rooted in research on literature reviews and meta-work in HCI, as well as in previous work focusing on methods in HCI.

2.1 Understanding HCI Research

Several papers in HCI seek to understand the field and analyse what constitutes a scientific knowledge contribution, for example, by investigating the methods that HCI researchers use and how they report on their research. This not only provides an overview that can guide fellow HCI researchers in their own work, but it can also establish trends and ultimately contribute to shaping the field itself. One strand of previous work focused on understanding HCI research from a conceptual perspective, trying to define the field or its evolution. For instance, Oulasvirta et al. [135] aimed to answer the question of what HCI is as a field and what constitutes “good” research in HCI, by addressing the field as a whole and providing a meta-scientific account of HCI research as problem-solving. Building on Laudan’s [101] philosophy, they advocate that HCI research is about solving problems related to human use of computing. They propose that the majority of HCI work addresses three main problem types, showing how contributions in HCI can be classified by extending Laudan’s typology of ‘empirical’ and ‘conceptual’ problems to also include ‘constructive’ ones. In contrast, our work does not study the question of ‘what is HCI’. Instead, we investigate how knowledge is created in HCI by building on larger bodies of work.
On another note, Liu et al. [110] described the thematic evolution of the HCI field by analysing the research published at CHI between 1994 and 2013. By employing hierarchical cluster analysis, strategic diagrams, and network analysis on their corpus through co-word analysis, they mapped the evolution of major themes and outlined specific topics of importance within HCI. Their results show that HCI does not have a well-defined way of studying new technologies. Based on these findings, Liu et al. [110] emphasise the relative fragmentation within HCI regarding research approaches. We aim to address this fragmentation with a focus on literature reviews in HCI by moving towards a shared understanding of HCI literature reviews and relevant terminology.
Wobbrock et al. [207] identified seven research contribution types in HCI. Empirical research contributions offer new knowledge through findings based on observation and data gathering. Artefact contributions, on the other hand, arise from generative design-driven activities; interactive artefacts, often prototypes, enable new explorations and facilitate discoveries and new insights. Methodological research contributes new knowledge that informs how we carry out our work, both in research and practice, while Theoretical research contributions offer explanatory accounts of why we do what we do, consisting of novel or improved concepts, models, definitions, or frameworks. Dataset contributions provide a dataset that is new and useful to the community. Survey contributions focus on synthesising work with the aim of identifying trends, gaps, and previously non-apparent structures. Lastly, Opinion contributions seek to persuade their readers, as well as provoke reflection, discussion, and debate. With respect to the research described above, we also seek to conceptualise HCI research, not by defining what HCI research is (e.g. problem-solving according to Oulasvirta et al. [135]) or the field’s evolution (Liu et al. [110]), but rather by analysing literature reviews in HCI, which can lead to a better understanding of the field itself. Consequently, applying the conceptualisation by Wobbrock et al. [207], our literature review is a combination of a methodological and a survey contribution.
Understanding and classifying literature reviews poses a challenge across a variety of disciplines. Analysing scientific work on literature reviews and related approaches, we identified two main foci dominating the research landscape. One is the focus on differentiating between types of reviews (e.g. [44]). Researching the term ‘literature review’ across different fields, we found that literature reviews can be considered the general umbrella term under which several types exist, with varying popularity depending on the research domain [1, 108]. For instance, systematic reviews are often presented as requiring more rigorous and well-defined approaches than other types of reviews and can be further categorised as meta-analyses or meta-syntheses, depending on whether the research approach is deductive or inductive [1]. Other review categories do not necessarily include a formal assessment or analysis [108]; these include narrative, critical, scoping, state-of-the-art, and conceptual reviews, amongst others [67, 191]. The above categories, however, (i) exhibit overlap, (ii) are not consistent across disciplines, and (iii) are not necessarily applicable to the HCI domain. For example, difficulties may arise when a systematic review in HCI is assessed by researchers with different academic backgrounds: one reviewer may be an HCI researcher with a background in Computer Science, the other an HCI researcher with a background in Psychology. While both reviewers are likely familiar with systematic reviews, they will just as likely analyse and assess a systematic review in HCI with different methodological emphases and standards, based on their different academic backgrounds. As a result, it is difficult for authors, reviewers, and readers of literature reviews in HCI to gain clarity about how a review should be planned, written, and assessed.
In addition, this lack of clarity can lead to the use of already published HCI literature reviews as de facto methodological standards. However, this does not achieve the desired methodological clarity but rather dilutes the discussion. In other words, often no shared understanding or consensus is reached; instead, respective positions are defended solely by drawing on previous work, which inhibits the generation of field-specific methodological standards given the diversity of papers published under the umbrella term HCI literature review.
We seek to understand the topics and contributions of literature reviews in the field by addressing the following research question:
RQ1:
What kind of topics do literature reviews in HCI address, and what are their contribution types?

2.2 Understanding HCI Methods

In addition to understanding the HCI field from a conceptual perspective, scholars have focused on understanding HCI-specific methodological approaches by investigating specific methods, trends and standards. For example, Caine [34] provided an overview of the various ways existing research in HCI determines and reports participant sample size and an analysis of local standards for sample size within the CHI community. In particular, they focused on manuscripts published at CHI 2014. Their results include recommendations for authors, such as always reporting sample size and including all relevant demographic information. McDonald et al. [121] investigated reliability in qualitative research. They explored and described local norms in the CSCW and HCI literature and combined examples from these findings with guidelines from methods literature. Their findings demonstrate the scarcity of inter-rater reliability reporting. They propose guidelines for reporting on reliability in qualitative research.
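The inter-rater reliability reporting that McDonald et al. examine is, for two coders with categorical codes, commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch follows; the labels and coder data are hypothetical illustrations, not data from any reviewed paper:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labelling the same items."""
    n = len(coder_a)
    # Observed agreement: fraction of items both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two reviewers to five papers.
coder_a = ["empirical", "empirical", "artefact", "artefact", "empirical"]
coder_b = ["empirical", "artefact", "artefact", "artefact", "empirical"]
print(round(cohens_kappa(coder_a, coder_b), 3))  # → 0.615
```

Here raw agreement is 0.8, but chance agreement from the marginals is 0.48, so kappa lands at roughly 0.62, which is why reporting kappa rather than raw agreement is generally considered more informative.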
The paper by Pohl et al. [146] is another example of how researchers have tried to understand specific aspects of HCI research; in this case, writing style. They analysed all CHI papers published from 1982 to 2018 to derive trends regarding how writing affects the impact and citation of papers. In particular, they looked at the following measures of writing style: readability, title, novelty, name-dropping, as well as the CHI subcommittees. For instance, in order to assess readability, they used the New Dale-Chall Readability Formula [36]; and for titles, they explored the use of different marks (e.g. semi-colon) and the title’s length. Citation metrics were acquired from Google Scholar. They thus provide insights into the ways CHI papers are written and how that impacts citation counts. However, they note that a large amount of variability can be found and that the correlations they describe do not necessarily imply causation. While Caine [34] and McDonald et al. [121] looked at specific methodological aspects such as sample sizes and inter-rater reliability reporting, and Pohl et al. [146] at writing style in HCI research, we strive to explore the kind of metrics and reporting standards utilised by literature reviews in HCI.
To illustrate, two methodological approaches in the context of literature reviews that are rooted in healthcare research and widely used across a variety of disciplines, including HCI, are the PRISMA and QUOROM statements. The PRISMA statement was developed by medical researchers and is a revised version of the QUOROM (QUality Of Reporting Of Meta-analyses) statement; the name PRISMA stands for Preferred Reporting Items for Systematic reviews and Meta-Analyses [124]. Scholars in a variety of disciplines have used the QUOROM and PRISMA statements in the past, as have some researchers in HCI (e.g. [142]). Yet, to date, it remains unclear if PRISMA and QUOROM statements are applicable and meaningful for all kinds of literature review contributions in HCI.
Seeking to shed light on the methodological aspects of literature reviews in HCI, we pose the following research question:
RQ2:
How are literature reviews in HCI conducted in terms of methods and reporting standards?
It becomes apparent that methodological questions have been the subject of several HCI research efforts. At the same time, the literature reviews conducted in a field are representative of the ways that field is evolving. Aiming to bring more structure to literature review papers in HCI, and inspired by other method papers in the field, we conducted this “review of reviews” within the HCI field.

3 Review Methodology

The goal of our literature review is to shed light on the diverse HCI research landscape in a generative way. This means that we aim to provide a starting point that supports authors, reviewers, and readers alike in building an understanding of the ways literature reviews in HCI have been written and of what the different review types contribute. By integrating the wide variety of literature reviews of the HCI community in our analysis, we aim to provide a meaningful way of understanding literature reviews in HCI. This section describes the methodology we followed in our literature review, including how records were identified, screened, and assessed to make up our final corpus. Following an adapted PRISMA statement [124], our process is depicted in Figure 1. We also describe how we conducted our analysis on the final corpus.

3.1 Identification Process

In order to explore the state of the art of literature reviews in the field of HCI and how they build knowledge within that area, we used the ACM Digital Library (DL) to collect all publications stemming from SIGCHI conferences and TOCHI from 1982 (the year CHI was first organised) to August 2022 that used one or more of the search terms review, meta-analysis, or survey in their title, abstract, or keywords. Our review focuses on these publication outlets as CHI is considered the leading international HCI conference2. Further, TOCHI is considered the flagship journal connected to the CHI conference. Moreover, the inclusion of all SIGCHI conferences is due to historical reasons: all SIGCHI conferences taken together (including CHI) plus TOCHI provide a good representation of the development of the intellectual HCI landscape [205].
In particular, the following publication outlets were included: CHI, UbiComp, UIST, HRI, CSCW, IUI, DIS, TEI, ICMI, IDC, ETRA, EICS, IMX, UMAP, C&C, CI, AutomotiveUI, RecSys, ISS, GROUP, CHI PLAY, MobileHCI, ITS, ISWC, and TOCHI. However, not all of these publication outlets are represented in the papers of our corpus, since some of the above venues have not published papers which we classified as literature reviews. The first step of our procedure led to an initial set of 111,459 papers. Our review followed an adaptation of the PRISMA statement [124], structured in four main phases (see Figure 1).
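The keyword filter applied in this identification step can be sketched as follows. The record schema, field names, and sample entries are assumptions for illustration, not the authors' actual tooling; substring matching mirrors a typical digital-library keyword search and also catches derived forms such as "reviews" or "surveyed":

```python
SEARCH_TERMS = ("review", "meta-analysis", "survey")

def matches_search(record):
    """True if any search term occurs in the title, abstract, or keywords.

    `record` is a hypothetical dict of publication metadata, e.g. as
    exported from a digital-library search."""
    text = " ".join([
        record.get("title", ""),
        record.get("abstract", ""),
        " ".join(record.get("keywords", [])),
    ]).lower()
    return any(term in text for term in SEARCH_TERMS)

corpus = [
    {"title": "A Survey of Mid-Air Gestures", "abstract": "", "keywords": []},
    {"title": "Designing a Novel Prototype", "abstract": "We built a system.", "keywords": []},
]
print([matches_search(r) for r in corpus])  # → [True, False]
```

Such a broad metadata filter deliberately over-includes (111,459 initial records here), leaving the actual decision of what counts as a literature review to the manual screening stages that follow.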
Figure 1:
Figure 1: Adapted PRISMA flow diagram representing the selection and refinement process in our literature review, from the identification of 111,459 records by keyword search, to screening eligible papers and arriving at our final corpus of 189 papers. For each of the stages where literature reviews were excluded (identification, screening, and eligibility), we further present the total number of excluded records.

3.2 Screening Process

Four authors screened the initial set of 111,459 papers (i.e. each paper was screened by one of the four authors). The authors read the title and abstract of each of the papers assigned to them. A paper was excluded when it was not a literature review or similar; for example, papers presenting the design and evaluation of an interactive system were excluded. In cases where an author was unsure whether a paper should be excluded, the paper was marked for discussion. Per year, between zero and eight papers were marked for discussion (e.g. for the year 2021, six papers were marked). After screening the full body of papers, the four authors held a final discussion to decide about the potential inclusion of the marked papers. We excluded 111,252 papers during this second step of our review process, which led to a set of 207 remaining papers.

3.3 Assessing Eligibility

Next, the set of 207 papers was split in half to determine eligibility. Two authors went through sets of 104 and 103 papers respectively and marked papers where they were unsure whether the paper represented a literature review. The two authors held iterative discussion sessions throughout this step to discuss marked papers, during which the previously defined exclusion criteria were further refined. These sessions led to the following final exclusion criteria. A paper was excluded when:
It was not a full paper (e.g. extended abstracts, workshops and keynotes),
It did not specifically state in the abstract, the keywords, the introduction, the contribution statement or in the conclusion, that a literature review or a similarly named literature selection and analysis procedure was conducted,
It referred to its related work section using the term "literature review".
Of these exclusion criteria, the first two were defined before the screening process started, while the third was added at this stage, as we discovered that some papers referred to their Related Work section as a "literature review". Hence, the primary contribution of an included paper had to be the literature review itself, in contrast to papers whose related work section, labelled "literature review", merely outlined a specific research gap connected to a subsequently conducted study or, for instance, the design of an interactive technology. In other words, an included paper had to actually review a topic, going beyond presenting related work to contextualise a study or prototype or to identify a specific research gap. Based on the defined exclusion criteria, we reviewed the titles and abstracts of the 207 papers again. This resulted in the exclusion of four additional papers, which led to a set of 203 remaining papers.

3.4 Final Corpus

The remaining 203 papers were randomly split into four sets, one per author. Each author read their assigned papers in full (i.e. 50-51 papers per author) and analysed them based on the previously defined exclusion criteria (listed in Section 3.3). At this stage, the full papers were read only with respect to the exclusion criteria, and not for further analysis. When an author was unsure whether a paper should be excluded, it was marked, and the authors made their decision in a final group discussion. This last step led to a final corpus of 189 included papers.
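The per-stage exclusion counts reported in Sections 3.1-3.4 can be checked with simple funnel arithmetic; the stage labels below are shorthand for the stages of the adapted PRISMA process:

```python
# Records remaining after each stage of the adapted PRISMA process.
stages = [
    ("identification", 111_459),
    ("title/abstract screening", 207),
    ("eligibility check", 203),
    ("full-text assessment", 189),
]

# Exclusions at each transition are the difference between consecutive counts.
for (prev, n_prev), (curr, n_curr) in zip(stages, stages[1:]):
    print(f"{prev} -> {curr}: excluded {n_prev - n_curr:,}")
```

This reproduces the 111,252 papers excluded during screening and the four papers excluded at the eligibility check, and makes explicit the 14 papers removed during full-text assessment.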

3.5 Analysis

In order to answer our research questions, we used a multi-step analysis approach. The code categories used in our analysis that correspond to each of our two research questions, along with example codes are presented in Table 1.
In our analysis, a consensus-based approach was applied [22]; in line with that, no inter-rater reliability was calculated. First, four authors open coded [22] the 189 papers of the final corpus with respect to the topic of the paper. The topic code reflects the area a literature review primarily focuses on. We used affinity diagramming and created clusters of topics. The affinity diagramming process took over a week, as the authors kept revisiting papers and allowed the discussions to settle. While more topics could have been identified, the authors decided to set a minimum of five papers per topic to consolidate knowledge and avoid fragmentation. This resulted in the identification of twelve higher-level topics of literature reviews in HCI: User Experience & Design, HCI Research, Interaction Design and Children, AI & ML, Games & Play, Work & Creativity, Accessibility, Well-being & Health, HRI, AutoUI, Specific Application Area, and Specific Modality.
As a second step of the analysis process, each of the four authors open coded the same representative sample of 10% of the corpus in line with Blandford et al. [22] with regards to the contribution type it provided to the HCI community. Through iterative discussions an initial coding tree was established. The authors then divided the remaining papers between them and used the initial coding tree as a basis to code the remaining papers of the corpus. If uncertainties arose, they were discussed with all authors throughout the process. Finally, a consolidating discussion session was additionally conducted, when all authors finished coding their respective papers.
Based on this analysis, we derived five contribution types of HCI literature reviews: empirical, artefact, methodological, theoretical, and opinion. Empirical literature review contributions offer new knowledge through analysing their corpus on a quantitative level. Artefact literature review contributions, on the other hand, arise from analysing work on artefacts with the goal of classifying them. Methodological literature review contributions inform how we carry out our work, both in research and practice, by analysing previous work, often across a variety of topics. Theoretical literature review contributions offer an analysis of specific theories, concepts, models, definitions, or frameworks and how these have been applied in different contexts. Lastly, Opinion literature review contributions seek to persuade their readers, as well as provoke reflection, discussion, and debate, by using an analysis of the literature to strengthen their argument. After conducting the open coding process, we later determined that these categories closely follow Wobbrock et al.’s [207] types of contribution.
Moreover, during the same open coding session [22], the following code categories regarding methods and approaches were identified for each paper in our corpus: reporting standards (i.e. whether the literature review utilised a PRISMA or QUOROM statement or another type of flow diagram to describe their review process), databases (i.e. the databases that it used for the search), and inter-rater reliability (i.e. whether inter-rater reliability was calculated and for which aspect). We also coded for publication outlets, i.e. where each paper in our corpus was published. In line with the process outlined above, in case of uncertainties, the authors marked the corresponding field and discussed it with the rest of the authors in an iterative discussion session, which also aimed to address any disagreements in the coding.
Table 1:
RQ | Code | Pertinent or Example Codes
RQ1 | Review contribution types | Empirical, artefact, methodological, theoretical, opinion
RQ1 | Review topics | UX & design, HCI research, IDC, AI & ML, games & play
RQ2 | Databases | e.g. ACM DL, Scopus
RQ2 | Reporting standards (PRISMA) | Use of PRISMA, QUOROM, or other flow chart
RQ2 | IRR | Report of inter-rater reliability or not
Table 1: The code categories and example codes that correspond to our two research questions.

4 Results

In this section, we report on the results of our analysis. The remainder of the results section is organised in line with our research questions. We present the different identified contribution types of HCI literature reviews. We then describe the topics literature reviews in HCI address. Next, we report on the methods literature reviews in HCI applied. Offering insights into the distribution of HCI literature reviews, Figure 3 shows the number of literature reviews published at each HCI venue. Notably, more than twice as many literature reviews were published at CHI as at any other venue considered here. However, it should be taken into account that CHI is generally a bigger venue and that more papers were published there in comparison to the other venues. Meanwhile, Figure 4 shows a total increase in literature reviews in recent years. While only a few published works were of this nature in the 20 years between 1982 and 2002, a growing increase can be noticed in the following years, with a steeper rise from 2017 onward. We observe an increasing number of literature reviews in HCI, peaking at 32 papers in 2021. Figure 5 visualises, through a heat map, the co-occurrence between the five review contribution types and the coded data of each review paper regarding the employed methods: use of IRR, databases, and PRISMA (or other flow charts).
Based on our results, we have created an online paper library where visitors can navigate the full list of papers in our corpus and filter entries based on specific criteria, either by typing in the search box or by using the available filters (e.g. the venue where a paper was published or its contribution type). Additionally, the online paper library provides a contact email, so that visitors can, for example, suggest corrections for an article or request that a missing paper be added. This is an important feature to ensure that the library stays up to date and considers user feedback. A screenshot of the web page is shown in Figure 2.
Figure 2:
Figure 2: Screenshot of the online paper library containing the papers in our corpus with search and filtering functionality. Available at: https://thomaskosch.com/review-of-reviews/
Figure 3:
Figure 3: Distribution of literature review papers per HCI venue. The frequency of literature review papers published at CHI is more than double compared to other venues.
Figure 4:
Figure 4: Evolution of the total number of literature review papers in HCI as indicated by our selection process. The number of literature reviews has increased sharply in the last four years.
Figure 5:
Figure 5: This heat map shows the number of papers for each review contribution type (first row) that calculated IRR (second row), reported the Databases they searched (third row), and offered PRISMA statements or other flow charts (fourth row). The colour shades represent the number of papers. Heat maps have been widely used by the HCI research community to graphically visualise the density distribution of numerical data through colour intensity [99, 137]. Notice that due to double coding (multi-part review contributions), summing up the N Papers row equals 213, and not 189 which is the number of papers in our corpus.

4.1 Contribution Types of Literature Reviews in HCI

Based on our analysis, we identified five review contribution types, partly inspired by the categories proposed by Wobbrock et al. [207]: empirical, artefact, methodological, theoretical, and opinion contributions3. We describe each contribution type in the following sections.

4.1.1 Empirical Contributions.

We classified a paper as an empirical contribution if the literature review analysed and compared its corpus with a focus on specific details or phenomena. The focus of empirical literature review contributions lies primarily on data-driven analysis, often but not always on a somewhat quantitative level (e.g. comparing sample sizes across different studies). Papers in this contribution type mainly focused on specific topics; in particular, the majority of empirical publications either focus on how a specific topic is studied (e.g. study approaches in the area of affective health [165]) or how specific phenomena relate to each other or have been studied together (e.g. which population characteristics have been considered when evaluating mHealth interventions [180]). This strand includes aspects such as the operationalisation of terms and concepts, measures used, types of studies and reflection on domain-specific ethics procedures and concerns.
Our results show that empirical literature contributions focus on a variety of different contexts ranging from affective health [165] to an analysis of current trends in Human-Food Interaction [9]. A comparatively large part of literature reviews in this category focused on aspects connected to games and play. For instance, Silpasuwanchai et al. [174] conducted an analysis on the engagement of gamification for learning. The review identified the frequency of the use of specific gamification strategies and analysed how often strategies were used together. Furthermore, among other aspects, the authors analysed the study results of previous work regarding the effect of gamification on performance. This paper exemplifies how literature can be analysed quantitatively to derive meaningful insights about a specific research topic. Notably, Silpasuwanchai et al. [174] also conducted and presented a user study in addition to their meta-synthesis. Another paper in our corpus reported a literature review of 66 publications grounded in Disability Studies and Self-Determination Theory to assess the status quo of HCI game research pertaining to neurodivergent players [177]. The review analyses aspects such as the populations included in the corpus, research methods, the kinds of play, and the overall aim of existing games. Based on the results of their literature review, the authors identify opportunities for future work, such as ways of addressing players' needs, preferences, and desires for play [177].

4.1.2 Artefact Contributions.

Papers were categorised as artefact contributions when they reviewed research papers focusing on artefacts, with the intention to classify them and their characteristics. The majority of reviews in this category either review a specific artefact type in the sense of the employed technology, e.g. wearable technologies [188], or review artefacts that have the same goal, independent of the technology type, e.g. exploring technologies supporting intimate relationships [74].
Regarding the first, reviews often aim to provide classifications depending on the specific technology employed, e.g. classification of artefacts based on user identification technologies they employed [94], or to help readers understand a specific field by providing consistent terminology and classification, e.g. for capacitive sensing [68] systems. Regarding the second, these artefact reviews explore key aspects of the proposed goal of the system, e.g. mapping design strategies for closeness in remote relationships [74], or providing an overview of key characteristics for creativity-supporting systems [62].
Below, we present one detailed example for each of these two cases. First, Kalegina et al. [88] reviewed robots with rendered faces. The authors reviewed 157 robot faces and conducted two additional surveys to understand people’s perceptions of rendered robot faces and identify the impact of different facial features. They categorise the different features that constitute robot faces and discuss how these elements can be combined. As a second example, Nunes et al. [131] aimed to establish an understanding of the body of work in HCI focusing on self-care technologies for chronic conditions. They reviewed 29 papers and identified research trends and design tensions, as well as opportunities for future HCI research in that domain.

4.1.3 Methodological Contributions.

Methodological contributions inquire how a particular method, part of a method, or a design approach is used across multiple cases. In other words, this type of review strives to establish a deeper understanding of how methods or approaches are being applied across different contexts. This category is an interesting case because it was challenging to identify code groups within it. Instead, a certain hierarchy emerges within the category in terms of the characteristics analysed. The aspects analysed range from small-scale questions concerning how the HCI community conducts studies (e.g. an analysis of sample size at CHI [34]), through literature reviews that focus on methods in broad sub-fields within HCI (e.g. lab versus field studies in mobile HCI [97]), to more strategic approaches with a focus on methods that go beyond specific studies (e.g. research dissemination practices in HCI [39]). A considerable number of papers in this category are concerned with the analysis of population groups (including both study participants and authors) from a variety of different perspectives. To illustrate, Linxen et al. [109] focus on the question of how WEIRD (western, educated, industrialised, rich, and democratic) the authors and participants of CHI papers are. Instead of analysing the demographic characteristics of participants, Pater et al. [140] analyse strategies of participant compensation prevalent in current HCI studies and how compensation is reported.
An example of a methodological contribution that goes in a slightly different direction is a literature review by Salminen et al. [163]. They reviewed quantitative persona creation (QPC). The aim of their review was to offer an overview of the main QPC methods and their strengths and weaknesses. Based on their analysis, the authors then proposed a research agenda and guidelines for both researchers and practitioners. Other contributions in this category, inter alia, explored the use of Likert scales [171] and the use of machine learning to improve user experience [210].

4.1.4 Theoretical Contributions.

We categorised papers in our corpus as theoretical contributions when they reviewed how a particular theory is used in different contexts, for instance with the aim to further validate this theory, or if a literature review led to theory development. Juxtaposing theoretical and methodological literature review contributions, one could say that theoretical contributions focus on the nature of what is studied, whereas methodological contributions focus on how something is studied. As this review contribution type focuses on systems of ideas or theoretical principles across study contexts, reviews in this area often engage with definitions in depth. For instance, Tyack et al. [190] reviewed 110 papers to gain a better understanding of the ways Self Determination Theory (SDT) has contributed to HCI games research. They analysed how specific concepts of SDT have been applied in HCI games research and discussed conceptual gaps. Other examples from this contribution type are reviews of work design theories [11] and theories regarding ethics [192, 216]. Zoshak et al. [216] analysed how ethical theories have been applied to artificial moral agents. They found that the majority of their corpus focused on two ethical paradigms (deontology and consequentialism) and emphasise the need for additional empirical studies to broaden the spectrum of ethical theories applied in this domain.
The examples above illustrate how the in-depth analysis of theories applied in HCI can assist in building an understanding of conceptual research gaps. This in turn can help to understand if and how a specific theory advanced HCI research. Furthermore, the literature reviews by Zoshak et al. [216] and Tyack et al. [190] show that the analysed theories can come from a broad spectrum of research fields (e.g. Philosophy, Psychology), provided that they are relevant for HCI. A research gap we identified through our analysis is that the majority of literature reviews in this category address how theories are applied in the field of HCI. The focus of researchers is therefore on analysing the influence or application of theory rather than theory generation.

4.1.5 Opinion Contributions.

We classified a paper as an opinion contribution if it aspired to persuade its readers, as well as to provoke reflection, discussion, and debate. These included strong arguments or essays which did not aim to contribute an overview of past research but rather used an account of past work to build an argument. Reviews which scrutinise past papers were also included here.
An example of such a contribution is a paper by Keyes et al. [92] that focused on “women’s health” in HCI. The authors conducted a critical discourse analysis of 17 publications that explicitly positioned themselves as works concerned with women’s health. The paper offers two speculative designs to provoke reflection on the current framing of “women’s health” in HCI. Another example is the work by Beck et al. [19]. They engage with the meaning of “big questions” and emphasise that discussing big questions can potentially foster reflection about HCI research. Beck et al. [19] discuss examples of big questions and end their paper with the remark that the question of whether HCI needs big questions already raises many useful questions. This ending nicely illustrates the elements of reflection and debate that constitute opinion contributions.

4.1.6 Multi-part Contributions.

Based on our analysis, we identified some papers in our corpus which offered multiple different review contribution types (e.g. empirical and theoretical). Over multiple discussion sessions with five authors, it emerged that the majority of these reviews focused on quite specific (and sometimes narrow) topics. Table 2 (see Appendix) provides an overview of all multi-part literature review contributions. These contribution types can be recognised by the markings in more than one contribution type column. On a pragmatic level, studying a more focused topic allows for addressing a wide variety of different aspects without going beyond the standard publication length.
This is exemplified in the work by Suh et al. [181]. They conducted an analysis of different concreteness fading techniques across different settings. Based on their analysis, they contribute an overview of the concreteness fading technique and its design dimensions (i.e. methodological). Furthermore, they analysed key findings of each dimension (i.e. empirical). While the topic is completely different, the clear focus of the work by Suh et al. [181] is similar to the review by Maggioni et al. [118]. This literature review identified central design features of the olfactory design space. These features are relevant for interaction design in this area (i.e. methodological) [118]. In addition, the authors discuss technical features that should be considered when navigating the olfactory design space (i.e. artefact).
In contrast, the topic of the literature review by Dell et al. [49] is broader than the aforementioned examples. The authors survey 259 publications focusing on HCI for development (HCI4D) to assess the current geographical scope of existing research (i.e. empirical), the technologies in focus, as well as the underlying epistemologies and methods (i.e. methodological). In addition, they chart the evolution of HCI4D and discuss potential future trends [49].

4.2 Review Topics of Literature Reviews in HCI

Our analysis of literature reviews in HCI yielded a variety of different themes. Using affinity diagramming, we clustered the identified themes into twelve review topics. The review topics highlight HCI subfields that were particularly active in publishing literature reviews. We categorised each paper in our corpus under one review topic. The review topics of literature reviews we identified are the following:
User Experience & Design
HCI Research
Interaction Design and Children
AI & ML
Games & Play
Work & Creativity
Accessibility
Well-being & Health
Human-Robot Interaction
AutoUI
Specific Application Area
Specific Modality
Below we explain each of these review topics and underpin them with illustrative examples from the reviewed publication set.

4.2.1 User Experience & Design.

The review topic user experience (UX) and design encompasses literature reviews with a focus on building a conceptual understanding of user experience in specific contexts or in relation to other concepts (e.g. how UX and technology acceptance relate to each other [81]). Furthermore, this review topic includes reviews on design tools such as a classification of design cards [2] or quantitative persona creation in HCI [163]. Another strand of research in this review topic focuses on the study of user experience. For example, Bargas-Avila et al. [15] analysed how user experience has been studied in the HCI field. Almost a decade later, Pettersson et al. [142] published a similar review of UX studies, methods, and triangulation based on the review by Bargas-Avila et al. [15].

4.2.2 HCI Research.

We included literature reviews that dealt with "meta" subjects of HCI research in this review topic and papers that addressed conventions of HCI publications and research dissemination. These include, among others, a literature review that focused on statistical significance testing at CHI PLAY [201] and a meta-analysis on computer (online questionnaires) versus paper forms [204]. Notably, literature reviews in this review topic encompass both focused review contributions that engage with a specific approach or methodological detail of HCI research such as statistical significance testing at CHI PLAY [201] or sample size at CHI [34], as well as broader topics concerning questions on how we as HCI researchers act and interact with our participants [140] and society as a whole [39]. This review topic has clear overlap with the methodological literature review contribution. However, the methodological contribution type encompasses a broader spectrum of review topics and also includes, for example, methodological contributions from other review topics. An example of this is the work by Kawas et al. [91], which is located in the review topic "Interaction Design and Children", but was coded as a methodological contribution.

4.2.3 Interaction Design and Children.

In this review topic the main distinguishing criterion is not the application focus of the literature reviews. Instead, the papers in this review topic can be broken down into literature reviews with a focus on specific user groups such as teenagers and (young) children, with or without special needs. For example, Baykal et al. [17] reviewed collaborative technologies for children with special needs. Other populations addressed include teenagers, e.g. Zimmerman et al. [214] conducted a review on teen financial literacy. Other areas that have been addressed within this topic focus on safety [145], welfare [166], well-being [80], learning [58] and inclusion [178].

4.2.4 AI & ML.

Literature reviews that addressed Artificial Intelligence or Machine Learning applications were categorised in this topic. Research assigned to this review topic includes both more technically focused papers as well as articles addressing definitions and concepts. For example, Yang et al. [210] conducted a literature review with a more technical focus, clustering Machine Learning technical capabilities within HCI. Along similar lines, D’mello et al. [51] analysed the accuracy of multimodal and unimodal affect detection classifiers. Another strand of work focuses on building a conceptual understanding of key terms in this review topic. For instance, Völkel et al. [200] explored the meaning of "intelligence" in intelligent user interfaces.

4.2.5 Games & Play.

Literature reviews in HCI that focused on games and play include works on specific aspects of game interaction, e.g. Velloso et al. [196] surveyed eye interaction in games and Alavesa et al. [6] reviewed commercial and non-commercial location-based mobile games. Other literature reviews analyse game-related measures, such as the work by Mekler et al. [122] who reviewed measures of game enjoyment and player experience. These examples illustrate three different foci within this review topic. One strand of research addresses aspects concerning the question of how to conduct studies in this area (e.g. analysis of game-related measures [122]). Another strand of research focuses on the analysis of specific game mechanisms (e.g. analysis of eye-enabled game mechanisms [196]). Instead of focusing on game mechanics, literature reviews such as the one by Alavesa et al. [6] focus on analysing complete games (often encompassing both commercial and non-commercial games).

4.2.6 Work & Creativity.

Another identified topic of literature reviews in HCI addresses aspects of work and creativity. Literature reviews in this review topic range from an analysis of system-specific aspects (e.g. an analysis of notifications in collaborative systems [111]), to reviews on models, concepts, and definitions (e.g. an analysis and conceptualisation of creativity methods in design [126]), to inquiries focusing on analysing creativity support tools [157].

4.2.7 Accessibility.

This review topic ranges from literature reviews on technologies which are situated in the accessibility context (e.g. [33]), to reviews with a focus on specific user groups [20], to reviews on approaches to researching the topic of accessibility, and literature reviews on broader aspects such as the question of how accessibility is addressed in HCI research. To illustrate, Mack et al. [117] conducted a literature review on a broader aspect concerned with accessibility research in HCI. They analysed how the term accessibility has been conceptualised and studied at ASSETS and CHI within one decade (2010-2019). Furthermore, they identified specific accessibility research areas that have received a disproportionate amount of attention within the research community.

4.2.8 Well-being & Health.

This review topic focuses on understanding health and well-being-related matters. Literature reviews in this review topic encompass accounts of mental, physical, and holistic health and well-being. More precisely, this review topic includes literature reviews on technology-supported health and well-being promotion, management of disorders and illnesses, and prevention of disorders and illnesses. For instance, Epstein et al. [57] reviewed work on personal informatics data and behaviour (well-being focus), and Hassenzahl et al. [74] explored technologies supporting intimate relationships. Instead of focusing on health and well-being promotion, other literature reviews focus on HCI research in the area of (mental) health disorders [165]. Furthermore, the focus of the works in this review topic ranges from stakeholder-specific accounts (e.g. the analysis of use and design of online health communities) to papers that focus on a specific well-being-related concept, such as mindfulness, without specifying a particular user group [186].

4.2.9 Human-Robot Interaction.

The literature reviews in the topic of Human-Robot Interaction (HRI) mainly focus on social HRI. The range of subtopics includes literature reviews on the psychological impact on researchers conducting social HRI experiments [155], and emotions and affect in HRI research [87]. A similar strand of research addresses personality in the context of HRI [59] and anthropomorphisation [88]. Furthermore, this topic includes research on HRI study methods (e.g. an analysis of the use of Likert scales in the HRI domain [171]).

4.2.10 AutoUI.

Literature reviews in the review topic of Automotive User Interfaces (AutoUI) primarily focus on legal and safety issues (e.g. [55, 84]). To illustrate, Naujoks et al. [128] analysed how interruptions in semi-automated driving have been managed and Inners et al. [84] explored legal issues of human-machine interaction for automated vehicles. Another strand of research focuses on broader aspects of the design and evaluation of AutoUIs (e.g. mapping the design space for in-car AR applications [206]).

4.2.11 Specific Application Area.

A number of literature reviews in HCI focus on one specific application area. The literature reviews in this review topic are quite diverse and range from work on HCI for development [49] to playful human-food interaction [9]. One prominent subject within this topic seems to be sustainability research, including research on sustainable approaches to fashion and interaction design [138], energy systems in and out of HCI [144], and eco-feedback technology in HCI and environmental psychology [64]. Other examples that showcase the diversity of this review topic include, among others, reviews on conducting research with stakeholders from nonprofit organisations [24] and reviews of dark pattern properties [120].

4.2.12 Specific Modality.

The aim of this review topic is to understand previous work in HCI with a focus on specific modalities. In general, papers here approach specific modalities from two sides. On the one hand, some papers focus on building an understanding of key terms concerning specific modalities (e.g. defining mixed reality). On the other hand, some reviews explore a specific modality across a variety of application contexts. Works in this review topic include, among others, reviews of eTextile tools and kits [148] and autonomous tangible interfaces [130]. A selection of papers focus on shape-changing materials in HCI [152] and deformable interfaces and technologies [23]. Another focus lies on mixed, virtual and augmented reality [176].

4.3 Review Methods & Publication Outlets

To understand how literature reviews are conducted in HCI, we analysed the methods used in the reviews of our corpus. Furthermore, we identified in which journals or conferences the HCI literature reviews of our corpus were published.

4.3.1 Inter-rater Reliability.

Figure 6: This Figure presents the shares of papers reporting IRR between the years 1982 and 2022. The data is visualised in five-year intervals, except for the first and last bars. As the number of publications between 1982 and 2004 was comparatively low, we aggregated these years into one bar. Instead of a full five-year interval, the last bar covers the three years 2020-2022.
Inter-Rater Reliability (IRR) measures agreement between two or more people who code the same set of qualitative data [121]. As illustrated by Figure 5, only 12% of the papers in our corpus (23 out of 189) calculated IRR. Across the empirical and methodological contribution types, a total of 21 papers calculated IRR, and four theoretical literature reviews did so. Please note the overlap among the paper counts in Figure 5 due to multi-part contributions: for example, of the eleven empirical and ten methodological papers that reported IRR, four refer to the same papers ([17, 113, 117, 145]). The majority of papers that reported inter-rater reliability used this statistical measure to assess the reliability of their inclusion/exclusion criteria [31, 162]. In other papers, IRR was used to assess the reliability of coding and categorising [10, 63]. The most frequently used method for calculating IRR was Cohen’s Kappa (e.g. [91, 140]), while one paper used Krippendorff’s alpha [117]. Other papers did not specify the method used to calculate IRR (e.g. [9, 59]). Figure 6 shows the trend in IRR reporting in published works between 1982 and 2022. Notably, the number of papers that report IRR has slightly increased in the past decade.
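To make the measure concrete, Cohen's Kappa for two raters compares the observed agreement with the agreement expected by chance from each rater's marginal label frequencies. The following sketch is purely illustrative; the coders' inclusion/exclusion decisions are hypothetical and not drawn from any reviewed paper.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's Kappa for two raters labelling the same set of items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: share of items both raters labelled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n)
              for c in freq_a.keys() | freq_b.keys())
    return (p_o - p_e) / (1 - p_e)

# Hypothetical inclusion/exclusion decisions by two coders.
coder_1 = ["include", "include", "exclude", "include", "exclude", "exclude"]
coder_2 = ["include", "exclude", "exclude", "include", "exclude", "include"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # → 0.33
```

A Kappa of 1.0 indicates perfect agreement, 0 indicates agreement no better than chance; the 0.33 here reflects the two disagreements in the six hypothetical decisions.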

4.3.2 Reporting Standards: PRISMA & QUOROM.

Figure 7: This Figure presents the shares of papers offering PRISMA statements between the years 1982 and 2022. The data is visualised in five-year intervals, except for the first and last bars. As the number of publications between 1982 and 2004 was comparatively low, we aggregated these years into one bar. Instead of a full five-year interval, the last bar covers the three years 2020-2022.
The most common reporting standard for systematic reviews in other fields is PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) [124]. In total, 43 papers in our corpus utilised either the PRISMA or QUOROM statement (or other flow charts) to structure and report their reviewing process. Empirical and methodological contributions most often used one of these reporting statements. For instance, a review by Spiel and Gerling [177] reported their approach through PRISMA, whereas Bargas-Avila and Hornbæk used an adapted version of the QUOROM statement [15]. However, most papers in the corpus employed a variety of approaches to collect and construct their final corpus. For instance, Grosse-Puppendahl et al. [68] loaded the results of their search for relevant papers into a custom-developed paper management system, and each paper was examined by at least one reviewer to assess its relevance. This variety in approaches links to the diversity in literature review contribution types in HCI, which goes beyond systematic literature reviews. Furthermore, this finding shows that there is currently no common reporting standard for literature reviews in HCI. This is highlighted by Figure 7, which presents a change in the trend of papers offering PRISMA statements and thereby increasing transparency. Similar to reports of IRR (see Figure 6), a positive trend for including PRISMA statements can be noticed during the past 12 years. Especially during the past three years (2020, 2021, and 2022), we can observe a significant increase in the number of HCI literature reviews that offer PRISMA statements (approx. 40%, compared to just 10% in the previous five-year interval).

4.3.3 Databases.

As part of our analysis, we reviewed the databases which the papers in our final corpus used to search for relevant papers to include in their literature reviews. As Figure 5 demonstrates, 69% of the papers in our corpus (131 out of 189 papers) specified the database that was used for the literature search. Interestingly, even when papers did not specify the databases used, these were sometimes still implicitly ascertainable. An example of this is a highly cited study on sample size in HCI [34]. This particular paper does not explicitly mention the database that was used. Yet, as this paper reviewed all CHI 2014 papers, this lack of specification has a limited effect within the HCI community, similar to some other papers that did not explicitly mention the database used, as readers can potentially infer where the publications were searched for. However, specifying the databases that were used to identify relevant literature could make HCI literature reviews more accessible for scholars of other fields who may be less familiar with HCI's publication processes and outlets. The majority of empirical, artefact, methodological and opinion papers reported the databases that were used, as opposed to only approximately half of the theoretical papers.

4.3.4 Publication Outlets.

We analysed where HCI literature reviews were published and how their publication outlet relates to the HCI literature review contribution types. Since the majority of publications covered in this literature review were published at the CHI conference, it comes as no surprise that CHI is the top venue across nearly all contribution types, with the vast majority of CHI papers making empirical (30 works) and methodological (23 works) contributions. Artefact contributions are the next most common at CHI, with 13 papers. In addition, we identified two opinion pieces that were published at that venue. Overall, empirical publications are the most common type of literature reviews we encountered among the surveyed HCI works (69 empirical reviews). IDC (7 papers), CHI PLAY (5 papers) and DIS (5 papers) complete the top four publishing venues for this type of contribution after CHI. Methodological contributions are the second most common type of HCI literature reviews (54 papers). After CHI, they are mostly published at DIS (9 papers) and IDC (8 papers). Artefact contributions are the third most common contribution type (53 reviews), with CHI closely followed by DIS (10 papers) and TOCHI (7 papers) as venues of choice. Theoretical reviews (28 in total) are also most often published at DIS (4 papers), closely followed by CSCW and IDC (3 papers each). Interestingly, opinion contributions were the only type for which CHI (2 papers) was not the number one venue of choice. Instead, the most frequent publication outlet was DIS (3 papers). The rest of the opinion contributions came from TOCHI, CSCW, and AutomotiveUI (1 paper each).

5 Discussion

The goal of this paper is to provide a starting point for a shared understanding of literature review contributions in HCI. We hope that our analysis sparks discussions within the HCI community about what constitutes a literature review, how it can be conducted, and what it can contribute. With the identified review contribution types Empirical, Artefact, Methodological, Theoretical, and Opinion (RQ1), the review topics identified, and our results on literature review methods (RQ2), we aim to assist authors, reviewers, and interested readers in situating the literature review they are writing, reviewing, or reading in related work. In addition to our literature review of literature reviews in HCI, we contribute an HCI literature review design document, which can be found in the supplementary material. It provides an overview of the literature contribution types identified as well as reflective questions that can support future authors of literature reviews in their research process. Furthermore, the literature review design document can be used by reviewers of literature reviews to engage with the contribution types a literature review delivers and its methods on a deeper level. Moreover, to support conducting, analysing, and using literature reviews in HCI, we provide recommendations for future reviews based on our results and created an online paper library for easy and efficient navigation and filtering of literature review papers in HCI. We discuss how a literature review can play different roles in HCI discourse and use a variety of methodologies. In the following sections, we reflect on our findings and subsequently discuss recommendations for scholars in HCI.

5.1 Contextualising Our Review with Respect to the Identified Criteria

Before conducting this review of reviews in HCI, our approach would most likely have been termed a systematic literature review, based on the methodological steps we followed; for instance, according to Grant and Booth [67], a key feature of systematic reviews is the clear and transparent reporting of the methods applied. However, if we look at other method papers in HCI, and in particular the mapping review of personal informatics literature by Epstein et al. [56], one could argue that some elements of our approach could be categorised as a mapping review. In particular, mapping reviews can help a research field understand the topics that have traditionally been studied and the methods that have been used [12, 67, 141]. This is in line with what we aim to achieve with our literature review. Therefore, in the context of this paper, we initially refrained from using a specific term (systematic or mapping) and instead clearly stated the methodological steps that we followed. This lack of clarity is perhaps a further indication of the need for a clear overview of literature reviews in our field as a first step towards a deeper understanding of terminology and of how to perform and structure literature reviews in HCI.
Now that we have conducted our review, we contextualise our own work with respect to the characteristics identified in our analysis. This demonstrates how the identified categories can be understood and applied by authors of future literature reviews, for instance, to clearly state their employed methods and reporting standards as well as their contribution. Our paper provides a methodological contribution, as it inquires into how a particular method (here, literature reviews) can be used across multiple cases. Our review falls under the topic HCI Research, as it focuses on meta-subjects of HCI research. Our literature review of literature reviews in the HCI field employed an adapted version of the PRISMA statement to clearly illustrate the corpus selection process. Moreover, we outline the sources we used for our search, explaining which conferences and journals were included, and explicitly state that we used the ACM Digital Library as the database for paper identification. We did not calculate IRR, as explained in the Review Methodology Section.
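A PRISMA-style selection process is, at its core, transparent bookkeeping over successive filtering stages. The following sketch is purely illustrative: the stage names and intermediate counts are hypothetical and are not the figures from our review; only the final corpus size of 189 papers is taken from this paper.

```python
# Hypothetical PRISMA-style bookkeeping for a corpus selection process.
# Stage names and all counts except the final 189 are illustrative only.
records = [
    ("records identified via database search", 1000),
    ("records after duplicate removal", 950),
    ("records after title/abstract screening", 400),
    ("papers included after full-text eligibility check", 189),
]

previous = None
for stage, count in records:
    # Report how many records each stage excluded relative to the previous one.
    note = f" (excluded: {previous - count})" if previous is not None else ""
    print(f"{count:>5}  {stage}{note}")
    previous = count
```

Reporting each stage with its exclusion count is what allows readers and reviewers to audit the selection process, which is the purpose of a PRISMA flow diagram.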

5.2 Embracing Diversity in HCI Literature Reviews

Our analysis showed that, in contrast to other fields, literature reviews in HCI seem to cover a broader range of intellectual contributions, on a spectrum from informal and exploratory through critical to formal approaches such as meta-analytical or quantitative analyses of previous work. This reflects the diversity of the field and showcases that the community not only recognises but values a variety of approaches, contribution types, and scholarly traditions in literature reviews. On the other hand, this variety comes with a challenge for authors and reviewers. Our results show that there is little shared understanding of what constitutes a valid contribution of a literature review in HCI. Currently, scholars in HCI face the challenge of communicating their specific literature review contributions in a way that is accessible and plausible for researchers with a variety of academic backgrounds. This challenge is also reflected in the discussions we had with colleagues who authored or reviewed literature reviews in the past. Some of them reflected on reviews they had assessed, stating that a literature review does not constitute a valid contribution for a scientific outlet like CHI. We hope that our study can spark a discussion about the value of literature reviews and support authors in communicating the contribution of their literature review clearly. In short, we hope that the literature review contribution types Empirical, Artefact, Methodological, Theoretical, and Opinion we derived, as well as the identified literature review topics, can serve as a basis for discussion and a step towards finding common ground regarding literature reviews in HCI. Further, when writing literature reviews, authors can use our categories, their definitions, and examples to unambiguously position their work. Here it should be noted that, based on our analysis, authors should not add structure to their reviews for the sake of adding structure.
In other words, more structure in a literature review process is not necessarily better. Instead, we invite authors to introduce as much structure as needed in their review process, in line with their intended contribution type (e.g. a persuasive opinion contribution may not require the same degree of structure as a methodological contribution providing a longitudinal overview).

5.3 Reviews of Reviews Across Disciplines

Other fields, within computing and beyond, have already aimed to build an understanding of how their research communities utilise and conduct literature reviews. For example, in the field of Software Engineering, Kitchenham et al. [95] conducted a systematic literature review of systematic literature reviews in their field. In another study, MacDonell et al. [116] explored the reliability of systematic literature reviews in empirical Software Engineering. Here, we juxtapose our findings with some of the work conducted in other fields that also constitutes "reviews of reviews".
Cooper [43] presented a taxonomy of literature reviews in Education and Psychology, noting that, with a steadily growing field and the accompanying growth of its body of knowledge, interest in (and publication of) literature reviews is increasing. Our analysis revealed a similar phenomenon in HCI, with a steadily increasing number of literature reviews published per year (see Figure 3). Cooper's goal is similar to ours: analysing and consolidating different approaches to conducting literature reviews. However, Cooper focuses on literature reviews in Psychology and Education, whereas we focused on literature reviews in HCI. Furthermore, our aim, to contribute a literature review of literature reviews in HCI that guides future authors, reviewers, and readers in situating a specific review and justifying the approach chosen to conduct it, is congruent with the goal stated by Cooper.
In the field of Information Systems, Templier and Paré [185] distinguished between four broad categories of review papers, based on a review's input, process, and output: narrative, developmental, cumulative, and aggregative reviews. Relating this to our own analysis, their process criteria relate to our identification of the methods that literature reviews in our corpus employed (e.g. the databases that were used), while some of their input and output criteria (e.g. the product of the review) relate to our identification of review contribution types. Unlike Templier and Paré [185], however, we did not categorise papers based on the methods they applied, but instead identified those methods with the goal of understanding how different methods are applied across different review contribution types. Additionally, our categorisation of literature reviews spanned two axes: contribution type and review topic. The first resulted from an analysis of what a paper reviewed, while the second was based on an analysis of the topics covered. Nevertheless, future work in our field could analyse HCI literature reviews based on Templier and Paré's [185] categorisation of a review's input, process, and output, using our review as a guide. For instance, the use of a PRISMA statement would be part of the process, while the databases that were searched for the review would be part of the review's input.
Exploring reviews conducted in Engineering Education, Borrego et al. [26] conducted a systematic review of systematic review articles published on that topic. Their goals also included lowering the barrier to accessing the literature and enabling more objective critique of past efforts. Similarly, we aim to construct a shared language for conducting literature reviews in the HCI field. Among the reporting standards they examined, we find similarities with our approach: while we analysed which reviews reported the databases they used to find their papers, they explored review papers' "finding and cataloguing of sources", which included sources that are not necessarily relevant when conducting literature reviews in HCI, such as evaluation reports that are not published online. None of the 189 papers in our corpus drew on sources from offline libraries or repositories, whereas these seem to be a valuable resource for the field of Engineering Education. This further underlines the importance of analysing and conceptualising literature reviews in different fields.
Aguinis et al. [5] analysed and categorised methodological literature reviews published in management and applied psychology journals. Their categories were: critical review, descriptive review, meta-analytic review, narrative review, qualitative systematic review, scoping review, and umbrella review. Juxtaposing our findings in the context of HCI literature reviews with theirs, we can, for instance, draw parallels between their category of "critical" reviews and our identified review contribution type of "opinion" contributions. Additionally, similar to their finding that the majority of published reviews belong to three categories (critical, narrative, and descriptive reviews), our findings demonstrated that in the field of HCI, the majority of literature reviews are categorised as empirical, artefact, or methodological contributions.
The reviews of reviews described above, which seek to understand fields other than HCI, do not constitute an exhaustive list. Nevertheless, they showcase the need for consolidated knowledge about literature reviews in the various disciplines, underlining the need for the same within the HCI field. While one could argue that we could have applied different lenses to analyse our corpus, for instance, analysing papers based on their input, process, and outcome (similar to Templier and Paré [185]), our derived contribution types of literature reviews demonstrated their suitability in the field of HCI, as they closely follow Wobbrock et al.'s [207] types of contribution, which are applicable to HCI research.
In any case, one can argue that the similarities and differences found between literature reviews in HCI and other fields are "natural", in that they point to and underline the multidisciplinary nature of the HCI field. To elaborate, as HCI intersects with, e.g., the field of Psychology, similarities in the contribution types or structure of literature reviews in each of those fields are to be expected, as showcased, for instance, by Aguinis et al.'s [5] identification of critical reviews, which could be aligned with our identified opinion contributions. On the other hand, HCI constitutes a field in its own right, and the differences found in literature reviews from other fields, even fields that intersect with HCI, e.g. Engineering, highlight exactly that. For example, Borrego et al.'s [26] finding and cataloguing of sources, which included offline evaluation reports, is not relevant for literature reviews conducted in the HCI field. We argue that these similarities and differences not only highlight the multidisciplinary nature of the HCI field, but also underline the need for a shared understanding of HCI literature reviews, which this paper aims to offer.

5.4 The Categories Provide a Shared Language to both the Writers and the Audience of Literature Reviews

One key takeaway of our analysis is that there is no single valid way of conducting a literature review in HCI. The five literature review contribution types we derived show that the spectrum of literature reviews in HCI ranges from comparatively tangible contributions (e.g. empirical contributions) to high-level critical explorations (e.g. opinion contributions). Some papers applied a formal, structured approach (e.g. using a PRISMA diagram to describe their review process), whereas other reviews opted for a more informal, exploratory approach. However, it should be noted that we did not analyse our identified review contribution types over time, although we did analyse the use of reporting standards (PRISMA and IRR) over the years. Future work should determine if and how the requirements for conducting an HCI literature review change over time. Our results emphasise the multifaceted nature of literature review contributions in the HCI community. Researchers and reviewers alike should consider the diversity of the community when conducting and assessing literature review contributions. Our review contribution types can provide a shared language when conducting, assessing, and discussing literature reviews.
Our findings might not necessarily be surprising, at least to authors who have written literature reviews or read them during their work. However, by contributing a needed consolidation of knowledge on literature reviews in HCI, we present researchers with a starting point for shared understanding through this shared language, contributed through our findings along with the online paper library and the design document with reflective questions. As discussed in the previous subsection, our analysis illustrates that the literature review landscape in HCI is more diverse than in other fields. Therefore, there is a need for a structured understanding, as contributed by our paper, to support authors of future literature reviews in clarifying how they situate their review and in clearly communicating their contribution (e.g. clarifying their use or non-use of specific reporting standards within their reviews). It should be noted that, while providing a shared structured language for aspects of literature reviews in HCI, our goal is to provide future authors with a starting point of consolidated knowledge rather than to claim that one specific "way" is optimal. This latter point is reflected in the impact of literature reviews that did not report IRR but are nevertheless highly cited, such as one literature review published at CHI [153]. This further underlines the diversity of methodologies and standards used in the field.
We observe that the topics we created in our analysis of the corpus concern different levels of subject abstraction. Some research questions within HCI were grouped together in larger topics, e.g. specific modality, while other topics were more specific, e.g. AutoUI. This is partly the result of our chosen method of building groups with a minimum size of five papers. However, this diversity also reflects the fact that some communities within HCI may be more prone to meta-work. This, in turn, can be caused by a greater need to systematise knowledge or by the high cost of empirical studies. It could also be that some areas within HCI involve researchers with a greater diversity of backgrounds and thus require more frequent clarification of terminology and/or the state of the art. This is not to say that some sub-communities are more effective at taking stock of existing knowledge. Rather, these differences illustrate the richness of the HCI field and the diverse academic traditions that contribute to its development.
We therefore suggest using our findings to situate future literature reviews, with the identified review topics and review contribution types as possible guidelines for this endeavour. Researchers can use the identified review contribution types to decide on the contribution type of their literature review and then see whether they can connect their work with one of the review topics or whether it goes in a different direction (both approaches constitute a worthwhile endeavour). This decision can, and often should, be revisited throughout the process. This first step merely serves as a starting point for situating the literature review in previous work and supporting scholars in deciding on the next steps in their literature review process. It can then help to build a clear understanding of what the literature review is about, to formulate research aims, and to decide on the research process for the literature review.

5.5 Literature Reviews in HCI Can Have Varying Degrees of Rigour

We encourage researchers to reflect on the degree of structure they want their literature review to have. This can range from less structured approaches driven by curiosity and exploration to rigorously structured approaches. This reflection process can help answer questions such as: Why was the search procedure done in this way? How should the corpus be analysed? In addition, the process may serve as a basis for discussion to foster a shared understanding among a group of authors working together on one literature review. We argue that it is necessary to reflect on the contribution before deciding on a methodological approach. After this initial decision, scholars can agree on an initial methodological approach. The emphasis here lies on initial, since we recognise that a literature review can evolve or change over time. Here, our results point to a set of initial strategies that can be used to conduct structured literature reviews, for instance, PRISMA statements or inter-rater reliability. The fragmented use of such approaches could point towards a need for different ways to support structured literature reviews in HCI. Our analysis shows that more and less rigorous review methods are suitable for different kinds of contributions, and there are no inherently incorrect choices of review method. Nevertheless, the observed upwards trend in reporting IRR or PRISMA statements (see Figures 6 and 7) should be considered here. In particular, looking at the last three years (2020, 2021, and 2022), even though the increase in reporting IRR is modest (only 16% of literature reviews), 40% of literature reviews published in those three years reported PRISMA, up by 30 percentage points from the previous decade. This could indicate the adoption of these two reporting standards by the HCI community, but more data over the following decade should be collected before conclusions can be drawn with certainty.
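To make the IRR reporting standard concrete, inter-rater reliability is often quantified with Cohen's kappa, which corrects the raw agreement between two coders for the agreement expected by chance. The sketch below is purely illustrative; the coder labels are hypothetical and do not come from our corpus.

```python
# Minimal sketch of Cohen's kappa for two coders categorising papers
# by contribution type. All labels below are hypothetical.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two raters coding the same items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both raters coded identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Chance agreement: probability both raters independently pick the same category.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n)
                   for c in set(labels_a) | set(labels_b))
    return (observed - expected) / (1 - expected)

# Hypothetical codings of eight papers into contribution types.
coder1 = ["Empirical", "Empirical", "Artefact", "Opinion",
          "Theoretical", "Empirical", "Methodological", "Artefact"]
coder2 = ["Empirical", "Artefact", "Artefact", "Opinion",
          "Theoretical", "Empirical", "Methodological", "Opinion"]
print(round(cohens_kappa(coder1, coder2), 2))  # 0.68
```

Note that kappa is only one of several IRR measures; consensus-based coding, as used in this review and much qualitative HCI work, forgoes such a statistic in favour of resolving disagreements through discussion.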

5.6 Literature Reviews in HCI can Benefit from Explicit Statements of Method and Contribution

The last step in a literature review is for researchers to communicate where they locate their work on the spectrum illustrated by our analysis. In short, researchers should clearly state in their reviews which literature review approach they chose and where they situate their work. This can support authors in choosing appropriate literature review methods, and reviewers in assessing the work appropriately and understanding it in the way the authors intended. In addition, this process has the potential to support interested readers in understanding the contribution of the literature review and in making an informed decision about whether to engage with the study in more depth.
However, we note that this does not mean that HCI literature reviews are always required to fit into one of the review contribution types we derived. Instead, we argue that our results can support authors, reviewers, and readers in understanding where a specific study can be situated. We believe that our results can also be useful when authors want to make the point that their work does not fit within the review contribution types: they can help identify where such work is interestingly different and support generating structured and meaningful knowledge for the community.

5.7 Limitations & Future Work

Several notions of what constitutes a literature review contribution in HCI exist; this is both the main motivation for this study and its main limitation. We recognise that we opted for a specific approach to conducting this literature review of literature reviews and acknowledge that there are several other ways we could have achieved the same goal. For instance, we did not calculate IRR. Instead, we applied a rigorous consensus-based approach to ensure the reliability of our analysis. While this approach is in line with the majority of qualitative research in HCI [121] and with many of the reviews in our corpus, we recognise that there are other ways of ensuring reliability in qualitative data analysis. Further, we decided to review all literature reviews published at TOCHI and SIGCHI conferences. Instead, we could have focused on the most highly cited CHI literature reviews to explore the potential impact of such papers in more depth. However, the decision to include all literature reviews was based on our belief that, as a first step, we should attempt a holistic analysis of HCI literature reviews. Future work could use our framework to generate an understanding of literature reviews at CHI in a more focused manner, e.g. by exploring whether one specific contribution type of literature reviews has the most impact over time.
This article is based on the analysis of a large corpus of papers, from which we derived a framework of five literature review contribution types in HCI. However, this is not the only valid way to analyse literature reviews in HCI. An alternative approach could explore the bibliometric properties of review papers in HCI and study their temporal dynamics. Nevertheless, our work contributes a system of categories for literature reviews, and our method allowed us not only to construct these categories but also to provide rich descriptions of them. We recognise that there is a further need for structure in understanding literature reviews in HCI, which may require alternative methods.
We would also like to point out that there were some types of data that we did not code for, but that could be considered for future research. For instance, future studies could look at the initial number of identified records versus the final number of included papers for different types of literature review contributions. This could also be explored in conjunction with exclusion/inclusion criteria employed by literature reviews in our domain.

6 Conclusion

In this work, we analysed literature reviews published at SIGCHI conferences and TOCHI. In a structured review process, we identified a final corpus of 189 HCI papers. Based on our analysis, we constructed five categories that describe possible contribution types of literature reviews in HCI: Empirical, Artefact, Methodological, Theoretical, and Opinion. Additionally, we identified the following review topics for literature reviews in HCI: User Experience & Design, HCI Research, Interaction Design and Children, AI & ML, Games & Play, Work & Creativity, Accessibility, Well-being & Health, Human-Robot Interaction, AutoUI, Specific Application Area, and Specific Modality. Our results reflect the variety of scholarly traditions within the HCI community. To support conducting literature reviews in HCI, we provided recommendations for future reviews based on our results and created an online paper library for easy and efficient navigation, filtering, and addition of literature review papers in HCI. Furthermore, we provide an HCI literature review design document to support future authors of literature reviews. We discuss how a literature review can play different roles in HCI discourse and use a variety of methodologies. We hope that our work can serve as a driver towards a shared understanding of literature review contributions in HCI, inspire fruitful academic discourse, and unpack the knowledge the HCI community has already generated about literature reviews, so that it is not lost in the abundance of information.

Acknowledgments

This research is funded by the German Research Foundation (DFG) under Germany’s Excellence Strategy (EXC 2077, University of Bremen) and by the Swedish Research Council 2022-03196.

A Full Corpus (Indicating Contribution Type)

Table 2: Literature Review Contribution Type
Paper | Empirical | Artefact | Methodological | Theoretical | Opinion
Quinn and Benderson, 2011 [153]    
Serim and Jacucci, 2019 [173]    
Caine, 2016 [34]    
Weisband and Kiesler, 1996 [204]    
Curtis, 1982 [47]    
Grossman et al., 2009 [69]    
Froehlich et al., 2010 [64]    
Bargas-Avila and Hornbaek, 2011 [15]    
Pierce and Paulos, 2012 [144]    
Rasmussen et al., 2012 [154]    
Mekler et al., 2014 [122]    
Dell and Kumar, 2016 [49]   
Chen et al., 2017 [39]    
Schlesinger et al., 2017[168]    
Mäkelä et al., 2017 [119]    
Grosse-Puppendahl et al., 2017 [68]    
Velt et al., 2017 [197]    
Stowell et al., 2018 [180]    
Yang et al., 2018 [210]    
Schneider et al., 2018 [170]    
Roohi et al., 2018 [160]    
Qamar et al., 2018 [152]    
Pettersson et al., 2018 [142]    
Sanches et al., 2019 [165]    
Baytas et al., 2019 [18]    
Frich et al., 2019 [62]    
Pohl et al., 2019 [147]    
Terzimehić et al., 2019 [186]    
Caraban et al., 2019 [35]    
Speicher et al., 2019 [176]   
Brudy et al., 2019 [30]    
Wang et al., 2019 [202]    
Altarriba et al., 2019 [9]    
Hassenzahl et al., 2012 [74]    
Hornbaek and Hertzum, 2017 [81]    
Kharrufa et al., 2017 [94]    
Nunes et al., 2015 [131]    
Kono et al., 2018 [98]    
Abowd and Mynatt, 2000 [3]    
Kannabiran et al., 2011 [89]    
Saxena et al., 2020 [166]    
Salminen et al., 2020 [163]    
Baykal et al., 2020 [17]   
Parker et al., 2020 [139]    
Prpa et al., 2020 [151]    
Brule et al., 2020 [31]    
Tyack and Mekler, 2020 [190]    
Katsini et al., 2020 [90]    
Beneteau, 2020 [20]    
Thieme et al., 2020 [187]    
Alavi et al., 2020 [7]    
Keyes et al., 2020 [92]    
Maggioni et al., 2020 [118]   
Bopp and Voida, 2020 [24]    
Hansson et al., 2021 [71]    
Steiger et al., 2021 [179]    
Zoshak and Dew, 2021 [216]    
Bergström et al., 2021 [21]    
Butler et al., 2021 [33]    
Bruns et al., 2021 [32]    
Iivari et al., 2021 [83]    
Hirzle et al., 2021 [79]    
Babaei et al., 2021 [14]    
Offenwanger et al., 2021 [133]    
Mack et al., 2021 [117]   
Esterwood et al., 2021 [59]    
Mathur et al., 2021 [120]    
Osmers et al., 2021 [134]    
Turmo Vidal et al., 2021 [188]    
MacArthur et al., 2021 [114]    
Pater et al., 2021 [140]    
Zhou et al., 2021 [213]   
Preist et al., 2014 [150]    
Epstein et al., 2015 [57]    
Lee and Paine, 2015 [103]    
Anya, 2015 [11]    
Wong and Jackson, 2015 [208]    
Lopez and Guerrero, 2017 [111]    
Reeves et al., 2017 [156]    
Pierce, 2014 [143]    
Huang and Stolterman, 2014 [82]    
Diefenbach et al., 2014 [50]    
Pan and Blevis, 2014 [138]   
Baumer et al., 2016 [16]    
Silpasuwanchai et al., 2016 [174]    
Zimmerman et al., 2016 [214]    
Beck and Stolterman, 2017 [19]    
Mose Biskjaer et al., 2017 [126]    
Çorlu et al., 2017 [46]    
Frich et al., 2018 [63]    
Lefeuvre et al., 2018 [105]    
Talkad Sukumar and Metoyer, 2019 [184]    
MacDonald, 2019 [115]   
Boem and Troiano, 2019 [23]    
Remy et al., 2020 [157]    
Villarreal-Narvaez et al., 2020 [199]    
Aarts et al., 2020 [2]    
Arzate Cruz and Igarashi, 2020 [13]    
Falk Olesen and Halskov, 2020 [60]    
Chung et al., 2021 [40]    
Han et al., 2021 [70]    
Gatos et al., 2021 [66]    
Adams et al., 2015 [4]    
Turner et al., 2015 [189]    
Posch et al., 2019 [148]    
Velloso and Carter, 2016 [196]    
Alavesa et al., 2017 [6]    
Law et al., 2018 [102]    
Scheepmaker et al., 2018 [167]    
Allison et al., 2018 [8]    
Harpstead et al., 2019 [72]    
Altarriba Bertran et al., 2019 [10]    
Robinson et al., 2020 [158]    
Vornhagen et al., 2020 [201]    
Kjeldskov and Paay, 2012 [96]    
Kjeldskov and Skov, 2014 [97]    
Khamis et al., 2018 [93]    
Völkel et al., 2020 [200]    
Eiband et al., 2021 [54]    
Sun et al., 2020 [182]    
Lalanne et al., 2009 [100]    
D’Mello and Kory, 2012 [51]    
Jung, 2017 [87]    
Rea et al., 2017 [155]    
Kalegina et al., 2018 [88]    
Schrum et al., 2020 [171]    
Naujoks et al., 2017 [128]    
Inners and Kun, 2017 [84]    
Forster et al., 2018 [61]    
Wiegand et al., 2019 [206]    
Elias et al., 2019 [55]    
Moore et al., 2019 [125]    
Schmitz, 2010 [169]    
Lyons et al., 2012 [112]    
Döring et al., 2013 [53]    
Nowacka and Kirk, 2014 [130]    
Nabil et al., 2017 [127]    
Hayes and Hogan, 2020 [75]    
Saeghe et al., 2020 [162]    
Vatavu et al., 2020 [195]   
Verginadis et al., 2010 [198]    
Pace et al., 2010 [136]  
Börjesson et al., 2015 [25]   
Pinter et al., 2017 [145]   
Høiseth and Van Mechelen, 2017 [80]    
Yu and Roque, 2018 [211]    
Ma et al., 2019 [113]   
Soni et al., 2019 [175]    
Van Mechelen et al., 2020 [192]   
Suh et al., 2020 [181]    
Kawas et al., 2020 [91]    
Van Mechelen et al., 2021 [193]    
Brown et al., 2021 [29]    
Spiel and Gerling, 2021 [177]    
Linxen et al., 2021 [109]    
Sabie et al., 2022 [161]    
Xia et al., 2022 [209]    
Zheng et al., 2022 [212]    
Rogers et al., 2022 [159]    
Herdel et al., 2022 [77]    
Jones et al., 2022 [86]    
Moge et al., 2022 [123]    
Salminen et al., 2022 [164]    
Harrington et al., 2022 [73]    
Nittala et al., 2022 [129]    
Suzuki et al., 2022 [183]    
Cooper et al., 2022 [45]   
Bremer et al., 2022 [28]   
Eriksson et al., 2022 [58]  
Stefanidi et al., 2022 [178]  
Liang et al., 2021 [107]    
Vatavu, 2021 [194]    
Li et al., 2022 [106]    
Dönmez Özkan et al., 2021 [52]    
Dam and Jeon, 2021 [48]   
Seaborn and Pennefather, 2022 [172]    
Zimmerman et al., 2022 [215]    
Heller et al., 2021 [76]    
Lee et al., 2021 [104]   
Johansen et al., 2022 [85]    
Froehlich et al., 2022 [65]    
Nunes Vilaza et al., 2022 [132]    
Clashing et al., 2022 [42]    
Chang et al., 2022 [37]    
Pouta and Mikkonen, 2022 [149]    
Hirzle et al., 2020 [78]   
Charfi et al., 2009 [38]    
Bouzit et al., 2016 [27]    
Cirelli and Nakamura, 2014 [41]    
Table 2: Full corpus indicating contribution type.

Footnotes

3
See appendix for the full corpus (sorted per contribution type)

Supplementary Material

Supplemental Materials (3544548.3581332-supplemental-materials.zip)
MP4 File (3544548.3581332-talk-video.mp4)
Pre-recorded Video Presentation
MP4 File (3544548.3581332-video-preview.mp4)
Video Preview

References

[1]
2020. Types of Literature Review - Research-Methodology. Retrieved September 14, 2020 from https://research-methodology.net/research-methodology/types-literature-review/
[2]
Tessa Aarts, Linas K Gabrielaitis, Lianne C De Jong, Renee Noortman, Emma M Van Zoelen, Sophia Kotea, Silvia Cazacu, Lesley L Lock, and Panos Markopoulos. 2020. Design card sets: Systematic literature survey and card sorting study. In Proceedings of the 2020 ACM Designing Interactive Systems Conference. 419–428.
[3]
Gregory D Abowd and Elizabeth D Mynatt. 2000. Charting past, present, and future research in ubiquitous computing. ACM Transactions on Computer-Human Interaction (TOCHI) 7, 1(2000), 29–58.
[4]
Alexander T Adams, Jean Costa, Malte F Jung, and Tanzeem Choudhury. 2015. Mindless computing: designing technologies to subtly influence behavior. In Proceedings of the 2015 ACM international joint conference on pervasive and ubiquitous computing. 719–730.
[5]
Herman Aguinis, Ravi S Ramani, and Nawaf Alabduljader. 2020. Best-practice recommendations for producers, evaluators, and users of methodological literature reviews. Organizational Research Methods (2020), 1094428120943281.
[6]
Paula Alavesa, Minna Pakanen, Hannu Kukka, Matti Pouke, and Timo Ojala. 2017. Anarchy or order on the streets: Review based characterization of location based mobile games. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play. 101–113.
[7]
Hamed S Alavi, Denis Lalanne, and Yvonne Rogers. 2020. The Five Strands of Living Lab: A Literature Study of the Evolution of Living Lab Concepts in HCI. ACM Transactions on Computer-Human Interaction (TOCHI) 27, 2(2020), 1–26.
[8]
Fraser Allison, Marcus Carter, Martin Gibbs, and Wally Smith. 2018. Design patterns for voice interaction in games. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. 5–17.
[9]
Ferran Altarriba Bertran, Samvid Jhaveri, Rosa Lutz, Katherine Isbister, and Danielle Wilde. 2019. Making sense of human-food interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–13.
[10]
Ferran Altarriba Bertran, Danielle Wilde, Ernő Berezvay, and Katherine Isbister. 2019. Playful human-food interaction research: State of the art and future directions. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play. 225–237.
[11]
Obinna Anya. 2015. Bridge the gap! What can work design in crowdwork learn from work design theories?. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. 612–627.
[12]
Hilary Arksey and Lisa O’Malley. 2005. Scoping studies: towards a methodological framework. International journal of social research methodology 8, 1(2005), 19–32.
[13]
Christian Arzate Cruz and Takeo Igarashi. 2020. A survey on interactive reinforcement learning: Design principles and open challenges. In Proceedings of the 2020 ACM Designing Interactive Systems Conference. 1195–1209.
[14]
Ebrahim Babaei, Benjamin Tag, Tilman Dingler, and Eduardo Velloso. 2021. A Critique of Electrodermal Activity Practices at CHI. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
[15]
Javier A Bargas-Avila and Kasper Hornbæk. 2011. Old wine in new bottles or novel challenges: a critical analysis of empirical studies of user experience. In Proceedings of the SIGCHI conference on human factors in computing systems. 2689–2698.
[16]
Eric PS Baumer, Vera Khovanskaya, Mark Matthews, Lindsay Reynolds, Victoria Schwanda Sosik, and Geri Gay. 2014. Reviewing reflection: on the use of reflection in interactive system design. In Proceedings of the 2014 conference on Designing interactive systems. 93–102.
[17]
Gökçe Elif Baykal, Maarten Van Mechelen, and Eva Eriksson. 2020. Collaborative Technologies for Children with Special Needs: A Systematic Literature Review. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[18]
Mehmet Aydin Baytas, Damla Çay, Yuchong Zhang, Mohammad Obaid, Asim Evren Yantaç, and Morten Fjeld. 2019. The design of social drones: A review of studies on autonomous flyers in inhabited environments. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–13.
[19]
Jordan Beck and Erik Stolterman. 2017. Reviewing the big questions literature; or, should HCI have big questions?. In Proceedings of the 2017 Conference on Designing Interactive Systems. 969–981.
[20]
Erin Beneteau. 2020. Who Are You Asking?: Qualitative Methods for Involving AAC Users as Primary Research Participants. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–13.
[21]
Joanna Bergström, Tor-Salve Dalsgaard, Jason Alexander, and Kasper Hornbæk. 2021. How to Evaluate Object Selection and Manipulation in VR? Guidelines from 20 Years of Studies. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–20.
[22]
Ann Blandford, Dominic Furniss, and Stephann Makri. 2016. Qualitative HCI research: Going behind the scenes. Synthesis lectures on human-centered informatics 9, 1 (2016), 1–115.
[23]
Alberto Boem and Giovanni Maria Troiano. 2019. Non-Rigid HCI: A review of deformable interfaces and input. In Proceedings of the 2019 on Designing Interactive Systems Conference. 885–906.
[24]
Chris Bopp and Amy Voida. 2020. Voices of the social sector: a systematic review of stakeholder voice in HCI research with nonprofit organizations. ACM Transactions on Computer-Human Interaction (TOCHI) 27, 2 (2020), 1–26.
[25]
Peter Börjesson, Wolmet Barendregt, Eva Eriksson, and Olof Torgersson. 2015. Designing technology for and with developmentally diverse children: a systematic literature review. In Proceedings of the 14th international conference on interaction design and children. 79–88.
[26]
Maura Borrego, Margaret J Foster, and Jeffrey E Froyd. 2014. Systematic literature reviews in engineering education and other developing interdisciplinary fields. Journal of Engineering Education 103, 1 (2014), 45–76.
[27]
Sara Bouzit, Gaëlle Calvary, Denis Chêne, and Jean Vanderdonckt. 2016. A design space for engineering graphical adaptive menus. In Proceedings of the 8th ACM SIGCHI Symposium on Engineering Interactive Computing Systems. 239–244.
[28]
Christina Bremer, Bran Knowles, and Adrian Friday. 2022. Have We Taken On Too Much?: A Critical Review of the Sustainable HCI Landscape. In CHI Conference on Human Factors in Computing Systems. 1–11.
[29]
Sarah Anne Brown, Sharon Lynn Chu, and Pengfei Yin. 2021. A Survey of Interface Representations in Visual Programming Language Environments for Children’s Physical Computing Kits. In Interaction Design and Children. 268–275.
[30]
Frederik Brudy, Christian Holz, Roman Rädle, Chi-Jui Wu, Steven Houben, Clemens Nylandsted Klokmose, and Nicolai Marquardt. 2019. Cross-device taxonomy: Survey, opportunities and challenges of interactions spanning across multiple devices. In Proceedings of the 2019 CHI conference on human factors in computing systems. 1–28.
[31]
Emeline Brulé, Brianna J Tomlinson, Oussama Metatla, Christophe Jouffrais, and Marcos Serrano. 2020. Review of Quantitative Empirical Evaluations of Technology for People with Visual Impairments. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
[32]
Miguel Bruns, Stijn Ossevoort, and Marianne Graves Petersen. 2021. Expressivity in Interaction: a Framework for Design. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–13.
[33]
Matthew Butler, Leona M Holloway, Samuel Reinders, Cagatay Goncu, and Kim Marriott. 2021. Technology Developments in Touch-Based Accessible Graphics: A Systematic Review of Research 2010-2020. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15.
[34]
Kelly Caine. 2016. Local standards for sample size at CHI. In Proceedings of the 2016 CHI conference on human factors in computing systems. 981–992.
[35]
Ana Caraban, Evangelos Karapanos, Daniel Gonçalves, and Pedro Campos. 2019. 23 ways to nudge: A review of technology-mediated nudging in human-computer interaction. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–15.
[36]
Jeanne Sternlicht Chall and Edgar Dale. 1995. Readability revisited: The new Dale-Chall readability formula. Brookline Books.
[37]
Michelle Chang, Chenyi Shen, Aditi Maheshwari, Andreea Danielescu, and Lining Yao. 2022. Patterns and Opportunities for the Design of Human-Plant Interaction. In Designing Interactive Systems Conference. 925–948.
[38]
Syrine Charfi, Emmanuel Dubois, and Dominique L Scapin. 2009. Usability recommendations in the design of mixed interactive systems. In Proceedings of the 1st ACM SIGCHI symposium on Engineering interactive computing systems. 231–236.
[39]
Ko-Le Chen, Rachel Clarke, Teresa Almeida, Matthew Wood, and David S Kirk. 2017. Situated dissemination through an HCI workplace. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 2078–2090.
[40]
John Joon Young Chung, Shiqing He, and Eytan Adar. 2021. The Intersection of Users, Roles, Interactions, and Technologies in Creativity Support Tools. In Designing Interactive Systems Conference 2021. 1817–1833.
[41]
Mauricio Cirelli and Ricardo Nakamura. 2014. A survey on multi-touch gesture recognition and multi-touch frameworks. In Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces. 35–44.
[42]
Christal Clashing, Ian Smith, Maria F Montoya, Rakesh Patibanda, Swamy Ananthanarayan, Sarah Jane Pell, and Florian Floyd Mueller. 2022. Going into Depth: Learning from a Survey of Interactive Designs for Aquatic Recreation. In Designing Interactive Systems Conference. 1119–1132.
[43]
Harris M Cooper. 1988. Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in society 1, 1 (1988), 104–126.
[44]
Harris M Cooper. 1998. Synthesizing research: A guide for literature reviews. Vol. 2. Sage.
[45]
Ned Cooper, Tiffanie Horne, Gillian R Hayes, Courtney Heldreth, Michal Lahav, Jess Holbrook, and Lauren Wilcox. 2022. A Systematic Review and Thematic Analysis of Community-Collaborative Approaches to Computing Research. In CHI Conference on Human Factors in Computing Systems. 1–18.
[46]
Doğa Çorlu, Şeyma Taşel, Semra Gülce Turan, Athanasios Gatos, and Asim Evren Yantaç. 2017. Involving autistics in user experience studies: A critical review. In Proceedings of the 2017 Conference on Designing Interactive Systems. 43–55.
[47]
Bill Curtis. 1982. A review of human factors research on programming languages and specifications. In Proceedings of the 1982 Conference on Human Factors in Computing Systems. 212–218.
[48]
Abhraneil Dam and Myounghoon Jeon. 2021. A Review of Motion Sickness in Automated Vehicles. In 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 39–48.
[49]
Nicola Dell and Neha Kumar. 2016. The ins and outs of HCI for development. In Proceedings of the 2016 CHI conference on human factors in computing systems. 2220–2232.
[50]
Sarah Diefenbach, Nina Kolb, and Marc Hassenzahl. 2014. The ‘hedonic’ in human-computer interaction: history, contributions, and future research directions. In Proceedings of the 2014 conference on Designing interactive systems. 305–314.
[51]
Sidney D’Mello and Jacqueline Kory. 2012. Consistent but modest: a meta-analysis on unimodal and multimodal affect detection accuracies from 30 studies. In Proceedings of the 14th ACM international conference on Multimodal interaction. 31–38.
[52]
Yasemin Dönmez Özkan, Alexander G Mirnig, Alexander Meschtscherjakov, Cansu Demir, and Manfred Tscheligi. 2021. Mode awareness interfaces in automated vehicles, robotics, and aviation: A literature review. In 13th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 147–158.
[53]
Tanja Döring, Axel Sylvester, and Albrecht Schmidt. 2013. A design space for ephemeral user interfaces. In Proceedings of the 7th International Conference on Tangible, Embedded and Embodied Interaction. 75–82.
[54]
Malin Eiband, Daniel Buschek, and Heinrich Hussmann. 2021. How to support users in understanding intelligent systems? Structuring the discussion. In 26th International Conference on Intelligent User Interfaces. 120–132.
[55]
Sardar Elias, Moojan Ghafurian, and Siby Samuel. 2019. Effectiveness of Red-Light Running Countermeasures: A Systematic Review. In Proceedings of the 11th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 91–100.
[56]
Daniel A Epstein, Clara Caldeira, Mayara Costa Figueiredo, Xi Lu, Lucas M Silva, Lucretia Williams, Jong Ho Lee, Qingyang Li, Simran Ahuja, Qiuer Chen, et al. 2020. Mapping and taking stock of the personal informatics literature. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 4, 4 (2020), 1–38.
[57]
Daniel A Epstein, Bradley H Jacobson, Elizabeth Bales, David W McDonald, and Sean A Munson. 2015. From “nobody cares” to “way to go!”: A Design Framework for Social Sharing in Personal Informatics. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. 1622–1636.
[58]
Eva Eriksson, Gökçe Elif Baykal, and Olof Torgersson. 2022. The Role of Learning Theory in Child-Computer Interaction-A Semi-Systematic Literature Review. In Interaction Design and Children. 50–68.
[59]
Connor Esterwood, Kyle Essenmacher, Han Yang, Fanpan Zeng, and Lionel Peter Robert. 2021. A Meta-Analysis of Human Personality and Robot Acceptance in Human-Robot Interaction. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–18.
[60]
Jeanette Falk Olesen and Kim Halskov. 2020. 10 years of research with and on hackathons. In Proceedings of the 2020 ACM designing interactive systems conference. 1073–1088.
[61]
Yannick Forster, Sebastian Hergeth, Frederik Naujoks, and Josef F Krems. 2018. How usability can save the day-methodological considerations for making automated driving a success story. In Proceedings of the 10th international conference on automotive user interfaces and interactive vehicular applications. 278–290.
[62]
Jonas Frich, Lindsay MacDonald Vermeulen, Christian Remy, Michael Mose Biskjaer, and Peter Dalsgaard. 2019. Mapping the landscape of creativity support tools in HCI. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–18.
[63]
Jonas Frich, Michael Mose Biskjaer, and Peter Dalsgaard. 2018. Twenty years of creativity research in human-computer interaction: Current state and future directions. In Proceedings of the 2018 Designing Interactive Systems Conference. 1235–1257.
[64]
Jon Froehlich, Leah Findlater, and James Landay. 2010. The design of eco-feedback technology. In Proceedings of the SIGCHI conference on human factors in computing systems. 1999–2008.
[65]
Michael Fröhlich, Franz Waltenberger, Ludwig Trotter, Florian Alt, and Albrecht Schmidt. 2022. Blockchain and Cryptocurrency in Human Computer Interaction: A Systematic Literature Review and Research Agenda. arXiv preprint arXiv:2204.10857 (2022).
[66]
Doğa Gatos, Aslı Günay, Güncel Kırlangıç, Kemal Kuscu, and Asim Evren Yantac. 2021. How HCI Bridges Health and Design in Online Health Communities: A Systematic Review. In Designing Interactive Systems Conference 2021. 970–983.
[67]
Maria J Grant and Andrew Booth. 2009. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Information & Libraries Journal 26, 2 (2009), 91–108.
[68]
Tobias Grosse-Puppendahl, Christian Holz, Gabe Cohn, Raphael Wimmer, Oskar Bechtold, Steve Hodges, Matthew S Reynolds, and Joshua R Smith. 2017. Finding common ground: A survey of capacitive sensing in human-computer interaction. In Proceedings of the 2017 CHI conference on human factors in computing systems. 3293–3315.
[69]
Tovi Grossman, George Fitzmaurice, and Ramtin Attar. 2009. A survey of software learnability: metrics, methodologies and guidelines. In Proceedings of the SIGCHI conference on human factors in computing systems. 649–658.
[70]
Feng Han, Yifei Cheng, Megan Strachan, and Xiaojuan Ma. 2021. Hybrid Paper-Digital Interfaces: A Systematic Literature Review. In Designing Interactive Systems Conference 2021. 1087–1100.
[71]
Lon Åke Erni Johannes Hansson, Teresa Cerratto Pargman, and Daniel Sapiens Pargman. 2021. A Decade of Sustainable HCI: Connecting SHCI to the Sustainable Development Goals. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–19.
[72]
Erik Harpstead, Juan Sebastian Rios, Joseph Seering, and Jessica Hammer. 2019. Toward a Twitch research toolkit: A systematic review of approaches to research on game Streaming. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play. 111–119.
[73]
Christina Harrington, Aqueasha Martin-Hammond, and Kirsten E Bray. 2022. Examining Identity as a Variable of Health Technology Research for Older Adults: A Systematic Review. In CHI Conference on Human Factors in Computing Systems. 1–24.
[74]
Marc Hassenzahl, Stephanie Heidecker, Kai Eckoldt, Sarah Diefenbach, and Uwe Hillmann. 2012. All you need is love: Current strategies of mediating intimate relationships through technology. ACM Transactions on Computer-Human Interaction (TOCHI) 19, 4 (2012), 30.
[75]
Sarah Hayes and Trevor Hogan. 2020. Towards a material landscape of TUIs, through the lens of the TEI proceedings 2008-2019. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction. 95–110.
[76]
Florian Heller, Kashyap Todi, and Kris Luyten. 2021. An Interactive Design Space for Wearable Displays. In Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction. 1–14.
[77]
Viviane Herdel, Lee J Yamin, and Jessica R Cauchard. 2022. Above and Beyond: A Scoping Review of Domains and Applications for Human-Drone Interaction. In CHI Conference on Human Factors in Computing Systems. 1–22.
[78]
Teresa Hirzle, Maurice Cordts, Enrico Rukzio, and Andreas Bulling. 2020. A survey of digital eye strain in gaze-based interactive systems. In ACM Symposium on Eye Tracking Research and Applications. 1–12.
[79]
Teresa Hirzle, Maurice Cordts, Enrico Rukzio, Jan Gugenheimer, and Andreas Bulling. 2021. A Critical Assessment of the Use of SSQ as a Measure of General Discomfort in VR Head-Mounted Displays. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
[80]
Marikken Høiseth and Maarten Van Mechelen. 2017. Identifying Patterns in IDC Research: Technologies for Improving Children’s Well-being Connected to Overweight Issues. In Proceedings of the 2017 Conference on Interaction Design and Children. 107–116.
[81]
Kasper Hornbæk and Morten Hertzum. 2017. Technology acceptance and user experience: A review of the experiential component in HCI. ACM Transactions on Computer-Human Interaction (TOCHI) 24, 5 (2017), 33.
[82]
Chung-Ching Huang and Erik Stolterman. 2014. Temporal anchors in user experience research. In Proceedings of the 2014 conference on Designing interactive systems. 271–274.
[83]
Netta Iivari, Leena Ventä-Olkkonen, Sumita Sharma, Tonja Molin-Juustila, and Essi Kinnunen. 2021. CHI Against Bullying: Taking Stock of the Past and Envisioning the Future. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–17.
[84]
Michael Inners and Andrew L Kun. 2017. Beyond liability: Legal issues of human-machine interaction for automated vehicles. In Proceedings of the 9th International Conference on Automotive User Interfaces and Interactive Vehicular Applications. 245–253.
[85]
Stine S Johansen, Niels van Berkel, and Jonas Fritsch. 2022. Characterising Soundscape Research in Human-Computer Interaction. In Designing Interactive Systems Conference. 1394–1417.
[86]
Michael D Jones, Meredith Von Feldt, and Natalie Andrus. 2022. Outside Where? A Survey of Climates and Built Environments in Studies of HCI outdoors. In CHI Conference on Human Factors in Computing Systems. 1–15.
[87]
Malte F Jung. 2017. Affective grounding in human-robot interaction. In 2017 12th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 263–273.
[88]
Alisa Kalegina, Grace Schroeder, Aidan Allchin, Keara Berlin, and Maya Cakmak. 2018. Characterizing the design space of rendered robot faces. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction. 96–104.
[89]
Gopinaath Kannabiran, Jeffrey Bardzell, and Shaowen Bardzell. 2011. How HCI Talks about Sexuality: Discursive Strategies, Blind Spots, and Opportunities for Future Research. Association for Computing Machinery, New York, NY, USA, 695–704. https://doi.org/10.1145/1978942.1979043
[90]
Christina Katsini, Yasmeen Abdrabou, George E Raptis, Mohamed Khamis, and Florian Alt. 2020. The Role of Eye Gaze in Security and Privacy Applications: Survey and Future HCI Research Directions. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–21.
[91]
Saba Kawas, Ye Yuan, Akeiylah DeWitt, Qiao Jin, Susanne Kirchner, Abigail Bilger, Ethan Grantham, Julie A Kientz, Andrea Tartaro, and Svetlana Yarosh. 2020. Another decade of IDC research: Examining and reflecting on values and ethics. In Proceedings of the Interaction Design and Children Conference. 205–215.
[92]
Os Keyes, Burren Peil, Rua M Williams, and Katta Spiel. 2020. Reimagining (women’s) health: HCI, gender and essentialised embodiment. ACM Transactions on Computer-Human Interaction (TOCHI) 27, 4 (2020), 1–42.
[93]
Mohamed Khamis, Florian Alt, and Andreas Bulling. 2018. The past, present, and future of gaze-enabled handheld mobile devices: Survey and lessons learned. In Proceedings of the 20th International Conference on Human-Computer Interaction with Mobile Devices and Services. 1–17.
[94]
Ahmed Kharrufa, Thomas Ploetz, and Patrick Olivier. 2017. A unified model for user identification on multi-touch surfaces: a survey and meta-analysis. ACM Transactions on Computer-Human Interaction (TOCHI) 24, 6 (2017), 1–39.
[95]
Barbara Kitchenham, O Pearl Brereton, David Budgen, Mark Turner, John Bailey, and Stephen Linkman. 2009. Systematic literature reviews in software engineering–a systematic literature review. Information and software technology 51, 1 (2009), 7–15.
[96]
Jesper Kjeldskov and Jeni Paay. 2012. A longitudinal review of Mobile HCI research methods. In Proceedings of the 14th international conference on Human-computer interaction with mobile devices and services. 69–78.
[97]
Jesper Kjeldskov and Mikael B Skov. 2014. Was it worth the hassle? Ten years of mobile HCI research discussions on lab and field evaluations. In Proceedings of the 16th international conference on Human-computer interaction with mobile devices & services. 43–52.
[98]
Michinari Kono, Takumi Takahashi, Hiromi Nakamura, Takashi Miyaki, and Jun Rekimoto. 2018. Design guideline for developing safe systems that apply electricity to the human body. ACM Transactions on Computer-Human Interaction (TOCHI) 25, 3 (2018), 19.
[99]
Matthias Kraus, Katrin Angerbauer, Juri Buchmüller, Daniel Schweitzer, Daniel A Keim, Michael Sedlmair, and Johannes Fuchs. 2020. Assessing 2d and 3d heatmaps for comparative analysis: An empirical study. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
[100]
Denis Lalanne, Laurence Nigay, Philippe Palanque, Peter Robinson, Jean Vanderdonckt, and Jean-François Ladry. 2009. Fusion engines for multimodal input: a survey. In Proceedings of the 2009 international conference on Multimodal interfaces. 153–160.
[101]
Larry Laudan. 1978. Progress and its problems: Towards a theory of scientific growth. Vol. 282. Univ of California Press.
[102]
Effie L-C Law, Florian Brühlmann, and Elisa D Mekler. 2018. Systematic review and validation of the game experience questionnaire (geq)-implications for citation and reporting practice. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. 257–270.
[103]
Charlotte P Lee and Drew Paine. 2015. From The Matrix to a Model of Coordinated Action (MoCA): A conceptual framework of and for CSCW. In Proceedings of the 18th ACM conference on computer supported cooperative work & social computing. 179–194.
[104]
Seol-Yee Lee, Md Tahmidul Islam Molla, and Cindy Hsin-Liu Kao. 2021. A 10-Year Review of the Methods and Purposes of On-Skin Interface Research in ACM SIGCHI. In 2021 International Symposium on Wearable Computers. 84–90.
[105]
Kevin Lefeuvre, Soeren Totzauer, Michael Storz, Albrecht Kurze, Andreas Bischof, and Arne Berger. 2018. Bricks, Blocks, Boxes, Cubes, and Dice: On the Role of Cubic Shapes for the Design of Tangible Interactive Devices. In Proceedings of the 2018 Designing Interactive Systems Conference. 485–496.
[106]
Yanhong Li, Meng Liang, Julian Preissing, Nadine Bachl, Michelle Melina Dutoit, Thomas Weber, Sven Mayer, and Heinrich Hussmann. 2022. A meta-analysis of tangible learning studies from the TEI conference. In Sixteenth International Conference on Tangible, Embedded, and Embodied Interaction. 1–17.
[107]
Meng Liang, Yanhong Li, Thomas Weber, and Heinrich Hussmann. 2021. Tangible Interaction for Children’s Creative Learning: A Review. In Creativity and Cognition. 1–14.
[108]
Western Libraries. 2020. Literature Reviews, Introduction to Different Types of. Retrieved September 14, 2020 from https://www.lib.uwo.ca/tutorials/typesofliteraturereviews/index.html
[109]
Sebastian Linxen, Christian Sturm, Florian Brühlmann, Vincent Cassau, Klaus Opwis, and Katharina Reinecke. 2021. How WEIRD is CHI?. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (Yokohama, Japan) (CHI ’21). Association for Computing Machinery, New York, NY, USA, Article 143, 14 pages. https://doi.org/10.1145/3411764.3445488
[110]
Yong Liu, Jorge Goncalves, Denzil Ferreira, Bei Xiao, Simo Hosio, and Vassilis Kostakos. 2014. CHI 1994-2013: mapping two decades of intellectual progress through co-word analysis. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 3553–3562.
[111]
Gustavo Lopez and Luis A Guerrero. 2017. Awareness supporting technologies used in collaborative systems: a systematic literature review. In Proceedings of the 2017 ACM conference on computer supported cooperative work and social computing. 808–820.
[112]
Leilah Lyons, Brian Slattery, Priscilla Jimenez, Brenda Lopez, and Tom Moher. 2012. Don’t forget about the sweat: effortful embodied interaction in support of learning. In Proceedings of the Sixth International Conference on Tangible, Embedded and Embodied Interaction. 77–84.
[113]
Yudan Ma, Annemiek Veldhuis, Tilde Bekker, Jun Hu, and Steven Vos. 2019. A Review of Design Interventions for Promoting Adolescents’ Physical Activity. In Proceedings of the 18th ACM International Conference on Interaction Design and Children. 161–172.
[114]
Cayley MacArthur, Arielle Grinberg, Daniel Harley, and Mark Hancock. 2021. You’re Making Me Sick: A Systematic Review of How Virtual Reality Research Considers Gender & Cybersickness. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15.
[115]
Craig M MacDonald. 2019. User Experience (UX) Capacity-Building: A Conceptual Model and Research Agenda. In Proceedings of the 2019 on Designing Interactive Systems Conference. 187–200.
[116]
Stephen MacDonell, Martin Shepperd, Barbara Kitchenham, and Emilia Mendes. 2010. How Reliable Are Systematic Reviews in Empirical Software Engineering? IEEE Trans. Softw. Eng. 36, 5 (Sept. 2010), 676–687. https://doi.org/10.1109/TSE.2010.28
[117]
Kelly Mack, Emma McDonnell, Dhruv Jain, Lucy Lu Wang, Jon E. Froehlich, and Leah Findlater. 2021. What Do We Mean by “Accessibility Research”? A Literature Survey of Accessibility Papers in CHI and ASSETS from 1994 to 2019. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–18.
[118]
Emanuela Maggioni, Robert Cobden, Dmitrijs Dmitrenko, Kasper Hornbæk, and Marianna Obrist. 2020. SMELL SPACE: mapping out the olfactory design space for novel interactions. ACM Transactions on Computer-Human Interaction (TOCHI) 27, 5 (2020), 1–26.
[119]
Ville Mäkelä, Sumita Sharma, Jaakko Hakulinen, Tomi Heimonen, and Markku Turunen. 2017. Challenges in public display deployments: A taxonomy of external factors. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. 3426–3475.
[120]
Arunesh Mathur, Mihir Kshirsagar, and Jonathan Mayer. 2021. What makes a dark pattern... dark? design attributes, normative considerations, and measurement methods. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–18.
[121]
Nora McDonald, Sarita Schoenebeck, and Andrea Forte. 2019. Reliability and inter-rater reliability in qualitative research: Norms and guidelines for CSCW and HCI practice. Proceedings of the ACM on Human-Computer Interaction 3, CSCW (2019), 1–23.
[122]
Elisa D Mekler, Julia Ayumi Bopp, Alexandre N Tuch, and Klaus Opwis. 2014. A systematic review of quantitative studies on the enjoyment of digital entertainment games. In Proceedings of the SIGCHI conference on human factors in computing systems. 927–936.
[123]
Clara Moge, Katherine Wang, and Youngjun Cho. 2022. Shared User Interfaces of Physiological Data: Systematic Review of Social Biofeedback Systems and Contexts in HCI. In CHI Conference on Human Factors in Computing Systems. 1–16.
[124]
David Moher, Alessandro Liberati, Jennifer Tetzlaff, and Douglas G Altman. 2009. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Annals of internal medicine 151, 4 (2009), 264–269.
[125]
Dylan Moore, Rebecca Currano, G Ella Strack, and David Sirkin. 2019. The case for implicit external human-machine interfaces for autonomous vehicles. In Proceedings of the 11th international conference on automotive user interfaces and interactive vehicular applications. 295–307.
[126]
Michael Mose Biskjaer, Peter Dalsgaard, and Kim Halskov. 2017. Understanding creativity methods in design. In Proceedings of the 2017 conference on designing interactive systems. 839–851.
[127]
Sara Nabil, Thomas Plötz, and David S Kirk. 2017. Interactive architecture: Exploring and unwrapping the potentials of organic user interfaces. In Proceedings of the Eleventh International Conference on Tangible, Embedded, and Embodied Interaction. 89–100.
[128]
Frederik Naujoks, Katharina Wiedemann, and Nadja Schömig. 2017. The importance of interruption management for usefulness and acceptance of automated driving. In Proceedings of the 9th international conference on automotive user interfaces and interactive vehicular applications. 254–263.
[129]
Aditya Shekhar Nittala and Jürgen Steimle. 2022. Next Steps in Epidermal Computing: Opportunities and Challenges for Soft On-Skin Devices. In CHI Conference on Human Factors in Computing Systems. 1–22.
[130]
Diana Nowacka and David Kirk. 2014. Tangible autonomous interfaces (TAIs): Exploring autonomous behaviours in TUIs. In Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction. 1–8.
[131]
Francisco Nunes, Nervo Verdezoto, Geraldine Fitzpatrick, Morten Kyng, Erik Grönvall, and Cristiano Storni. 2015. Self-care technologies in HCI: Trends, tensions, and opportunities. ACM Transactions on Computer-Human Interaction (TOCHI) 22, 6 (2015), 33.
[132]
Giovanna Nunes Vilaza, Kevin Doherty, Darragh McCashin, David Coyle, Jakob Bardram, and Marguerite Barry. 2022. A scoping review of ethics across SIGCHI. In Designing Interactive Systems Conference. 137–154.
[133]
Anna Offenwanger, Alan John Milligan, Minsuk Chang, Julia Bullard, and Dongwook Yoon. 2021. Diagnosing bias in the gender representation of HCI research participants: how it happens and where we are. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–18.
[134]
Niklas Osmers, Michael Prilla, Oliver Blunk, Gordon George Brown, Marc Janßen, and Nicolas Kahrl. 2021. The Role of Social Presence for Cooperation in Augmented Reality on Head Mounted Devices: A Literature Review. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3411764.3445633
[135]
Antti Oulasvirta and Kasper Hornbæk. 2016. HCI research as problem-solving. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems. 4956–4967.
[136]
Tyler Pace, Shaowen Bardzell, and Geoffrey Fox. 2010. Practice-centered e-science: a practice turn perspective on cyberinfrastructure design. In Proceedings of the 16th ACM international conference on Supporting group work. 293–302.
[137]
Cicero AL Pahins, Sean A Stephens, Carlos Scheidegger, and Joao LD Comba. 2016. Hashedcubes: Simple, low memory, real-time visual exploration of big data. IEEE transactions on visualization and computer graphics 23, 1 (2016), 671–680.
[138]
Yue Pan and Eli Blevis. 2014. Fashion thinking: lessons from fashion and sustainable interaction design, concepts and issues. In Proceedings of the 2014 conference on Designing interactive systems. 1005–1014.
[139]
Callum Parker, Martin Tomitsch, Nigel Davies, Nina Valkanova, and Judy Kay. 2020. Foundations for Designing Public Interactive Displays that Provide Value to Users. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–12.
[140]
Jessica Pater, Amanda Coupe, Rachel Pfafman, Chanda Phelan, Tammy Toscos, and Maia Jacobs. 2021. Standardizing Reporting of Participant Compensation in HCI: A Systematic Literature Review and Recommendations for the Field. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–16.
[141]
Micah DJ Peters, Christina M Godfrey, Hanan Khalil, Patricia McInerney, Deborah Parker, and Cassia Baldini Soares. 2015. Guidance for conducting systematic scoping reviews. JBI Evidence Implementation 13, 3 (2015), 141–146.
[142]
Ingrid Pettersson, Florian Lachner, Anna-Katharina Frison, Andreas Riener, and Andreas Butz. 2018. A Bermuda triangle? A Review of method application and triangulation in user experience evaluation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–16.
[143]
James Pierce. 2014. On the presentation and production of design research artifacts in HCI. In Proceedings of the 2014 conference on Designing interactive systems. 735–744.
[144]
James Pierce and Eric Paulos. 2012. Beyond energy monitors: interaction, energy, and emerging energy systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 665–674.
[145]
Anthony T Pinter, Pamela J Wisniewski, Heng Xu, Mary Beth Rosson, and Jack M Caroll. 2017. Adolescent online safety: Moving beyond formative evaluations to designing solutions for the future. In Proceedings of the 2017 Conference on Interaction Design and Children. 352–357.
[146]
Henning Pohl and Aske Mottelson. 2019. How we Guide, Write, and Cite at CHI. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems. ACM, alt01.
[147]
Henning Pohl, Andreea Muresan, and Kasper Hornbæk. 2019. Charting subtle interaction in the HCI literature. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–15.
[148]
Irene Posch, Liza Stark, and Geraldine Fitzpatrick. 2019. eTextiles: reviewing a practice through its tool/kits. In Proceedings of the 23rd International Symposium on Wearable Computers. 195–205.
[149]
Emmi Pouta and Jussi Ville Mikkonen. 2022. Woven eTextiles in HCI–a Literature Review. In Designing Interactive Systems Conference (DIS’22). ACM.
[150]
Chris Preist, Elaine Massung, and David Coyle. 2014. Competing or aiming to be average? Normification as a means of engaging digital volunteers. In Proceedings of the 17th ACM conference on Computer supported cooperative work & social computing. 1222–1233.
[151]
Mirjana Prpa, Ekaterina R Stepanova, Thecla Schiphorst, Bernhard E Riecke, and Philippe Pasquier. 2020. Inhaling and Exhaling: How Technologies Can Perceptually Extend our Breath Awareness. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–15.
[152]
Isabel PS Qamar, Rainer Groh, David Holman, and Anne Roudaut. 2018. HCI meets material science: A literature review of morphing materials for the design of shape-changing interfaces. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–23.
[153]
Alexander J Quinn and Benjamin B Bederson. 2011. Human computation: a survey and taxonomy of a growing field. In Proceedings of the SIGCHI conference on human factors in computing systems. 1403–1412.
[154]
Majken K Rasmussen, Esben W Pedersen, Marianne G Petersen, and Kasper Hornbæk. 2012. Shape-changing interfaces: a review of the design space and open research questions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 735–744.
[155]
Daniel J Rea, Denise Geiskkovitch, and James E Young. 2017. Wizard of awwws: Exploring psychological impact on the researchers in social HRI experiments. In Proceedings of the Companion of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. 21–29.
[156]
Neal Reeves, Ramine Tinati, Sergej Zerr, Max G Van Kleek, and Elena Simperl. 2017. From crowd to community: a survey of online community features in citizen science projects. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. 2137–2152.
[157]
Christian Remy, Lindsay MacDonald Vermeulen, Jonas Frich, Michael Mose Biskjaer, and Peter Dalsgaard. 2020. Evaluating Creativity Support Tools in HCI Research. In Proceedings of the 2020 ACM Designing Interactive Systems Conference. 457–476.
[158]
Raquel Robinson, Katelyn Wiley, Amir Rezaeivahdati, Madison Klarkowski, and Regan L Mandryk. 2020. "Let’s Get Physiological, Physiological!" A Systematic Review of Affective Gaming. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play. 132–147.
[159]
Katja Rogers, Sukran Karaosmanoglu, Maximilian Altmeyer, Ally Suarez, and Lennart E Nacke. 2022. Much Realistic, Such Wow! A Systematic Literature Review of Realism in Digital Games. In CHI Conference on Human Factors in Computing Systems. 1–21.
[160]
Shaghayegh Roohi, Jari Takatalo, Christian Guckelsberger, and Perttu Hämäläinen. 2018. Review of intrinsic motivation in simulation-based game testing. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–13.
[161]
Dina Sabie, Cansu Ekmekcioglu, and Syed Ishtiaque Ahmed. 2022. A Decade of International Migration Research in HCI: Overview, Challenges, Ethics, Impact, and Future Directions. ACM Transactions on Computer-Human Interaction (TOCHI) 29, 4 (2022), 1–35.
[162]
Pejman Saeghe, Gavin Abercrombie, Bruce Weir, Sarah Clinch, Stephen Pettifer, and Robert Stevens. 2020. Augmented Reality and Television: Dimensions and Themes. In ACM International Conference on Interactive Media Experiences. 13–23.
[163]
Joni Salminen, Kathleen Guan, Soon-gyo Jung, Shammur A Chowdhury, and Bernard J Jansen. 2020. A literature review of quantitative persona creation. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–14.
[164]
Joni Salminen, Kathleen Wenyun Guan, Soon-Gyo Jung, and Bernard Jansen. 2022. Use cases for design personas: A systematic review and new frontiers. In CHI Conference on Human Factors in Computing Systems. 1–21.
[165]
Pedro Sanches, Axel Janson, Pavel Karpashevich, Camille Nadal, Chengcheng Qu, Claudia Daudén Roquet, Muhammad Umair, Charles Windlin, Gavin Doherty, Kristina Höök, 2019. HCI and Affective Health: Taking stock of a decade of studies and charting future research directions. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–17.
[166]
Devansh Saxena, Karla Badillo-Urquiola, Pamela J Wisniewski, and Shion Guha. 2020. A Human-Centered Review of Algorithms used within the US Child Welfare System. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–15.
[167]
Laura Scheepmaker, Christopher Frauenberger, and Katta Spiel. 2018. The things we play with roles of technology in social play. In Proceedings of the 2018 Annual Symposium on Computer-Human Interaction in Play. 451–462.
[168]
Ari Schlesinger, W Keith Edwards, and Rebecca E Grinter. 2017. Intersectional HCI: Engaging identity through gender, race, and class. In Proceedings of the 2017 CHI conference on human factors in computing systems. 5412–5427.
[169]
Michael Schmitz. 2010. Concepts for life-like interactive objects. In Proceedings of the fifth international conference on Tangible, embedded, and embodied interaction. 157–164.
[170]
Hanna Schneider, Malin Eiband, Daniel Ullrich, and Andreas Butz. 2018. Empowerment in HCI-A survey and framework. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–14.
[171]
Mariah L Schrum, Michael Johnson, Muyleng Ghuy, and Matthew C Gombolay. 2020. Four years in review: Statistical practices of likert scales in human-robot interaction studies. In Companion of the 2020 ACM/IEEE International Conference on Human-Robot Interaction. 43–52.
[172]
Katie Seaborn and Peter Pennefather. 2022. Gender neutrality in robots: An open living review framework. arXiv preprint arXiv:2205.00182 (2022).
[173]
Barış Serim and Giulio Jacucci. 2019. Explicating "Implicit Interaction": An Examination of the Concept and Challenges for Research. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–16.
[174]
Chaklam Silpasuwanchai, Xiaojuan Ma, Hiroaki Shigemasu, and Xiangshi Ren. 2016. Developing a comprehensive engagement framework of gamification for reflective learning. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. 459–472.
[175]
Nikita Soni, Aishat Aloba, Kristen S Morga, Pamela J Wisniewski, and Lisa Anthony. 2019. A framework of touchscreen interaction design recommendations for children (TIDRC): Characterizing the gap between research evidence and design practice. In Proceedings of the 18th ACM International Conference on Interaction Design and Children. 419–431.
[176]
Maximilian Speicher, Brian D Hall, and Michael Nebeling. 2019. What is mixed reality?. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–15.
[177]
Katta Spiel and Kathrin Gerling. 2021. The Purpose of Play: How HCI Games Research Fails Neurodivergent Populations. ACM Transactions on Computer-Human Interaction (TOCHI) 28, 2 (2021), 1–40.
[178]
Evropi Stefanidi, Johannes Schöning, Sebastian S Feger, Paul Marshall, Yvonne Rogers, and Jasmin Niess. 2022. Designing for Care Ecosystems: a Literature Review of Technologies for Children with ADHD. In Interaction Design and Children. 13–25.
[179]
Miriah Steiger, Timir J Bharucha, Sukrit Venkatagiri, Martin J Riedl, and Matthew Lease. 2021. The Psychological Well-Being of Content Moderators: The Emotional Labor of Commercial Moderation and Avenues for Improving Support. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
[180]
Elizabeth Stowell, Mercedes C Lyson, Herman Saksono, Reneé C Wurth, Holly Jimison, Misha Pavel, and Andrea G Parker. 2018. Designing and evaluating mHealth interventions for vulnerable populations: A systematic review. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–17.
[181]
Sangho Suh, Martinet Lee, and Edith Law. 2020. How do we design for concreteness fading? survey, general framework, and design dimensions. In Proceedings of the Interaction Design and Children Conference. 581–588.
[182]
Zhu Sun, Di Yu, Hui Fang, Jie Yang, Xinghua Qu, Jie Zhang, and Cong Geng. 2020. Are we evaluating rigorously? benchmarking recommendation for reproducible evaluation and fair comparison. In Fourteenth ACM Conference on Recommender Systems. 23–32.
[183]
Ryo Suzuki, Adnan Karim, Tian Xia, Hooman Hedayati, and Nicolai Marquardt. 2022. Augmented Reality and Robotics: A Survey and Taxonomy for AR-enhanced Human-Robot Interaction and Robotic Interfaces. In CHI Conference on Human Factors in Computing Systems. 1–33.
[184]
Poorna Talkad Sukumar and Ronald Metoyer. 2019. Mobile Devices in Programming Contexts: A Review of the Design Space and Processes. In Proceedings of the 2019 on Designing Interactive Systems Conference. 1109–1122.
[185]
Mathieu Templier and Guy Paré. 2015. A framework for guiding and evaluating literature reviews. Communications of the Association for Information Systems 37, 1 (2015), 6.
[186]
Nađa Terzimehić, Renate Häuslschmid, Heinrich Hussmann, and MC Schraefel. 2019. A Review & Analysis of Mindfulness Research in HCI: Framing Current Lines of Research and Future Opportunities. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems. 1–13.
[187]
Anja Thieme, Danielle Belgrave, and Gavin Doherty. 2020. Machine learning in mental health: A systematic review of the HCI literature to support the development of effective and implementable ML systems. ACM Transactions on Computer-Human Interaction (TOCHI) 27, 5 (2020), 1–53.
[188]
Laia Turmo Vidal, Hui Zhu, Annika Waern, and Elena Márquez Segura. 2021. The Design Space of Wearables for Sports and Fitness Practices. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
[189]
Liam D Turner, Stuart M Allen, and Roger M Whitaker. 2015. Interruptibility prediction for ubiquitous systems: conventions and new directions from a growing field. In Proceedings of the 2015 ACM international joint conference on pervasive and ubiquitous computing. 801–812.
[190]
April Tyack and Elisa D Mekler. 2020. Self-Determination Theory in HCI Games Research: Current Uses and Open Questions. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. 1–22.
[191]
Griffith University. 2020. Systematic Literature Reviews for Education: Different types of literature review. Retrieved September 14, 2020 from https://libraryguides.griffith.edu.au/c.php?g=451351&p=3333115
[192]
Maarten Van Mechelen, Gökçe Elif Baykal, Christian Dindler, Eva Eriksson, and Ole Sejer Iversen. 2020. 18 Years of ethics in child-computer interaction research: a systematic literature review. In Proceedings of the Interaction Design and Children Conference. 161–183.
[193]
Maarten Van Mechelen, Line Have Musaeus, Ole Sejer Iversen, Christian Dindler, and Arthur Hjorth. 2021. A Systematic Review of Empowerment in Child-Computer Interaction Research. In Interaction Design and Children. 119–130.
[194]
Radu-Daniel Vatavu. 2021. Accessibility of Interactive Television and Media Experiences: Users with Disabilities Have Been Little Voiced at IMX and TVX. In ACM International Conference on Interactive Media Experiences. 218–222.
[195]
Radu-Daniel Vatavu, Pejman Saeghe, Teresa Chambel, Vinoba Vinayagamoorthy, and Marian F Ursu. 2020. Conceptualizing Augmented Reality Television for the Living Room. In ACM International Conference on Interactive Media Experiences. 1–12.
[196]
Eduardo Velloso and Marcus Carter. 2016. The emergence of eyeplay: a survey of eye interaction in games. In Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play. 171–185.
[197]
Raphael Velt, Steve Benford, and Stuart Reeves. 2017. A survey of the trajectories conceptual framework: Investigating theory use in HCI. In Proceedings of the 2017 CHI conference on human factors in computing systems. 2091–2105.
[198]
Yiannis Verginadis, Nikos Papageorgiou, Dimitris Apostolou, and Gregoris Mentzas. 2010. A review of patterns in collaborative work. In Proceedings of the 16th ACM international conference on Supporting Group Work. 283–292.
[199]
Santiago Villarreal-Narvaez, Jean Vanderdonckt, Radu-Daniel Vatavu, and Jacob O. Wobbrock. 2020. A Systematic Review of Gesture Elicitation Studies: What Can We Learn from 216 Studies?. In Proceedings of the 2020 ACM Designing Interactive Systems Conference (Eindhoven, Netherlands) (DIS ’20). Association for Computing Machinery, New York, NY, USA, 855–872. https://doi.org/10.1145/3357236.3395511
[200]
Sarah Theres Völkel, Christina Schneegass, Malin Eiband, and Daniel Buschek. 2020. What is "intelligent" in intelligent user interfaces? A meta-analysis of 25 years of IUI. In Proceedings of the 25th International Conference on Intelligent User Interfaces. 477–487.
[201]
Jan B Vornhagen, April Tyack, and Elisa D Mekler. 2020. Statistical significance testing at CHI PLAY: Challenges and opportunities for more transparency. In Proceedings of the Annual Symposium on Computer-Human Interaction in Play. 4–18.
[202]
Danding Wang, Qian Yang, Ashraf Abdul, and Brian Y Lim. 2019. Designing theory-driven user-centric explainable AI. In Proceedings of the 2019 CHI conference on human factors in computing systems. 1–15.
[203]
Jane Webster and Richard T Watson. 2002. Analyzing the past to prepare for the future: Writing a literature review. MIS quarterly (2002), xiii–xxiii.
[204]
Suzanne Weisband and Sara Kiesler. 1996. Self disclosure on computer forms: Meta-analysis and implications. In Proceedings of the SIGCHI conference on human factors in computing systems. 3–10.
[205]
Mikael Wiberg, Daniela Rosner, and Alex Taylor. 2022. 40 Years of SIGCHI. Interactions 29, 6 (Nov 2022), 5. https://doi.org/10.1145/3568730
[206]
Gesa Wiegand, Christian Mai, Kai Holländer, and Heinrich Hussmann. 2019. Incarar: A design space towards 3d augmented reality applications in vehicles. In Proceedings of the 11th international conference on automotive user interfaces and interactive vehicular applications. 1–13.
[207]
Jacob O. Wobbrock and Julie A. Kientz. 2016. Research Contributions in Human-Computer Interaction. Interactions 23, 3 (April 2016), 38–44. https://doi.org/10.1145/2907069
[208]
Richmond Y Wong and Steven J Jackson. 2015. Wireless visions: Infrastructure, imagination, and US spectrum policy. In Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing. 105–115.
[209]
Haijun Xia, Michael Glueck, Michelle Annett, Michael Wang, and Daniel Wigdor. 2022. Iteratively Designing Gesture Vocabularies: A Survey and Analysis of Best Practices in the HCI Literature. ACM Transactions on Computer-Human Interaction (TOCHI) 29, 4 (2022), 1–54.
[210]
Qian Yang, Nikola Banovic, and John Zimmerman. 2018. Mapping machine learning advances from hci research to reveal starting places for design innovation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–11.
[211]
Junnan Yu and Ricarose Roque. 2018. A survey of computational kits for young children. In Proceedings of the 17th ACM conference on interaction design and children. 289–299.
[212]
Qingxiao Zheng, Yiliu Tang, Yiren Liu, Weizi Liu, and Yun Huang. 2022. UX Research on Conversational Human-AI Interaction: A Literature Review of the ACM Digital Library. In CHI Conference on Human Factors in Computing Systems. 1–24.
[213]
Qiushi Zhou, Cheng Cheng Chua, Jarrod Knibbe, Jorge Goncalves, and Eduardo Velloso. 2021. Dance and Choreography in HCI: A Two-Decade Retrospective. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–14.
[214]
John Zimmerman, Jodi Forlizzi, Justin Finkenaur, Sarah Amick, Ji Young Ahn, Nanako Era, and Owen Tong. 2016. Teens, parents, and financial literacy. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. 312–322.
[215]
Megan Zimmerman, Shelly Bagchi, Jeremy Marvel, and Vinh Nguyen. 2022. An Analysis of Metrics and Methods in Research from Human-Robot Interaction Conferences, 2015-2021. In Proceedings of the 2022 ACM/IEEE International Conference on Human-Robot Interaction. 644–648.
[216]
John Zoshak and Kristin Dew. 2021. Beyond Kant and Bentham: How Ethical Theories are being used in Artificial Moral Agents. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–15.


    Published In

    CHI '23: Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems
    April 2023
    14911 pages
    ISBN:9781450394215
    DOI:10.1145/3544548


    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 19 April 2023


    Author Tags

    1. literature review
    2. literature survey
    3. meta review
    4. meta-analysis
    5. method

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Funding Sources

    • DFG
    • Swedish Research Council

    Conference

    CHI '23

    Acceptance Rates

    Overall Acceptance Rate 6,199 of 26,314 submissions, 24%



    Article Metrics

    • Downloads (Last 12 months)6,415
    • Downloads (Last 6 weeks)1,365
    Reflects downloads up to 10 Dec 2024


    Cited By

    • (2024) Exploratory Case Study on Cross-Influences between Human-Computer Interaction and Interactive Arts. TecnoLógicas 27, 60 (e2958). https://doi.org/10.22430/22565337.2958. Online publication date: 27-May-2024
    • (2024) Zooming In: A Review of Designing for Photo Taking in Human-Computer Interaction and Future Prospects. Proceedings of the ACM on Human-Computer Interaction 8, ISS, 597–623. https://doi.org/10.1145/3698150. Online publication date: 24-Oct-2024
    • (2024) Reflections Towards More Thoughtful Engagement with Literature Reviews in HCI. Proceedings of the Halfway to the Future Symposium, 1–4. https://doi.org/10.1145/3686169.3686183. Online publication date: 21-Oct-2024
    • (2024) An Umbrella Review of Reporting Quality in CHI Systematic Reviews: Guiding Questions and Best Practices for HCI. ACM Transactions on Computer-Human Interaction 31, 5, 1–55. https://doi.org/10.1145/3685266. Online publication date: 31-Jul-2024
    • (2024) More-than-Human Perspectives in Human-Computer Interaction Research: A Scoping Review. Proceedings of the 13th Nordic Conference on Human-Computer Interaction, 1–18. https://doi.org/10.1145/3679318.3685408. Online publication date: 13-Oct-2024
    • (2024) Augmented Reality on the Move: A Systematic Literature Review for Vulnerable Road Users. Proceedings of the ACM on Human-Computer Interaction 8, MHCI, 1–30. https://doi.org/10.1145/3676490. Online publication date: 24-Sep-2024
    • (2024) Opportunities, tensions, and challenges in computational approaches to addressing online harassment. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1483–1498. https://doi.org/10.1145/3643834.3661623. Online publication date: 1-Jul-2024
    • (2024) Learning from Learning — Design-Based Research Practices in Child-Computer Interaction. Proceedings of the 23rd Annual ACM Interaction Design and Children Conference, 338–354. https://doi.org/10.1145/3628516.3655754. Online publication date: 17-Jun-2024
    • (2024) What Do We Do? Lessons Learned from Conducting Systematic Reviews to Improve HCI Dissemination. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1–8. https://doi.org/10.1145/3613905.3637117. Online publication date: 11-May-2024
    • (2024) Sitting Posture Recognition and Feedback: A Literature Review. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1–20. https://doi.org/10.1145/3613904.3642657. Online publication date: 11-May-2024