DOI: 10.1145/3613904.3642274
research-article
Open access

Beyond Repairing with Electronic Speech: Towards Embodied Communication and Assistive Technology

Published: 11 May 2024

Abstract

Traditionally, Western philosophies have strongly favoured a dualist interpretation of consciousness – emphasising the importance of the ‘mind’ over the ‘body’. However, we argue that adopted assistive technologies become embodied and extend intentionality within environments. In this paper, we restore an embodied view of the mind to theoretically enhance understandings of assistive technology and human-human communication. Initially, we explore literature on phenomenological theories of human experience, post-phenomenological accounts of technology, embodied accounts of assistive technology and participatory design. We then present a case study demonstrating the generative and disruptive effects of the embodied framework for co-designing AAC with people living with aphasia. Our findings show that the embodied framework supports a more multidimensional account of experience and suggests a shift away from AAC devices that seek to ‘repair’ users’ speech. Reflecting on our case study, we then outline concerns with nascent technologies that could disembody and limit accessibility.

1 Introduction

A landmark moment in Western philosophy and science occurred with Descartes’ dualist first principle, “cogito ergo sum” or “I think therefore I am”, which instantiated a foundationalist view that the mind and body were separate – meaning consciousness was reducible to processes occurring only within the mind [23, 24]. In recent years, the Cartesian dualist position, which omits the role of the body, has been significantly weakened by both neuroscientific and biological findings [22, 67, 88].
Equally, there has been a significant growth in mainstream awareness of bodily techniques for ensuring general mental well-being such as meditation [76], yoga [47], fitness/exercise [85], restful sleep [31] and balanced diets [1]. Specifically, meditation and yoga come from an Eastern philosophical tradition, which embraces the essential role of the body in everyday human conscious experience [46]. Although these mainstream trends offer much promise, the importance of embodied interactions has often been overlooked in the design of assistive technologies – particularly augmentative and alternative communication (AAC) devices [9, 38, 39, 40].
Since the 1960s, AAC that generates speech (i.e., speech-generating devices or SGDs) has emerged as a technology designed to offer people living with complex communication needs (CCN) opportunities to produce electronic speech by composing messages using symbol or lexical representations of language [34, 38, 40]. Yet, recent research has found this AAC design to underplay many communities’ pre-existing autonomy and more embodied forms of communicative expression [9, 38, 40]. Looking beyond verbal dialogue, many people with CCN actively harness total and non-verbal communication strategies (i.e., gestures, facial expressions and physical props) to successfully communicate [71].
Research from both Ibrahim et al. [40] and Bircanin et al. [9], conducted amongst separate communities, found that AAC devices counter-intuitively undermined these embodied communication pathways, prompting device abandonment. In response, Ibrahim et al. [40] co-created Whisper, a shape-shifting AAC system that prevents the technology’s physical prominence from blocking embodied gestures for communication. Meanwhile, Bircanin et al. [9] and Alper [3] have encouraged more tangible and low-fidelity AAC approaches, even appropriating tangible objects of interest favoured by communities living with disabilities to support their self-expression and embodied communication styles [87].
Despite this noteworthy prior research, AAC devices still face high levels of abandonment from many communities worldwide [57, 59, 91]. Simultaneously, the majority of mainstream AAC devices and research have continued to focus predominantly on electronic speech production [20, 39, 91] and, more recently, on artificial intelligence (AI) techniques to improve SGDs [30, 73, 86]. Yet, speech-generating AAC devices are fundamentally reparative – aspiring to replace or fix communities’ natural speech and pre-existing non-verbal communication abilities [40]. Concerningly, this design perspective emanates from the ableist medical model of disability, which defines disability as residing within the individual [39, 74]. Equally, these AAC technologies fail to acknowledge the co-constructed nature of communication and the embodied nature of human experience [39].
In response, the embodied framework presents a more multidimensional account of people’s lived experiences. Consequently, we seek to demonstrate how the embodied framework and tangible co-design can help transcend traditional views of AAC technology. Through a case study, we show how the framework and co-design activities can generate novel insights for AAC technology. Finally, we reflect upon wider research implications for the embodied framework and concerns regarding nascent disembodying assistive technologies. In sum, in this short paper we offer three contributions:
(1)
Using key literature, we argue for a more embodied theoretical framework of assistive technologies and discuss how embodied and tangible co-design activities can empower participants by facilitating purposeful critique and communication.
(2)
With these tenets, we present a case study demonstrating the generative effects of the embodied framework for co-designing AAC amongst people living with aphasia. Initially, we contextualise with relevant literature. Then, drawing on 25 hours of co-design sessions and 300 hours of volunteer time at an aphasia charity, we present four vignettes and concurrently apply the framework to establish more empowering AAC design insights.
(3)
Finally, we situate and discuss wider research implications for the embodied framework and outline concerns with nascent disembodying technologies.

2 Background

In this section, we review core literature from phenomenology and integral ideas from post-phenomenological frameworks of technology. This literature is then sharpened through embodied accounts of assistive technology. We then discuss how co-design activities can empower participants by facilitating purposeful critique and communication.

2.1 Phenomenology and Embodied Human Experience

Starting in the early 20th century, Husserl and his student Heidegger began the philosophical enterprise of phenomenology – both sought to ground and understand the human experience of the world [32]. However, the conception of embodiment features most strongly in the subsequent phenomenology of perception developed by Merleau-Ponty [32, 55]. In particular, Svanæs [81] presents Merleau-Ponty’s influence on theories of embodied interaction and their relevance to human-computer interaction (HCI) research. In summary, Merleau-Ponty’s framework broke with Cartesian mind-body dualism, arguing that we inhabit our bodies, living with them and through them in a complex social world [26, 27, 55, 81]. He distinguished two notions of the body: the objective body (our size, weight, etc.) and the lived body (the body through which we act and experience the world) [2, 55, 81].
Notably, Merleau-Ponty’s notion of the lived body postulates that experience of the world is fundamentally composed of bodily experience and not just mental cognition [55, 81]. Indeed, daily we live, breathe and experience all manner of subtleties within our lived bodies [2]. Subsequently, Merleau-Ponty’s work has strongly influenced more recent enactivist phenomenologies [88]. Enactivist theory argues that cognition arises from the dynamic interaction between the lived body and the environment [88]. This opens the possibility of extended minds, whereby cognition can even depend on environmental resources [15, 80].
Clark and Chalmers [15] illustrated this extended mind thesis (EMT) through a famous thought experiment involving low-tech assistive technology. Within the thought experiment, a fictional character – Otto – has Alzheimer’s and thus carries an assistive technology in the form of a notebook, which he annotates with critical information to manage amnesia [15, 18]. Just as we regard internal brain processes as constitutive of memory, Clark and Chalmers [15] argue that we too should regard Otto’s notebook as a constituent part of his memory [18].
Figure 1: Rapp’s post-phenomenological framework and extension relation for wearables [70]. Within this framework, the human is integrated with the wearable, which extends motor intentionality and conscious intentionality [70]. Similarly, assistive technologies foster embodied relations with their users – actively mediating and extending users’ environmental and bodily relations, thereby shaping their cognitive experience of the world.
Figure 2: Frameworks of cognition. (a) Dualist framework – cognition sourced entirely in the brain, directing the body to interact with assistive technology and environments. (b) Embodied framework – cognition enacted through simultaneous interaction between the brain, environments, the body and assistive technology. Embodied cognition is more extended, dynamic and holistic – recognising the significance of the body, environment and assistive technology in unison for a comprehensive understanding of cognition.

2.2 Post-Phenomenological Frameworks of Technology

Originally introduced by Ihde [41, 42], post-phenomenology builds upon the contribution of phenomenology by focusing more strongly on the role of technological artifacts and their relation to human experience. Building on the philosophies of Ihde, Verbeek originally taxonomised the different relations technology has to human experience [89, 90]. With the first four based on Ihde’s work, Verbeek ultimately argued for seven relations to technology: embodiment, hermeneutic, background, alterity, cyborg, immersion and augmentation [89, 90].
More recently, Rapp [70] has extended this framework to account for human relations with wearable technologies. Depicted in Figure 1, his framework of extended intentionality implores designers to look beyond the external properties of wearable technologies and more readily consider internal aspects of interaction [70]. Rapp argues that wearable devices can serve as extensions of our motor and conscious intentionality and can even alter how we relate to the world [70]. In much the same way, assistive technologies can extend intentionality and actively mediate users’ world relations. Indeed, first-person internalistic experience of using assistive technologies reveals that assistive technology can alter people’s bodily action (e.g., access), environmental experience (e.g., stigmas) and cognition (e.g., self-identity).

2.3 Embodied Accounts of Assistive Technologies

Embodied frameworks of human cognition and human relations to technology can serve as a rich conceptual basis for designing and developing assistive technologies, as exemplified by Figure 2 – ultimately ensuring that assistive technologies better support a more holistic view of human conscious experience. Merleau-Ponty [55] originally discussed the embodiment of assistive technologies and their relation to conscious perception whilst interacting with environments. He described the first-person experience of a blind man and cane as follows, “the pressures on the hand and the stick are no longer given; the stick is no longer an object perceived by the blind man, but an instrument with which he perceives. It is a bodily auxiliary, an extension of the bodily synthesis” [55, 81]. Accordingly, perception involves the whole body and the cane has become an embodied part of his corporeal schema – facilitating the experience of environments [55, 81].
More recently, Toombs’ [83] phenomenological account of living with a disability has reinforced these embodied relations with assistive technologies. She reflects on her own experiences as a wheelchair user: “For the person who routinely uses a wheelchair the device becomes part of the body. One intuitively allows for the width of the wheels when going through a doorway; one performs the necessary hand/arm movements to move forwards and backwards without thinking about it... Thus when a stranger pushes my wheelchair without my permission, it is invading my personal bodily space”. For Toombs [83], the wheelchair becomes a part of her body and the tangible hand/arm gestures for interaction become second nature – not requiring significant conscious thought.
Elsewhere, Butcher [12] has described her experience of using her wheelchair in a similar vein: “my wheelchair really is a part of my body; I spend every minute of every day in it, and when it breaks, I am stripped of my independence and my autonomy”. Indeed, her wheelchair becomes a part of her body, intentionality and cognitive experience of the world. In Butcher’s own words – “The characteristics of my wheelchair, the way it moves and the things it allows me to do, are indicative of something alive and human, not a lifeless object” [12]. These phenomenological accounts, and Figure 2, emphasise the importance of designing for a more extended view of cognition and experience, whereby much closer attention is paid to end-users’ internal experience as the assistive technology actively mediates their cognitive experience and relations to the world.
Figure 3: The timeline of data collection across 24 months, from October 2021 to September 2023. Throughout, the first author volunteered and observed weekly at the aphasia clinic.

2.4 Tangible and Embodied Co-Design Techniques

Accessibility scholarship has repeatedly emphasised that engaging end-users and stakeholders directly within the design process is essential for developing technologies that will effectively support people living with disabilities [51]. Critically, participatory design provides a method and vision for involving the users of assistive technology directly within the design process [51]. Indeed, due to the embodied relation users have with their assistive technologies, it is critical to support end-users’ participation in the design of these technologies – particularly as the technology can serve as an extension of their lived experiences and intentionality.
Researchers have adapted and developed methods that hone in on the role of the body to support understanding and communication during co-design activities. For instance, Wilson et al. [92] developed tangible design languages to support the voice of people with aphasia in design. In particular, their rich design activities ranged from short direct tasks to group activities and even testing of high-fidelity prototypes to limit cognitive abstraction [92]. Aware of people with aphasia’s difficulties with verbal communication, they supported tangible making activities which considered embodied styles of communicating and critique (e.g., the use of expressive bodily gestures) [92].
Elsewhere, Spiel and Angelini [79] used customised co-design methods to support accessible engagement, collaboration and critique of technology, including using the community’s preferred communication and cultural conventions. Furthermore, they also recommended acting out with technology as an effective means of communicating embodied emotions and mental models [79]. Finally, the tangible and physical action of co-designing or self-building assistive technologies has been found to empower co-designers living with disabilities. In particular, Hamidi et al. [33] noted that the participatory design, tangible fabrication and customisation of DIY AAC devices fostered bodily empowerment and design for the unique and multidimensional experience of living with a disability in Western Kenya.

3 Case Study: Co-designing AAC with Communities Living with Aphasia

To emphasise the importance of the embodied framework (i.e., Figure 2), we draw on qualitative insights from co-designing AAC. Initially, we overview our research approach, introduce the language impairment aphasia and discuss available AAC interventions. We then present vignettes of frequent collaborators with aphasia to establish their communication experiences. For each vignette, we provide emergent design insights from using the embodied framework and tangible co-design activities. Finally, we reflect on the disruptive effect of the embodied model for AAC research.

3.1 Research Procedure

In total, we held 14 co-design activities at the Roberta Williams Speech and Language Therapy Clinic across a 14-month period beginning in July 2022. Co-design sessions involved 12–19 people living with aphasia (avg. 13.7), 3–6 speech and language therapists (SLTs) (avg. 4) and up to 1 family member (avg. 0.3). Sessions took approximately 1–2.5 hours. Ethical approval for all research was granted by the King’s College London Health Faculties Research Ethics Subcommittee.

3.2 Living with Aphasia

Aphasia is an acquired language impairment most commonly caused by stroke [4, 7], but it can result from any damage to the language centres of the brain. It can affect reading, writing, speech and comprehension [4, 7]. Often, aphasia is a publicly invisible disability: it affects approximately one-third of stroke survivors, yet less than 10% of the population know of the condition [17]. The communication abilities of people with aphasia vary significantly [4]. For instance, some people with aphasia might find speaking more challenging than writing or vice versa. The number of people living with aphasia will likely increase due to ageing global populations [17], and increasingly people with aphasia face social exclusion and barriers to long-term speech and language therapy [11, 37, 52]. People with aphasia face individualised challenges with technology [56], particularly technologies which remove tangible, physical and embodied interactions [14, 43, 48, 49].

3.3 Available AAC for Aphasia

For people with aphasia, dedicated hardware and subscription apps remain the most common AAC intervention. Typically, dedicated AAC devices are symbol-based with touchscreen, eye-gaze or manual switch input, e.g., Dynavox, Liberator and Lingraphica [20]. Yet, there has been substantial growth in mobile and tablet-based AAC applications – the Tavistock Trust for Aphasia provides an extensive repository of iOS and Android options [54, 84]. Most AAC devices take the form of an electronic lexical or symbol-based dictionary, which presents navigable grids of images, symbols or text, e.g., Proloquo2Go and TouchChat [56]. Most AAC for aphasia operates as an SGD, in which the user composes messages that are then read aloud by the device.1
Figure 4: Raymond’s embodied perspectives and AAC insights. Picture of Raymond wearing a Stroke and Aphasia badge on lanyard plus wrist splints.
Figure 5: Anthony’s embodied perspectives and AAC insights. Image of Anthony wearing Keith Haring art t-shirt with animated facial expressions using pointing gesture.

3.4 Vignette Methodology

The vignettes were derived from reflective analysis of the researchers’ extensive engagement with older adults living with aphasia, following the processes outlined in previous work [8, 63, 93]. Each vignette is drawn from three data sources: (1) 25 hours of coded video footage from co-design sessions, (2) ethnographic experiences from the first author’s 300 hours spent observing/volunteering at the aphasia clinic and (3) coded data from exhaustive field notes documenting lived experiences. These data sources were collectively analysed, enabling the identification of key quotes from transcripts and of people’s unique contextual/bodily communication strengths; this categorised data was then thematically analysed to craft the vignettes [16]. For each vignette, we then applied the embodied framework to generate key AAC design insights and share relevant co-developed AAC from people living with aphasia. Overall, we employed a qualitative methodological approach, analysing our data inductively. Vignettes were very helpful for synthesising the large amount of field data – supporting triangulation of observations and identification of quotes and themes. The vignettes seek to show the real, unique and embodied experiences of people living with aphasia, not just abstract representations of people’s communication experiences.

3.5 Embodied Co-Design Techniques

We briefly outline three co-design techniques we frequently employed throughout sessions to develop more embodied assistive technologies. These techniques draw on embodied and phenomenological frameworks of cognition, especially Rapp’s [70] internal aspects of interaction. Indeed, these techniques helped support our understanding of co-designers’ internalistic thought-styles whilst co-designing assistive technologies. This ensured our co-designers had an empowered influence on technologies that would mediate their world relations and shape their cognitive abilities, sense of identity and eventual experience of different environments.

3.5.1 Low-fidelity prototyping.

Exploratory low-fidelity prototyping lets co-designers envision their own AAC and self-build it using craft materials. This process lets participants design customised AAC for their unique challenges, whereby co-designers can extensively customise their AAC to support their internal needs and intentionality within specific communication contexts (e.g., public transport) [70].

3.5.2 High-fidelity prototypes.

The liberal use of high-fidelity prototypes mitigated cognitive abstraction for our co-designers. Initially, high-fidelity prototypes could be accessibly tried and tested. Co-designers could then use the high-fidelity AAC as props to support personalised critiques in relation to their intentionality and needs across different contexts.

3.5.3 Experience prototyping.

In later co-design sessions, we used an actor to experience-prototype and role-play with participants using AAC, thereby simulating real-world scenarios of usage. This was effective for recognising immediate problems with the AAC, surfacing first-person feelings/emotions and determining whether the intervention obstructed co-designers’ pre-existing embodied styles of communication [79].

3.6 Applying the Embodied Framework: Vignettes of AAC with People Living with Aphasia

We present four vignettes of communication strengths and challenges from people living with aphasia to elucidate different communication experiences, embodied perspectives and subsequent key insights for AAC devices. All names are pseudonyms.

3.6.1 Vignette 1.

Raymond has lived with aphasia for over 10 years. Most days he is fatigued – “its down to the [blood pressure] medication” – which causes nightly discomforts. Before his stroke, Raymond repaired computers and mobile phones; he is an ardent technologist and wears a smartwatch to “help me with [my] blood pressure”. He has no publicly visible physical impairment from his stroke; consequently, Raymond faces difficulties requesting seats on public transport: “Because they look at you like... erm why do you need a seat? Yeah. yeah... you’ve got to either... have a badge on... or... erm have a card that says I have aphasia... say that I’ve had a stroke”. To mitigate public challenges, Raymond wears his disability identification, a Stroke/Aphasia badge and a medical wristband. His wrist arthritis limits his communicative hand gestures. Within pressured contexts involving communication with strangers, Raymond’s anxiety causes him to freeze, rendering him unable to “download” his dialogue. Nonetheless, he greatly desires to “fix my speech... let me talk... and confidence... get your confidence back”. If Raymond has an important appointment, he is supported by family who almost “telepathically” understand his non-verbal communication styles.

3.6.2 Raymond’s Insights.

For Raymond, AAC design insights were established using the embodied model and his co-developed AAC smartwatch. The embodied model’s awareness of extended cognition highlighted that the environment and confidence profoundly impact Raymond’s communication. The context-specific nature of his communication anxieties suggests any AAC should be regulatable and needed only during challenging episodes. Next, vocal and technological competency are key to Raymond’s sense of identity and must be endorsed by AAC technology. During ensuing co-design activities, Raymond low-fidelity prototyped an AAC smartwatch application. Despite his aphasia, Raymond wanted to control the smartwatch with verbal commands, as Siri is presently frustrating to use: “I can’t get the words out and it cuts off!”. Furthermore, he is very accustomed to wearing a smartwatch, and the AAC could provide discreet haptic sensations prompting mindful breathing and self-regulation of his contextual speech anxiety.
Figure 6: Joy’s embodied perspectives and AAC insights. Image of Joy word-finding using hand gestures, non-lexical vocalisations and steups (i.e., teeth kissing). She wears a hidden disability lanyard and right-arm brace.

3.6.3 Vignette 2.

Anthony has right-sided hemiplegic bodily paralysis caused by his stroke in the 2000s. Consequently, Anthony wears a right-sided ankle brace and uses a walking stick – limiting his ability to carry additional objects: “The stick! Can’t carry!”. He has limited verbal dialogue, yet Anthony leverages his appearance to support communication: he wears colourful badges related to his favourite artists and accessibility, i.e., “On buses... blue aphasia badge and blue seat badge”, band/art T-shirts, sunglasses and fedora hats. Regularly, Anthony holds conversations concerning his elaborate outfits, and the vocabulary comes more naturally as it relates to his hobbies or is rehearsed. If Anthony is word-finding, he will employ verbal filler (e.g., “errrr”), maintain conversational flow with hand gestures and liberally search on his phone, e.g., “Search on phone... look up [types of] pasta”. In particular, he traverses Wikipedia to word-find and leverages the site’s images as props for pointing, e.g., “Poke them and point at sign”. Whilst discussing music, Anthony enjoys playing music from his phone at high volume, and he sometimes draws on paper provided his hands are not shaking.

3.6.4 Anthony’s Insights.

Two insights were established for Anthony using the embodied model and refined during high-fidelity prototype testing. Currently, Anthony creatively leverages his body to scaffold communication via accessibility badges and fashionable style across different contexts (e.g., art galleries/concerts). Importantly, AAC should not detract from these effective non-verbal competencies. Furthermore, if Anthony chooses to express himself via gestures, pointing, drawing (i.e., props), phone music and Wikipedia searches, AAC should reinforce this self-expression. During high-fidelity prototype testing, an AAC eInk smart-badge, which provides useful expressions to scaffold communication as a prop, was criticised by Anthony along dimensions of the embodied framework: (1) the latency of the eInk refresh rate did not meet expectations, instead perpetuating “frustration”, (2) the buttons were not tangible enough to accommodate his finger-presses when shaking and (3) the display lacked a 3D-printed case, negating comfortable usage in public.

3.6.5 Vignette 3.

Joy has right-sided hemiplegic bodily paralysis caused by her stroke in 2018. To mitigate the paralysis, Joy uses a walking stick, wears a velcro-fitted black leg brace and a sling arm brace, and carries a cross-body bag for accessories. Furthermore, Joy has much self-belief; for instance, she will always complain if a coffee order is wrong: “I will! [...] No! You can hear me! [...] Can I have my say please! [Exclaiming with hands] Do you know what I mean!?”. Challenging environments for Joy include escalators/stairs and public transport – but she always communicates her needs with transport operators: “No me for me I go, Excuse me! Excuse me! Yeah! Yeah!”. In terms of technologies, Joy wears a Hidden Disabilities lanyard2 with an attached Stroke/Aphasia badge, uses a smartwatch with a stretchable strap, tracks daily step-counts and uses the SpokenAAC app for word-finding. Joy is partially verbal but greatly enjoys communicating verbally through abrupt changes in tone, pitch and loudness to add dramatic depths. Whilst word-finding, she writes on paper or prompts her AAC with speech-to-text/one-handed typing: “Spoken! Yep! I use it a lot”. Often, she repeats words for emphasis and employs non-lexical vocalisations, e.g., “shh!”. Non-verbally, Joy communicates through a diverse array of animated facial expressions, left-handed pointing/gestures, e.g., “Here I’d like this! And this!”, pantomiming and interjecting with steups (i.e., teeth kissing) to communicate disapproval. Joy lives with her extended West Indian family, who embrace these embodied communication styles.
Figure 7: Brian’s embodied perspectives and AAC insights. Image of Brian wearing a Formula-1 shirt and employing hand-gestures with a smartphone whilst enunciating.

3.6.6 Joy’s Insights.

For Joy, two AAC insights were established using the embodied model. Presently, Joy has the confidence to spend plenty of time in environments that are challenging for communication and mobility. Therefore, any AAC must be comfortably operable within these contexts without impacting mobility or causing public stigmas. Next, AAC must not interfere with Joy’s diverse cultural heritage, including distinctive language practices and West Indian creole dialects. Indeed, for Joy, unaided communication and intuitive exchanges are regularly essential to the intimate co-constructed communication experiences shared with her community. Currently, SpokenAAC serves this purpose and offers word-finding support via phone speech-to-text interactions – critically, the app does not interfere with intuitive exchanges.

3.6.7 Vignette 4.

Brian has limited mobility caused by hemiplegia from his stroke in 2015. In public, Brian wears a Stroke/Aphasia badge coupled with a Hidden Disabilities lanyard. To adapt to challenges with dialogue, Brian leverages his personal appearance to support communication: he wears football or Formula One team shirts, gold rings, chains and bracelets. Despite his stroke, Brian works at festivals throughout the UK, “Ummm... concerts... errr... for me... er... yes we do once every 6 months a year with the ice cream [vans]”. However, at these events he sometimes faces misguided accusations of “being drunk”, as his speech slurs when he is more fatigued. When verbally communicating, Brian deliberately slows the pace of his speech, giving him adequate time to “let you get the words out” – enunciating each word patiently. He actively embraces personal challenges: “Yes! Yes! I think if you want it? You need to force yourself into doing it... err... sit back and hopefully it returns! No! No!”. Brian uses non-verbal communication liberally: he maintains an upright posture and employs hand gestures with accompanying dialogue. When struggling with word-finding, Brian turns to verbal filler (i.e., “errr”), hand gestures and scaffolding with photos on his phone.

3.6.8 Brian’s Insights.

Two perspectives were established for Brian using the embodied model and acknowledged during experience prototyping. To begin, Brian is a very competent communicator for whom AAC should operate just as a safety net during communication difficulties with strangers. Otherwise, Brian does not need to be burdened with an intervention, as he already communicates successfully in highly varied domains (i.e., festivals). Furthermore, Brian is non-verbally very proficient; he liberally uses posture and hand gestures to self-express alongside speech – therefore AAC should not detract from these faculties with touch-screen input demands. During an experience prototyping workshop, his critique of an AAC smartwatch prototype can be categorised into three dimensions of the embodied framework: (1) the interface size was too cognitively exerting given his vision difficulties and undermined personal competency, (2) the velcro strap felt too tight and he preferred the looser links/style of his gold watch, but (3) he appreciated that the outward-facing AAC smartwatch display augmented his intuitive use of expressive hand gestures.

3.7 Ramifications for AAC Research

The embodied framework recognises the importance of AAC working in unison with the (1) brain, (2) body and (3) environment – yet in these areas AAC has several design shortcomings. For our co-designers, archetypal AAC that generates speech would replace their dialogue and communication strengths and introduce established problems with AAC/SGDs into their lives, including: learning demands [50], operational pressures [13], mobility problems with heavy AAC form-factors [21, 25], public stigmas [58], slow sender-receiver styles of communication [28] and more. In contrast, the embodied model has disrupted prevailing modes of thinking and provided novel design insights that look beyond traditional AAC functionality. We synthesise these insights into three recommendations for future AAC research.

3.7.1 Communication Occurs Beyond Words.

Our first recommendation focuses on communication beyond words – from non-verbal strategies (i.e., eye-gaze and gesture) to leveraging material resources (i.e., appearance and props) [40, 71]. However, AAC devices that generate speech can undermine co-designers’ preferred and empowering non-verbal communication modes. This is a medicalised approach; our co-designers’ non-verbal strategies represent natural variation in human communication and should not be diminished by AAC interventions. Thus far, the notion of communication confidence has been under-explored in research, and AAC should instill confidence in users’ low-tech and no-tech embodied communication methods [35]. In sum, we recommend exploration of AAC beyond just functional verbal message transmission, as communication is more multidimensional [40]. Using the embodied framework, AAC could look to enhance many non-verbal (i.e., gestural), environmental (i.e., situational) and psychological (i.e., emotional) factors [20, 21, 40].

3.7.2 AAC Should Build on Pre-existing Competencies.

Our next recommendation notes that AAC should not diminish users’ pre-existing competencies. Indeed, the vignettes emphasise that our co-designers are already competent and resourceful communicators in a multitude of settings – despite potential challenges, most did not use any AAC technology at all. Beyond communication, AAC devices should not limit users’ personal style/appearance and engagement with different settings. Indeed, the design of AAC devices can be ill-suited to social environments, as some devices have an unappealing medical aesthetic and do not support external personalisation or customisation [78]. In contrast, our embodied model recognises that self-expression via clothing is a key modality for communication [87].

3.7.3 More Focus on Regulatable AAC Technology.

Finally, we recommend the development of more regulatable AAC technology [62]. Indeed, as noted by Ibrahim et al. [39], AAC can have an obstructive physical presence that is challenging to regulate in environments with limited space, with the device manifesting as an invasive presence that obstructs verbal and non-verbal pathways for communication. During collaboration with co-designers living with aphasia, we also quickly established that AAC must be regulatable – so as not to intervene in and obstruct intimate exchanges within cultures and with close family/friends [53]. Consequently, AAC devices for many communities would be more effective as a safety net rather than a persistent intervention for communication. In addition, the cumbersome form-factor of some AAC devices can be challenging to use ‘on the go’ [19] and dependent on adequate battery charging. Instead, AAC devices should be designed to maximise users’ autonomy – augmenting pre-existing communication modes [62], designed to be potentially not used at all and in smaller, concealable form-factors.

4 Towards Embodied Assistive Technology

Many assistive technologies face abandonment from their communities – the reasons for abandonment can be highly varied, complex and personal [9, 36, 66]. However, a more embodied framework of cognition can provide useful reflection and support the generation of more enriching approaches for assistive technology. From this holistic framework, it is critical for assistive technologies to serve as an embodied extension, working in unison with the brain, the body and the environment in cognition. In this section, we outline three new directions for assistive technology research using the embodied model.

4.1 Personal and Bodily Identity is Shaped by Assistive Technology

Researchers must be more cognisant that personal identity and experience can be shaped by assistive technology [64, 77]. Detrimentally, assistive technologies can be paternalistic – undermining both users’ bodily agency and confidence [64, 77]. For instance, previous research found that children with severe speech and physical impairments (SSPIs) had to resort to rejecting their AAC to ascertain bodily agency and communicate non-verbally [39]. Elsewhere, large AAC tablet form factors were found to hinder users’ own resourcefulness and agency to travel [19, 21]. Instead, assistive technologies should embolden users’ view of themselves and enhance their personal and bodily agency. Equally, assistive technologies should be designed for periods of non-use and be respectful of the pre-existing abilities of their user.

4.2 Assistive Technology Should Not be Reparative and Cause New Problems

With the increasing prominence of AI-centered solutions, researchers must be reminded that we are not designing technology to repair bodies [74]. Indeed, AI may improve the naturalness of AAC speech synthesis – yet AAC users can often already communicate competently by leveraging their body and the environment [71]. Therefore, a human-centered and interdependent direction must remain the core focus of future research [6]. Furthermore, it is important to design assistive technologies that more strongly consider the role of the environment and context of usage [6, 75]. Devices may function but cause the user public discomfort and thereby limit overall agency. For example, Parette and Scherer [65] interviewed deaf and hard of hearing (DHH) users who actively avoided using hearing aids due to social stigmas. Meanwhile, co-design sessions undertaken by Profita et al. [68] found that DHH community members were keen to customise hearing aids and cochlear implants to support personal style and self-expression in public environments. Consequently, assistive technologies could look to embrace users’ personal style as fashionable technologies [69] or adapt depending upon environmental context [21].

4.3 Assistive Technology Should Intuitively Extend Intentionality

Embodied assistive technologies serve as an enhancing bodily extension: extending human abilities intuitively, supporting low-latency interactions and ensuring minimal cognitive demands [45]. Previous research has noted that AAC devices can often have too-high learning demands – undermining users’ self-belief and personal confidence [9, 13, 39, 40, 50]. In contrast, we discourage assistive technologies that burden users with effortful mental tasks in public domains. Therefore, we support the design of assistive technologies that are more immediately intuitive – naturally extending the intentionality of the body, engendering feelings of competence and enhancing experience intuitively. Previously, Theil et al. [82] extended deafblind users’ motor intentionality by affording immediate novel sensations for communication through a wearable vest. Meanwhile, Boyd et al. [10] extended conscious intentionality through smart glasses, which discreetly supported the atypical speech prosody of adults living with autism. Participatory design methodologies centered on the internalised thoughts, interactions, competencies and feelings of the user are helpful for co-designing these more intuitive and embodied assistive technologies [79].

4.4 Disembodying Technologies

Under the more holistic embodied framework, the body and environment play a significant role in human cognition. Equally, the framework recognises the potential for intimate relations with assistive technologies – viscerally serving as extensions of cognition. In light of these factors, caution should be applied regarding technologies that strongly detach us from our bodies or seek to replace traditionally more tangible forms of intuitive interaction with physical environments. This is especially pertinent given the emergence of artificial intelligence (AI) (e.g., [5, 29, 86]), videoconferencing (e.g., [61, 72]) and virtual reality (VR) (e.g., [44, 60, 94]) technologies for communities with disabilities. At all times, the focus must remain on supporting people and communities rather than the improvement of technology. Previously, Obiorah et al. [62] have emphasised the importance of technologies which seek to augment rather than automate and replace – thereby preserving empowering communal, cultural and social rituals for their communities.

5 Conclusion

In this work, we contributed a synthesis of literature concerning embodied and enactive theories of human experience and post-phenomenological accounts of human relations with technology. We drew connections with accessibility scholarship and presented a more embodied theoretical framework of cognition. Finally, we presented four vignettes drawn from extensive data collected during the co-design of AAC technologies, offered insights that transcend traditional AAC and synthesised recommendations for future AAC research. We also considered wider research implications for nascent assistive technologies.

Acknowledgments

Many thanks to our participants and Dr Sally McVicker from Aphasia Re-Connect. We would also like to thank Dr Rita Borgo, Dr Georgia Panagiotidou and Dr Seray Ibrahim for their support, comments and advice throughout the establishment and writing of the paper. We also thank the anonymous CHI 2024 reviewers for their valuable input. This work was supported in part by a UKRI EPSRC Studentship.

Footnotes

1
Predominantly, AAC evaluations have centered on functional communication practices and speech generation, i.e., words-per-minute output using AAC [20].
2
The Hidden Disabilities Sunflower lets wearers voluntarily share that they have a non-visible disability/condition and may need help and understanding in public spaces.

Supplemental Material

Video Presentation (MP4 file), with transcript.

References

[1]
Roger AH Adan, Eline M van der Beek, Jan K Buitelaar, John F Cryan, Johannes Hebebrand, Suzanne Higgs, Harriet Schellekens, and Suzanne L Dickson. 2019. Nutritional psychiatry: Towards improving mental health by what you eat. European Neuropsychopharmacology 29, 12 (2019), 1321–1332.
[2]
Aeonmag. 2022. The phenomenology of Merleau-Ponty and embodiment in the world: Aeon Essays. https://aeon.co/essays/the-phenomenology-of-merleau-ponty-and-embodiment-in-the-world
[3]
Meryl Alper. 2017. Giving voice: Mobile communication, disability, and inequality. MIT Press, Cambridge, Massachusetts, USA.
[4]
National Aphasia Association. 2022. Home. https://www.aphasia.org/
[5]
Andrew Begel, John Tang, Sean Andrist, Michael Barnett, Tony Carbary, Piali Choudhury, Edward Cutrell, Alberto Fung, Sasa Junuzovic, Daniel McDuff, 2020. Lessons learned in designing AI for autistic adults. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility. ACM Press, New York, USA, 1–6.
[6]
Cynthia L Bennett, Erin Brady, and Stacy M Branham. 2018. Interdependence as a frame for assistive technology research and design. In Proceedings of the 20th international acm sigaccess conference on computers and accessibility. ACM Press, New York, NY, USA, 161–173.
[7]
Marcelo L Berthier. 2005. Poststroke aphasia: epidemiology, pathophysiology and treatment. Drugs & aging 22 (2005), 163–182.
[8]
Filip Bircanin, Margot Brereton, Laurianne Sitbon, Bernd Ploderer, Andrew Azaabanye Bayor, and Stewart Koplick. 2021. Including adults with severe intellectual disabilities in co-design through active support. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM Press, New York, NY, USA, 1–12.
[9]
Filip Bircanin, Bernd Ploderer, Laurianne Sitbon, Andrew A Bayor, and Margot Brereton. 2019. Challenges and opportunities in using augmentative and alternative communication (AAC) technologies: Design considerations for adults with severe disabilities. In Proceedings of the 31st Australian Conference on Human-Computer-Interaction. ACM Press, New York, USA, 184–196.
[10]
LouAnne E Boyd, Alejandro Rangel, Helen Tomimbang, Andrea Conejo-Toledo, Kanika Patel, Monica Tentori, and Gillian R Hayes. 2016. SayWAT: Augmenting face-to-face conversations for adults with autism. In Proceedings of the 2016 CHI conference on human factors in computing systems. ACM Press, New York, USA, 4872–4883.
[11]
Caterina Breitenstein, Tanja Grewe, Agnes Flöel, Wolfram Ziegler, Luise Springer, Peter Martus, Walter Huber, Klaus Willmes, E Bernd Ringelstein, Karl Georg Haeusler, 2017. Intensive speech and language therapy in patients with chronic aphasia after stroke: a randomised, open-label, blinded-endpoint, controlled trial in a health-care setting. The Lancet 389, 10078 (2017), 1528–1538.
[12]
Ginny Butcher. 2020. Why my wheelchair is a part of my body, and what this means for you. https://medium.com/conscious-life/why-my-wheelchair-is-a-part-of-my-body-and-what-this-means-for-you-f1635acfa8d4
[13]
Jessica Caron, Janice Light, and Kathryn Drager. 2016. Operational demands of AAC mobile technology applications on programming vocabulary and engagement during professional and child interactions. Augmentative and Alternative Communication 32, 1 (2016), 12–24.
[14]
Mark Chignell, Lu Wang, Atefeh Zare, and Jamy Li. 2023. The evolution of HCI and human factors: Integrating human and artificial intelligence. ACM Transactions on Computer-Human Interaction 30, 2 (2023), 1–30.
[15]
Andy Clark and David Chalmers. 1998. The extended mind. Analysis 58, 1 (1998), 7–19.
[16]
Victoria Clarke and Virginia Braun. 2017. Thematic analysis. The journal of positive psychology 12, 3 (2017), 297–298.
[17]
Chris Code, Ilias Papathanasiou, Silvia Rubio-Bruno, María de la Paz Cabana, Maria Marta Villanueva, Line Haaland-Johansen, Tatjana Prizl-Jakovac, Ana Leko, Nada Zemva, Ruth Patterson, 2016. International patterns of the public awareness of aphasia. International journal of language & communication disorders 51, 3 (2016), 276–284.
[18]
Giovanna Colombetti and Joel Krueger. 2015. Scaffoldings of the affective mind. Philosophical Psychology 28, 8 (2015), 1157–1176.
[19]
Humphrey Curtis and Timothy Neate. 2023. Watch Your Language: Using Smartwatches to Support Communication. In Proceedings of the 25th International ACM SIGACCESS Conference on Computers and Accessibility. ACM Press, New York, USA, 1–22.
[20]
Humphrey Curtis, Timothy Neate, and Carlota Vazquez Gonzalez. 2022. State of the Art in AAC: A Systematic Review and Taxonomy. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility. ACM Press, New York, USA, 1–22.
[21]
Humphrey Curtis, Zihao You, William Deary, Miruna-Ioana Tudoreanu, and Timothy Neate. 2023. Envisioning the (In) Visibility of Discreet and Wearable AAC Devices. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. ACM Press, New York, USA, 1–19.
[22]
Antonio R Damasio. 1999. The feeling of what happens: Body and emotion in the making of consciousness. Houghton Mifflin Harcourt, Boston, USA.
[23]
René Descartes. 1955. The philosophical works of Descartes [2 vols.]. Vol. 2. Dover Publications, Cambridge, UK.
[24]
René Descartes. 1984. The Philosophical Writings of Descartes: Volume 2. Vol. 2. Cambridge University Press, Cambridge, UK.
[25]
Jurica Dolic, Jesenka Pibernik, and Josip Bota. 2012. Evaluation of mainstream tablet devices for symbol based AAC communication. In Agent and Multi-Agent Systems. Technologies and Applications: 6th KES International Conference, KES-AMSTA 2012, Dubrovnik, Croatia, June 25-27, 2012. Proceedings 6, Vol. 7327. Springer, New York, USA, 251–260.
[26]
Paul Dourish. 1999. Embodied interaction: Exploring the foundations of a new approach to HCI. Work 1, 1 (1999), 1–16.
[27]
Paul Dourish. 2001. Where the action is: the foundations of embodied interaction. MIT press, Cambridge, Massachusetts, USA.
[28]
Alexander Fiannaca, Ann Paradiso, Mira Shah, and Meredith Ringel Morris. 2017. AACrobat: Using mobile devices to lower communication barriers and provide autonomy with gaze-based AAC. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. ACM Press, New York, NY, USA, 683–695.
[29]
Leah Findlater, Steven Goodman, Yuhang Zhao, Shiri Azenkot, and Margot Hanley. 2020. Fairness issues in AI systems that augment sensory abilities. ACM SIGACCESS Accessibility and Computing 125, 8 (2020), 1–1.
[30]
Mauricio Fontana de Vargas, Jiamin Dai, and Karyn Moffatt. 2022. AAC with Automated Vocabulary from Photographs: Insights from School and Speech-Language Therapy Settings. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility. ACM Press, New York, USA, 1–18.
[31]
Daniel Freeman, Bryony Sheaves, Guy M Goodwin, Ly-Mee Yu, Alecia Nickless, Paul J Harrison, Richard Emsley, Annemarie I Luik, Russell G Foster, Vanashree Wadekar, 2017. The effects of improving sleep on mental health (OASIS): a randomised controlled trial with mediation analysis. The Lancet Psychiatry 4, 10 (2017), 749–758.
[32]
Amedeo Giorgi and Barbro Giorgi. 2003. Phenomenology. Sage Publications, Inc., New York, USA.
[33]
Foad Hamidi, Patrick Mbullo, Deurence Onyango, Michaela Hynie, Susan McGrath, and Melanie Baljko. 2018. Participatory design of DIY digital assistive technology in Western Kenya. In Proceedings of the Second African Conference for Human Computer Interaction: Thriving Communities. ACM Press, New York, USA, 1–11.
[34]
D Jeffery Higginbotham, Howard Shane, Susanne Russell, and Kevin Caves. 2007. Access to AAC: Present, past, and future. Augmentative and alternative communication 23, 3 (2007), 243–257.
[35]
Tami Howe, Elaina McCarron, and Jacob Rowe. 2023. What helps confidence in communication – Perspectives of adults with aphasia: “Get maze… not stay out it”. Journal of Communication Disorders 103 (2023), 106334.
[36]
Amy Hurst and Jasmine Tobias. 2011. Empowering individuals with do-it-yourself assistive technology. In The proceedings of the 13th international ACM SIGACCESS conference on Computers and accessibility. ACM Press, New York, USA, 11–18.
[37]
Gareth Iacobucci. 2017. A service under pressure. BMJ 356 (2017), 2 pages.
[38]
Seray Ibrahim, Michael Clarke, Asimina Vasalou, and Jeff Bezemer. 2023. Common ground in AAC: how children who use AAC and teaching staff shape interaction in the multimodal classroom. Augmentative and Alternative Communication 1, 1 (2023), 1–12.
[39]
Seray B Ibrahim, Asimina Vasalou, and Michael Clarke. 2018. Design opportunities for AAC and children with severe speech and physical impairments. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM Press, New York, USA, 1–13.
[40]
Seray B Ibrahim, Asimina Vasalou, and Michael Clarke. 2020. Can design documentaries disrupt design for disability?. In Proceedings of the interaction design and children conference. ACM Press, New York, USA, 96–107.
[41]
Don Ihde. 2002. Bodies in technology. Vol. 5. U of Minnesota Press, Minnesota, USA.
[42]
Don Ihde. 2010. Heidegger’s technologies: Postphenomenological perspectives. Fordham University Press, New York, USA.
[43]
Toru Ishida. 1999. Understanding digital cities. In Kyoto Workshop on Digital Cities. Springer, New York, USA, 7–17.
[44]
Tiger F Ji, Brianna Cochran, and Yuhang Zhao. 2022. Vrbubble: Enhancing peripheral awareness of avatars for people with visual impairments in social virtual reality. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility. ACM Press, New York, USA, 1–17.
[45]
Daniel Kahneman. 2011. Thinking, Fast and Slow. Macmillan, New York, NY, USA.
[46]
Thomas P Kasulis and Yasuo Yuasa. 1987. The body: Toward an Eastern mind-body theory. SUNY Press, New York, USA.
[47]
Sat Bir S Khalsa, Lynn Hickey-Schultz, Deborah Cohen, Naomi Steiner, and Stephen Cope. 2012. Evaluation of the mental health benefits of yoga in a secondary school: A preliminary randomized controlled trial. The journal of behavioral health services & research 39 (2012), 80–90.
[48]
Logan Kugler. 2021. The state of virtual reality hardware. Commun. ACM 64, 2 (2021), 15–16.
[49]
Insoo Lee, Jinsung Lee, Kyunghan Lee, Dirk Grunwald, and Sangtae Ha. 2021. Demystifying commercial video conferencing applications. In Proceedings of the 29th ACM international conference on multimedia. ACM Press, New York, USA, 3583–3591.
[50]
Janice Light and Kathryn Drager. 2004. Re-thinking access to AAC technologies for young children: Simplifying the learning demands. Perspectives on Augmentative and Alternative Communication 13, 1 (2004), 5–12.
[51]
Kelly Mack, Emma McDonnell, Dhruv Jain, Lucy Lu Wang, Jon E. Froehlich, and Leah Findlater. 2021. What do we mean by “accessibility research”? A literature survey of accessibility papers in CHI and ASSETS from 1994 to 2019. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM Press, New York, USA, 1–18.
[52]
Azeem Majeed, Edward John Maile, and Andrew B Bindman. 2020. The primary care response to COVID-19 in England’s National Health Service. Journal of the Royal Society of Medicine 113, 6 (2020), 208–210.
[53]
M Shannon McCord and Gloria Soto. 2004. Perceptions of AAC: An ethnographic investigation of Mexican-American families. Augmentative and alternative communication 20, 4 (2004), 209–227.
[54]
David McNaughton and Janice Light. 2013. The iPad and mobile technology revolution: Benefits and challenges for individuals who require augmentative and alternative communication. 107–116 pages.
[55]
Maurice Merleau-Ponty. 1996. Phenomenology of perception. Motilal Banarsidass Publisher, Dehli, India.
[56]
Karyn Moffatt, Golnoosh Pourshahid, and Ronald M Baecker. 2017. Augmentative and alternative communication devices for aphasia: the emerging role of “smart” mobile devices. Universal Access in the Information Society 16 (2017), 115–128.
[57]
Alison Moorcroft, Jennifer Allum, and Nerina Scarinci. 2022. Speech language pathologists’ responses to the rejection or abandonment of AAC systems. Disability and Rehabilitation 44, 16 (2022), 4257–4265.
[58]
A Moorcroft, N Scarinci, and C Meyer. 2018. A systematic review of the barriers and facilitators to the provision and use of low-tech and unaided AAC systems for people with complex communication needs and their families. Disability and Rehabilitation: Assistive Technology 14 (2018), 710–731. Issue 7.
[59]
A Moorcroft, N Scarinci, and C Meyer. 2021. “I’ve had a love-hate, I mean mostly hate relationship with these PODD books”: parent perceptions of how they and their child contributed to AAC rejection and abandonment. Disability and rehabilitation: assistive technology 16, 1 (2021), 72–82.
[60]
Martez Mott, John Tang, Shaun Kane, Edward Cutrell, and Meredith Ringel Morris. 2020. “i just went into it assuming that i wouldn’t be able to have the full experience” understanding the accessibility of virtual reality for people with limited mobility. In Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility. ACM Press, New York, USA, 1–13.
[61]
Timothy Neate, Vasiliki Kladouchou, Stephanie Wilson, and Shehzmani Shams. 2022. “Just Not Together”: The Experience of Videoconferencing for People with Aphasia during the Covid-19 Pandemic. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. ACM Press, New York, USA, 1–16.
[62]
Mmachi God’sglory Obiorah, Anne Marie Piper, and Michael Horn. 2021. Designing AACs for people with aphasia dining in restaurants. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. ACM Press, New York, USA, 1–14.
[63]
Julian E Orr. 2016. Talking about machines: An ethnography of a modern job. Cornell University Press, Ithaca, New York.
[64]
Katherine Ott. 2013. Disability Things. University of Illinois Press, Urbana, Illinois. 119–135 pages.
[65]
Phil Parette and Marcia Scherer. 2004. Assistive technology use and stigma. Education and training in developmental disabilities 39, 3 (2004), 217–226.
[66]
Betsy Phillips and Hongxin Zhao. 1993. Predictors of assistive technology abandonment. Assistive technology 5, 1 (1993), 36–45.
[67]
Jesse J Prinz. 2004. Gut reactions: A perceptual theory of emotion. Oxford University Press, Oxford, UK.
[68]
Halley P Profita, Abigale Stangl, Laura Matuszewska, Sigrunn Sky, and Shaun K Kane. 2016. Nothing to hide: Aesthetic customization of hearing aids and cochlear implants in an online community. In Proceedings of the 18th International ACM SIGACCESS Conference on Computers and Accessibility. ACM Press, New York, USA, 219–227.
[69]
Graham Pullin. 2009. Design meets disability. MIT press, Massachusetts, USA.
[70]
Amon Rapp. 2021. Wearable technologies as extensions: a postphenomenological framework and its design implications. Human–Computer Interaction 38, 2 (2021), 1–39.
[71]
Pirkko Rautakoski. 2011. Training total communication. Aphasiology 25, 3 (2011), 344–365.
[72]
Jazz Rui Xia Ang, Ping Liu, Emma McDonnell, and Sarah Coppola. 2022. “In this online environment, we’re limited”: Exploring Inclusive Video Conferencing Design for Signers. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. ACM Press, New York, USA, 1–16.
[73]
Samuel C Sennott, Linda Akagi, Mary Lee, and Anthony Rhodes. 2019. AAC and artificial intelligence (AI). Topics in language disorders 39, 4 (2019), 389.
[74]
Tom Shakespeare. 2006. The social model of disability. The disability studies reader 2 (2006), 197–204.
[75]
Kristen Shinohara and Jacob O Wobbrock. 2011. In the shadow of misperception: assistive technology use and social interactions. In Proceedings of the SIGCHI conference on human factors in computing systems. ACM Press, New York, NY, USA, 705–714.
[76]
Edo Shonin, William Van Gordon, and Mark D Griffiths. 2013. Meditation as medication: are attitudes changing? British Journal of General Practice 63, 617 (2013), 654–654.
[77]
Tobin Siebers. 2013. Disability and the theory of complex embodiment—for identity politics in a new register. The disability studies reader 4 (2013), 278–297.
[78]
Kiley Sobel, Alexander Fiannaca, Jon Campbell, Harish Kulkarni, Ann Paradiso, Ed Cutrell, and Meredith Ringel Morris. 2017. Exploring the design space of AAC awareness displays. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM Press, New York, NY, USA, 2890–2903.
[79]
Katta Spiel and Robin Angelini. 2022. Expressive Bodies: Engaging with Embodied Disability Cultures for Collaborative Design Critiques. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility. ACM Press, New York, USA, 1–6.
[80]
Kim Sterelny. 2010. Minds: extended or scaffolded? Phenomenology and the Cognitive Sciences 9, 4 (2010), 465–481.
[81]
Dag Svanæs. 2013. Interaction design for and with the lived body: Some implications of Merleau-Ponty’s phenomenology. ACM Transactions on Computer-Human Interaction (TOCHI) 20, 1 (2013), 1–30.
[82]
Arthur Theil, Lea Buchweitz, James Gay, Eva Lindell, Li Guo, Nils-Krister Persson, and Oliver Korn. 2020. Tactile board: a multimodal augmentative and alternative communication device for individuals with Deafblindness. In Proceedings of the 19th International Conference on Mobile and Ubiquitous Multimedia. ACM Press, New York, USA, 223–228.
[83]
S Kay Toombs. 1995. The lived experience of disability. Human studies 18, 1 (1995), 9–23.
[84]
Tavistock Trust. 2023. The Tavistock Trust for Aphasia Software Finder. https://aphasiatavistocktrust.org/aphasia-software-finder/ [Accessed 04-09-2023].
[85]
MA Tully, ME Cupples, WS Chan, K McGlade, and IS Young. 2005. Brisk walking, fitness, and cardiovascular risk: a randomized controlled trial in primary care. Preventive medicine 41, 2 (2005), 622–628.
[86]
Stephanie Valencia, Richard Cave, Krystal Kallarackal, Katie Seaver, Michael Terry, and Shaun K Kane. 2023. “The less I type, the better”: How AI Language Models can Enhance or Impede Communication for AAC Users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. ACM Press, New York, USA, 1–14.
[87]
Stephanie Valencia, Michal Luria, Amy Pavel, Jeffrey P Bigham, and Henny Admoni. 2021. Co-designing socially assistive sidekicks for motion-based AAC. In Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction. ACM Press, New York, NY, USA, 24–33.
[88]
Francisco J Varela, Evan Thompson, and Eleanor Rosch. 2017. The embodied mind, revised edition: Cognitive science and human experience. MIT press, Cambridge, Massachusetts, USA.
[89]
Peter-Paul Verbeek. 2005. What things do: Philosophical reflections on technology, agency, and design. Penn State Press, Pennsylvania, USA.
[90]
Peter-Paul Verbeek. 2015. Beyond interaction: A short introduction to mediation theory. Interactions 22, 3 (2015), 26–31.
[91]
Annalu Waller. 2019. Telling tales: unlocking the potential of AAC technologies. International journal of language & communication disorders 54, 2 (2019), 159–169.
[92]
Stephanie Wilson, Abi Roper, Jane Marshall, Julia Galliers, Niamh Devane, Tracey Booth, and Celia Woolf. 2015. Codesign for people with aphasia through tangible design languages. CoDesign 11, 1 (2015), 21–34.
[93]
Peter Wright and John McCarthy. 2008. Empathy and experience in HCI. In Proceedings of the SIGCHI conference on human factors in computing systems. ACM Press, New York, NY, USA, 637–646.
[94]
Kexin Zhang, Elmira Deldari, Zhicong Lu, Yaxing Yao, and Yuhang Zhao. 2022. “It’s Just Part of Me:” Understanding Avatar Diversity and Self-presentation of People with Disabilities in Social Virtual Reality. In Proceedings of the 24th International ACM SIGACCESS Conference on Computers and Accessibility. ACM Press, New York, USA, 1–16.

    Author Tags

    1. Assistive technology
    2. embodiment
    3. participatory design
    4. phenomenology
