1 Introduction
A landmark moment in Western philosophy and science occurred with Descartes’ dualist first principle, “cogito ergo sum” or “I think therefore I am”, which instantiated a foundationalist view that the mind and body were separate – meaning consciousness was reducible to processes occurring only within the mind [23, 24]. In recent years, the Cartesian dualist position, which omits the role of the body, has been significantly weakened by both neuroscientific and biological findings [22, 67, 88].
Equally, there has been a significant growth in mainstream awareness of bodily techniques for ensuring general mental well-being such as meditation [76], yoga [47], fitness/exercise [85], restful sleep [31] and balanced diets [1]. Specifically, meditation and yoga come from an Eastern philosophical tradition, which embraces the essential role of the body in everyday human conscious experience [46]. Although these mainstream trends offer much promise, the importance of embodied interactions has often been overlooked in the design of assistive technologies – particularly augmentative and alternative communication (AAC) devices [9, 38, 39, 40].
Since the 1960s, AAC that generates speech (i.e., speech-generating devices or SGDs) has emerged as a technology designed to offer opportunities for people living with complex communication needs (CCN) to produce electronic speech by composing messages using symbol or lexical representations of language [34, 38, 40]. Yet, recent research has found this AAC design to underplay many communities’ pre-existing autonomy and more embodied forms of communicative expression [9, 38, 40]. Looking beyond verbal dialogue, many people with CCN actively harness total and non-verbal communication strategies (e.g., gestures, facial expressions and physical props) to successfully communicate [71].
Research from both Ibrahim et al. [40] and Bircanin et al. [9], amongst separate communities, found that AAC devices counter-intuitively undermined these embodied communication pathways, prompting device abandonment. In response, Ibrahim et al. [40] co-created Whisper, an AAC system which shape-shifts to prevent the prominent physical presence of the technology from blocking embodied gestures for communication. Meanwhile, Bircanin et al. [9] and Alper [3] have encouraged more tangible and low-fidelity AAC approaches, even appropriating tangible objects of interest favoured by communities living with disabilities to support their self-expression and embodied communication styles [87].
Despite this noteworthy prior research, AAC devices still face high levels of abandonment from many communities worldwide [57, 59, 91]. Simultaneously, the majority of mainstream AAC devices and research has continued to focus predominantly on electronic speech production [20, 39, 91] and, more recently, on the rise of artificial intelligence (AI) techniques to improve SGDs [30, 73, 86]. Yet, fundamentally, speech-generating AAC devices are reparative – aspiring to replace or fix communities’ natural speech and pre-existing non-verbal communication abilities [40]. Concerningly, this design perspective emanates from the ableist medical model of disability, which defines disability as residing within the individual [39, 74]. Equally, these AAC technologies fail to acknowledge the co-constructed nature of communication and the embodied nature of human experience [39].
In response, the embodied framework presents a more multidimensional account of people’s lived experiences. Consequently, we seek to demonstrate how the embodied framework and tangible co-design can help transcend traditional views of AAC technology. Through a case study, we show how the embodied framework and co-design activities can assist in generating novel insights for AAC technology. Finally, we reflect upon wider research implications for the embodied framework and concerns regarding nascent disembodying assistive technologies. In sum, in this short paper we offer three contributions:
(1) Using key literature, we argue for a more embodied theoretical framework of assistive technologies. We also discuss how embodied and tangible co-design activities can empower communities by facilitating purposeful critique and communication.
(2) With these tenets, we present a case study demonstrating the generative effects of the embodied framework for co-designing AAC amongst people living with aphasia. Initially, we contextualise with relevant literature. Then, drawing on 25 hours of co-design sessions and 300 hours of volunteer time at an aphasia charity, we present four vignettes and concurrently apply the framework to establish more empowering AAC design insights.
(3) Finally, we situate and discuss wider research implications for the embodied framework and outline nascent concerns with disembodying technologies.
3 Case Study: Co-designing AAC with Communities Living with Aphasia
To emphasise the importance of the embodied framework (i.e., Figure 2), we draw on qualitative insights from co-designing AAC. Initially, we overview our research approach, introduce the language impairment aphasia and discuss available AAC interventions. We then present vignettes of frequent collaborators with aphasia to establish their communication experiences. For each vignette, we provide emergent design insights from using the embodied framework and tangible co-design activities. Finally, we reflect on the disruptive effect of the embodied model for AAC research.
3.1 Research Procedure
In total, we have held 14 co-design activities at the Roberta Williams Speech and Language Therapy Clinic across a 14-month period beginning in July 2022. Co-design sessions involved 12–19 people living with aphasia (avg: 13.7), 3–6 speech and language therapists (SLTs) (avg: 4) and up to 1 family member (avg: 0.3). Sessions took approximately 1–2.5 hours. Ethical approval for all research was granted by the King’s College London Health Faculties Research Ethics Subcommittee.
3.2 Living with Aphasia
Aphasia is an acquired language impairment most commonly caused by stroke [4, 7] but can be the result of any damage to the language centres of the brain. It can affect reading, writing, speech and comprehension [4, 7]. Often, aphasia is a publicly invisible disability, affecting approximately one-third of stroke survivors, yet fewer than 10% of the population know of the condition [17]. The communication abilities of people with aphasia vary significantly [4]. For instance, some people with aphasia might find speaking more challenging than writing or vice versa. The number of people living with aphasia will likely increase due to ageing global populations [17], and increasingly people with aphasia face social exclusion and barriers to long-term speech and language therapy [11, 37, 52]. People with aphasia face individualised challenges with technology [56], particularly technologies which remove tangible, physical and embodied interactions [14, 43, 48, 49].
3.3 Available AAC for Aphasia
For people with aphasia, dedicated hardware and subscription apps remain the most common AAC intervention. Typically, dedicated AAC devices are symbol-based with touchscreen, eye-gaze or manual switches as input, e.g., Dynavox, Liberator and Lingraphica [20]. Yet, there has been a substantial growth in mobile and tablet-based AAC applications – the Tavistock Trust for Aphasia provides an extensive repository of iOS and Android options [54, 84]. Most AAC devices take the form of an electronic lexical or symbol-based dictionary, presenting navigable grids of images, symbols or text, e.g., Proloquo2Go and TouchChat [56]. Most AAC for aphasia operates as an SGD in which the user composes messages before they are read aloud by the device1.
3.4 Vignette Methodology
The vignettes were derived from reflective analysis of the researchers’ extensive engagement with older adults living with aphasia and by following the processes outlined in previous work [8, 63, 93]. Each vignette is drawn from three data sources: (1) 25 hours of coded video footage from co-design sessions, (2) ethnographic experiences from the first author’s 300 hours spent observing/volunteering at the aphasia clinic and (3) coded data from exhaustive field notes documenting lived experiences. These data sources were collectively analysed, enabling identification of key quotes from transcripts and of people’s unique contextual/bodily communication strengths – this categorised data was then thematically analysed to craft the vignettes [16]. For each vignette, we then applied the embodied framework to generate key AAC design insights and share relevant co-developed AAC from people living with aphasia. Overall, we employed a qualitative methodological approach, analysing our data inductively. Vignettes proved very helpful for synthesising the large amount of field data, supporting triangulation of observations and identification of quotes and themes. The vignettes seek to show the real, unique and embodied experiences of people living with aphasia, not just abstract representations of people’s communication experiences.
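To make the triangulation step more concrete, the sketch below shows one minimal way such coded material could be organised per person across the three data sources before thematic analysis. This is purely illustrative: the CodedExcerpt fields, codes and excerpts are hypothetical and do not reflect our actual coding scheme or tooling.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CodedExcerpt:
    source: str   # "video", "ethnography" or "field_notes"
    person: str   # pseudonym
    code: str     # open code assigned during analysis
    excerpt: str  # quote or observation

# Hypothetical excerpts, illustrating the structure only (not our coding scheme).
excerpts = [
    CodedExcerpt("video", "Raymond", "context_anxiety", "freezes when addressed by strangers"),
    CodedExcerpt("field_notes", "Raymond", "wearable_identity", "wears smartwatch and aphasia badge"),
    CodedExcerpt("ethnography", "Anthony", "appearance_as_prop", "band T-shirts start conversations"),
]

# Group codes per person across all three sources to support triangulation,
# before the grouped material is thematically analysed by hand.
by_person = defaultdict(set)
for e in excerpts:
    by_person[e.person].add((e.source, e.code))

for person, codes in sorted(by_person.items()):
    print(person, sorted(codes))
```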
3.5 Embodied Co-Design Techniques
We briefly outline three co-design techniques we frequently employed throughout sessions to develop more embodied assistive technologies. These techniques draw on embodied and phenomenological frameworks of cognition, especially Rapp’s [70] internal aspects of interaction. Indeed, these techniques helped support our understanding of co-designers’ internalistic thought-styles whilst co-designing assistive technologies. This ensured our co-designers had an empowered influence on technologies that would mediate their world relations and shape their cognitive abilities, sense of identity and eventual experience of different environments.
3.5.1 Low-fidelity prototyping.
Exploratory low-fidelity prototyping lets co-designers envision their own AAC and self-build it using craft materials. This process lets participants design customised AAC for their unique challenges, whereby co-designers can extensively tailor their AAC to support their internal needs and intentionality within specific communication contexts (e.g., public transport) [70].
3.5.2 High-fidelity prototypes.
The liberal use of high-fidelity prototypes mitigated cognitive abstraction for our co-designers. Initially, high-fidelity prototypes could be accessibly tried and tested. Then, co-designers could simultaneously use the high-fidelity AAC as props to support personalised critiques in relation to their intentionality and needs across different contexts.
3.5.3 Experience prototyping.
In later co-design sessions, we used an actor to experience prototype and role-play with participants using AAC – thereby simulating real-world scenarios of usage. This was effective for recognising immediate problems with the AAC, first-person feelings/emotions and determining if the intervention obstructed co-designers’ pre-existing embodied styles of communication [79].
3.6 Applying the Embodied Framework: Vignettes of AAC with People Living with Aphasia
We present four vignettes of communication strengths and challenges from people living with aphasia to elucidate different communication experiences, embodied perspectives and subsequent key insights for AAC devices. All names are pseudonyms.
3.6.1 Vignette 1.
Raymond has lived with aphasia for over 10 years. Most days he is fatigued, “its down to the [blood pressure] medication” causing nightly discomforts. Before his stroke, Raymond repaired computers and mobile phones – he is an ardent technologist and wears a smartwatch to “help me with [my] blood pressure”. He has no publicly visible physical impairment from his stroke; consequently, Raymond faces difficulties requesting seats on public transport: “Because they look at you like... erm why do you need a seat? Yeah. yeah... you’ve got to either... have a badge on... or... erm have a card that says I have aphasia... say that I’ve had a stroke”. To mitigate public challenges, Raymond wears his disability identification, a Stroke/Aphasia badge and medical wristband. His wrist arthritis limits his communicative hand gestures. Within pressured contexts involving communication with strangers, Raymond’s anxiety causes him to freeze, rendering him unable to “download” his dialogue. Nonetheless, he greatly desires to “fix my speech... let me talk... and confidence... get your confidence back”. If Raymond has an important appointment, he is supported by family who almost “telepathically” understand his non-verbal communication styles.
3.6.2 Raymond’s Insights.
For Raymond, AAC design insights were established using the embodied model and his co-developed AAC smartwatch. The embodied model’s awareness of extended cognition highlighted that the environment and confidence profoundly impact Raymond’s communication. The context-specific nature of his communication anxieties suggests any AAC should be regulatable and needed only during challenging episodes. Next, vocal and technological competency are key to Raymond’s sense of identity and must be endorsed by AAC technology. During ensuing co-design activities, Raymond low-fidelity prototyped an AAC smartwatch application. Despite his aphasia, Raymond wanted to control the smartwatch with verbal commands, as Siri is presently frustrating to use: “I can’t get the words out and it cuts off!”. Furthermore, he is very accustomed to wearing a smartwatch, and the AAC could provide discreet haptic sensations prompting mindful breathing and self-regulation of his contextual speech anxiety.
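As a purely illustrative sketch of this design direction (not Raymond’s actual prototype), the snippet below outlines how a user-regulated, paced-breathing haptic prompt might be structured. The vibrate function is a hypothetical stand-in for whatever haptic API a real smartwatch platform provides.

```python
import time

def vibrate(duration_ms):
    """Hypothetical stand-in for a smartwatch haptic API."""
    print(f"[haptic] {duration_ms} ms pulse")

def breathing_prompt(cycles=3, inhale_s=4, exhale_s=6):
    """Discreet paced-breathing cue: one pulse to inhale, two pulses to exhale."""
    for _ in range(cycles):
        vibrate(200)                 # cue the in-breath
        time.sleep(inhale_s)
        vibrate(100); vibrate(100)   # cue the out-breath
        time.sleep(exhale_s)

# Regulatable by design: nothing runs in the background; the prompt only fires
# when the wearer explicitly requests it during a stressful exchange.
if __name__ == "__main__":
    user_requested = True  # e.g., a tap or verbal command on the watch
    if user_requested:
        breathing_prompt()
```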
3.6.3 Vignette 2.
Anthony has right-sided hemiplegic bodily paralysis caused by his stroke in the 2000s. Consequently, Anthony wears a right-sided ankle brace and uses a walking stick – limiting his ability to carry additional objects: “The stick! Can’t carry!”. He has limited verbal dialogue, yet Anthony leverages his appearance to support communication: he wears colourful badges related to his favourite artists and accessibility, i.e., “On buses... blue aphasia badge and blue seat badge”, band/art T-shirts, sunglasses and fedora hats. Regularly, Anthony holds conversations concerning his elaborate outfits, and the vocabulary comes more naturally as it relates to his hobbies or is rehearsed. If Anthony is word-finding, he will employ verbal filler (e.g., “errrr”), maintain conversational flow with hand gestures and liberally search on his phone, e.g., “Search on phone... look up [types of] pasta”. In particular, he traverses Wikipedia to word-find, plus leverages the site’s images as props for pointing, e.g., “Poke them and point at sign”. Whilst discussing music, Anthony enjoys playing music from his phone at high volume, and he sometimes draws on paper provided his hands are not shaking.
3.6.4 Anthony’s Insights.
Two insights were established using the embodied model and renewed during high-fidelity prototype testing. Currently, Anthony creatively leverages his body to scaffold communication via accessibility badges and fashionable style across different contexts (e.g., art galleries/concerts). Importantly, AAC should not detract from these effective non-verbal competencies. Furthermore, if Anthony chooses to express himself via gestures, pointing, drawing (i.e., props), phone music and Wikipedia searches, AAC should reinforce this self-expression. During high-fidelity prototype testing, an AAC eInk smart-badge, which provides useful expressions to scaffold communication as a prop, was criticised by Anthony along the dimensions of the embodied framework: (1) the latency of the eInk refresh rate did not meet expectations, instead perpetuating “frustration”; (2) the buttons were not tangible enough to accommodate his finger-presses when his hands were shaking; and (3) the display lacked a 3D-printed case, preventing comfortable usage in public.
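To illustrate the second critique (buttons accommodating hand tremor), the sketch below shows a simple software-side debounce that ignores rapid repeat presses while cycling scaffolding phrases. The read_button and show_phrase stubs and the 0.4-second threshold are assumptions for illustration, not the badge’s actual firmware.

```python
import time

PHRASES = ["Please speak slowly", "I have aphasia", "Can you write that down?"]

def read_button():
    """Hypothetical stand-in for polling the badge's physical button."""
    return False

def show_phrase(text):
    """Hypothetical stand-in for refreshing the eInk display."""
    print(f"[eInk] {text}")

def badge_loop(debounce_s=0.4):
    """Cycle through phrases, ignoring rapid repeat presses caused by tremor."""
    index, last_press = 0, 0.0
    show_phrase(PHRASES[index])
    while True:
        if read_button() and (time.monotonic() - last_press) > debounce_s:
            last_press = time.monotonic()
            index = (index + 1) % len(PHRASES)
            show_phrase(PHRASES[index])
        time.sleep(0.01)  # poll at roughly 100 Hz
```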
3.6.5 Vignette 3.
Joy has right-sided hemiplegic bodily paralysis caused by her stroke in 2018. To mitigate paralysis, Joy uses a walking stick, wears a velcro-fitted black leg-brace, a cross-body bag for accessories and a sling arm-brace. Furthermore, Joy has much self-belief; for instance, she will always complain if a coffee order is wrong: “I will! [...] No! You can hear me! [...] Can I have my say please! [Exclaiming with hands] Do you know what I mean!?”. Challenging environments for Joy include escalators/stairs and public transport – but she always communicates her needs with transport operators: “No me for me I go, Excuse me! Excuse me! Yeah! Yeah!”. In terms of technologies, Joy wears a Hidden Disabilities lanyard2 with an attached Stroke/Aphasia badge, uses a smartwatch with a stretchable strap, tracks daily step-counts and uses the SpokenAAC app for word-finding. Joy is partially verbal but greatly enjoys communicating verbally through abrupt changes in tone, pitch and loudness to add dramatic depths. Whilst word-finding, she writes on paper or prompts her AAC with speech-to-text/one-handed typing: “Spoken! Yep! I use it a lot”. Often, she repeats words for emphasis and employs non-lexical vocalisations, e.g., “shh!”. Non-verbally, Joy communicates through a diverse array of animated facial expressions, left-handed pointing/gestures, e.g., “Here I’d like this! And this!”, pantomiming and interjecting with steups (i.e., teeth kissing) to communicate disapproval. Joy lives with her extended West Indian family who embrace these embodied communication styles.
3.6.6 Joy’s Insights.
For Joy, two AAC insights were established using the embodied model. Presently, Joy has the confidence to spend plenty of time in environments that are challenging for communication and mobility. Therefore, any AAC must be comfortably operable within these contexts without impacting mobility or causing public stigma. Next, AAC must not interfere with Joy’s diverse cultural heritage, including distinctive language practices and West Indian creole dialects. Indeed, for Joy, unaided communication and intuitive exchanges are regularly essential to intimate co-constructed communication experiences shared with her community. Currently, SpokenAAC serves this purpose and offers word-finding support via phone speech-to-text interactions – critically, the app does not interfere with intuitive exchanges.
3.6.7 Vignette 4.
Brian has limited mobility due to hemiplegia caused by his stroke in 2015. In public, Brian wears a Stroke/Aphasia badge coupled with a Hidden Disabilities Lanyard. To adapt to challenges with dialogue, Brian leverages his personal appearance to support communication: he wears football or Formula One team shirts, gold rings, chains and bracelets. Despite his stroke, Brian works at festivals throughout the UK, “Ummm... concerts... errr... for me... er... yes we do once every 6 months a year with the ice cream [vans]”. However, at these events he sometimes faces misguided accusations of “being drunk”, as his speech slurs when he is more fatigued. When verbally communicating, Brian deliberately slows the pace of his speech, giving him adequate time to “let you get the words out” – enunciating each word patiently. He actively embraces personal challenges: “Yes! Yes! I think if you want it? You need to force yourself into doing it... err... sit back and hopefully it returns! No! No!”. Brian uses non-verbal communication liberally; he maintains an upright posture and employs hand gestures with accompanying dialogue. When struggling with word-finding, Brian turns to verbal filler (e.g., “errr”), hand gestures and scaffolding with photos on his phone.
3.6.8 Brian’s Insights.
Two perspectives were established for Brian using the embodied model and acknowledged during experience prototyping. To begin, Brian is a very competent communicator for whom AAC should operate just as a safety net during communication difficulties with strangers. Otherwise, Brian does not need an intervention imposed upon him, as he already communicates successfully in highly varied domains (i.e., festivals). Furthermore, Brian is non-verbally very proficient; he liberally uses posture and hand gestures to self-express alongside speech – therefore, AAC should not detract from these faculties with touch-screen input demands. During an experience prototyping workshop, his critique of an AAC smartwatch prototype can be categorised into the three dimensions of the embodied framework: (1) the interface size was too cognitively exerting given his vision difficulties and undermined his personal competency; (2) the velcro strap felt too tight, and he preferred the looser links/style of his gold watch; but (3) he appreciated that the outward-facing AAC smartwatch display augmented his intuitive use of expressive hand gestures.
3.7 Ramifications for AAC Research
The embodied framework recognises the importance of AAC working in unison with the (1) brain, (2) body and (3) environment – yet in these areas AAC has several design shortcomings. For our co-designers, archetypal AAC that generates speech would replace their dialogue and communication strengths. It would also introduce established problems with AAC/SGDs into their lives, including learning demands [50], operational pressures [13], mobility problems with heavy AAC form-factors [21, 25], public stigma [58], slow sender-receiver styles of communication [28] and more. In contrast, the embodied model has disrupted prevailing modes of thinking and provided novel design insights that look beyond traditional AAC functionality. We synthesise these insights into three recommendations for future AAC research.
3.7.1 Communication Occurs Beyond Words.
Our first recommendation focuses on communication beyond words – in particular from non-verbal modes (i.e., eye-gaze and gesture) to leveraging material resources (i.e., appearance and props) [40, 71]. However, AAC devices that generate speech can undermine co-designers’ preferred and empowering non-verbal communication modes. This is a medicalised approach; our co-designers’ non-verbal strategies represent natural variation in human communication and should not be diminished by AAC interventions. Thus far, the notion of communication confidence has been under-explored in research, and AAC should instill confidence in users’ low-tech and embodied no-tech communication methods [35]. In sum, we recommend exploration of AAC beyond just functional verbal message transmission, as communication is more multidimensional [40]. Using the embodied framework, AAC could look to enhance many non-verbal (i.e., gestural), environmental (i.e., situational) and psychological (i.e., emotional) factors [20, 21, 40].
3.7.2 AAC Should Build on Pre-existing Competencies.
Our next recommendation notes that AAC should not diminish users’ pre-existing competencies. Indeed, the vignettes emphasise that our co-designers are already competent and resourceful communicators in a multitude of settings – despite potential challenges, most did not use any AAC technology at all. Beyond communication, AAC devices should not limit users’ personal style/appearance and engagement with different settings. Indeed, the design of AAC devices can be ill-suited to social environments, as some devices have an unappealing medical aesthetic and do not support external personalisation or customisation [78]. In contrast, our embodied model recognises that self-expression via clothing is a key modality for communication [87].
3.7.3 More Focus on Regulatable AAC Technology.
Finally, we recommend the development of more regulatable AAC technology [62]. Indeed, as noted by Ibrahim et al. [39], AAC can have an obstructive physical presence, making it challenging to regulate in environments with limited space; the device can manifest as an invasive presence obstructing pathways for communication (i.e., limited verbal and non-verbal). During collaboration with co-designers living with aphasia, we also quickly established that AAC must be regulatable – so as not to intervene in and obstruct intimate exchanges with their cultures and close family/friends [53]. Consequently, AAC devices for many communities would be more effective as a safety net rather than a persistent intervention for communication. In addition, the cumbersome form-factor of some AAC devices can be challenging to use ‘on the go’ [19] and is dependent on adequate battery charging. Instead, AAC devices should be designed to maximise users’ autonomy – augmenting pre-existing communication modes [62], designed so that they may potentially not be used at all, and offered in smaller, concealable form-factors.
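As a minimal sketch of what ‘safety net rather than persistent intervention’ could mean in software, the snippet below models AAC that stays dormant by default, surfaces only on explicit request and quietly retires afterwards; the class, states and timeout are illustrative assumptions rather than a prescribed design.

```python
import time

class SafetyNetAAC:
    """Illustrative model: AAC is dormant by default and surfaces only on request."""

    def __init__(self, auto_retire_s=120):
        self.active = False
        self.activated_at = 0.0
        self.auto_retire_s = auto_retire_s  # retire after a quiet period

    def request_support(self):
        """Called only when the user explicitly asks for help (e.g., a tap)."""
        self.active = True
        self.activated_at = time.monotonic()

    def tick(self):
        """Retire automatically so the device never lingers in the interaction."""
        if self.active and time.monotonic() - self.activated_at > self.auto_retire_s:
            self.active = False

aac = SafetyNetAAC(auto_retire_s=1)
aac.request_support()  # user taps for help during a difficult exchange
time.sleep(1.5)
aac.tick()             # later, the device quietly steps back
print(aac.active)      # False: the AAC has retired out of the interaction
```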