Brief Report

Exploring the Use of Virtual Characters (Avatars), Live Animation, and Augmented Reality to Teach Social Skills to Individuals with Autism

Ryan O. Kellems, Cade Charlton, Kjartan Skogly Kversøy and Miklós Győri
1 Counseling Psychology and Special Education, Brigham Young University, Provo, UT 84602, USA
2 Department of Vocational Teacher Education, Oslo Metropolitan University, 0167 Oslo, Norway
3 Institute for the Psychology of Special Needs, Eötvös Loránd University, 1053 Budapest, Hungary
* Author to whom correspondence should be addressed.
Multimodal Technol. Interact. 2020, 4(3), 48; https://doi.org/10.3390/mti4030048
Submission received: 24 June 2020 / Revised: 22 July 2020 / Accepted: 10 August 2020 / Published: 11 August 2020

Abstract

Individuals with autism and other developmental disabilities struggle to acquire and appropriately use social skills to improve the quality of their lives. These critical skills can be difficult to teach because they are context dependent and many students are not motivated to engage in instruction to learn them. The use of multi-modal technologies shows promise in teaching a variety of skills to individuals with disabilities. iAnimate Live is a project that makes virtual environments, virtual characters (avatars), augmented reality, and animation more accessible for teachers and clinicians. These emerging technologies have the potential to provide more efficient, portable, accessible, and engaging instructional materials to teach a variety of social skills. After reviewing the relevant research on using virtual environments, virtual characters (avatars), and animation for social skills instruction, this article describes current experimental applications exploring their use via the iAnimate Live project.

1. Introduction

Current advances in technology have made devices such as computers and tablets so accessible to the general public that using them as learning tools in the classroom has become increasingly feasible. Much of this new technology can be particularly beneficial for children and youth who have disabilities [1].
While specialized instruction has been designed to teach individual students with disabilities the skills necessary for independence and career readiness, some students lack interest in these teaching strategies or require increased prompting and reminding to actually engage in the desired behavior [2]. Students may also find some of the tasks being taught aversive [3]. In studying the effects of rapport on teaching students with disabilities, McLaughlin and Carr [4] found that a staff member may establish poor rapport with a student who associates an aversive activity with him or her. These aversive activities can include chores or other tasks that the student does not want to do, such as learning appropriate bathroom or self-care skills [5]. To escape from the demands of these activities, the student may act out aggressively [3,4]. To reduce such behavior problems, teachers must incorporate elements in teaching activities that make the experience more enjoyable and valuable to the student [2].
Several researchers have studied how technology-enhanced, video-based instruction can reduce students’ dependence on prompts and reminders by thoroughly engaging and attracting students’ attention while they learn skills, as well as by creating an enjoyable learning environment that may help prevent aversive responses to the skills being taught [5,6]. Research has focused on the use of virtual environments (VE) [7], virtual characters (VC) such as avatars [5,6], and the addition of animated elements to video modeling to develop techniques that can teach skills to children and adolescents with autism spectrum disorder (ASD) in more enjoyable and attractive ways [2]. In addition to attracting students’ attention and creating a more enjoyable learning environment, techniques such as VC and animation can be especially helpful in teaching necessary skills that are difficult to teach using other methods (e.g., showing students how to properly cover themselves during urination or teaching various safety skills) [5,6].

1.1. Virtual Environments

Virtual environments (VE) are used in virtual reality settings, allowing users to explore 3D computer-generated environments in which they can interact with simulated objects or characters as if experiencing them in a real-life situation [7]. This technology has been shown to efficiently and effectively teach students target skills that generalize over time [7].
VE has demonstrated efficacy in teaching social skills to students with disabilities. To investigate the potential of VE for teaching social understanding to individuals with ASD, Mitchell, Parsons, and Leonard [7] used VE to teach social understanding to six students with ASD, ages 14–16. Participants viewed three sets of videos portraying buses and cafés. Between sets, they were presented with a VE program depicting a café and asking them where they would choose to sit. All six participants completed the same tasks, although in different orders. If they made an inappropriate decision, the program gave verbal feedback explaining why the response was not correct. For example, if a student chose to sit at a crowded table of people she did not know when single tables were available, her choice would indicate that she lacked social understanding, and the program would explain why this choice would be problematic. Mitchell et al. [7] found that after receiving the VE intervention, students demonstrated better social understanding on the remaining videos, as perceived by 10 ‘naive’ raters.
Additionally, a VE program that engaged participants with ASD in virtual reality training for job interviews improved participants’ job interview skills and their confidence in interviewing [8]. The purpose of the study was to determine the feasibility and efficacy of the virtual reality program. Participants included 26 individuals with ASD. After attending five training sessions of approximately two hours, participants completed an assessment of how enjoyable and helpful the program was, which served as the feasibility measure. Efficacy of the program was measured through role-play job interviews. Researchers found that participants were engaged with the program, rated it as enjoyable, and significantly improved their job interview skills [8]. Likewise, in a six-month follow-up of those who completed the job interview training, Smith et al. [9] found that the observed improvement from pretest to posttest was associated with more completed job interviews (r = 0.55, p = 0.02). Additionally, these researchers found that, compared to controls, these participants had 7.82 times greater odds of accepting an offer for a competitive position [9]. Thus, VE is effective for teaching individuals with ASD various social interactions [7,8,9].
Thus, according to current research, VE is effective for teaching necessary skills to students with disabilities. However, VE itself cannot be created by a layperson and is thus not readily accessible to most teachers. For example, in Self et al.’s [6] study, the VE had to be created by a computer programmer. Coles et al. [10] used standard 3D game engine software and Java programming, another approach that average teachers would not be able to reproduce in their free time. Ehrlich and Miller [11] created a VE program (AViSSS) that includes virtual environments such as hallways, restrooms, and cafeterias to teach social skills to students with disabilities. While it may be very helpful for teachers to use in the future, it is currently still in the development phase. Therefore, other novel methods of teaching need to be explored. Animated videos, which can be designed in presently available apps, may be as effective for instruction as VE, but more cost effective and perhaps easier for teachers to create and modify independently.
Recent advances in technology have made the use of VE more accessible to those without specialized training [12,13]. There are now programs that allow educators with a moderate level of technological expertise to take advantage of instruction using VE, avatars, and animation. One program that makes it easy for practitioners to use VE and avatars to teach a variety of skills is Invirtua 3d Digital Puppeteer (Invirtua, 2016, Carson City, NV, USA). This software package allows practitioners to create VE and deliver instruction via a wide variety of avatars, including humans, fish, dinosaurs, and dragons. Students select the avatar they want to learn from, which is then controlled by the practitioner [12,13]. As technology advances, interventions using VE, avatars, and animation are becoming more accessible to practitioners with basic technology proficiency [12,13].
Another area of VE showing promise in education is augmented reality (AR) [14,15]. AR is a technology that creates a hybrid virtual experience by overlaying digital content on real-world situations [12]. Details about an individual’s proximal environment are identified with cameras on mobile devices; digital information such as video and audio is then overlaid to enhance the user’s environment. The AR system is characterized by (a) combining the real and virtual worlds, (b) providing interaction in real time, and (c) aligning real objects or places with digital information in 3D [12]. AR has successfully been used to teach a range of skills, including academic skills such as math [1]. In a systematic review, researchers analyzed several studies to determine the advantages and challenges associated with AR for education and concluded that AR can support learning and teaching [15]. In another study, McMahon, Cihak, Wright, and Bell [16] analyzed the benefits of using AR to teach science vocabulary to students with ASD and intellectual disabilities (ID). Participants included three students with ID and one student with ASD in a postsecondary program, ages 19–25. Using a multiple-probe across-behaviors/skills design, the researchers displayed short 3D simulation videos to the participants describing different vocabulary terms. They used the Aurasma app so that the video would play when certain vocabulary cards were detected by the device camera. The results indicated that the AR intervention was effective in increasing the participants’ science vocabulary acquisition [16].
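To make this marker-triggered AR workflow concrete, the sketch below scans a live camera feed for a printed marker and, when one is detected, overlays frames of an instructional clip on the image, much as the Aurasma app played a video when a vocabulary card was recognized. It is an illustrative approximation only: it assumes Python with OpenCV (4.7 or later) and ArUco markers, plus a hypothetical clip file, not the tools used in the study.

# Hypothetical sketch of marker-triggered overlay; not the Aurasma workflow used in [16].
import cv2

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())
clip = cv2.VideoCapture("vocabulary_term.mp4")  # hypothetical instructional clip
camera = cv2.VideoCapture(0)                    # device camera

while True:
    ok, frame = camera.read()
    if not ok:
        break
    corners, ids, _ = detector.detectMarkers(frame)
    if ids is not None:
        # A known marker (e.g., a printed vocabulary card) is in view:
        # overlay the next frame of the clip in the corner of the live image.
        playing, overlay = clip.read()
        if playing:
            overlay = cv2.resize(overlay, (320, 180))
            frame[0:180, 0:320] = overlay
    cv2.imshow("AR overlay", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

camera.release()
clip.release()
cv2.destroyAllWindows()

The same pattern generalizes to any trigger-and-overlay AR intervention: detect a physical anchor in the camera feed, then align digital instructional content with it in real time.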
Research has been conducted with regard to VC as well. Alcorn et al. investigated how children with ASD would interact with the ECHOES environment and how effective a VC, Paul, would be in eliciting joint attention from the participants. Paul used two levels of gaze (engagement and non-engagement) to help the participant select an object on the screen. Participants included 29 males and 3 females between the ages of 5 and 14. Results indicated that the children were able to follow the VC’s gaze and gestures in order to respond. The authors also found that the children were excited and motivated to share that experience with others; however, they acknowledge that this cannot be verified due to the lack of baseline data on the participants’ social skills. The study did affirm that the VC was successful in engaging the participants [17].
In a large-scale, multi-site intervention, researchers created a VC named Andy to teach social communication skills to children with ASD. Participants were 29 children from special units in primary schools in the UK, all of whom had been previously diagnosed with ASD. Researchers found through observation that children who had never greeted their teachers spontaneously before were doing so with Andy, and then later with their teachers, demonstrating generalization. In addition, participants who had never interacted with typical peers did so with Andy. It is recognized, however, that such technology is not always readily available in the real world [18].

1.2. Information and Communication Technologies

Information and communication technologies (ICT) are an area of research that has been discovering new ways to incorporate AR and VE, and examples are numerous. One example is FORHHSS-TEA, a program created specifically for individuals with ASD that uses both VR and AR. In the VR version, the participant wears VR glasses and 3D scanners, which help them interact with a virtual environment. In the AR version, the subjects interact with a scenario created with projection lights to help them practice independent task completion. The system also uses technologies such as facial analysis, eye tracking, IP cameras, and biosignals, which help the researchers evaluate the participant’s level of concentration. The results of this study indicated that the participants were able to work more independently using VR and AR as compared to baseline [19].
Researchers at universities in Spain (including Pompeu Fabra) and the University of Birmingham created an AR program called the Pictogram Room to teach individuals with ASD skills using natural interaction. The goal of this project was to increase body awareness and self-recognition through music and visual supports. The study focused on individuals with ASD and intellectual disabilities who struggled with visual self-recognition and who demonstrated a mental age of approximately 15–18 months. The system involved a screen with a camera that captured and projected the image of the individual standing in front of it. Using infrared markers, the computer would then superimpose other images onto the real image of the individual. The project later led to the creation of educational computer games that are accessible on a website. The authors of this study recognize that more data need to be collected to determine the efficacy and efficiency of the Pictogram Room [20].
Another ICT system is the Lands of Fog, an outcome of the Integration of Children with Autism into Society using ICT project. The program uses a full-body interaction system to help children with ASD learn how to play with a typically developing child. The setup includes a large floor projection presenting a virtual, magical world covered by fog. The rest of the world is revealed to the children as they participate in the game. As they progress, children earn creatures, which then follow them throughout the rest of the game and model how to greet other creatures as a peer comes into proximity. As this greeting occurs, more creatures appear, showing the children that they would need to collaborate to discover all the creatures. Participants in this study included 10 boys diagnosed with ASD between the ages of 10 and 14. Results indicated that the Lands of Fog was successful in fostering social interaction between the participants. Children involved in the study reported in a questionnaire that, after the treatment, they found it easier to form social relationships when playing the game [21].
Video games are one example of VE. Mairena et al. conducted a study to determine whether a full-body video game could elicit more social initiation in children with ASD than free play. The study included 15 children between the ages of 4 and 6 with an ASD diagnosis. All subjects participated in four sessions playing the videogame Pico’s Adventure and four sessions of free play. Data were obtained through observation. Results indicated that the children showed more social initiation during the videogame sessions. The study also showed that the videogame helped reduce repetitive behaviors and increase gestures. Although the study was successful, the authors recognize the need for additional work to support their hypothesis [22].
By harnessing the potential of VE, virtual characters, and AR, practitioners can improve the outcomes of individuals with disabilities. In addition, studies have shown that individuals with ASD and other disabilities can be instrumental in helping design new technologies that can enhance the lives of others [18,23,24].

1.3. Animation

Creating, editing, and animating basic videos is a useful approach for teaching individuals with disabilities [2] and is accessible to anyone with elementary computer skills. Although some animation programs are more complex than others, some applications (apps) can create short animated clips that the user can edit and manipulate quite easily. These animated videos can be used by teachers or parents to help children with disabilities learn skills necessary for independence. Researchers have examined the potential effectiveness of virtual environments (VE), video modeling, and animated elements as teaching techniques to better engage and instruct students with disabilities and to increase their interest in the skills they are being taught [2,5,13,14,15].
A modification to VE that has become both more cost effective and more efficient to produce is animation [2]. Animation may be particularly useful in teaching students difficult tasks that do not lend themselves to simple task analyses or that might be inappropriate for humans to model [2]. For example, researchers could use animated characters to illustrate toileting behaviors that would be inappropriate for a human child to model. Additionally, teachers could use animation to teach strategies for dealing with high-risk safety situations, such as school intruders, attracting the student’s attention to the animated character rather than a human intruder who might be particularly threatening to children with ASD. Therefore, animation may be just as effective for teaching skills as other video-based, technology-mediated interventions, but more socially valid. Creating animated videos with ready-made animation apps may also be more efficient and feasible for classroom use than VE and other video modeling methods.
Little research has been undertaken to examine the effects of animated videos on the capacity of students with disabilities to learn target skills or tasks, especially social skills. However, some studies have successfully used animated elements to better attract students’ attention or interest in live video modeling. For example, Ohtake et al. [2] used animated elements in video hero modeling (VHM) to teach bathroom skills to Shinnosuke, a 12-year-old male student with ASD. VHM is similar to video modeling, except that the person correctly engaging in the target skill is a character with which the student is preoccupied. The video is then shown to the student immediately before engaging him in the target skill [2]. Consistent with Bellini and Akullian [17], the goal is that the student will be more attracted to and engaged in viewing the video because he is interested in the character; more engagement promotes better learning and use of the skill [2]. Before participating in VHM, Shinnosuke had been unsuccessful in learning target bathroom skills using traditional verbal, gestural, and model methods, as well as physical and picture prompts. However, after being exposed to VHM five times a week for 35 days, Shinnosuke improved on the four target skills [2].
The findings with Shinnosuke are consistent with Ohtake and colleagues’ earlier work [16], in which they found that one student could not learn the target skills until after VHM was introduced. In that study, researchers investigated how effective video self-modeling (VSM) would be in eliminating public undressing during urination in two elementary-aged students diagnosed with developmental disabilities. Using a multiple-probe design, they were able to decrease the exposure of body parts using VSM. However, for one participant, a hero modeling (VHM) component was added, which then eliminated the participant’s public undressing [16]. Incorporating animated characters that preoccupy individual students into video modeling thus seems to be effective in teaching target behaviors such as bathroom skills [2,5,13,14,15]. However, more research is needed to determine whether generic animated characters are just as effective in teaching students with disabilities and whether similarly animated videos are effective for other children and adolescents with other disabilities.
Similarly, Drysdale, Lee, Anderson, and Moore [25] used animated elements in video self-modeling to depict in-toilet urination to two boys—a four-year-old and a five-year-old. These researchers found that after viewing the videos, the two boys acquired the target toileting skills, which were maintained after a four-week period and generalized to a different setting. However, one boy began using the toilet correctly before the video intervention, thus leaving the effectiveness of the video self-modeling with animation dependent on only one child [26]. Although the video’s effectiveness is not clear, the animated elements were successfully incorporated in the video to more appropriately depict toileting skills.
Similarly, McLay, Carnett, van der Meer, and Lang [5] used animated elements in video modeling to appropriately depict toileting skills, adding behavioral prompting and reinforcement to teach proper urination and defecation skills to two boys with ASD, a seven-year-old and an eight-year-old. Participants viewed the video model targeting urination first until the skill was acquired. The authors reported an increase in the percentage of independently completed steps as well as in-toilet voiding. These skills generalized to a school setting and were maintained three to four months after the intervention was withdrawn. However, one child was not able to learn the defecation skills presented, and the child who did properly defecate had demonstrated this skill before beginning the intervention. Because behavioral strategies were used to teach toileting skills in addition to the videos, McLay and colleagues stressed the importance of future research to differentiate which effects could be attributed specifically to animation in video modeling. While neither McLay and colleagues [5] nor Drysdale and colleagues [25] found results clearly attributable to the animated elements in video modeling, these elements were useful in portraying situations that would otherwise be inappropriate to film. Both studies suggest that animated elements are promising features of video modeling that should be examined more fully.
While VE and animation appear to be relatively easy to implement and effective in a variety of applications, more research is needed to clarify the conditions and populations for which they are most successful. The research reviewed in this paper provides multiple examples of isolated projects that have evaluated the effects of this technology on individuals with disabilities, but there are few ongoing, systematic applications of VE and animation in schools or clinics. To address this need, our research team has undertaken an ongoing set of studies, referred to as iAnimate Live, to examine the effects of VE, avatars, and animation.

2. iAnimate Live Project Materials and Methods

This manuscript presents original research evaluating the effects of adding virtual and animated elements to interventions designed to help individuals with disabilities use social skills to improve their quality of life. Included in this manuscript are two research studies that have been conducted and one that is in the process of being implemented. The initial phase of the project focused on developing the technology infrastructure necessary to support the creation and adaptation of live animation and virtual environments in a lab on a university campus. The team received a grant to purchase the animation software Invirtua 3d Digital Puppeteer (Invirtua, 2016, Carson City, NV, USA). The software includes a variety of avatars that can be piloted by trained students. These avatars have facial features that can be fully articulated by a skilled pilot, who uses a Dell Inspiron 7559 laptop with a Wacom Intuos touch tablet to control the avatar’s body movements; a separate Xbox 360 controller was used for the avatar’s eye movements. All software and equipment were sourced from Invirtua, located in Carson City, NV, USA. With this infrastructure in place, the team set out to answer three major research questions:
  • Does adding virtual and/or animated components to video-based instruction affect the independence and engagement of individuals with disabilities?
  • Is the use of virtual environments/live animation cost effective compared to traditional instructional methods?
  • How does adding virtual elements and/or live animations affect the social validity of the intervention?
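To make the avatar-piloting arrangement described above more concrete, the sketch below maps live gamepad input onto a simple avatar gaze state, in the spirit of the pilot’s Xbox 360 controller driving the avatar’s eye movements. It is a hypothetical, generic illustration written with the pygame library; the Invirtua 3d Digital Puppeteer software is proprietary, and its actual control scheme is not reflected here.

# Hypothetical illustration of live puppeteering: gamepad input is polled and
# mapped onto an avatar's gaze state in real time. This is NOT the Invirtua API.
import pygame

pygame.init()
pygame.joystick.init()
if pygame.joystick.get_count() == 0:
    raise SystemExit("Connect a gamepad (e.g., an Xbox controller) to run this sketch.")

pad = pygame.joystick.Joystick(0)
avatar_state = {"gaze_x": 0.0, "gaze_y": 0.0, "blink": False}
clock = pygame.time.Clock()

for _ in range(600):                    # roughly ten seconds at 60 Hz
    pygame.event.pump()
    # Left stick drives where the avatar looks; button 0 triggers a blink.
    avatar_state["gaze_x"] = pad.get_axis(0)   # -1 (left) .. +1 (right)
    avatar_state["gaze_y"] = pad.get_axis(1)   # -1 (up)   .. +1 (down)
    avatar_state["blink"] = bool(pad.get_button(0))
    # A rendering engine would consume avatar_state here to articulate the avatar.
    print(avatar_state, end="\r")
    clock.tick(60)

pygame.quit()

In the actual system, a rendering engine rather than a print loop would consume this state, and a touch tablet would drive body movements in parallel.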

2.1. Method

All phases of the study utilized a single-subject research design, a rigorous quantitative experimental research method commonly used to study the effects of an intervention in small populations, such as individuals with autism. Specifically, phase 1 utilized an alternating treatments design, while the subsequent phases utilized a non-concurrent multiple baseline design across participants. Both completed phases met the established quality indicators for single-subject design.

2.2. Setting and Participants

In each of the phases below, the main equipment used was the animation software Invirtua 3d Digital Puppeteer (Invirtua, 2016, Carson City, NV, USA) as was described above. Each of the studies was conducted in a controlled educational lab on a university campus. Before each phase, parents and participants were informed of the potential risks and signed a consent document. In the first phase, five participants between the ages of 8 and 12 were recruited from a specialized school. For the second phase of the study, participants included five individuals with ASD, four male and one female, between the ages of 8 and 10. For all phases of the study, each participant had a formal diagnosis of autism. The participants in the studies had moderate support needs. They all had the ability to verbally communicate both expressively and receptively. They also had IQs that fell within the below-average to normal range. All participants had deficits in social skills such as starting a conversation and understanding emotions.

2.3. Phase 1: Effects of Virtual and Animated Components on Engagement of Individuals with Disabilities

To answer the first question, the research team used an alternating treatments design comparing children’s interaction and engagement rates with human and avatar interventionists. Using the lab space and the Digital Puppeteer software, the team recruited a group of young people with ASD, aged 8–12, and randomly assigned them to one of two groups: avatar first or human first [12]. Participants assigned to the avatar-first condition spent their first visit having a structured conversation with an avatar piloted by a member of the research team; in the next session, they had a conversation with a member of the research team without the avatar. Participants assigned to the human-first group experienced an identical set of interactions in reverse order. Engagement and social validity results for this phase are reported in Section 3.1.

2.4. Phase 2: Teaching Social Skills to Individuals with Disabilities Through Virtual Characters

Another study conducted as part of the project explored the effects of using an avatar to teach social skills to children with ASD, ages 8–10, who had deficits in their ability to effectively start and maintain conversations with peers and adults [13]. This phase used a non-concurrent multiple baseline design across participants. These individuals were taught to start a conversation in five steps via an avatar and practiced with a human. The five steps were: look at the person and smile, stand an arm’s length away, use a nice voice, ask a question, and wait your turn to talk. All instruction was led by the avatar, while the human was present to provide modeling and act as a communicative partner for the child. Prior to intervention and throughout the instructional sessions, the child was given opportunities to use the skill with members of the research team and a caregiver.
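Because instruction and assessment revolve around this five-step task analysis, a simple checklist is enough to record performance during direct observation. The sketch below is a hypothetical illustration (not the authors’ actual coding instrument) of how each opportunity could be scored as the percentage of steps completed independently.

# Hypothetical direct-observation scoring of the five-step conversation task analysis.
CONVERSATION_STEPS = [
    "look at the person and smile",
    "stand an arm's length away",
    "use a nice voice",
    "ask a question",
    "wait your turn to talk",
]

def score_trial(observed: dict[str, bool]) -> float:
    """Return the percentage of steps performed independently in one opportunity."""
    completed = sum(observed.get(step, False) for step in CONVERSATION_STEPS)
    return 100.0 * completed / len(CONVERSATION_STEPS)

# Example: a child performs four of the five steps without prompting.
trial = {step: True for step in CONVERSATION_STEPS}
trial["wait your turn to talk"] = False
print(f"Steps completed independently: {score_trial(trial):.0f}%")   # 80%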

2.5. Phase 3: Teaching Emotion Recognition Skills to Individuals with Disabilities Through Virtual Characters

The next phase of the iAnimate Live project will evaluate the effects of using an avatar to teach emotion recognition skills to students with ASD. This will expand our understanding of the utility of an avatar and leverage some of its important technical advantages, such as exaggerated facial expressions that simplify discrimination of emotional states. This study extends the research of Golan et al. [27] and Moore, Cheng, McGrath, and Powell [28]. Golan and colleagues studied the effects of an animated series designed to enhance emotion comprehension in children with ASD. Participants were tested before and after the intervention on recognizing emotions at three levels of generalization. The researchers found that the improvement of the intervention group was significantly above the progress of the clinical control group on all task levels, confirming that the animated series significantly improved emotion recognition in children with ASD. According to Moore, Cheng, McGrath, and Powell [28], the use of virtual avatars holds great potential for children with ASD. Their exploratory empirical study demonstrated that children with ASD could understand basic emotions represented by a humanoid avatar, as over 90% of the participants did this accurately. This phase is currently underway, but not enough data have been collected to make inferences about the effectiveness of the intervention.

3. iAnimate Live Project Data Collection, Measures, Analysis, and Results

3.1. Phase 1: Effects of Virtual and Animated Components on Engagement of Individuals with Disabilities

To facilitate data collection, all sessions during this study were audio and video recorded with parent and participant permission. Using an Excel data sheet, coders entered the data from the videos using a whole-interval data collection procedure. Four behaviors were selected to measure participant engagement: making eye contact, maintaining correct body position, sitting still, and listening or responding appropriately to the avatar or human interventionist. The research team found that individuals with ASD were more engaged and interactive with avatars (an average of 43% engagement) than with human interventionists (an average of 24% engagement). Participants reported during social validity interviews that they looked forward to speaking with the avatar and were disappointed when they were asked to speak only with the human members of the team.
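For illustration, the following sketch shows one way a whole-interval engagement metric of this kind could be computed. It assumes, as a simplification not reported by the authors, that an interval counts as engaged only when all four target behaviors were coded for that entire interval; the study’s coders entered their data in an Excel sheet rather than in code.

# Hypothetical whole-interval engagement summary (assumed criterion: all four
# target behaviors coded for the whole interval).
ENGAGEMENT_BEHAVIORS = {"eye_contact", "body_position", "sitting_still", "responding"}

def percent_engaged(intervals: list[set[str]]) -> float:
    """Return the percentage of intervals in which all target behaviors occurred."""
    engaged = sum(1 for codes in intervals if ENGAGEMENT_BEHAVIORS <= codes)
    return 100.0 * engaged / len(intervals)

# Example session with four coded intervals: two fully engaged, two not.
session = [
    {"eye_contact", "body_position", "sitting_still", "responding"},
    {"eye_contact", "body_position"},
    {"eye_contact", "body_position", "sitting_still", "responding"},
    {"sitting_still"},
]
print(f"Engagement: {percent_engaged(session):.0f}%")   # 50%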

3.2. Phase 2: Teaching Social Skills to Individuals with Disabilities Through Virtual Characters

Two measures were used for data collection during this second phase. The first was social skills mastery, assessed through direct observation by scoring the steps of starting a conversation described above. Second, the Social Skills Improvement System (SSIS) was completed by parents before and after the intervention; standard scores and percentiles were compared to evaluate any changes following the intervention. After instruction, the children’s use of the skill improved on all metrics, with all participants scoring at least 80% on social skills mastery. For example, they were more willing to engage in conversations and used all the steps of the target skill when initiating them. In addition, these skills generalized to starting conversations with untrained peers who were available during the final sessions of the intervention. On the SSIS, all participants increased their social skills scores. Social validity data indicated that using an avatar was a socially acceptable and appropriate way to deliver intervention in community employment settings [29].

4. Discussion

Teaching children and youth with ASD can be more effective when supplemented with technological tools such as VE, video modeling, or augmented reality. Including animated elements in video-based instruction and modeling is a particularly promising development that may be just as effective as VE and live video modeling. In fact, for students who have a difficult time attending to and learning from regular live video modeling, animated elements may be more effective [2,25]. Animated elements can also effectively portray tasks that may be inappropriate to film, such as toilet training skills. Both phases provide evidence that talking to the avatar was preferred and resulted in higher-quality interaction and learning, but they do not clarify why talking with an avatar was preferred over interacting with a human. Animated video-based instruction or video modeling may also be more feasible for teachers to create and use than VE, as simple, easy-to-use apps are currently available for teachers to create short videos of animated characters exhibiting or teaching desired skills. We posit that the use of animation in video-based instruction or video modeling via pre-existing apps will not only be as effective as VE and standard video modeling, but more efficient and feasible for teachers to use in the classroom. Our current research on the iAnimate Live project confirms the value of VE with animation as a viable intervention tool to support exceptional learners. We hope that our future research will continue to explore how multimodal technologies can enhance and improve the lives and outcomes of individuals with autism and other developmental disabilities.

4.1. Implications for Practitioners

This review of the literature affirms that teachers and other interventionists may benefit from using VE and animation in their classrooms. An emerging literature base supports the use of these technologies to improve self-care skills that are often difficult to teach; social skills that may be challenging because students, especially students with developmental disabilities like autism, are uninterested in learning them; and important transition skills. Researchers have suggested that a small investment in creating VE can yield important long-term benefits for teachers and clinicians. For example, teachers who create these resources will find them useful for initial instruction, precorrection, and many other classroom applications that require very little implementation time.
Two barriers to the use of VE and animation need to be overcome. First, teachers need access to the technology to create and disseminate VE and/or animation. This includes mobile devices or laptop computers with cameras that can be used to develop the experiences. In addition to the hardware, some applications of VE and animation will require access to specific software to support viewing or creation. In some cases, the hardware and software are available within the same device. For example, iOS devices come with the Animoji feature, which allows users to share an animated version of themselves, complete with facial expressions and eye movements.
Second, teachers need the technical skills to develop and use content in VE and animation. The newest software reduces the number of technical skills necessary to produce this content, but archiving the materials and making it easy for students to access the content may be more challenging. Teachers would do well to invest in digital storage solutions like Dropbox, Box.com, Google Drive, and other cloud storage options, since these services integrate with mobile devices and provide easy options for distributing links while restricting access to specific users.

4.2. Implications for Research

Using animated elements or augmented reality in video modeling and video-based instruction is a new area of research. Thus, more research is needed to determine its potential effectiveness in teaching skills to students with disabilities. Using animation as a teaching method is very promising, as it has the potential to better attract students’ attention compared to other methods [2,17]. Luzón and Letón [30] conducted a study with a sample of 255 13- to 16-year-old students without disabilities. In facilitating students’ learning of mathematics concepts, they found that animated handwritten mathematical text presented with effects outperformed static handwritten text. Animated elements in video-based instruction or video modeling may be especially effective for students with disabilities who have a hard time attending. The potential for animated elements to aid in teaching material that may be inappropriate to film in video modeling or self-modeling adds to animation’s usefulness.

4.3. Limitations

Several potential limitations were present during both phases. One limitation during phase 1 is that the positive results may be due to the novelty of the intervention; that is, participants may have paid more attention because the intervention was unlike anything they had experienced before. The magnitude of the positive increases seen near the beginning of the study may decrease as the novelty of the intervention wears off. The relatively low engagement rates may also be due to the fact that participants were asked what they may have considered a long list of questions; perhaps the consistency of lower engagement rates shows that the task was monotonous and failed to challenge some of the participants.
Another potential limitation is that during both phases we evaluated the intervention within a research lab which represented a controlled environment. During phase 2, parents and children reported using the social skills learned at school and in community settings, but the research team did not directly observe these changes and cannot independently verify these reports. We also did not collect generalization data in a traditional context or during baseline. The lack of baseline generalization data limits our ability to conclude that these skills learned during phase 2 generalized to interactions with peers.

5. Conclusions

Modern technologies have opened a new frontier for supporting students with disabilities in clinics and schools. Our studies on the iAnimate Live project have helped answer our research questions by showing that (a) engagement of individuals with disabilities increased when speaking with a VC, (b) the use of VE and VC can be more cost effective in teaching social skills than traditional methods, and (c) social validity results indicate that VE and VC are effective in engaging individuals with disabilities and helping them learn social skills. With access and training, practitioners can use these resources to improve the lives of their students, expanding students’ ability to access opportunities and experiences that interest them and improve their quality of life. The iAnimate Live project has shown the potential for these emerging technologies to improve the lives of individuals with autism and other disabilities.

Author Contributions

Conceptualization, R.O.K. and C.C.; software and equipment, R.O.K.; methodology, data curation, and data analysis, R.O.K. and C.C.; writing (original draft), R.O.K., C.C., K.S.K. and M.G.; writing (review and editing), R.O.K., C.C., K.S.K. and M.G. All authors have made substantial contributions to the article and have agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank Bruna Gonçalves for her assistance.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Kellems, R.O.; Cacciatore, G.; Osborne, K. Using an augmented reality–based teaching strategy to teach mathematics to secondary students with disabilities. Career Dev. Transit. Except. Individ. 2019, 42, 253–258. [Google Scholar] [CrossRef]
  2. Ohtake, Y.; Takahashi, A.; Watanabe, K. Using an animated cartoon hero in video instruction to improve bathroom-related skills of a student with autism spectrum disorder. Educ. Train. Autism Dev. Disabil. 2015, 50, 343–355. [Google Scholar]
  3. McLaughlin, D.M.; Carr, E.G. Quality of rapport as a setting event for problem behavior: Assessment and intervention. J. Posit. Behav. Interv. 2005, 7, 68–91. [Google Scholar] [CrossRef]
  4. McLay, L.; Carnett, A.; Meer, L.V.D.; Lang, R. Using a video modeling-based intervention package to toilet train two children with autism. J. Dev. Phys. Disabil. 2015, 27, 431–451. [Google Scholar] [CrossRef]
  5. Geiger, K.B.; Carr, J.E.; LeBlanc, L.A. Function-based treatments for escape-maintained problem behavior: A treatment-selection model for practicing behavior analysts. Behav. Anal. Pract. 2010, 3, 22–32. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Self, T.; Scudder, R.R.; Weheba, G.; Crumrine, D. A virtual approach to teaching safety skills to children with autism spectrum disorder. Top. Lang. Disord. 2007, 27, 242–253. [Google Scholar] [CrossRef] [Green Version]
  7. Mitchell, P.; Parsons, S.; Leonard, A. Using virtual environments for teaching social understanding to 6 adolescents with autistic spectrum disorders. J. Autism Dev. Disord. 2007, 37, 589–600. [Google Scholar] [CrossRef]
  8. Smith, M.J.; Ginger, E.J.; Wright, K.; Wright, M.A.; Taylor, J.L.; Humm, L.B.; Fleming, M.F. Virtual reality job interview training in adults with autism spectrum disorder. J. Autism Dev. Disord. 2014, 44, 2450–2463. [Google Scholar] [CrossRef] [Green Version]
  9. Smith, M.J.; Fleming, M.F.; Wright, M.A.; Losh, M.; Humm, L.B.; Olsen, D.; Bell, M.D. Brief report: Vocational outcomes for young adults with autism spectrum disorders at six months after virtual reality job interview training. J. Autism Dev. Disord. 2015, 45, 3364–3369. [Google Scholar] [CrossRef]
  10. Coles, C.D.; Strickland, D.C.; Padgett, L.; Bellmoff, L. Games that “work”: Using computer games to teach alcohol-affected children about fire and street safety. Res. Dev. Disabil. 2007, 28, 518–530. [Google Scholar] [CrossRef]
  11. Ehrlich, J.; Miller, J.R. A virtual environment for teaching social skills: AViSSS. Comput. Graph. Appl. IEEE 2009, 29, 10–16. [Google Scholar] [CrossRef] [PubMed]
  12. Kellems, R.O.; Ferguson, R.; Goncalves, B. Exploring the use of live animation and virtual characters (avatars) to teach social skills to individuals with disabilities. In Proceedings of the Council for Exceptional Children, Tampa, FL, USA, 14–16 October 2018. [Google Scholar]
  13. Charlton, C.T.; Kellems, R.O.; Black, B.; Bussey, H.C.; Ferguson, R.; Goncalves, B.; Vallejo, S. Effectiveness of avatar-delivered instruction on social initiations by children with autism spectrum disorder. Res. Autism Spectr. Disord. 2020, 71, 101494. [Google Scholar] [CrossRef]
  14. Sommerauer, P.; Muller, O. Augmented reality in informal learning environments: A field experiment in a mathematics exhibition. Comput. Educ. 2014, 79, 59–68. [Google Scholar] [CrossRef]
  15. Akçayır, M.; Akçayır, G. Advantages and challenges associated with augmented reality for education: A systematic review of the literature. Educ. Res. Rev. 2017, 20, 1–11. [Google Scholar] [CrossRef]
  16. McMahon, D.D.; Cihak, D.F.; Wright, R.E.; Bell, S.M. Augmented reality for teaching science vocabulary to postsecondary education students with intellectual disabilities and autism. J. Res. Technol. Educ. 2016, 48, 38–56. [Google Scholar] [CrossRef]
  17. Alcorn, A.; Pain, H.; Rajendran, G.; Smith, T.; Lemon, O.; Pomsta, K.P.; Bernardini, S. Social communication between virtual characters and children with autism. In International Conference on Artificial Intelligence in Education; Springer: Berlin, Germany, 2011; pp. 7–14. [Google Scholar]
  18. Bernardini, S.; Pomsta, K.P.; Smith, T.J. ECHOES: An intelligent serious game for fostering social communication in children with autism. Inf. Sci. 2014, 264, 41–60. [Google Scholar] [CrossRef]
  19. Sevilla, J.; Vera, L.; Herrera, G.; Fernández, M. FORHHSS-TEA, support to the individual work system for people with autism spectrum disorder using virtual and augmented reality. In Spanish Computer Graphics Conference; The Eurographics Association: Delft, The Netherlands, 2018; pp. 55–64. [Google Scholar]
  20. Herrera, G.; Casas, X.; Sevilla, J.; Rosa, L.; Pardo, C.; Plaza, J.; Jordan, R.L.; Groux, S. Pictogram room: Natural interaction technologies to aid in the development of children with autism. Annu. Clin. Health Psychol. 2012, 8, 39–44. [Google Scholar]
  21. Guiard, J.M.; Crowell, C.; Pares, N.; Heaton, P. Sparking social initiation behaviors in children with autism through full-body interaction. Int. J. Child Comput. Interact. 2017, 11, 62–71. [Google Scholar]
  22. Mairena, M.A.; Guiard, J.M.; Malinverni, L.; Padillo, V.; Valero, L.; Hervás, A.; Pares, N. A full-body interactive videogame used as a tool to foster social initiation conducts in children with autism spectrum disorders. Res. Autism Spectr. Disord. 2019, 67, 101438. [Google Scholar] [CrossRef]
  23. Malinverni, L.; Guiard, J.M.; Padillo, V.; Mairena, M.-A.; Hervás, A.; Pares, N. Participatory design strategies to enhance the creative contribution of children with special needs. In Proceedings of the 13th International Conference on Interaction Design and Children, New York, NY, USA, 17–20 June 2014; pp. 85–94. [Google Scholar]
  24. Frauenberger, C.; Judith, G.; Bright, W.K. Designing technology for children with special needs: Bridging perspectives through participatory design. CoDesign 2011, 7, 1–28. [Google Scholar] [CrossRef]
  25. Drysdale, B.; Lee, C.Q.; Anderson, A.; Moore, D.W. Using video modeling incorporating animation to teach toileting to two children with autism spectrum disorder. J. Dev. Phys. Disabil. 2015, 27, 149–165. [Google Scholar] [CrossRef]
  26. Bellini, S.; Akullian, J. A meta-analysis of video modeling and video self-modeling interventions for children and adolescents with autism spectrum disorders. Except. Child. 2007, 73, 264–287. [Google Scholar] [CrossRef]
  27. Golan, O.; Ashwin, E.; Granader, Y.; McClintock, S.; Day, K.; Leggett, V.; Cohen, S.B. Enhancing emotion recognition in children with autism spectrum conditions: An intervention using animated vehicles with real emotional faces. J. Autism Dev. Disord. 2010, 40, 269–279. [Google Scholar] [CrossRef] [PubMed]
  28. Moore, D.; Cheng, Y.; McGrath, P.; Powell, N.J. Collaborative virtual environment technology for people with autism. Focus Autism Other Dev. Disabil. 2005, 20, 231–243. [Google Scholar] [CrossRef]
  29. Kellems, R.O.; Morningstar, M.E. Using video modeling delivered through iPods to teach vocational tasks to young adults with autism spectrum disorders. Career Dev. Transit. Except. Individ. 2012, 35, 155–167. [Google Scholar] [CrossRef]
  30. Luzón, J.M.; Letón, E. Use of animated text to improve the learning of basic mathematics. Comput. Educ. 2015, 88, 119–128. [Google Scholar] [CrossRef]

