1 Introduction

Cooking and shopping are examples of Instrumental Activities of Daily Living (IADL) which contribute to independent living. Although largely automatized, IADL can be disturbed by various cognitive impairments (attentional processes, executive functions, short-term memory), as in Alzheimer’s disease [1,2,3]. During the last decade, research has focused on developing non-pharmacological solutions likely to maintain or enhance the independence of people with Alzheimer’s disease in everyday activities. In this context, some studies have underlined the importance of Alzheimer’s disease patients staying cognitively active to prevent functional deterioration over time [4, 5]. However, to efficiently improve daily functioning, interventions in Alzheimer’s disease should (i) employ specific functional tasks involving different ecological contexts, (ii) target simple goals, (iii) structure training tasks, sessions and the intervention, (iv) use feedback to keep patients engaged, and (v) tune the difficulty to appropriate levels [6, 7]. Indeed, interventions using decontextualized tasks to restore cognitive processes have failed to show any effect on the everyday life of trained people, as underlined by several studies involving 11,430 participants [8,9,10] and by studies in elderly people with mild cognitive impairment [11, 12]. Alternatively, functional rehabilitation-based interventions directly focus on improving autonomy on specific IADL [13, 14]. In such interventions, patients are confronted with complex situations that they could encounter in their everyday life. Designing such interventions requires, in particular, being able:

  • to choose the stimulus parameters (e.g., their nature, intensity), the virtual objects (e.g., their appearance, their behaviors), and their mode of presentation (e.g., progressive or abrupt, accompanied or not by narration);

  • to manage different levels of difficulty in order to accompany the person in their progression, by choosing a difficulty sufficient to maintain interest but reasonable enough not to discourage the person;

  • to amplify the sensory or multisensory feedback so that it is clearly perceptible;

  • to place the person as the main actor and co-creator of the experienced situation;

  • to generate ecologically valid behaviors, soliciting the components of daily functioning together;

  • to involve the patient in his or her training (motivation to engage, to repeat successively, and to start again another day), since the patient must interact to move forward in the scenario, with tasks that are rewarding or enjoyable and with verbal encouragement;

  • to apply cognitive rehabilitation techniques (e.g., spaced retrieval, errorless learning, fading of cues).

The paper is organized as follows (based on [15]). In Sect. 2, we review the approaches proposed in the real world, and then we present the different approaches and digital tools developed for the evaluation and rehabilitation of cognitive disorders related to Alzheimer’s disease. In Sect. 3, we present our innovative tool based on a virtual kitchen environment. The article ends with a conclusion and avenues for improvement.

2 Related Works

Several learning techniques have been used, independently or combined [16,17,18,19,20]. These techniques can be classified as following either an errorless learning approach or a self-generation-based approach. The errorless learning approach gathers methods aiming to limit the encoding of non-pertinent information. To this end, patients receive help during the training sessions. In addition, they can receive positive feedback which reinforces the encoding of their correct actions. Among these methods, based on implicit learning processes, one can cite the “pure” errorless learning method [21], forward or backward chaining methods, the classical vanishing-cues method [22], and spaced-retrieval methods with fixed or increasing time intervals [23]. In contrast to the errorless learning approach, the self-generation approach focuses on soliciting the cognitive processes which underlie the formulation of answers, while limiting errors is not a primary goal. It gathers methods aiming to provide the minimum cues necessary for the patient to execute the required actions. Thus, patients are first left free to autonomously produce actions and receive feedback in case of difficulties. Trial-and-error methods and reversed vanishing-cues methods are examples of the self-generation approach. Errorless approaches combined with self-generation have also been proposed [24].

In the context of cognitive rehabilitation of the (I)ADLs in AD, errorless learning methods have often been naturally combined with specific (i.e., step-by-step) direct instructions [25, 26]. The principle is based on limiting error production during the learning process, strategies to gain the commitment of the patients, and guided practice to promote learning. In practice, the activities to train are first structured into a series of actions, in order to obtain simple and concrete standardized instructions. During the learning process, these instructions can be used to engage, reinforce the correct answer, guide, and/or reduce error production. They can also be of different natures (e.g., oral, physical). In this context, Padilla [27] has recommended adapting the cognitive demand during the activities according to the severity of the functional impairment. He has thus proposed the following levels of cueing: a neutral instruction (e.g., it’s noon), an oral general direct instruction (e.g., now, prepare a salad with corn, tomato, and lettuce) or an oral specific instruction (e.g., put the lettuce leaves in the bowl), followed by help via gestures, such as showing the target object or the action to perform, and physical priming (e.g., touching the bowl while repeating the specific instruction). Similar hierarchies according to an increasing level of assistance, from the most cognitive help to the most physical help, can be found in the context of the ecological assessment of daily functioning in Alzheimer’s disease [28]. Several studies indicate that the errorless approach and cueing can successfully improve the performance of patients with mild to moderate Alzheimer’s disease on various IADLs. Lancioni et al. [29] obtained positive results by using mainly specific oral instructions and, if needed, some physical assistance during the patient’s performance. Simard and Grandmaison obtained positive results, maintained over five weeks, using a learning method that combined the spaced-retrieval method with three levels of assistance: a full demonstration of the task, prompts with specific oral instructions, and, when the patient could name and carry out each step, a small measure of guidance if needed [30]. Following a successful trial, the level of assistance decreased and the retrieval delay increased for the next trial, whereas if an error occurred, the level of assistance increased and the retrieval delay decreased for the next trial. Dechamps et al. [31] obtained positive results maintained over one month using errorless learning and learning by modeling, and showed the effectiveness of these two methods over a trial-and-error method. In their errorless learning condition, oral and specific pictured plus written instructions were given and then hidden. If patients did not perform the action within five seconds, they received the same pictured plus written instructions as prompting. Physical help was given if needed. In learning by modeling, the task was first demonstrated with specific oral instructions. Then, if an error occurred, the instruction was repeated, and if, despite this cue, patients were unable to continue, the sequence was shown again.
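
For illustration, the adaptive rule just described (assistance fades and the retrieval delay grows after a success, and the reverse after an error) can be sketched in a few lines of C#. The class, level names, and delay values below are hypothetical and only make the logic explicit; they are not taken from the original study or from our system.

```csharp
using System;

// Hypothetical sketch of the adaptive assistance rule reported by Simard and
// Grandmaison [30]: success fades assistance and lengthens the retrieval delay,
// an error reinstates assistance and shortens the delay.
public enum AssistanceLevel { None = 0, LightGuidance = 1, SpecificOralInstruction = 2, FullDemonstration = 3 }

public class SpacedRetrievalScheduler
{
    // Assumed delay ladder, in seconds (actual intervals vary across studies).
    private static readonly int[] Delays = { 15, 30, 60, 120, 240 };
    private int delayIndex = 0;

    public AssistanceLevel Assistance { get; private set; } = AssistanceLevel.FullDemonstration;
    public int CurrentDelaySeconds => Delays[delayIndex];

    public void ReportTrial(bool success)
    {
        if (success)
        {
            // Fade assistance and lengthen the retrieval interval.
            if (Assistance > AssistanceLevel.None) Assistance--;
            if (delayIndex < Delays.Length - 1) delayIndex++;
        }
        else
        {
            // Reinstate assistance and shorten the interval to keep errors rare.
            if (Assistance < AssistanceLevel.FullDemonstration) Assistance++;
            if (delayIndex > 0) delayIndex--;
        }
    }
}
```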

With the aim of developing functional support, neuropsychology has focused on the contributions of Virtual Reality (VR) [32,33,34]. In addition to integrating the benefits of classical functional rehabilitation, VR offers additional possibilities. In particular, VR allows:

  • to propose situations that are difficult to achieve in real life (e.g., to fly in a storm);

  • to choose parameters that modify the emotional, motor and cognitive experience of the simulated real situation;

  • to provide complex, appropriate, and controlled multisensory feedback, according to the patient’s actions, abilities or level of performance;

  • to provide sensory feedback close to that encountered in everyday life (e.g., the sound of a coffee maker running) or not (e.g., score display, sounds in case of errors);

  • to select behavioral interfaces adapted to people’s sensory and motor abilities;

  • to tune the level of interactivity and difficulty;

  • to repeat tasks in a standardized way, while minimizing the environmental variability;

  • to place the patient in a secure environment, the consequences of possible errors remaining only virtual (e.g., forget to extinguish the hob);

  • to provide portable systems, allowing the therapist to easily intervene in several places, including the patients’ home;

  • to provide inexpensive, space-saving and autonomous systems to offer remote care, accessible from patients’ homes;

  • to integrate social aspects through group work (with other patients or with virtual characters) or through interactions with the therapist (real or virtual);

  • to collect performance and behavioral data in real time and in a non-intrusive way;

  • to interrupt the session at any time if necessary (e.g., external event, comment), and to resume training where the person left off, or start again from the beginning;

  • to free the therapist from various practical tasks, allowing him/her to focus on the situation and make good decisions about the support provided;

  • to involve the patient in the training using various motivational techniques, such as verbal encouragement, score display, or a virtual coach.

These benefits have spurred the development of many VR systems aimed at improving autonomy, in various domains and clinical populations. The following paragraphs illustrate this variety by presenting some of these systems. All of them aim to improve daily functioning in people with acquired or congenital impairments, but each targets specific behaviors.

3 System Description

We developed a VR-based system which allows patients-users to perform a familiarization task and 10 cooking tasks in a learning condition adapted to their disease. We used Unity 3D (http://unity3d.com/, version 4.1.3) as the game engine.

3.1 Tasks Design

In order to increase the opportunity for patients to choose the tasks they want to train (as recommended by [35]), we selected 10 cooking activities likely to be useful and pleasant for the French elderly population (Table 1). In addition, the activities were selected to require a similar number of utensils and ingredients. To create simple verbal instructions for our error-reduction training method, we broke down each activity into a sequence of 12 or 13 motor steps. The number of steps, as well as the length and syntax of the instructions, were similar across tasks. To ensure that the sequences corresponded to what people are used to doing, we submitted the task scripts to 10 healthy subjects. This work led us to rephrase several actions to make the instructions more natural and understandable. Oral instructions were synthetic voices generated from text and recorded in .mp3 format. We also designed a generic familiarization task which invites patients-users to execute the different basic interactions by manipulating colored geometric shapes over 8 instruction steps.
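
As an illustration of this design, a cooking task can be thought of as a short sequence of steps, each carrying its instruction text and pre-recorded audio. The sketch below is a hypothetical C# data model; class and field names are ours, not the system’s actual implementation.

```csharp
using System.Collections.Generic;

// Hypothetical representation of a cooking task and its 12-13 motor steps.
public class TaskStep
{
    public string InstructionText;   // simple verbal instruction for this step
    public string InstructionClip;   // path to the pre-recorded .mp3 synthetic voice
    public string TargetObjectName;  // virtual object the step acts on
}

public class CookingTask
{
    public string ShortName;                             // short name of the activity (see Table 1)
    public string GeneralInstruction;                    // corresponding general instruction
    public List<TaskStep> Steps = new List<TaskStep>();  // 12 or 13 motor steps
}
```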

Table 1. Short names of activities and corresponding general instructions

3.2 Virtual Environment

Based on our tasks design, we established a technical documentation to list:

  • all the necessary machines, perishable items and utensils;

  • the real dimensions of the objects;

  • the visual feedback potentially used in real-life situations to follow the task progression.

For example, the coffee machine has a water gauge, to check whether there is water in the tank. All the actions were associated with visual and/or auditory feedback to ease immersion, and to inform the user about the state of the task (Fig. 1).

Fig. 1. Kitchen model in 3D Studio Max (left), and kitchen model rendered in Unity 3D (right).

We created the 3D models (e.g., drawer, salad leaf, colliders) using 3D Studio Max (2013/2014). The models were optimized for real-time interaction. PhotoFiltre Studio X (version 10.7.3) was used to create the textures, either from scratch or from photographs of real objects taken in a white box. The models were then exported in .fbx format to be imported into Unity 3D. We also added sounds to provide feedback on the actions performed (e.g., opening the salad spinner, heating butter, closing the tap, setting the filter in the filter holder, placing a spoon in a cup). To provide a more realistic rendering than in previous versions, we modeled a new kitchen environment based on a real therapeutic kitchen. In particular, we reproduced its furniture (dimensions, textures, and relative positions).

The modeling phase first required documenting the actual dimensions of the objects, as well as their appearance. The objects were then modeled and mapped in 3D Studio Max and imported into Unity 3D.

3.3 General Software Architecture

Our virtual kitchen consists of three main software components: (i) a configuration menu, (ii) a task management system, and (iii) a data (user’s actions) saving manager (see Fig. 2). It was mainly developed using the C# programming language. The menu allows therapists-users to choose the mode in which the task will be launched.

Fig. 2. General software architecture.

There are three available modes: (i) familiarization, (ii) demonstration, and (iii) training. In the demonstration mode, the task is performed automatically. In the familiarization and training modes, the task is performed through the actions of patients-users. For the demonstration and training modes, therapists-users can specify which cooking task they want to run among the list of ten cooking tasks (the first is selected by default). The duration setting indicates the time given to patients-users to perform actions before the software provides assistance in the training mode. The familiarization mode launches the familiarization task. Finally, the menu allows therapists-users to specify different types of information aiming to facilitate the identification of the recorded data for subsequent analysis.
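
A minimal C# sketch of such a configuration object, with assumed names and default values, could look as follows; it is only an illustration of the settings exposed by the menu.

```csharp
// Hypothetical container for the options chosen by the therapist-user in the menu.
public enum Mode { Familiarization, Demonstration, Training }

public class SessionConfiguration
{
    public Mode Mode = Mode.Training;
    public int TaskIndex = 0;            // among the ten cooking tasks; the first by default
    public float AssistanceDelay = 10f;  // seconds before assistance is provided (training mode)
    public string ParticipantId = "";    // identification info for subsequent analysis
    public string SessionLabel = "";
}
```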

The task management system instantiates the 3D objects (i.e., the kitchen environment and the objects required for the launched task), activates the “task manager” script corresponding to the selected mode, and sets the duration parameters if necessary. The task manager defines the generic task behaviors, including the ending conditions and the management of data storage and saving. Interactive objects are associated with generic scripts which manage each object’s behavior according to its properties (e.g., is a lid, is a container, is draggable).
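
The sketch below illustrates, under assumed names, how such a generic script attached to an interactive object could expose these properties in Unity; it is not the actual script of the system.

```csharp
using UnityEngine;

// Hypothetical generic script carried by every interactive object.
public class InteractiveObject : MonoBehaviour
{
    public bool isLid;
    public bool isContainer;
    public bool isDraggable;

    public AudioClip feedbackSound;   // auditory feedback for the associated action

    // Called (for instance by the event manager) when another object is dropped onto this one.
    public void ReceiveObject(InteractiveObject other)
    {
        if (isContainer && feedbackSound != null)
        {
            // Play the auditory feedback at the object's position.
            AudioSource.PlayClipAtPoint(feedbackSound, transform.position);
        }
    }
}
```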

The event manager triggers the modifications corresponding to a performed action. Actions are either user actions on a virtual object via the computer mouse or interactions between two virtual objects. Moreover, at each step of the task in the training mode, the event manager sends information (including the assistance given to the patients-users and timing data) to the temporary data storage module. When patients-users have completed the task, these data are saved by the data saving manager as one row of a CSV file, which allows therapists-users to read the data on the local desktop. The required objects are automatically displayed according to the task selected from the main menu.
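
The following sketch illustrates one possible implementation of the temporary storage and CSV export just described; all class, field, and file names are assumptions and not those of the actual system.

```csharp
using System.Collections.Generic;
using System.IO;

// Hypothetical record accumulated for each step during the training mode.
public class StepRecord
{
    public int StepIndex;
    public int AssistanceLevelGiven;   // assistance delivered to the patient-user
    public float CompletionTime;       // seconds spent on this step
}

// Hypothetical data saving manager: buffers the records and writes one CSV row per task.
public class DataSavingManager
{
    private readonly List<StepRecord> buffer = new List<StepRecord>();

    public void Store(StepRecord record) => buffer.Add(record);

    public void SaveTask(string filePath, string participantId, string taskName)
    {
        var cells = new List<string> { participantId, taskName };
        foreach (var r in buffer)
        {
            cells.Add($"{r.StepIndex};{r.AssistanceLevelGiven};{r.CompletionTime:F1}");
        }
        // Append the whole completed task as a single row of the CSV file.
        File.AppendAllText(filePath, string.Join(",", cells) + "\n");
        buffer.Clear();
    }
}
```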

3.4 Interaction System

We made objects interactive only when required and designed the interaction system to be as simple as possible. Indeed, the targeted user population is unfamiliar with new technologies and with the use of the mouse [36]. We wanted to prevent the patients-users from making errors and from being interrupted in the execution of a task because of technical difficulties. The system thus focuses more on the cognitive aspects of task performance than on aspects related to sensory-motor coordination or spatial control.

Each step described in the task design phase can be categorized as either a press-type manipulation or a drag-type manipulation. For simple actions with a single object (e.g., opening a lid, closing the tap, pressing a button), the system detects a trigger when the left button of the computer mouse is pressed down while the mouse cursor is over a detection area. Virtual objects are moved by clicking and dragging the object. Because the mouse is a 2D interface, the depth dimension is automatically calculated. When the mouse button is released, the object is virtually dropped. When the object arrives at a destination of interest (e.g., when the coffee filter arrives at the filter holder), the action is performed (e.g., the filter is placed in the filter holder). To ease the manipulation, the objects’ trajectory is constrained along invisible Bézier curves, which are recalculated in real time from the current position of the object (so that the object can be released mid-way and the movement resumed).
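
For illustration, a minimal Unity/C# sketch of the drag-type manipulation is given below. It keeps the depth constant during the drag and validates the drop at the destination area, but omits the Bézier-curve constraint described above; all names are assumptions rather than the system’s actual code.

```csharp
using UnityEngine;

// Hypothetical drag-type manipulation: the object follows the 2D mouse cursor
// while its depth along the camera axis stays fixed, and the drop is validated
// when the object reaches its destination of interest.
public class DraggableObject : MonoBehaviour
{
    private float depth;           // distance from the camera, stored at the start of the drag
    public Collider destination;   // destination of interest (e.g., the filter holder)

    private void OnMouseDown()
    {
        depth = Camera.main.WorldToScreenPoint(transform.position).z;
    }

    private void OnMouseDrag()
    {
        // Convert the 2D cursor position back to 3D at the stored depth.
        Vector3 screenPos = new Vector3(Input.mousePosition.x, Input.mousePosition.y, depth);
        transform.position = Camera.main.ScreenToWorldPoint(screenPos);
    }

    private void OnMouseUp()
    {
        if (destination != null && destination.bounds.Contains(transform.position))
        {
            // The action is performed (e.g., the filter is placed in the filter holder).
            transform.position = destination.bounds.center;
        }
    }
}
```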

4 Conclusion and Future Work

We presented the design and development of an innovative interactive tool based on a virtual kitchen environment that allows performing different tasks. This tool has been proposed for the preventive treatment of Alzheimer’s disease. The cooking tasks are performed using a computer mouse. Gradual assistance is provided to the patient so that he/she trains and learns to perform the requested tasks. In order for the training to be relevant and effective, no errors are allowed by the system. Preliminary results showed that our tool allowed patients to learn the proposed tasks and to maintain this knowledge over time. In future work, we plan to investigate the use of a tactile surface in order to make it easier for the patient to select, grab, and drop the objects.