1 Introduction

Cerebral palsy (CP) is a condition caused by a lesion in the brain that occurs early in a child's life. Children with CP have permanent issues with posture and movement that impact participation in daily activities. They may also experience musculoskeletal changes, cognitive impairment, and communication and behavioral concerns [1]. There are several subtypes of CP based on the type of muscle tone (spastic, dyskinetic, hypotonic, and mixed tone) and the location of impairment (quadriplegia, hemiplegia, diplegia, and others) [2]. Hemiplegia is a type of CP in which the child experiences limitations in posture and movement on one side of the body. A child with hemiplegia uses the impaired arm less often than the unaffected arm due to repeated experiences of failure in using that arm. Human-computer interaction (HCI) offers methods that supplement traditional rehabilitation therapy, such as occupational and physical therapy, by creating experiences and environments that give children successful opportunities to use their affected hand or limb without a sense of failure. Research has shown that specially designed computer games can motivate and help children to increase the use of the affected limb while also strengthening the muscles involved and any related affected functionality [3–5].

The introduction of low-cost, off-the-shelf sensors such as the Leap Motion [6] has increased the accessibility and usability of equipment that was previously too expensive for many applications. The Leap Motion was specifically designed to detect hand motions and gestures. It operates over a small range with high precision due to its use of infrared optics. Rehabilitation therapists indicate that the Leap Motion has potential for rehabilitation and that it could be an effective motivational tool for young people within a home environment without a therapist being present [7].

The purpose of this paper is twofold. First, we compare three classification techniques, decision trees, support vector machines (SVM), and k-nearest neighbors (KNN), for recognizing and classifying static gestures from the Leap Motion based on the position of the hand and fingers as well as the joint angles. Second, we present a game that detects these gestures and evaluate it with both student volunteers and occupational therapy experts in the field.

2 Background

Evidence for rehabilitation in the area of cerebral palsy has expanded in recent years due to new technologies and methodologies. Specific interventions have been researched to ensure efficacy, cost-effectiveness, and safety [8]. In addition, the World Health Organization (WHO) has established the International Classification of Functioning, Disability, and Health (ICF), intended to serve as a collaborative global framework and scientific tool for measuring health and disability. The ICF has shifted the focus from disability and impairment to function and participation within the context of the social and physical environment [9]. For these reasons, it is important to develop rehabilitation interventions that are evidence-based and that consider contextual factors and participation.

Reference [8] completed a systematic review of systematic reviews of interventions for children with cerebral palsy. The therapeutic interventions found to have the strongest evidence included bimanual training; Botulinum toxin (Botox) injections; context-focused therapy; goal-directed functional training using a motor-learning approach; therapeutic home programs; and Botox injections followed by occupational therapy. Human-computer interaction is an area within rehabilitation therapy that provides a new and innovative method for intervention. Virtual reality (VR) games are used in rehabilitation to promote movement and strengthening within a motivating environment [10]. Evidence is emerging on the efficacy of VR and its influence on functional outcomes for children with cerebral palsy [11]. Moreover, VR aligns with the most strongly supported interventions because it can be designed to emphasize motor learning, bimanual training, and goal-directed training within the home environment [10, 12]. The child's participation in rehabilitation through highly motivational VR games in the context of their own home also supports the WHO initiatives.

Commercially available gaming systems and robotic arm systems are the technologies most commonly used in the clinic setting. Children with CP are often unable to use commercially available systems due to movement restrictions in the upper extremities. Additionally, these systems are of limited benefit to the occupational therapist because they do not specifically measure small upper extremity movements such as finger extension, wrist extension, ulnar/radial deviation, and forearm supination [13].

3 Related Work

VR has been used in the treatment of CP with greater success than conventional exercises. The authors of [4] found that children both with and without CP considered VR exercises more interesting than conventional exercises. These children were also able to hold exercise positions longer and showed an increased range of motion during VR exercises compared to conventional exercises. The children's parents also noticed their children having more fun during VR exercises and believed that their children would continue the exercises at home. The authors of [14] concur, stating that a VR training program has the potential to improve reaching abilities and control in children with CP.

The Leap Motion controller has been used in game-based physical therapy. Reference [7] evaluated the usefulness of the Leap Motion controller in a clinical environment by developing game-like versions of existing rehabilitation activities that were then evaluated by clinicians. The results of their trial show that the Leap Motion has the potential to be used in place of some traditional techniques, especially in the home and for young people. Reference [15] focused on the responses from patients. The patients in that study said that the game presented to them was very engaging and addressed a need to practice movements related to daily function. The patients also said that they would play the game if it were provided as part of a home therapy program.

The Leap Motion controller has also been used for gesture recognition. Reference [16] demonstrates classification techniques using the Leap Motion controller for both static and dynamic gestures. Reference [17] presents a gesture recognition system using the Leap Motion controller made for therapy applications, including a list of gestures created with the help of therapists. Both of these systems, however, lack a game aspect to keep the patient involved. They present a good starting point, but the missing game component could lead to non-compliance similar to that of conventional therapy exercises.

4 Experimental Procedure

4.1 Equipment

This paper focuses on the use of the Leap Motion controller. As shown in Fig. 1, the Leap Motion consists of three infrared (IR) light emitters and two IR cameras. Since the system uses stereo vision, it can be categorized as an optical tracking system rather than a depth-based tracking system [18]. The Leap Motion controller provides detailed information about a user's hand, including the positions of the wrist, palm, and finger digits in Cartesian space, as well as the directions of the hand and finger digits. This information can be used to determine the joint angles of the wrist and knuckles. The Leap Motion controller can also provide other information, such as which fingers are extended, the normal vector to the palm, and information about the forearm and any tools being used within its field of view.

Fig. 1. A view of the real device (left) and a schematic (right) of the Leap Motion controller
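The joint angles mentioned above can be computed from the direction vectors the Leap Motion reports for adjacent bone segments. The numpy sketch below shows the standard dot-product computation; the two example vectors are hypothetical, not values read from the device.

```python
import numpy as np

def joint_angle(u, v):
    """Angle in degrees between two bone direction vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Clamp to [-1, 1] to guard against floating-point drift.
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical proximal and intermediate bone directions for one finger.
print(joint_angle([0.0, 0.1, -1.0], [0.0, -0.5, -0.9]))  # knuckle angle
```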

4.2 Data Collection

A gesture library was created for this prototype with the help of an occupational therapist. The goal was to recognize gestures that are actually used as exercises. Figure 2 shows the gestures chosen for this library.

Fig. 2. Gesture library for the prototype. From top left to bottom right: extension of the index finger, extension of the middle finger, extension of the ring finger, extension of the pinky finger, extension of four fingers, extension of the thumb, ulnar deviation, radial deviation, supination of the forearm, extension of the wrist.

The training data for the various classification procedures was gathered by having student volunteers perform the gestures in a controlled environment. A visualization tool was developed using the Unity Game Engine [19] and the Leap Motion API. UI tools were placed in the top left corner to allow the administrator of the data collection to easily start and stop data collection and save the data. The volunteer can also see a visualization of their hand on the screen, allowing both the administrator and the volunteer to verify that the gesture is seen correctly by the Leap Motion controller. Figure 3 shows this UI design.

Fig. 3. View of the data collection program during extension of the index finger

The volunteer was given photos of the gestures before data collection began so they knew what to do. The volunteer then placed their right hand above the Leap Motion controller and made the first gesture. When it was shown correctly on the screen, the administrator collected approximately 5 s of data at a sample rate of 50 Hz and saved it. The volunteer then made the second gesture, which was recorded in the same way. This task was repeated until all gestures were recorded, and the whole process was repeated two more times. We did not record all the data points produced by the Leap Motion controller. Instead, we recorded only the features useful for determining the gesture: which fingers were extended, the directions of the forearm and the hand, the normal vector of the palm, and the joint angles of the wrist and knuckles.
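At 50 Hz over roughly 5 s, each pass yields about 250 frames per gesture. The sketch below shows the kind of per-frame record implied by the features listed above; the field names are illustrative, not the Leap Motion API's own.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GestureFrame:
    """One 20 ms sample of the features kept for classification."""
    fingers_extended: List[bool]  # thumb, index, middle, ring, pinky
    hand_direction: List[float]   # unit vector in Cartesian space
    arm_direction: List[float]    # unit vector for the forearm
    palm_normal: List[float]      # unit vector normal to the palm
    joint_angles: List[float]     # wrist and knuckle angles in degrees

SAMPLE_RATE_HZ = 50
DURATION_S = 5
FRAMES_PER_RECORDING = SAMPLE_RATE_HZ * DURATION_S  # about 250 frames
```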

5 Analysis

We used three different methods of classification: decision trees, k-nearest neighbors (KNN), and support vector machines (SVM). First, we had to determine which features distinguish each gesture. For example, only the supination of the forearm gesture has the palm's normal vector pointing in the positive Y direction, so this feature can be ignored for all other gestures in the library. A full list of the features used to classify each gesture is shown in Table 1.

Table 1. List of features used to identify each gesture

We built ten different decision trees to classify the ten different gestures. The reason for this was based on gameplay: we can assume which gesture a player is supposed to make from where they are in the game, since they would only make certain gestures at certain points. With this assumption, we can classify a gesture with a decision tree that has fewer levels than a single tree classifying all gestures at once. One of the decision trees is shown in Fig. 4. The tolerances for the joint angles and directional vectors were determined by inspecting the training data, chosen so that the majority of the training data would be classified correctly while still allowing some error in live input from a game.

Fig. 4. Decision tree for extension of the index finger
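The tree in Fig. 4 amounts to a short cascade of threshold checks. The sketch below is a minimal Python rendering of such a tree for extension of the index finger, reusing the GestureFrame record sketched in Sect. 4.2; the 20-degree tolerance is a hypothetical placeholder, not the value derived from our training data.

```python
def is_index_extension(frame) -> bool:
    """Hand-built decision tree check for extension of the index finger.

    Each test mirrors one level of the tree: the index finger must be
    the only extended finger, and its knuckle must be nearly straight.
    """
    thumb, index, middle, ring, pinky = frame.fingers_extended
    if not index:                            # level 1: index extended
        return False
    if thumb or middle or ring or pinky:     # level 2: all others folded
        return False
    # Level 3: hypothetical tolerance on the index knuckle angle.
    return frame.joint_angles[1] < 20.0
```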

The KNN analysis was developed using the built-in Matlab functions. The number of neighbors was set to 294, as this value still yielded a very low error. No other parameters were changed. Unlike the decision tree approach above, this approach used a single model to classify all gestures, because KNN can easily handle multiple classes without consuming too much time. This approach was also the only one to use all of the collected features to classify the gesture.
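Our implementation used Matlab's built-in classification functions; the scikit-learn sketch below is only an illustrative stand-in for that workflow. The random placeholder data mirrors the roughly 250 frames per gesture per pass (ten gestures, three passes) we collected, and the feature width of 19 is likewise a hypothetical placeholder.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data: ~250 frames x 10 gestures x 3 passes = 7500 samples.
rng = np.random.default_rng(0)
X = rng.normal(size=(7500, 19))      # per-frame feature vectors
y = rng.integers(0, 10, size=7500)   # gesture labels 0-9

knn = KNeighborsClassifier(n_neighbors=294)  # one multi-class model
knn.fit(X, y)
```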

Lastly, we used ten SVMs, also developed using the built-in Matlab functions, with the Gaussian radial basis function as the kernel. We made ten different SVMs for the same reason as with the decision trees. The feature vector for each SVM comes from Table 1.
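As with KNN, the original SVMs were built with Matlab functions; below is a hedged scikit-learn sketch of the same one-model-per-gesture arrangement, reusing X and y from the KNN sketch. The per-gesture feature subsets here are placeholders for the actual columns in Table 1.

```python
from sklearn.svm import SVC

# Hypothetical mapping from gesture id to feature columns; the real
# subsets come from Table 1.
feature_columns = {g: list(range(19)) for g in range(10)}

svms = {}
for gesture, cols in feature_columns.items():
    clf = SVC(kernel='rbf')  # Gaussian radial basis function kernel
    # Binary one-vs-rest labels: 1 for this gesture, 0 for the rest.
    clf.fit(X[:, cols], (y == gesture).astype(int))
    svms[gesture] = clf
```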

Table 2 shows a comparison between the three methods. The classification models were verified by resubstitution: the training samples belonging to each model were substituted back into that model to give the results below. As shown, KNN yielded the best results. Most of the decision tree models scored above 90 %; further tuning of the tolerances for the middle, ring, and pinky finger extensions should improve the remaining ones.

Table 2. Classification results using resubstitution of training data
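Continuing the sketches above, resubstitution amounts to scoring each model on the very data it was trained with, which is why it gives an optimistic accuracy estimate:

```python
# Resubstitution: evaluate each model on its own training data.
print('KNN resubstitution accuracy:', knn.score(X, y))

for gesture, clf in svms.items():
    cols = feature_columns[gesture]
    acc = clf.score(X[:, cols], (y == gesture).astype(int))
    print(f'SVM for gesture {gesture}: resubstitution accuracy {acc:.3f}')
```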

6 Game

For the game prototype, we once again used the Unity Game Engine and the Leap Motion API. The game consists of two phases. The first is a fifteen-second rest period during which the player is not required to do anything; a picture of the next gesture is shown so that the player can prepare. During the second phase, the player must match the gesture shown on the screen. Both a picture of the gesture and a visualization of the hand as seen by the Leap Motion are shown, so that players can see what they are doing with respect to both real life and the Leap Motion itself. The top left corner turns green when the gesture is matched and red when it is not. The score increments by one for every second the gesture is held; this is intended to help strengthen the hand. A screen capture of the game is shown in Fig. 5. We used the decision tree models for this game due to the lack of open-source classification software readily available for Unity. Recognition did not seem to suffer from using decision trees in terms of responsiveness or recognizing most gestures, although certain gestures required more exact positioning than the authors expected.

Fig. 5. Screen capture of the Unity Leap Motion game
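The game itself is written in Unity with the Leap Motion API; the Python sketch below only illustrates the two-phase control flow and the hold-to-score rule. classify_frame, get_frame, and the 30 s match-phase length are hypothetical stand-ins, not values from the actual game.

```python
import time

REST_S = 15  # phase 1: rest period while the next gesture is previewed

def play_round(target, classify_frame, get_frame, match_s=30):
    """One round: rest, then +1 point per full second the gesture is held."""
    time.sleep(REST_S)                    # phase 1: no input required
    score, held_since = 0, None
    end = time.monotonic() + match_s
    while time.monotonic() < end:         # phase 2: match the gesture
        now = time.monotonic()
        if classify_frame(get_frame()) == target:  # e.g. decision trees
            held_since = held_since or now
            if now - held_since >= 1.0:   # one full second of holding
                score += 1
                held_since = now
        else:
            held_since = None             # a break in the hold resets it
    return score
```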

Student volunteers played the game and then filled out a three-question survey. All but one question used a Likert scale of 1–5, with 5 being the most positive and 1 the most negative. The mean response to "I feel the overall control interface is easy to use" was 3.67, but the mean response to "I feel that with practice, I could become proficient in using the control interface" was 4.83. This suggests that players feel more practice would lead to a higher score, which would then improve range of motion. The mean response to "The tasks presented on the screen are easy to understand" was 4.33. The only comment from the volunteers was that one of the pictures was rotated relative to the Leap Motion model, which caused some confusion.

A video of one of the authors playing the game was made and sent out to two area hospitals to be evaluated by pediatric physical and occupational therapists. The prototype got mixed reviews. The mean of "The Leap Motion appears easy to use" and "I feel that I could become proficient in using the Leap Motion" was 3.78, while the mean of "The tasks on the screen are easy to understand" was 4. When asked about an improved version of the prototype, the most notable response was to "I feel patients would be motivated to use an improved version of this prototype", which had a mean of 2.87. The comments provided by these therapists said that the game needs to be more engaging, fun, and interactive to help hold a patient's attention.

7 Conclusion

In this paper, we have presented an analysis of classification techniques on data gathered from the Leap Motion controller. Decision trees provided over 90 % accuracy for the majority of gestures, but KNN and SVM provided much more accurate results. We believe this is because the tolerances chosen for the joint angles in the decision trees did not allow a wide enough variance to properly classify certain gestures. Further adjustment of the tolerances should yield better classification results.

A game prototype was also presented. Student volunteers who played the prototype said that the interface was easy to use and that they could easily become proficient with it. Therapists viewing a demo of the game also gave positive feedback on the use of the Leap Motion controller and the way tasks were presented to the user. The therapists did, however, comment on the engagement level of the game, saying that patients might not feel motivated to use the current or an improved version of this prototype, and that the game needs more features to hold the patient's attention so that they feel motivated to use the system. Based on the feedback from the student volunteers and the therapists, there is enough evidence to develop a new version of the game that incorporates other gestures and data modalities, as well as a more engaging interface.

8 Future Work

The next phase of this prototype is to expand the gesture library. Adding gestures will increase the number of features needed to distinguish gestures from each other. The added gestures could be either static or dynamic; the latter would require further analysis of gesture recognition algorithms to determine the best approach for dynamic gestures with the Leap Motion.

A therapist user interface will also be added. This will allow therapists to view any important data gathered during gameplay sessions, and it opens the potential for telerehabilitation, since the therapist can review data from sessions the patient completes at home. The interface will also allow the therapist to control the exercises, such as the order of the gestures and their difficulty, for example how accurately a gesture must be held.

Lastly, a more engaging game will be developed. The current game is very basic, as the therapists commented. A more engaging game may help convince therapists that patients, especially pediatric patients, would feel motivated to play.