Abstract
With the development of artificial intelligence, home intelligent terminal products have attracted increasing attention. Compared with common products, home control intelligent terminals have a more complex functional architecture and hierarchy, and existing interactive gestures cannot meet their functional requirements. The aim of this paper is to design a new set of interaction gestures for home control intelligent terminals. We analyze the need for and feasibility of gesture interaction in this type of product and explore several factors that affect the accuracy of gesture interaction. We obtained user needs for product functions through a questionnaire survey, proposed four principles for gesture design in home control smart products, and designed a set of interactive gestures based on these principles. Finally, we used Kinect V2 to perform gesture recognition experiments and issued a Likert scale questionnaire to obtain users' subjective experience. The recognition rate of the gesture system is above 80%, indicating good recognition ability. The best recognition area is 0.5 m–4.5 m; as the distance increases, accuracy decreases, but due to the limitations of family space, few areas lie beyond 4.5 m. The subjective scores of users are above 7 points, indicating that the gesture system offers good recognition and user experience.
1 Introduction
With the continuous progress of science and technology, artificial intelligence has advanced greatly in recent years and is now widely used in many areas of our lives: face recognition, intelligent wearable devices, speech recognition, deep learning, intelligent robots, and so on. Gesture interaction, an important research area of artificial intelligence, has been applied in VR, motion-sensing games, intelligent wearable devices, and automotive interface interaction; devices such as Kinect V2 accomplish specific tasks by identifying the user's actions [1]. Compared with a traditional game handle, gesture interaction can greatly enhance interactivity, give the user an immersive experience, and improve the overall user experience (Fig. 1).
Home control intelligent terminals have been available for a long time. As an important part of the smart home [2], most of these products use multi-channel interaction methods such as hardware interaction, interface interaction, and voice interaction to perform real-time monitoring and data sharing across the various smart devices in the home environment. The terminal is also responsible for managing and controlling home smart devices and completing tasks assigned by users. Taking Changhong's home intelligent terminal product as an example, the product classifies the devices commonly used in home life into light management, surveillance cameras, device control, safety protection, and centralized management. Users can operate most functions in the home environment through this single product, which provides great convenience (Fig. 2).
In the family environment, smart terminal products mainly interact through APP-based machine interfaces [3], and some use intelligent voice interaction; few such products use gesture interaction. However, gesture interaction is an important development trend of future human-computer interaction, and combining it with smart terminal products will bring a completely new interactive experience. Although existing research has examined the application of gesture interaction in three-dimensional environments, it is limited to simple, static gestures for interactive operations and has not studied in depth the use of gestures in products with complex functional levels, such as control-type smart terminals (Fig. 3).
We propose a new gesture design method: basic gestures are assigned layer by layer, and basic gestures bound at different levels are finally combined to form dynamic gestures. We apply this method to home control intelligent terminals to control the turning on and off of devices, so that users clearly understand the specific meaning of the gestures they make and form a clear hierarchical concept in their minds.
The rest of this paper is structured as follows. Section 3 introduces the existing architecture of home control intelligent terminal products and conducts a hierarchical analysis. Section 4 presents design research on gesture interaction for home control smart terminals: we analyze functional requirements and, based on the four dimensions of naturalness, difference, simplicity, and efficiency, design a set of interactive gestures according to the architecture levels. Sections 5 and 6 describe the gesture interaction experiments, process and analyze the obtained data, and present our conclusions.
2 Related Works
Many scholars have studied gesture interaction. Ji, R. [4] studied the application of natural interaction in family life and proposed four natural interaction design strategies for smart homes, including anthropomorphic interaction objects and inclusive operation space, but did not propose specific gesture interaction design solutions.
Wan, H.G. [5] studied freehand gesture interaction methods for three-dimensional scenes, put forward the user requirements principle, user experience principle, and environment limitation principle, and designed a gesture model in which static gestures trigger movement and rotation responses. Although this type of gesture achieves the expected result well, it is only suitable for simple function commands, not for systems with complex functional levels. Wu, H. [6] designed a set of gesture interaction models for watching TV: they collected the common functions and operations of TV and used the translation and rotation of different basic gestures to achieve the corresponding functions. Ntoa, S. [7] studied the user experience of gesture interaction on large-screen devices. Ram Pratap Sharma [8] researched gesture interaction by recognizing the shape and orientation of the hand from static gesture images.
Li, X. [9] studied the user experience of gesture interaction and voice interaction in smart TVs and found that because the selection process requires high efficiency, most users prefer gesture interaction. This suggests that gesture interaction should be emphasized when designing a TV interface for selection and control. Home control intelligent terminals likewise mainly perform control operations on different devices, so they are also well suited to gesture interaction.
Min Li [10] researched home intelligent interactive terminal systems and analyzed the characteristics of the smart home: a computer, mobile phone, and intelligent interactive terminal control monitors and smart appliances, while smart switches control ordinary appliances, so that all equipment in the home can be controlled.
3 Hierarchical Analysis of Home Control Intelligent Terminal
Home control intelligent terminals need to control a large number of home devices and have a more complex information architecture and functional hierarchy than ordinary products. The first level of the architecture needs to be representative, so that it can summarize the second level. Referring to Changhong's existing home intelligent terminal architecture, we classified the functional devices commonly used in family life according to the types of devices involved and the nature of their functions [10], and divided the first level of the information architecture into four subsystems:
1. Lighting system: responsible for switching control and brightness adjustment of all lighting-related equipment in the home environment.
2. Home electrical system: responsible for regulating the switches, functions, etc. of various household devices.
3. Door and window system: responsible for the opening and closing of door locks, curtains, windows, etc.
4. Dialing system: responsible for telephone contact with family members, equipped with emergency dialing and other help functions.
The above four systems basically cover the functions of the equipment involved in the home. The second level under each system comprises the control operations of specific equipment; for example, the lighting system includes the switches and brightness control of the kitchen lights, toilet lights, living room lights, etc. (Fig. 4).
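The two-level architecture described above can be represented as a simple nested mapping. The following sketch is illustrative: the four subsystem names follow the text, while the exact device lists under each subsystem are assumptions for demonstration.

```python
# Illustrative two-level information architecture: first level = four
# subsystems, second level = specific devices. Device lists are assumed.
ARCHITECTURE = {
    "lighting": ["kitchen light", "toilet light", "living room light"],
    "home electrical": ["air conditioner", "television"],
    "door and window": ["door lock", "curtain", "window"],
    "dialing": ["family call", "emergency call"],
}

def first_level():
    """Return the four first-level subsystems."""
    return list(ARCHITECTURE)

def second_level(subsystem):
    """Return the second-level devices under one subsystem."""
    return ARCHITECTURE[subsystem]
```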
When designing gesture interaction for such complex information systems, we must fully consider factors such as control functions, the number of gestures, interaction efficiency, and gesture fluency. Gesture interaction must be simple and fast to perform, but the complex functions of some devices are sophisticated and require multiple gestures to complete. An increase in gestures adds to the device's recognition burden, raises the user's learning cost, and degrades the interactive experience.
There are many devices in family life, but not all of them are used frequently; many may only be used once in a long while. If gestures were bound to all products, the total number of gestures would grow while users rarely used most of them, wasting resources. Other devices are closely tied to daily life and are used frequently; designing gestures for such devices can effectively help users complete everyday tasks. Therefore, in this paper's design process, the functions of different devices are screened by frequency and importance of use, and gestures are designed mainly for functions with high frequency and high importance. For example, in the lighting system, gesture interaction is responsible for switching the lights in different home areas, without controlling brightness adjustment.
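The screening step described above can be sketched as a simple filter over frequency and importance scores. The threshold value and the sample scores below are assumptions for illustration, not the survey data.

```python
# Screen functions by frequency-of-use and importance (both on a
# 10-point scale): keep only those scoring high on BOTH axes.
# The 6.0 threshold and the sample scores are assumed values.
def screen_functions(scores, threshold=6.0):
    """Keep functions whose (frequency, importance) both meet the threshold."""
    return [name for name, (freq, imp) in scores.items()
            if freq >= threshold and imp >= threshold]

scores = {
    "light switch":    (9.1, 8.7),
    "curtain control": (7.4, 6.9),
    "medical device":  (2.1, 8.8),   # important but rarely used: excluded
}
# screen_functions(scores) -> ["light switch", "curtain control"]
```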
4 Research on Gesture Interaction Design for Home Control Intelligent Terminal
4.1 Design Principle of Gesture Interaction for Intelligent Control Terminal
Yang Li's research [11] shows that any project related to ergonomics should consider the relationship among human, machine, and environment, and take health, safety, comfort, and efficiency as the starting point of design [12]. Because recognition efficiency and the user's interaction experience are both affected by the gestures themselves, this article considers the following four perspectives when designing gesture interaction:
1. Naturalness: Gesture interaction is the most natural of all interaction modes and the closest to human instinct. The design of the gesture model should focus on people's normal interaction habits [13], so that each gesture is consistent with the everyday behavior used to complete the corresponding task, integrating the user's daily behavior into the gesture model. This lets users connect natural gestures with the gesture model, reducing the difficulty of learning and improving the user experience [14]. The gestures should also conform to the body's natural physiological structure: interaction time should be controlled reasonably, long periods of holding the hand raised should be reduced, and gestures that strain the body, such as bending the wrist for a long time, should be avoided. If dynamic gestures are designed, the range of motion should be considered to avoid harming the user's body and to reduce fatigue when interacting with the home control intelligent terminal.
2. Difference: The latest gesture-based devices have improved recognition accuracy and effective recognition distance, but when gesture models are too similar, the system will still misrecognize them, increasing errors and seriously harming the user experience. Designers should therefore pay close attention to the differences between gestures and, while still matching people's interaction habits, make the designed gestures clearly distinguishable.
3. Simplicity: Compared with other interaction methods, the biggest advantage of gesture interaction is that it can issue instructions and complete tasks naturally, quickly, and efficiently, so complex gestures should be avoided. Realizing a certain instruction may require combining multiple gestures, and every additional gesture increases the user's learning difficulty and cognitive load, so the number of gestures is a key factor in gesture model design. On the premise of successful recognition, strive for conciseness and minimize the number of gestures required for a single command.
4. Efficiency: If a command needs two or more gestures to express, we must focus on the fluency and recognition efficiency of each transition [15]. If the conversion between two gestures is clearly awkward or strains the hand muscles, the combined gesture has a fluency problem and one of the gestures should be replaced. If recognition errors occur when one gesture connects to another, the gesture must be analyzed and, depending on the cause, redesigned or abandoned. The ultimate goal is to make gesture interaction efficient, user interaction comfortable, and machine recognition smooth.
4.2 Gesture Design for Home Control Intelligent Terminal
We studied the products and functions commonly used in family life and, based on the above four gesture interaction design principles, considered the user's psychology, the interaction environment, and operation efficiency to design the gesture models.
Product Functional Requirements Survey.
We studied 13 types of functional devices commonly used in households, focusing on the importance and frequency of use of their features. A questionnaire survey was conducted with 47 users of different ages and cultural backgrounds, including teachers, students, takeaway delivery staff, cleaners, and porters, aged 19 to 58. The importance and frequency of use of the different functional devices in family life were each scored on a 10-point scale (Fig. 5).
In the figure above, the horizontal axis of each point is the frequency-of-use score and the vertical axis is the importance score; a higher score indicates that the device is more important or more frequently used. The functions in the upper-right circle have higher frequency and importance scores. Among them, the functions in the blue circle, such as light control and door and window control, must be treated as important factors in family life. Devices such as medical equipment, although important, are used too infrequently, so no gestures were designed for them. Following the information architecture discussed in Sect. 3, the functional architecture of the intelligent product and the related high-frequency, high-importance equipment was established (Table 1).
Basic Gesture Design.
We propose a new gesture design pattern: basic gestures are layered, and basic gestures bound at different levels are finally combined to form dynamic gestures. Home control intelligent terminal products have a complex information architecture with many functions to control. Applying this pattern to such products lets users clearly understand the specific meaning of the gestures they make, so that a clear hierarchy forms in the user's mind: one gesture corresponds to one level, instead of blindly imitating complex prescribed gestures, thereby reducing the user's learning cost.
Existing gesture interaction models are mainly divided into two types: static gestures and dynamic gestures [16]. Static gestures are the most basic; dynamic gestures are arranged and combined on the basis of static gestures. Following the architecture of the home intelligent terminal, we designed 8 basic gestures and then designed dynamic gestures by translating, rotating, and combining them, finally integrating all static and dynamic gestures into a set of gesture models for home control intelligent terminals.
According to the functional architecture table of the home control terminal, the product is divided into two levels. The first level contains four systems: the lighting control system, home appliance control system, security control system, and dialing system. It is the foundation of the entire gesture interaction system, so its gestures must be simple and quick for the machine to recognize. The second level details the specific functions of each device; its gesture design should, while ensuring recognizability, be based on the correlation between gestures and real actions. If conditions allow, it can also offer a higher degree of freedom, letting users bind second-level functions according to personal preference and the actual needs of the family. For example, the gesture for turning on a light can be applied to the bedroom or living room through sequence coding, which maps a sequence of numbers to the corresponding gesture.
According to the functional positioning of the first and second levels, the items of each level are individually bound to basic gestures. Using hand-shape changes for long-distance gesture interaction may reduce recognition accuracy, and too many or overly complicated gestures increase the user's memory burden. The arm has a larger range of motion than the hand, requires less recognition accuracy, and improves the efficiency of camera capture, so the gesture design uses movement changes between the arm and the torso (Table 2).
Dynamic Gesture Design.
During dynamic gesture design, the smoothness of the connection between the first-level gesture and the second-level gesture is carefully considered, and the transition between the two gestures follows the four principles of naturalness, difference, simplicity, and efficiency. First, the four items in the first level are bound to basic gestures according to the connection between gesture and item: among the eight static gestures, gesture 1, gesture 6, gesture 7, and gesture 8 are selected to correspond to the four first-level subsystems. Then gesture binding is performed on the second level of each subsystem, combining the seven remaining gestures with the second-level devices to form the final dynamic gestures, as shown in Table 3:
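The two-step lookup implied above can be sketched as follows: a first-level static gesture selects a subsystem, and the following second-level gesture selects a device within it. The text specifies that gestures 1, 6, 7, and 8 map to the four subsystems, but which gesture maps to which subsystem, and the second-level bindings, are assumptions here for illustration.

```python
# First-level static gestures select a subsystem. The particular
# gesture-to-subsystem assignment below is an assumed example.
FIRST_LEVEL = {
    1: "lighting",
    6: "home electrical",
    7: "door and window",
    8: "dialing",
}

# Second-level bindings within each subsystem (assumed examples).
SECOND_LEVEL = {
    ("door and window", 2): "electric curtain",
    ("home electrical", 3): "air conditioner",
}

def resolve_dynamic(first, second):
    """Combine two static gestures into one dynamic command, or None."""
    subsystem = FIRST_LEVEL.get(first)
    if subsystem is None:
        return None
    return SECOND_LEVEL.get((subsystem, second))
```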
4.3 Interactive Area Analysis
The interaction distance in indoor space affects the accuracy of gesture recognition to a certain extent. The home intelligent control terminal uses its own camera and cameras placed in the living room, study, etc. for gesture capture, and must meet the interaction distance requirements of each space, so the distances of the indoor environment need to be analyzed. Zhu, M.B.'s research [17] shows that per capita living space in China's cities is about 39 m², and per capita rural housing area is 47 m². Based on 2–3 people per household, the housing area of most Chinese families is 70 m²–130 m².
The space is mainly divided into six parts: bedroom, kitchen, balcony, living room, dining room, and bathroom. The living room covers about 30% of the entire floor area, between 20 m² and 40 m² (Fig. 6).
With reference to the relevant living room scale standards, this article takes 11 m as the maximum length of the living room, sets the camera at the center of the space, and divides the test distance into three levels: short distance (0.5 m–1.5 m), middle distance (1.5 m–3.5 m), and long distance (3.5 m–5.5 m). These three distances are used as variables to test the usability of the interactive gestures.
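The three distance bands above can be expressed as a small classification helper; the band boundaries follow the text.

```python
# Classify a camera-to-user distance (in metres) into the three test
# bands defined above: 0.5-1.5 m, 1.5-3.5 m, 3.5-5.5 m.
def distance_band(d_m):
    """Return the test band for a given distance, or 'out of range'."""
    if 0.5 <= d_m < 1.5:
        return "short"
    if 1.5 <= d_m < 3.5:
        return "middle"
    if 3.5 <= d_m <= 5.5:
        return "long"
    return "out of range"
```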
5 Gesture Interaction Experiment
5.1 Purpose
The purpose of the experiment is to verify the usability of the gesture interaction model for home control terminals in actual use, including the accuracy of gesture recognition, the error rate of gesture recognition, and the user's subjective feelings when using each gesture, so as to inform the practical use of home control intelligent terminal products.
5.2 Participant Selection
We recruited 47 participants, aged 18 to 56, whose occupations included teachers, students, shop assistants, cleaners, and porters. All had long-term family life experience, heights between 158 cm and 187 cm, and no physical or psychological illness. There were 25 male and 22 female participants.
5.3 Experimental Variables
Interaction distance is one of the important factors affecting the accuracy of gesture recognition: accuracy differs at different distances. Therefore, following the earlier analysis of environmental factors in the home, the test distance is divided into three levels: short distance (0.5 m–1.5 m), middle distance (1.5 m–3.5 m), and long distance (3.5 m–5.5 m). The usability of the gesture model is tested at each of these three distances.
5.4 Experimental Equipment
Kinect V2; Windows 10 System; Unity3D 5.5; Kinect Manager.
5.5 Experimental Place
This experiment was carried out in the laboratory of Nanjing University of Science and Technology.
5.6 Experiment Process
1. The 47 participants were numbered from 1 to 47, and their genders were recorded.
2. Before the test, participants were given 30 min to learn the gesture interaction set, to ensure that they had mastered all the interaction gestures.
3. Participants were brought into the test room to familiarize themselves with the test environment for 15 min, and were then informed of the relevant requirements, experimental methods, and precautions.
4. Kinect V2 was connected, the instrument was calibrated, and the software was opened for the experiment.
5. Participants performed a focusing test in front of the lens and then entered the formal experiment.
6. Tests were carried out at the three distances: near, middle, and far. Each gesture was given 15 s of experimental time; if recognition still had not succeeded after 15 s, the interactive gesture might have a usability problem. If this failure occurred for multiple participants, the gesture had to be analyzed to determine whether the problem lay in the transition between gestures or in the recognition of a single gesture, and then whether the gesture needed to be replaced or redesigned. At the end of each gesture test a voice prompt sounded, followed by two seconds before the next gesture test. Dedicated personnel recorded the number of errors, correct recognition times, and other data for each test.
7. At the end of the test, participants were invited to score all static and dynamic gestures on four indicators by completing a nine-point Likert scale questionnaire [18]: naturalness A1, difference A2, simplicity A3, and efficiency A4, giving 9 points if they considered a gesture very good and 1 point if very bad. The next participant was then invited into the test room, and the above steps were repeated until the end of the experiment (Table 4).
6 Experimental Results
6.1 Analysis of Experimental Data
Analyzing the data of the 47 participants in the static gesture experiment (Fig. 7), the recognition rates of the eight basic gestures are all high: gesture 7 has the lowest recognition accuracy, 91.48%, still above 90%. This shows that the eight basic gestures are well recognized individually and can support smooth gesture recognition.
The dynamic gesture data show that in close-range and medium-range interaction, overall recognition exceeds 80%: the minimum recognition rate at close range is 87.23%, with an average of 90.9%, and the average at medium range, 89.5%, differs little from the close-range figure. This indicates that the recognition accuracy of the designed gesture model at short and middle distances is only slightly affected by distance. When the distance becomes longer, the accuracy of gesture recognition drops significantly, indicating that recognition accuracy is affected by the increase of distance in the range of 3.5 m–5.5 m (Figs. 8 and 9).
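The per-band recognition rates reported above can be computed from the recorded trials as the fraction recognized within the 15 s limit. The trial records below are fabricated for illustration only and do not reproduce the paper's figures.

```python
# Compute a recognition rate from (recognised, elapsed_seconds) trial
# records, counting only successes within the 15 s experimental limit.
def recognition_rate(trials):
    """Fraction of trials recognised within the 15 s limit."""
    ok = sum(1 for recognised, t in trials if recognised and t <= 15.0)
    return ok / len(trials)

# Toy data (NOT the paper's measurements): 3 of 4 trials succeed.
trials_short = [(True, 2.1), (True, 3.4), (False, 15.0), (True, 1.8)]
rate = recognition_rate(trials_short)   # 0.75 for this toy data
```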
6.2 Analysis of Questionnaire Data
Analytic Hierarchy Process.
AHP [19] handles complex decision-making problems hierarchically: it calculates the eigenvector of the judgment matrix and the final weights, yielding an evaluation model for each dimension of gesture interaction usability.
We construct a judgment matrix by comparing the importance of each pair of evaluation indicators under the same criterion and scoring the indicators according to their importance levels; the matrix is built from the questionnaire results. Since \( a_{ij} \) records the comparison of element i with element j by importance, the judgment matrix has the property \( a_{ij} = 1/a_{ji} \).
The judgment matrix is \( R = (a_{ij})_{n_1 \times n_1} \), \( i = 1, 2, \ldots, n_1 \), \( j = 1, 2, \ldots, n_1 \), where \( n_1 \) is the number of evaluation indexes.
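Building the reciprocal matrix from pairwise judgments can be sketched as follows; the helper fills the lower triangle with the reciprocals \( a_{ji} = 1/a_{ij} \). The pairwise values used in the test are illustrative, not the experts' actual scores.

```python
# Build an n x n reciprocal judgment matrix from its upper-triangle
# entries, enforcing a_ii = 1 and a_ji = 1 / a_ij.
def make_judgment_matrix(n, upper):
    """upper maps (i, j) with i < j to the pairwise importance a_ij."""
    R = [[1.0] * n for _ in range(n)]
    for (i, j), v in upper.items():
        R[i][j] = v
        R[j][i] = 1.0 / v
    return R

def is_reciprocal(R, tol=1e-9):
    """Check the defining property a_ij * a_ji = 1 for all i, j."""
    n = len(R)
    return all(abs(R[i][j] * R[j][i] - 1.0) < tol
               for i in range(n) for j in range(n))
```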
Index Weight Calculation.
AHP is used to analyze the data collected by the Likert scale to obtain the overall score of each gesture [20].
First, the weights of the four evaluation indicators, naturalness A1, difference A2, simplicity A3, and efficiency A4, are set. Referring to the scoring standards in Table 5, the importance of the four indicators is compared pairwise, and two experts were invited to score the index weights. The average of all scoring results is taken, the index weights are optimized, and the final scores are sorted out, as shown in Table 6.
Establish the judgment matrix \( R = (a_{ij})_{n_1 \times n_1} \).

Sum each column of the judgment matrix R: \( S_j = \sum_{i=1}^{n_1} a_{ij} \).

Normalize each column of R to obtain the new matrix B: \( b_{ij} = a_{ij}/S_j \).

Sum the entries of each row of B to obtain the feature vector \( T_i \): \( T_i = \sum_{j=1}^{n_1} b_{ij} \).

Normalize the feature vector \( T_i \) to get the index weight value \( W_i \): \( W_i = T_i \big/ \sum_{k=1}^{n_1} T_k \).

Calculate the maximum characteristic root \( \lambda_{max} \) of the judgment matrix R: \( \lambda_{max} = \frac{1}{n_1}\sum_{i=1}^{n_1} \frac{(RW)_i}{W_i} \).

Calculate the consistency index CI of the judgment matrix R: \( CI = \frac{\lambda_{max} - n_1}{n_1 - 1} \).
When CI is close to 0 there is satisfactory consistency, and CI = 0 means complete consistency; the larger the CI, the more inconsistent the matrix. To measure the size of CI, the random consistency index RI is introduced:
When constructing a judgment matrix, the evaluation of weight relationships is affected by human subjectivity. To establish a reasonable multi-level index weight model, the consistency of the judgment matrix must be checked systematically; the test results are fed back to the previous step so that the matrix, and thus the weights, can be optimized (Table 7).
The consistency ratio is calculated as \( CR = CI/RI \). The calculated CR value is less than 0.1, so the consistency test passes, and the weights shown in the table below can be used for analysis and calculation.
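The weight calculation walked through above can be sketched end to end: column-normalize R, sum the rows to get \( T_i \), normalize to the weights \( W_i \), then compute \( \lambda_{max} \), CI, and CR. The pairwise judgments below are illustrative assumptions, not the experts' actual scores; RI = 0.90 is the standard random consistency index for a 4 × 4 matrix.

```python
# AHP weight calculation for a 4 x 4 reciprocal judgment matrix:
# column sums -> normalised matrix B -> row sums T -> weights W,
# then lambda_max, CI = (lambda_max - n)/(n - 1), CR = CI / RI.
def ahp_weights(R):
    n = len(R)
    col_sums = [sum(R[i][j] for i in range(n)) for j in range(n)]
    B = [[R[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    T = [sum(row) for row in B]
    W = [t / sum(T) for t in T]
    # lambda_max as the average of (R W)_i / W_i over all rows.
    RW = [sum(R[i][j] * W[j] for j in range(n)) for i in range(n)]
    lam = sum(RW[i] / W[i] for i in range(n)) / n
    CI = (lam - n) / (n - 1)
    CR = CI / 0.90  # RI = 0.90 for n = 4
    return W, CR

# Illustrative (assumed) pairwise judgments for the four indicators.
R = [[1,   2,   3,   5],
     [1/2, 1,   2,   3],
     [1/3, 1/2, 1,   2],
     [1/5, 1/3, 1/2, 1]]
W, CR = ahp_weights(R)
# A CR below 0.1 indicates the matrix passes the consistency test.
```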
Finally, the weight of each indicator is multiplied by the user's score to obtain the score of that single indicator, and the four indicator scores are summed to obtain the overall subjective score of the gesture (Fig. 10).
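This final scoring step is a weighted sum. The weights and user ratings below are assumed values for illustration (in practice the weights come from the AHP step above).

```python
# Overall subjective score = sum of (indicator score x indicator weight)
# over the four indicators A1..A4.
def overall_score(scores, weights):
    """Weighted sum of the four indicator scores."""
    return sum(s * w for s, w in zip(scores, weights))

weights = [0.48, 0.27, 0.16, 0.09]   # assumed AHP weights, summing to 1
scores  = [9.0, 8.5, 8.0, 8.8]       # assumed user ratings on the 1-9 scale
total = overall_score(scores, weights)   # ~8.69 for these values
```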
The users' comprehensive scores for the basic gestures and the overall gestures are all above 7 points, indicating that the interactive gestures designed in this paper provide a good user experience. Gesture 1 + 2 (controlling the electric curtain) has the highest score, 8.73 points, possibly because the gesture matches users' normal habits; gesture 6 + 3 has the lowest score, 7.87 points, indicating that this gesture needs further optimization (Fig. 11).
7 Discussion
This article addresses home control intelligent terminal products. Based on their complex information architecture, a new gesture design method is proposed: basic gestures are layered, and basic gestures bound at different levels are combined to form dynamic gestures. The functional information of control-type intelligent terminal products is divided according to the first-level and second-level standards. Eight static gestures were drawn up, the static gestures were bound to the commands at each level following the four design principles, and the functions of the two levels were combined to design a set of dynamic gesture interaction models based on the hierarchical architecture. User interviews and questionnaires verified the feasibility of this gesture set. The experimental results show that gesture recognition accuracy is high, particularly at the near and middle distances, and users gave good feedback during use, which initially demonstrates the usability of the interactive gestures.
In subsequent research, the gesture model will be further optimized. We hope that the gesture design concepts and gesture schemes proposed in this article can be adopted by home control intelligent terminal products and provide a reference for research on gesture interaction in such products.
8 Conclusion
This paper proposes a new gesture design method for home control intelligent terminal products based on their complex information architecture: basic gestures are bound according to levels, and basic gestures bound at different levels are finally combined to form dynamic gestures. The functional information of control-type intelligent terminal products is divided according to the first-level and second-level standards. Eight static gestures were drawn up, the static gestures were bound to the commands at each level following the four design principles, and the functions of the two levels were combined to design a set of dynamic gesture interaction models based on the hierarchical architecture. User interviews and questionnaires verified the feasibility of this gesture set. The experimental results show that gesture recognition accuracy is high, particularly at the near and middle distances, and users gave good feedback during use, which initially demonstrates the usability of the interactive gestures.
In subsequent research, we will further optimize this gesture model. We hope that the gesture scheme proposed in this article can be adopted by home control smart terminal products in the future and provide a reference for research on gesture interaction in such products.
References
Panger, G.: Kinect in the kitchen: testing depth camera interactions in practical home environments. In: CHI 12 Extended Abstracts on Human Factors in Computing Systems (2012)
Mittal, Y., Toshniwal, P., Sharma, S., Singhal, D., Gupta, R., Mittal, V.K.: A voice-controlled multi-functional smart home automation system. In: 2015 Annual IEEE India Conference (INDICON), New Delhi, pp. 1–6 (2015)
Miroslav, B., Ondrej, K.R.: Vision of smart home point solution as sustainable intelligent house concept. IFAC Proc. Vol. 46(28), 383–387 (2013). ISSN 1474-6670, ISBN 9783902823533
Ji, R., Gong, M.S.: Application of natural interaction design in smart home. Packag. Eng. 40(22), 208–213 (2019)
Wan, H.G., Li, T., Feng, L.W., Chen, Y.S.: An approach to free-hand interaction for 3D scene modeling. Trans. Beijing Inst. Technol. 39(02), 175–180 (2019)
Wu, H., Wang, J., Zhang, X.: User-centered gesture development in TV viewing environment. Multimed. Tools Appl. 75, 733–760 (2016)
Ntoa, S., Birliraki, C., Drossis, G., Margetis, G., Adami, I., Stephanidis, C.: UX design of a big data visualization application supporting gesture-based interaction with a large display. In: Yamamoto, S. (ed.) HIMI 2017. LNCS, vol. 10273, pp. 248–265. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-58521-5_20
Ram, P.S., Verma, G.K.: Human computer interaction using hand gesture. Procedia Comput. Sci. 54, 721–727 (2015). ISSN 1877-0509
Li, X., Guan, D., Zhang, J., Liu, X., Li, S., Tong, H.: Exploration of ideal interaction scheme on smart TV: based on user experience research of far-field speech and mid-air gesture interaction. In: Marcus, A., Wang, W. (eds.) HCII 2019. LNCS, vol. 11584, pp. 144–162. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-23541-3_12
Min, L., Gu, W., Chen, W., He, Y., Wu, Y., Zhang, Y.: Smart home: architecture, technologies and systems. Procedia Comput. Sci. 131, 393–400 (2018). ISSN 1877-0509
Li, Y., Huang, J., Tian, F., Wang, H.-A., Dai, G.-Z.: Gesture interaction in virtual reality. Virtual Reality Intell. Hardware 1(1), 84–112 (2019). ISSN 2096-5796
Grandhi, S.A., Joue, G., Mittelberg, I.: Understanding naturalness and intuitiveness in gesture production: insights for touchless gestural interfaces. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 821–824. ACM (2011)
Wigdor, D., Wixon, D.: Brave NUI World: Designing Natural User Interfaces for Touch and Gesture. Elsevier, Amsterdam (2011)
Shin, Y.K., Choe, J.H.: Remote control interaction for individual environment of smart TV. In: IEEE International Symposium on Personal. IEEE Xplore (2011)
Annelies, K., Sujit, S.: Language ideologies on the difference between gesture and sign. Lang. Commun. 60, 44–63 (2018). ISSN 0271-5309
Dinh, D.-L., Jeong, T.K., Kim, T.-S.: Hand gesture recognition and interface via a depth imaging sensor for smart home appliances. Energy Procedia 62, 576–582 (2014). ISSN 1876-6102
Zhu, M.B., Li, S.: The housing inequality in China. Res. Econ. Manag. 39(09), 91–101 (2018)
Khaoula, B., Majida, L., Samira, K., Mohamed, L.K., Abir, E.Y.: AHP-based approach for evaluating ergonomic criteria. Procedia Manuf. 32, 856–863 (2019). ISSN 2351-9789
Gil, M., Lubiano, M., de la Rosa de Sáa, S., Sinova, B.: Analyzing data from a fuzzy rating scale-based questionnaire. A case study. Psicothema 27, 182–191 (2015)
Camargo, M., Wendling, L., Bonjour, E.: A fuzzy integral based methodology to elicit semantic spaces in usability tests. Int. J. Ind. Ergon. 44, 11–17 (2014)
© 2020 Springer Nature Switzerland AG
Jiang, B., Wang, X., Wu, Y. (2020). Research on Gesture Interaction Design for Home Control Intelligent Terminals. In: Kurosu, M. (ed.) Human-Computer Interaction. Multimodal and Natural Interaction. HCII 2020. Lecture Notes in Computer Science, vol. 12182. Springer, Cham. https://doi.org/10.1007/978-3-030-49062-1_3
Print ISBN: 978-3-030-49061-4
Online ISBN: 978-3-030-49062-1