CN111488823A - Dimension-increasing gesture recognition and interaction system and method based on two-dimensional laser radar
- Publication number
- CN111488823A (application number CN202010271685.9A)
- Authority
- CN
- China
- Prior art keywords
- module
- laser radar
- dimensional
- gesture recognition
- swing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
Abstract
The invention provides a dimension-increasing gesture recognition and interaction system and method based on a two-dimensional laser radar. Data are acquired by a swinging laser radar, and the three-dimensional point cloud reconstruction module extracts the periodic information in the data to estimate the third-dimensional component and obtain a three-dimensional point cloud. The point cloud of each period is imaged by the imaging module. The gesture recognition and feature extraction module recognizes gestures with a trained network and extracts features from the imaged picture. The multi-frame comparison module recognizes long-duration actions and combinations of several gestures. The interaction control module completes intention recognition and interaction control. The invention has a simple structure, high precision, and high cost performance, and the interaction method can be applied to various occasions and applications with high flexibility.
Description
Technical Field
The invention belongs to the fields of human-computer interaction and laser radar, is applied to gesture recognition in presentation and interaction, and particularly relates to a dimension-increasing gesture recognition and interaction system and method based on a two-dimensional laser radar.
Background
With the development of AR and VR, the interaction technologies applied to them are changing and developing rapidly, and higher demands are placed on the reliability, usability, unobtrusiveness, and immersion of the user experience. Human-computer interaction is no longer limited to simple key operations with keyboard and mouse: more advanced technologies such as somatosensory interaction, gesture interaction, expression interaction, and posture interaction have been proposed and applied, and even brain-computer interfaces have attracted wide attention and research.
With the development of laser radar technology, laser radar has been widely applied thanks to its high precision and good stability. Low-cost two-dimensional laser radar frees its application in many fields from cost constraints.
Interaction based on gesture recognition is very promising in presentation and interaction. Image-based methods are popular, but compared with gesture recognition from captured images, laser radar provides more accurate information such as position and angle, making high-precision interaction techniques possible.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a dimension-increasing gesture recognition and interaction system and method based on a two-dimensional laser radar. A swing attachment is added to the two-dimensional laser radar to raise the dimension of the radar, and gesture recognition and intention analysis are realized by analyzing the three-dimensional point cloud, thereby realizing interaction. The system comprises a laser radar swing module, a three-dimensional point cloud reconstruction module, an imaging module, a gesture recognition and feature extraction module, a multi-frame comparison module, and an interaction control module. Data are acquired by the swinging laser radar, and the three-dimensional point cloud reconstruction module extracts the periodic information in the data to estimate the third-dimensional component and obtain a three-dimensional point cloud. The point cloud of each period is imaged by the imaging module. The gesture recognition and feature extraction module recognizes gestures with a trained network and extracts features from the imaged picture. The multi-frame comparison module recognizes long-duration actions and combinations of several gestures. The interaction control module completes intention recognition and interaction control. The invention has a simple structure, high precision, and high cost performance, and the interaction method can be applied to various occasions and applications with high flexibility.
The invention specifically adopts the following technical scheme:
A dimension-increasing gesture recognition and interaction system based on a two-dimensional laser radar, characterized by comprising: a laser radar swing module, a three-dimensional point cloud reconstruction module, an imaging module, a gesture recognition and feature extraction module, a multi-frame comparison module, and an interaction control module;
the laser radar swing module controls the laser radar to perform a reciprocating swing of fixed amplitude in a swing plane that forms an included angle with the rotation plane;
the three-dimensional point cloud reconstruction module takes the scan data of the laser radar as input to perform three-dimensional point cloud reconstruction and comprises: a period calculation module, a third-dimensional interpolation module, a coordinate conversion module, and a region identification module. The period calculation module calibrates and records the start of each swing period of the laser radar; the third-dimensional interpolation module estimates the swing angle of each two-dimensional point to obtain the third-dimensional component; the coordinate conversion module converts polar coordinates into rectangular coordinates; the region identification module identifies, among all scanned points, the points inside the gesture recognition region, which are used for gesture recognition;
the imaging module uses the information carried by the three-dimensional point cloud to image the points that have received the third-dimensional interpolation and been selected by the region identification module; it retains the gesture information, including the position and form of the gesture, ensuring that the subsequent gesture recognition and feature extraction module obtains accurate information.
The gesture recognition and feature extraction module recognizes gestures through a neural network trained by machine learning on the imaged point-cloud pictures, and extracts the position and angle features of the gestures;
the multi-frame comparison module compares successive frames to recognize continuous actions, or actions composed of several gestures, from the gestures identified by the gesture recognition and feature extraction module;
and the interaction control module converts the results of the gesture recognition and feature extraction module and/or the multi-frame comparison module into the corresponding intentions and completes interaction control according to a preset interaction mode. It can be combined with matching software and hardware to complete the expected interactive operations.
Preferably, the rotation plane and the swing plane are perpendicular to each other.
Preferably, the third-dimensional interpolation module interpolates the swing angle θ as θ = θmax·sin(2πn/N) (a sinusoidal form following the constant-speed crank drive), where N is the total number of points scanned in one period, n is the index of the current scanning point within the period, and θmax is the maximum swing angle.
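As an illustration, the interpolation can be written as a one-line function. The published formula appears as an image in the original document, so the sinusoidal form below is an assumption based on the constant-speed crank drive; the function name and signature are illustrative:

```python
import math

def swing_angle(n: int, N: int, theta_max: float) -> float:
    """Estimate the swing angle of the n-th scan point in a period.

    Assumption: sinusoidal motion consistent with the constant-speed
    crank drive, with theta = 0 at the period start and peak theta_max.
    """
    return theta_max * math.sin(2.0 * math.pi * n / N)
```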
Preferably, the gesture recognition region is a cuboid region. It may be mapped to a screen or another field of view, so that gestures interact with the screen through the gesture position.
Preferably, the imaging module divides the cuboid recognition region into a number of levels from the bottom height to the top height, corresponding to a number of gray values; the height of a point is mapped to one of these gray values according to its ratio between the bottom and top heights, and a pixel of the gray value converted from z is placed at the point's (x, y) position to form a gray map.
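A minimal sketch of this height-to-gray quantization, assuming the 256 levels of the embodiment; the variable names (`z_bottom` for the bottom height z′, `h3` for the region height) are illustrative:

```python
def height_to_gray(z: float, z_bottom: float, h3: float, levels: int = 256) -> int:
    """Map a point's height z within the cuboid recognition region
    (bottom z_bottom, height h3) to one of `levels` gray values."""
    g = (z - z_bottom) / h3          # normalized gray value G in [0, 1]
    g = min(max(g, 0.0), 1.0)        # clamp points at the region boundary
    return round(g * (levels - 1))   # discrete gray level 0..255
```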
Preferably, the interaction control module defines a confirmation operation surface within the cuboid region; that is, for some operations, the operation is confirmed only after the hand passes through this surface. The control module sets up an interaction language library according to the content to be controlled; the interaction language is the interaction convention recognized and used by both the interaction control module and the object to be interacted with, through which the corresponding functions are realized. If a gesture input exists in the interaction language library, the corresponding interaction control is carried out by the interaction control module.
Preferably, the laser radar swing module only provides the swing, not data of the swing angle, and the device comprises: a laser radar fixture, a support, a sliding sleeve, and a turntable. The laser radar fixture is mounted on the support through a first hinge; the rotation plane of the turntable is perpendicular to the support; the sleeve end of the sliding sleeve is eccentrically connected to the turntable, and its connecting-rod end is connected to the edge of the laser radar fixture through a second hinge.
Through this specially designed structure, the distance of part of the scanned points jumps abruptly during a certain phase of the swing period. The period calculation module obtains the period by analyzing these periodic discontinuities in distance.
Preferably, the period calculation module calibrates the start of each swing period of the laser radar based on a first opening of the laser radar fixture in the scanning direction of the laser radar and a second opening at the corresponding position on the support.
Preferably, during the operation of the laser radar swing module and the laser radar, the working process of the three-dimensional point cloud reconstruction module comprises the following steps:
Step S1: the period calculation module analyzes the collected scan points, takes the maximum-jump scan points generated when the positions of the first opening and the second opening coincide as the reference, takes two adjacent maximum-jump points as one period, and records the total number of points scanned within the period;
Step S2: the third-dimensional interpolation module performs the interpolation θ = θmax·sin(2πn/N), where θ is the swing angle, N is the total number of points scanned in one period, n is the index of the current scanning point within the period, and θmax is the maximum swing angle;
Step S3: the coordinate conversion module converts the polar coordinates into rectangular coordinates using x = l·cosα·sinθ, y = l·sinα, z = l·cosα·cosθ;
Step S4: the region identification module identifies the points located inside the gesture recognition region.
Compared with the prior art, the invention and its preferred schemes have the following beneficial effects:
1. the scheme of the invention is built on a two-dimensional laser radar as the measuring sensor, which is inexpensive while providing high data precision and reliable measurements;
2. the invention does not require the swing module to provide the third-dimensional parameter, i.e., the swing angle, and relies entirely on the data of the stand-alone two-dimensional laser radar for three-dimensional scanning; the swing attachment can be a simple mechanical structure without additional angle-measuring and data-transmission parts, which simplifies the system hardware and reduces cost;
3. the invention not only performs gesture recognition but also obtains gesture position and angle information, so that the interaction can be more refined;
4. the interactive method can be applied to various occasions and applications, and has high flexibility.
Drawings
The invention is described in further detail below with reference to the following figures and detailed description:
FIG. 1 is a block diagram of a system architecture according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a lidar swing module according to an embodiment of the invention;
FIG. 3 is a schematic view of an angular representation and rotational orientation of an embodiment of the present invention;
FIG. 4 is a schematic diagram of an interaction area for gesture recognition according to an embodiment of the present invention;
in the figure: 1-first hinge; 2-second hinge; 3-sliding sleeve; 4-turntable; 5-laser radar; 6-opening used for period calculation (second opening); 7-laser radar fixture; 8-opening in the laser radar fixture (first opening); 9-laser radar rotation mode; 10-laser radar swing mode; 11-origin of coordinates; 12-confirmation operation surface; 13-recognition region.
Detailed Description
In order to make the features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in detail as follows:
As shown in fig. 1, the system for the dimension-increasing gesture recognition and interaction method based on the two-dimensional laser radar provided in this embodiment comprises: a laser radar swing module, a three-dimensional point cloud reconstruction module, an imaging module, a gesture recognition and feature extraction module, a multi-frame comparison module, and an interaction control module, wherein:
As shown in fig. 2 for the laser radar swing module, the laser radar is attached to the laser radar fixture 7, which has a light-transmitting area; when the opening 8 in the fixture 7 coincides with the opening 6 used for period calculation, the swing angle of the laser radar is zero degrees, and the laser radar swings about the first hinge 1. The laser radar rotation mode is indicated by 9 and the swing mode by 10. The swing axis intersects the laser radar rotation axis perpendicularly, i.e., the rotation axis of the first hinge 1 passes through the laser radar rotation axis, which simplifies the coordinate calculation. The second hinge 2 picks off the component of the circular motion in one direction and transmits it to the laser radar fixture 7 and thus to the laser radar 5. The sliding sleeve 3 transmits the motion while accommodating the changing distance between the connection points. The turntable 4 rotates at a constant speed, providing the circular-motion component.
As a preferred mode, the laser radar 5 in this embodiment is a Slamtec RPLIDAR A2. The laser radar is connected through a male plug of XH2.54-5P specification and, via the adapter module supplied with the RPLIDAR A2, to a computer over a USB data cable; the scanned point information then enters the three-dimensional point cloud reconstruction module.
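For illustration, the scan stream can be read with the community `rplidar` Python package; this is an assumption, as the patent does not prescribe any software, and the serial port shown is platform-dependent. Each measurement is (quality, angle in degrees, distance in millimetres), supplying the rotation angle α and range l to the reconstruction module:

```python
from rplidar import RPLidar

lidar = RPLidar('/dev/ttyUSB0')   # port is an assumption; adjust per platform
try:
    for scan in lidar.iter_scans():
        # keep valid measurements as (alpha, l) pairs for reconstruction
        points = [(angle, dist) for _, angle, dist in scan if dist > 0]
        # hand `points` to the three-dimensional point cloud reconstruction module
except KeyboardInterrupt:
    pass
finally:
    lidar.stop()
    lidar.disconnect()
```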
The three-dimensional point cloud reconstruction module comprises: a period calculation module, a third-dimensional interpolation module, a coordinate conversion module, and a region identification module. It first receives and stores the scan data transmitted over the data line, and then performs the three-dimensional point cloud reconstruction. The specific steps are as follows:
Step 1: the period calculation module analyzes the stored scan points. Because of the particular structure of the laser radar fixture 7, when the laser radar scans past the opening 8 in the fixture there is a jump from a continuous distance value to a larger one, and the jump is largest exactly when the opening 8 in the fixture 7 coincides with the opening 6 used for period calculation. The period is obtained from two adjacent maximum-jump points, and the total number of points scanned within the period is recorded.
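A minimal sketch of this period calibration; the jump threshold `jump_factor` is an illustrative assumption, since the patent only states that the maximum jumps mark the period boundaries:

```python
def find_period_starts(distances, jump_factor=3.0):
    """Locate swing-period boundaries in the raw distance stream.

    A scan point is a candidate boundary where the distance jumps from a
    continuous value to a much larger one (the beam escaping through the
    openings); `jump_factor` is an illustrative threshold, not a value
    taken from the patent.
    """
    starts = []
    for i in range(1, len(distances)):
        if distances[i - 1] > 0 and distances[i] > jump_factor * distances[i - 1]:
            starts.append(i)
    return starts  # two adjacent entries delimit one period; N = their difference
```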
Step 2: the third-dimensional interpolation module interpolates using θ = θmax·sin(2πn/N), where N is the total number of points scanned in one period, n is the index of the current scanning point within the period, and θmax is the maximum swing angle, i.e., the included angle indicated by reference numeral 10 in fig. 2.
Step 3: the coordinate conversion module converts the polar coordinates into rectangular coordinates using x = l·cosα·sinθ, y = l·sinα, z = l·cosα·cosθ, where α is the rotation angle and l the measured distance.
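The conversion of step 3 as a self-contained function (angles assumed in radians):

```python
import math

def to_cartesian(l: float, alpha: float, theta: float):
    """Convert a measurement (range l, rotation angle alpha, swing angle
    theta) to rectangular coordinates per the patent:
    x = l*cos(alpha)*sin(theta), y = l*sin(alpha), z = l*cos(alpha)*cos(theta)."""
    x = l * math.cos(alpha) * math.sin(theta)
    y = l * math.sin(alpha)
    z = l * math.cos(alpha) * math.cos(theta)
    return x, y, z
```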
Step 4: the region identification module works as shown in fig. 4: the origin of coordinates 11 is the position of the laser radar, and its detectable region is a sector swept through 360 degrees about the vertex; however, gestures are recognized only inside the recognition region 13, and the gesture position is recognized once it passes beyond the confirmation operation surface 12.
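A sketch of the region filter; the cuboid bounds are free parameters of a deployment, not values fixed by the patent:

```python
def in_recognition_region(p, x_range, y_range, z_range):
    """Keep only points inside the cuboid recognition region (region 13).
    The bounds are illustrative parameters."""
    x, y, z = p
    return (x_range[0] <= x <= x_range[1]
            and y_range[0] <= y <= y_range[1]
            and z_range[0] <= z <= z_range[1])

# usage (bounds in metres, purely illustrative):
# cloud = [p for p in cloud if in_recognition_region(p, (0.0, 0.4), (-0.3, 0.3), (0.2, 0.5))]
```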
The imaging module converts the height z of each coordinate-converted scan point into one of 256 gray values and places a pixel of that gray value at the point's (x, y) position. The final image is laid out with the length h1 and width h2 shown in fig. 4, the gray value of each pixel being converted from the height z. The formula for the gray value conversion is G = (z − z′)/h3, where G is the gray value, with magnitude 0 to 1; h3 is the height of the cuboid recognition region shown in fig. 4; and z′ is the bottom height of the cuboid recognition region.
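A sketch of the gray-map formation, assuming a NumPy raster whose pixel size and origin are illustrative parameters; it applies G = (z − z′)/h3 per point, and the rule of keeping the brightest value where points overlap is an assumption:

```python
import numpy as np

def rasterize(points, x0, y0, w, h, px_w, px_h, z_bottom, h3):
    """Form the gray map of one period: a px_h-by-px_w image over the
    h1-by-h2 footprint of the recognition region (origin (x0, y0),
    extents w, h). Image dimensions are illustrative assumptions."""
    img = np.zeros((px_h, px_w), dtype=np.uint8)
    for x, y, z in points:
        col = int((x - x0) / w * (px_w - 1))
        row = int((y - y0) / h * (px_h - 1))
        if 0 <= row < px_h and 0 <= col < px_w:
            g = min(max((z - z_bottom) / h3, 0.0), 1.0)   # G = (z - z')/h3
            img[row, col] = max(int(img[row, col]), round(g * 255))
    return img
```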
The gesture recognition and feature extraction module analyzes the one-period point cloud imaged by the imaging module. A highly accurate gesture recognition network is trained by machine learning on imaged point-cloud pictures labeled with the gesture they show. The network does not directly output which gesture it is, but the probability of each gesture; if no probability exceeds a set value, the input is treated as an unrecognized gesture and is not processed.
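A minimal sketch of this thresholded classification; the 0.8 threshold is an illustrative assumption, as the patent only speaks of a set value:

```python
def classify(probs: dict, threshold: float = 0.8):
    """Pick the gesture whose network probability exceeds the set value;
    otherwise report 'other' so the frame is not processed."""
    gesture, p = max(probs.items(), key=lambda kv: kv[1])
    return gesture if p >= threshold else "other"

# usage: classify({"fist": 0.1, "palm_open": 0.85, "finger_point": 0.05}) -> "palm_open"
```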
As a preferred mode, the labeled gestures may include the relatively stationary fist, open palm, closed palm, and single-finger pointing, as well as the relative-motion gestures of sliding and poking.
As a preferred mode, the information mainly extracted by feature extraction is the gesture position: the gray level of the gray map carries the up-down information of the gesture, while the position of the gesture in the gray map carries its left-right and front-back information. The changes of gray level and position also carry angle information of the gesture, which can be extracted if needed.
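One illustrative way to recover the position from the gray map is a centroid over the non-zero pixels, inverting G = (z − z′)/h3 for the height; the centroid estimator is an assumption, not a method fixed by the patent:

```python
import numpy as np

def gesture_position(img: np.ndarray, z_bottom: float, h3: float):
    """Recover the gesture position from the gray map: the pixel location
    gives left-right/front-back, the gray level gives the height."""
    rows, cols = np.nonzero(img)
    if rows.size == 0:
        return None
    r, c = rows.mean(), cols.mean()                       # image-plane centroid
    z = z_bottom + img[rows, cols].mean() / 255.0 * h3    # invert G = (z - z')/h3
    return c, r, z   # (left-right, front-back, height)
```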
As a preferred mode, the gestures may be labeled manually to improve the accuracy of the labeling.
The multi-frame comparison module adopts a store-and-compare scheme: certain operations are first defined as compositions of the above gestures, and when the multi-frame comparison module finds such a combination it sends the information to the interaction control module. It mainly applies to relative-motion gestures such as sliding and poking.
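A store-and-compare sketch; the buffer depth and the example gesture combinations are illustrative assumptions:

```python
from collections import deque

class MultiFrameComparator:
    """Buffer recognized gestures frame by frame and match the most recent
    frames against predefined combinations; matches are forwarded to the
    interaction control module."""
    COMBOS = {
        ("palm_open", "palm_closed"): "grab",                      # illustrative
        ("finger_point", "finger_point", "finger_point"): "dwell_click",
    }

    def __init__(self, depth: int = 8):
        self.frames = deque(maxlen=depth)

    def push(self, gesture: str):
        self.frames.append(gesture)
        recent = tuple(self.frames)
        for combo, action in self.COMBOS.items():
            if recent[-len(combo):] == combo:
                return action   # send to the interaction control module
        return None
```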
The interaction control module receives the outputs of the gesture recognition and feature extraction module and the multi-frame comparison module. The output of the imaging module may also be used for gesture display if gestures are to be shown. Its main functions are intention recognition and interaction control.
As a preferred mode, the interaction control module sets up an interaction language library according to the content to be controlled; the interaction language is the interaction convention recognized and used by both the interaction control module and the object to be interacted with, through which the corresponding functions are realized. If a gesture input exists in the interaction language library, the corresponding interaction control is carried out by the interaction control module.
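A minimal sketch of an interaction language library as a lookup table; the gesture names, mapped actions, and the `dispatch` hook are all illustrative:

```python
# Gesture inputs present in the library trigger the mapped control.
LANGUAGE_LIBRARY = {
    "finger_point": "move_cursor",
    "fist":         "drag",
    "grab":         "select",
}

def interact(gesture_input: str):
    action = LANGUAGE_LIBRARY.get(gesture_input)
    if action is not None:          # unknown gestures are ignored
        dispatch(action)

def dispatch(action: str):
    # hypothetical hook into the controlled software or hardware
    print("interaction control:", action)
```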
As a preferred mode, a confirmation operation surface, shown as reference numeral 12 in fig. 4, is provided in the cuboid recognition region; that is, an operation is confirmed only after the hand passes through this surface. In particular, for operations such as clicking, the position is adjusted before the confirmation operation surface and the operation is confirmed after passing through it. The interaction control module confirms the operation from information such as the position of the gesture.
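A sketch of the confirmation test, assuming confirmation on a downward crossing of the surface between two consecutive frames (the crossing direction is an illustrative assumption):

```python
def confirmed(z_now: float, z_prev: float, z_surface: float) -> bool:
    """An operation is confirmed only when the hand crosses the
    confirmation operation surface (plane 12): here, when the tracked
    height passes z_surface between two frames."""
    return z_prev > z_surface >= z_now
```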
The present invention is not limited to the above preferred embodiments; anyone skilled in the art can derive various other forms of dimension-increasing gesture recognition and interaction systems and methods based on a two-dimensional laser radar from the teachings of the present invention.
Claims (9)
1. A dimension-increasing gesture recognition and interaction system based on a two-dimensional laser radar, characterized by comprising: a laser radar swing module, a three-dimensional point cloud reconstruction module, an imaging module, a gesture recognition and feature extraction module, a multi-frame comparison module, and an interaction control module;
the laser radar swing module controls the laser radar to perform a reciprocating swing of fixed amplitude in a swing plane that forms an included angle with the rotation plane;
the three-dimensional point cloud reconstruction module takes scanning data of a laser radar as input to carry out three-dimensional point cloud reconstruction, and comprises the following steps: the system comprises a period calculation module, a third-dimensional interpolation module, a coordinate conversion module and a region identification module; the period calculation module is used for calibrating and recording the start of each swing period of the laser radar; the third-dimensional interpolation module is used for estimating the swing angle of each two-dimensional point to obtain a third-dimensional component; the coordinate conversion module is used for converting the polar coordinate into a rectangular coordinate; the region identification module is used for identifying points in the gesture identification region from all the scanning points;
the imaging module uses the information carried by the three-dimensional point cloud to carry out imaging on the point cloud which is subjected to the third-dimensional interpolation and selected by the area identification module, and retains the information of the gesture, including the position and the form of the gesture;
the gesture recognition and feature extraction module recognizes gestures through a neural network which is formed by a graphical point cloud picture through machine learning training, and extracts the positions of the gestures and the angle features of the gestures;
the multi-frame comparison module compares continuous actions or actions formed by a plurality of gestures according to the gestures identified by the gesture identification and feature extraction module;
and the interaction control module converts the results of the gesture recognition and feature extraction module and/or the multi-frame comparison module into corresponding intentions and completes interaction control according to a preset interaction mode.
2. The two-dimensional lidar based dimension-increasing gesture recognition and interaction system of claim 1, wherein: the rotation plane and the swing plane are perpendicular to each other.
3. The two-dimensional lidar based dimension-increasing gesture recognition and interaction system of claim 1, wherein:
the third dimension interpolation module inserts the value theta of the swing angle, and the formula is as follows:
wherein N is the total number of points scanned in one period, and N isThe current scanning point is the second scanning point in the whole period, thetamaxIs the maximum swing angle.
4. The two-dimensional lidar based dimension-increasing gesture recognition and interaction system of claim 1, wherein: the gesture recognition area is a cuboid area.
5. The two-dimensional lidar based dimension-increasing gesture recognition and interaction system of claim 4, wherein: the imaging module divides the cuboid recognition region into a number of levels from the bottom height to the top height; the height of a point is mapped to one of the corresponding gray values according to its ratio between the bottom and top heights, and a pixel of the gray value converted from z is placed at the point's (x, y) position to form a gray map.
6. The two-dimensional lidar based dimension-increasing gesture recognition and interaction system of claim 4, wherein: the interaction control module defines a confirmation operation surface within the cuboid region.
7. The two-dimensional lidar based dimension-increasing gesture recognition and interaction system of claim 2, wherein: the laser radar swing module comprises: a laser radar fixture, a support, a sliding sleeve, and a turntable; the laser radar fixture is mounted on the support through a first hinge; the rotation plane of the turntable is perpendicular to the support; the sleeve end of the sliding sleeve is eccentrically connected to the turntable, and the connecting-rod end is connected to the edge of the laser radar fixture through a second hinge.
8. The two-dimensional lidar based dimension-increasing gesture recognition and interaction system of claim 7, wherein: the period calculation module calibrates the start of each swing period of the laser radar based on a first opening of the laser radar fixture in the scanning direction of the laser radar and a second opening provided at the corresponding position on the support.
9. An interaction method using the dimension-increasing gesture recognition and interaction system based on a two-dimensional laser radar according to claim 8, wherein, during the operation of the laser radar swing module and the laser radar, the working process of the three-dimensional point cloud reconstruction module comprises the following steps:
Step S1: the period calculation module analyzes the collected scan points, takes the maximum-jump scan points generated when the positions of the first opening and the second opening coincide as the reference, takes two adjacent maximum-jump points as one period, and records the total number of points scanned within the period;
Step S2: the third-dimensional interpolation module performs the interpolation θ = θmax·sin(2πn/N), where θ is the swing angle, N is the total number of points scanned in one period, n is the index of the current scanning point within the period, and θmax is the maximum swing angle;
Step S3: the coordinate conversion module converts the polar coordinates formed by the swing angle θ of the laser radar, the rotation angle α of the laser radar, and the ranging distance l of the laser radar using the coordinate conversion formula x = l·cosα·sinθ, y = l·sinα, z = l·cosα·cosθ, thereby converting the polar coordinate system into a rectangular coordinate system;
Step S4: the region identification module identifies the points located inside the gesture recognition region.
Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202010271685.9A (granted as CN111488823B) | 2020-04-09 | 2020-04-09 | Dimension-increasing gesture recognition and interaction system and method based on two-dimensional laser radar
Publications (2)

Publication Number | Publication Date
---|---
CN111488823A | 2020-08-04
CN111488823B | 2022-07-08
Family

ID=71798257

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN202010271685.9A (Active, granted as CN111488823B) | Dimension-increasing gesture recognition and interaction system and method based on two-dimensional laser radar | 2020-04-09 | 2020-04-09

Country Status (1)

Country | Link
---|---
CN | CN111488823B
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150234463A1 (en) * | 2013-03-11 | 2015-08-20 | Magic Leap, Inc. | Systems and methods for a plurality of users to interact with each other in augmented or virtual reality systems |
CN103455144A (en) * | 2013-08-22 | 2013-12-18 | 深圳先进技术研究院 | Vehicle-mounted man-machine interaction system and method |
CN104808192A (en) * | 2015-04-15 | 2015-07-29 | 中国矿业大学 | Three-dimensional laser scanning swing device and coordinate conversion method thereof |
CN106199626A (en) * | 2016-06-30 | 2016-12-07 | 上海交通大学 | Based on the indoor three-dimensional point cloud map generation system and the method that swing laser radar |
CN108535736A (en) * | 2017-03-05 | 2018-09-14 | 苏州中德睿博智能科技有限公司 | Three dimensional point cloud acquisition methods and acquisition system |
CN108361780A (en) * | 2018-01-25 | 2018-08-03 | 宁波隔空智能科技有限公司 | Cooker hood controller based on microwave radar Gesture Recognition and control method |
CN108873715A (en) * | 2018-07-04 | 2018-11-23 | 深圳众厉电力科技有限公司 | Intelligent home control system based on gesture identification |
CN110784253A (en) * | 2018-07-31 | 2020-02-11 | 深圳市白麓嵩天科技有限责任公司 | Information interaction method based on gesture recognition and Beidou satellite |
Non-Patent Citations (1)

Title
---
ZHOU Xufeng et al. (周旭峰等): "EMG signal gesture recognition based on combined RNN network" (基于组合RNN网络的EMG信号手势识别), Optics and Precision Engineering (《光学精密工程》)
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112363156A (en) * | 2020-11-12 | 2021-02-12 | 苏州矽典微智能科技有限公司 | Air gesture recognition method and device and intelligent equipment |
CN112241204A (en) * | 2020-12-17 | 2021-01-19 | 宁波均联智行科技有限公司 | Gesture interaction method and system of vehicle-mounted AR-HUD |
CN112904999A (en) * | 2020-12-30 | 2021-06-04 | 江苏奥格视特信息科技有限公司 | Augmented reality somatosensory interaction method and system based on laser radar |
CN114245542A (en) * | 2021-12-17 | 2022-03-25 | 深圳市恒佳盛电子有限公司 | Radar induction lamp and control method thereof |
CN114245542B (en) * | 2021-12-17 | 2024-03-22 | 深圳市恒佳盛电子有限公司 | Radar induction lamp and control method thereof |
Also Published As
Publication number | Publication date |
---|---|
CN111488823B (en) | 2022-07-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111488823B (en) | Dimension-increasing gesture recognition and interaction system and method based on two-dimensional laser radar | |
US12086935B2 (en) | Predictive information for free space gesture control and communication | |
Memo et al. | Head-mounted gesture controlled interface for human-computer interaction | |
US6204852B1 (en) | Video hand image three-dimensional computer interface | |
CN108776773B (en) | Three-dimensional gesture recognition method and interaction system based on depth image | |
US8994652B2 (en) | Model-based multi-hypothesis target tracker | |
US6147678A (en) | Video hand image-three-dimensional computer interface with multiple degrees of freedom | |
Leiva et al. | Gestures à go go: Authoring synthetic human-like stroke gestures using the kinematic theory of rapid movements | |
US20130120250A1 (en) | Gesture recognition system and method | |
CN109145802B (en) | Kinect-based multi-person gesture man-machine interaction method and device | |
JP2004280834A (en) | Motion recognition system using virtual writing plane, and recognition method thereof | |
WO2009155465A1 (en) | Gesture-based control system for vehicle interfaces | |
JP2008052377A (en) | Information output device | |
CN113689577A (en) | Method, system, device and medium for matching virtual three-dimensional model and entity model | |
Olsen et al. | A Taxonomy of Modeling Techniques using Sketch-Based Interfaces. | |
KR20160141023A (en) | The method of dynamic and static gesture recognition using depth camera and interface of immersive media contents | |
KR20190027287A (en) | The method of mimesis for keyboard and mouse function using finger movement and mouth shape | |
JP4308306B2 (en) | Print output control means | |
Bai et al. | Poster: Markerless fingertip-based 3D interaction for handheld augmented reality in a small workspace | |
Gao et al. | Challenges and solutions for vision-based hand gesture interpretation: A review | |
CN110888536A (en) | Finger interaction recognition system based on MEMS laser scanning | |
CN116880687B (en) | Suspension touch method based on monocular multi-algorithm | |
CN118230421B (en) | Multi-mode gesture recognition method and system based on deep learning | |
Zhang et al. | A Non-parametric RDP Algorithm Based on Leap Motion | |
Leiva et al. | Stroke gesture synthesis in human-computer interaction |
Legal Events
Code | Title
---|---
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant