CN102221891A - Method and system for realizing optical image gesture recognition - Google Patents
Method and system for realizing optical image gesture recognition
- Publication number
- CN102221891A CN2011101951496A CN201110195149A
- Authority
- CN
- China
- Prior art keywords
- gesture
- user
- command
- instruction
- startup command
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a method and a system for realizing optical image gesture recognition. The method comprises the following steps: first, judging whether a user has input a start command; if so, executing the instruction matched with the start command input by the user, and starting a timer; second, judging, during execution of the instruction matched with the start command input by the user, whether the user has input a gesture command; if so, executing the instruction matched with the gesture command input by the user; otherwise, returning to the first step after the timer expires. The method and system for realizing optical image gesture recognition can effectively solve the problem of erroneous judgment that easily arises in prior-art optical image gesture recognition, and can prevent misjudgment. The invention also discloses a system for realizing optical image gesture recognition.
Description
Technical field
The present invention relates to the field of gesture recognition, and more specifically to a method and system for realizing optical image gesture recognition.
Background art
In gesture recognition technology, each gesture corresponds to an instruction. In prior-art optical image gesture recognition systems, some unconscious actions of the user closely resemble gesture commands, so the system wrongly assumes that the user has input a gesture command and produces an erroneous response. Likewise, while the user is inputting a gesture command, the actions of bystanders can also cause the system to misjudge.
It can be seen that, because the prior-art optical image gesture recognition system has no mechanism for distinguishing whether a gesture command is valid, it cannot tell whether the user is inputting a gesture command or merely performing an unconscious action, and therefore produces erroneous responses. Such unnecessary responses not only increase the user's operating burden but also waste the user's time and reduce operating efficiency.
Summary of the invention
The object of the present invention is to provide a method and system for realizing optical image gesture recognition, so as to solve the problem of erroneous judgment that easily arises in prior-art optical image gesture recognition and to prevent misjudgment.
To achieve this object, the invention provides a method for realizing optical image gesture recognition, comprising: Step 1: judging whether the user has input a start command; if so, executing the instruction corresponding to the start command input by the user, and starting a timer; Step 2: during execution of the instruction corresponding to the start command input by the user, judging whether the user has input a gesture command; if so, executing the instruction corresponding to the gesture command input by the user; otherwise, returning to Step 1 after the timer expires.
Preferably, Step 1 specifically comprises: receiving the gesture action with which the user inputs the start command;
judging, according to a first correspondence table between start commands and instructions, whether the first correspondence table contains a gesture corresponding to the gesture action with which the user input the start command; and, when the first correspondence table contains such a gesture, executing the instruction corresponding to the start command input by the user and starting the timer.
Preferably, Step 2 specifically comprises: during execution of the instruction corresponding to the start command input by the user, receiving the gesture action with which the user inputs a gesture command, and judging, according to a second correspondence table between gesture commands and instructions, whether the second correspondence table contains a gesture corresponding to that gesture action; when the second correspondence table contains such a gesture, executing the instruction corresponding to the gesture command input by the user; otherwise, returning to Step 1 after the timer expires.
Preferably, during execution of the instruction corresponding to the start command input by the user, or during execution of the instruction corresponding to the gesture command input by the user, interface feedback corresponding to the current state is displayed to the user.
Preferably, both the start command and the gesture command are obtained by receiving the user's actions through a camera.
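For illustration only, Steps 1 and 2 can be viewed as a minimal two-state machine: the system waits until a start command is recognized, accepts gesture commands only while the timer is running, and returns to the waiting state when the timer expires. The sketch below is not part of the patent; the state, event and function names are invented for this example.

```python
from enum import Enum, auto


class State(Enum):
    WAIT_START = auto()       # Step 1: only a start command is accepted
    GESTURE_WINDOW = auto()   # Step 2: gesture commands accepted until the timer expires


def next_state(state, event):
    """Return the next state for an input event.

    Hypothetical events: "start_command", "gesture_command", "timeout";
    anything else (e.g. an unconscious action) leaves the state unchanged.
    """
    if state is State.WAIT_START and event == "start_command":
        return State.GESTURE_WINDOW       # execute the matched instruction, start the timer
    if state is State.GESTURE_WINDOW:
        if event == "gesture_command":
            return State.GESTURE_WINDOW   # execute the matched instruction
        if event == "timeout":
            return State.WAIT_START       # return to Step 1
    return state                          # unrecognized input changes nothing
```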
The present invention also provides a system for realizing optical image gesture recognition, comprising an execution unit, a first judging unit and a second judging unit. The execution unit is configured to execute the instruction corresponding to the start command input by the user and to start a timer. The first judging unit is configured to judge whether the user has input a start command; if so, the execution unit executes the instruction corresponding to the start command input by the user and starts the timer. The second judging unit is configured to judge, while the execution unit executes the instruction corresponding to the start command input by the user, whether the user has input a gesture command; if so, the execution unit executes the instruction corresponding to the gesture command input by the user; otherwise, after the timer expires, control returns to the first judging unit for a new judgment.
Preferably, the system further comprises a receiving unit, configured to receive the gesture action with which the user inputs the start command, or, while the execution unit executes the instruction corresponding to the start command input by the user, to receive the gesture action with which the user inputs a gesture command.
Preferably, the receiving unit is a camera.
Preferably, the system further comprises a first inspection unit configured to check the first correspondence table between start commands and instructions. Based on the check performed by the first inspection unit, the first judging unit judges whether the first correspondence table contains a gesture corresponding to the gesture action with which the user input the start command; when it does, the execution unit executes the instruction corresponding to the start command input by the user and starts the timer.
Preferably, the system further comprises a second inspection unit configured to check the second correspondence table between gesture commands and instructions. Based on the check performed by the second inspection unit, the second judging unit judges whether the second correspondence table contains a gesture corresponding to the gesture action with which the user input the gesture command; when it does, the execution unit executes the instruction corresponding to the gesture command input by the user; otherwise, after the timer expires, control returns to the first judging unit for a new judgment.
Compared with the prior art, the method and system for realizing optical image gesture recognition of the present invention use a new flow for gesture recognition: a start command is judged first, and only when the gesture action input by the user matches the predefined gesture serving as the start command is the instruction corresponding to the start command executed and the timer started. Then, during execution of that instruction, it is judged whether the user has input a gesture command; if so, the instruction corresponding to the gesture command is executed; otherwise, after the timer expires, the flow returns to the stage of judging whether the user has input a start command. In other words, because gesture commands are recognized only after a start command has been input, unconscious actions of the user can be effectively excluded and misjudgment prevented.
The present invention will become clearer from the following description taken in conjunction with the accompanying drawings, which are used to explain embodiments of the invention.
Description of drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of the method for realizing optical image gesture recognition according to the present invention.
Fig. 2 is a flowchart of a specific embodiment of the method for realizing optical image gesture recognition according to the present invention.
Fig. 3 is a structural diagram of the system for realizing optical image gesture recognition according to the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. Embodiments of the invention are now described with reference to the drawings, in which like reference numerals denote like elements.
Referring to Fig. 1, the method for realizing optical image gesture recognition according to the present invention comprises:
Step 101: judging whether the user has input a start command; if so, executing the instruction corresponding to the start command input by the user, and starting a timer;
Step 102: during execution of the instruction corresponding to the start command input by the user, judging whether the user has input a gesture command; if so, executing the instruction corresponding to the gesture command input by the user; otherwise, returning to Step 101 after the timer expires.
Specifically, each step shown in Fig. 1 is further described below with reference to specific embodiments.
In gesture recognition technology, a correspondence table between gestures and instructions is required, so that the system can look it up according to the user's gesture and perform the corresponding operation according to the instruction that corresponds to that gesture. In an embodiment of the invention, the predefined gesture serving as the "start command" may be formed by any combination of the following gesture actions: A. the user faces the device directly; B. the user's eyes gaze at the device; C. the user's palm points toward the device. A first correspondence table between gestures A, B, C and instructions is thus established; of course, the predefined gestures of the first correspondence table are not limited thereto.
Likewise, a second correspondence table between gesture commands and instructions is established, in which commonly used gesture actions correspond to instructions, so that the system can look up the second correspondence table according to the user's gesture, recognize the gesture, and perform the corresponding operation according to the instruction that corresponds to it.
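As a rough illustration of how the two correspondence tables could be represented (the gesture and instruction names below are placeholders, not taken from the patent), the first table maps combinations of the predefined sub-gestures A, B and C to an instruction, and the second table maps commonly used gesture commands to instructions:

```python
# Predefined sub-gestures that may be combined to form the start command.
SUB_GESTURE_A = "face_toward_device"       # A. user faces the device
SUB_GESTURE_B = "eyes_gazing_at_device"    # B. user's eyes gaze at the device
SUB_GESTURE_C = "palm_pointing_at_device"  # C. user's palm points toward the device

# First correspondence table: start-command gesture (a combination) -> instruction.
FIRST_TABLE = {
    frozenset({SUB_GESTURE_A, SUB_GESTURE_C}): "enter_gesture_mode",
    frozenset({SUB_GESTURE_A, SUB_GESTURE_B, SUB_GESTURE_C}): "enter_gesture_mode",
}

# Second correspondence table: commonly used gesture command -> instruction.
SECOND_TABLE = {
    "swipe_left": "previous_page",
    "swipe_right": "next_page",
    "open_palm": "pause_playback",
}


def lookup_start(observed_sub_gestures):
    """Return the matched instruction, or None if the combination is not in the first table."""
    return FIRST_TABLE.get(frozenset(observed_sub_gestures))
```

For example, lookup_start({SUB_GESTURE_A, SUB_GESTURE_C}) would return "enter_gesture_mode", while an unlisted combination returns None and is simply ignored.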
Both the start command and the gesture command are obtained by receiving the user's actions through a camera mounted on the device.
Referring to Fig. 2, a specific embodiment of the method for realizing optical image gesture recognition according to the present invention may comprise the following steps:
Step 201: start;
Step 202: receiving the gesture action with which the user inputs the start command;
Step 203: checking the first correspondence table between start commands and instructions;
Step 204: judging whether the gesture action with which the user currently inputs the start command matches a start-command gesture in the first correspondence table; if so, proceeding to Step 205; otherwise, returning to Step 201;
Step 205: executing the instruction corresponding to the start command input by the user, starting a timer, and simultaneously displaying to the user the interface feedback corresponding to the current state;
Step 206: during Step 205, judging whether the system has received a gesture command input by the user; if so, proceeding to Step 208; otherwise, proceeding to Step 207;
Step 207: if the timer has expired, returning to Step 201; otherwise, returning to Step 206;
Step 208: checking the second correspondence table between gesture commands and instructions;
Step 209: if the second correspondence table contains a gesture corresponding to the gesture action with which the user input the gesture command, proceeding to Step 210; otherwise, returning to Step 206;
Step 210: executing the instruction corresponding to the gesture command input by the user, simultaneously displaying to the user the interface feedback corresponding to the current state, and restarting the timer;
Step 211: if the timer has expired, returning to Step 201; otherwise, returning to Step 206.
Steps 201 to 211 describe in detail the case where the user makes gestures in front of a camera. It should be noted that, in addition to a camera, the method provided by the invention can also perform gesture recognition through other sensors such as a touch screen; in that case, the steps performed are similar to Steps 201 to 211, the difference being that the user inputs gestures on the touch screen rather than making gestures in front of the camera.
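The following sketch is offered only as an illustration, not as the patented implementation: it walks through Steps 201 to 211 with hypothetical callables (receive_gesture returning a recognized gesture or None, execute, show_feedback) and an assumed timer length.

```python
import time

TIMEOUT_SECONDS = 5.0  # assumed length of the timing window; the patent does not specify one


def gesture_loop(receive_gesture, execute, show_feedback, first_table, second_table):
    while True:                                    # Step 201: begin
        gesture = receive_gesture()                # Step 202: gesture for the start command
        if gesture not in first_table:             # Steps 203-204: check the first table
            continue                               # no match: return to Step 201
        execute(first_table[gesture])              # Step 205: run the matched instruction,
        show_feedback("start command accepted")    # show interface feedback,
        deadline = time.monotonic() + TIMEOUT_SECONDS  # and start the timer

        while time.monotonic() < deadline:         # Steps 206-207: wait within the window
            gesture = receive_gesture()
            if gesture is None:                    # nothing received yet:
                continue                           # keep waiting until the timer expires
            if gesture not in second_table:        # Steps 208-209: check the second table
                continue                           # not a known gesture command: back to Step 206
            execute(second_table[gesture])         # Step 210: run the matched instruction,
            show_feedback("gesture command executed")
            deadline = time.monotonic() + TIMEOUT_SECONDS  # and restart the timer
        # Step 211: timer expired, fall back to Step 201
```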
Referring to Fig. 3, the system for realizing optical image gesture recognition according to the present invention comprises an execution unit 301, a first judging unit 302 and a second judging unit 303, whose functions are as described above. In combination with the above method Steps 201 to 211, the system shown in Fig. 3 may further comprise:
a receiving unit 304, configured to receive the gesture action with which the user inputs the start command, or, while the execution unit 301 executes the instruction corresponding to the start command input by the user, to receive the gesture action with which the user inputs a gesture command; the receiving unit 304 is a camera.
The operations performed by each of the above units are described in detail below with reference to the above steps. When the user makes a gesture action serving as a start command in front of the camera, the receiving unit 304 (a camera) receives the gesture action with which the user inputs the start command; at the same time, the first inspection unit 305 checks the first correspondence table between start commands and instructions according to the gesture action currently received by the receiving unit 304. Based on this check, the first judging unit 302 judges whether the first correspondence table contains a gesture corresponding to the gesture action with which the user input the start command; if so, it notifies the execution unit 301 to execute the instruction corresponding to the start command input by the user and to start the timer. While the execution unit 301 executes the instruction corresponding to the start command input by the user, when the user makes a gesture action serving as a gesture command in front of the camera, the receiving unit 304 (a camera) receives the gesture action with which the user inputs the gesture command; at the same time, the second inspection unit 306 checks the second correspondence table between gesture commands and instructions according to the gesture action currently received by the receiving unit 304. Based on this check, the second judging unit 303 judges whether the second correspondence table contains a gesture corresponding to the gesture action with which the user input the gesture command; if so, it notifies the execution unit 301 to execute the instruction corresponding to the gesture command input by the user; otherwise, after the timer expires, control returns to the first judging unit 302 for a new judgment.
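Purely as a sketch of how the units of Fig. 3 might be modelled in software (the class and method names are invented for this example and are not defined in the patent), the receiving, inspection, execution and judging units can be represented as cooperating objects:

```python
import time


class ReceivingUnit:                      # unit 304, e.g. a camera
    def __init__(self, sensor):
        self.sensor = sensor              # callable returning a recognized gesture or None

    def receive(self):
        return self.sensor()


class InspectionUnit:                     # units 305 / 306: check one correspondence table
    def __init__(self, table):
        self.table = table                # gesture -> instruction

    def lookup(self, gesture):
        return self.table.get(gesture)


class ExecutionUnit:                      # unit 301: execute instructions and keep the timer
    def __init__(self, timeout=5.0):
        self.timeout = timeout
        self.deadline = 0.0

    def execute(self, instruction):
        print("executing:", instruction)  # stand-in for the real operation
        self.deadline = time.monotonic() + self.timeout   # (re)start the timer

    def timer_running(self):
        return time.monotonic() < self.deadline


class JudgingUnit:                        # units 302 / 303: decide based on an inspection
    def __init__(self, inspection, execution):
        self.inspection = inspection
        self.execution = execution

    def judge(self, gesture):
        instruction = self.inspection.lookup(gesture)
        if instruction is None:
            return False                  # gesture not in the table: ignore it
        self.execution.execute(instruction)
        return True
```

A controller would then feed gestures from the receiving unit to the first judging unit until a start command is accepted, and to the second judging unit while ExecutionUnit.timer_running() is true, mirroring the interaction described above.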
It will be appreciated that, in addition to a camera serving as the sensor of the receiving unit, the system provided by the invention can also perform gesture recognition through other sensors such as a touch screen; in that case, the working process is similar, the difference being that the user inputs gestures on the touch screen rather than making gestures in front of the camera.
In addition, because every gesture state is fed back on the interface in real time through the display unit, the user is given a natural and lifelike operating experience.
In summary, the method and system for realizing optical image gesture recognition of the present invention use a new flow for gesture recognition: a start command is judged first, and only when the gesture action input by the user matches the predefined gesture serving as the start command is the instruction corresponding to the start command executed and the timer started. Then, during execution of that instruction, it is judged whether the user has input a gesture command; if so, the instruction corresponding to the gesture command is executed; otherwise, after the timer expires, the flow returns to the stage of judging whether the user has input a start command. Because gesture commands are recognized only after a start command has been input, unconscious actions of the user can be effectively excluded and misjudgment prevented.
The invention has been described above in conjunction with the preferred embodiments, but it is not limited to the embodiments disclosed above and is intended to cover various modifications and equivalent combinations made according to the essence of the invention.
Claims (10)
1. A method for realizing optical image gesture recognition, characterized in that the method comprises the steps of:
Step 1: judging whether the user has input a start command; if so, executing the instruction corresponding to the start command input by the user, and starting a timer;
Step 2: during execution of the instruction corresponding to the start command input by the user, judging whether the user has input a gesture command; if so, executing the instruction corresponding to the gesture command input by the user; otherwise, returning to Step 1 after the timer expires.
2. The method for realizing optical image gesture recognition according to claim 1, characterized in that Step 1 specifically comprises:
receiving the gesture action with which the user inputs the start command;
judging, according to a first correspondence table between start commands and instructions, whether the first correspondence table contains a gesture corresponding to the gesture action with which the user input the start command; and
when the first correspondence table contains a gesture corresponding to the gesture action with which the user input the start command, executing the instruction corresponding to the start command input by the user, and starting the timer.
3. The method for realizing optical image gesture recognition according to claim 1, characterized in that Step 2 specifically comprises:
during execution of the instruction corresponding to the start command input by the user, receiving the gesture action with which the user inputs a gesture command; judging, according to a second correspondence table between gesture commands and instructions, whether the second correspondence table contains a gesture corresponding to the gesture action with which the user input the gesture command; and
when the second correspondence table contains a gesture corresponding to the gesture action with which the user input the gesture command, executing the instruction corresponding to the gesture command input by the user; otherwise, returning to Step 1 after the timer expires.
4. The method for realizing optical image gesture recognition according to claim 1, characterized in that, during execution of the instruction corresponding to the start command input by the user, or during execution of the instruction corresponding to the gesture command input by the user, interface feedback corresponding to the current state is displayed to the user.
5. The method for realizing optical image gesture recognition according to claim 1, characterized in that both the start command and the gesture command are obtained by receiving the user's actions through a camera.
6. A system for realizing optical image gesture recognition, characterized by comprising:
an execution unit, configured to execute the instruction corresponding to the start command input by the user, and to start a timer;
a first judging unit, configured to judge whether the user has input a start command; if so, the execution unit executes the instruction corresponding to the start command input by the user and starts the timer; and
a second judging unit, configured to judge, while the execution unit executes the instruction corresponding to the start command input by the user, whether the user has input a gesture command; if so, the execution unit executes the instruction corresponding to the gesture command input by the user; otherwise, after the timer expires, control returns to the first judging unit for a new judgment.
7. The system for realizing optical image gesture recognition according to claim 6, characterized in that the system further comprises:
a receiving unit, configured to receive the gesture action with which the user inputs the start command, or, while the execution unit executes the instruction corresponding to the start command input by the user, to receive the gesture action with which the user inputs a gesture command.
8. The system for realizing optical image gesture recognition according to claim 7, characterized in that the receiving unit is a camera.
9. The system for realizing optical image gesture recognition according to claim 6, characterized in that the system further comprises:
a first inspection unit, configured to check the first correspondence table between start commands and instructions, wherein, based on the check performed by the first inspection unit, the first judging unit judges whether the first correspondence table contains a gesture corresponding to the gesture action with which the user input the start command, and when it does, the execution unit executes the instruction corresponding to the start command input by the user and starts the timer.
10. The system for realizing optical image gesture recognition according to claim 6, characterized in that the system further comprises:
a second inspection unit, configured to check the second correspondence table between gesture commands and instructions, wherein, based on the check performed by the second inspection unit, the second judging unit judges whether the second correspondence table contains a gesture corresponding to the gesture action with which the user input the gesture command, and when it does, the execution unit executes the instruction corresponding to the gesture command input by the user; otherwise, after the timer expires, control returns to the first judging unit for a new judgment.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011101951496A CN102221891A (en) | 2011-07-13 | 2011-07-13 | Method and system for realizing optical image gesture recognition |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011101951496A CN102221891A (en) | 2011-07-13 | 2011-07-13 | Method and system for realizing optical image gesture recognition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN102221891A true CN102221891A (en) | 2011-10-19 |
Family
ID=44778456
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2011101951496A Pending CN102221891A (en) | 2011-07-13 | 2011-07-13 | Method and system for realizing optical image gesture recognition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN102221891A (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101957707A (en) * | 2009-07-13 | 2011-01-26 | 纬创资通股份有限公司 | Method and electronic device for multi-mode touch by utilizing multiple single-point touch instruction |
CN102117116A (en) * | 2009-12-30 | 2011-07-06 | 微盟电子(昆山)有限公司 | Moving object recognition method and instruction input method based on moving object recognition |
CN101853568A (en) * | 2010-04-13 | 2010-10-06 | 鸿富锦精密工业(深圳)有限公司 | Gesture remote control device |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103988142B (en) * | 2011-11-04 | 2018-06-26 | 托比公司 | Mancarried device |
CN103988142A (en) * | 2011-11-04 | 2014-08-13 | 托比伊科技公司 | Portable device |
US10409388B2 (en) | 2011-11-04 | 2019-09-10 | Tobii Ab | Portable device |
US10061393B2 (en) | 2011-11-04 | 2018-08-28 | Tobii Ab | Portable device |
US9772690B2 (en) | 2011-11-04 | 2017-09-26 | Tobii Ab | Portable device |
US10037086B2 (en) | 2011-11-04 | 2018-07-31 | Tobii Ab | Portable device |
CN103136508A (en) * | 2011-12-05 | 2013-06-05 | 联想(北京)有限公司 | Gesture identification method and electronic equipment |
CN103136508B (en) * | 2011-12-05 | 2018-03-13 | 联想(北京)有限公司 | Gesture identification method and electronic equipment |
CN104040464A (en) * | 2012-01-10 | 2014-09-10 | 戴姆勒股份公司 | Method and device for operating functions in a vehicle using gestures performed in three-dimensional space, and related computer program product |
CN103914126A (en) * | 2012-12-31 | 2014-07-09 | 腾讯科技(深圳)有限公司 | Multimedia player control method and device |
CN104182404A (en) * | 2013-05-22 | 2014-12-03 | 腾讯科技(深圳)有限公司 | Method and device for realizing shortcut operations of browser, browser and mobile terminal |
US11740703B2 (en) | 2013-05-31 | 2023-08-29 | Pixart Imaging Inc. | Apparatus having gesture sensor |
CN109343708A (en) * | 2013-06-13 | 2019-02-15 | 原相科技股份有限公司 | Device with gesture sensor |
CN103442177A (en) * | 2013-08-30 | 2013-12-11 | 程治永 | PTZ video camera control system and method based on gesture identification |
CN106030462A (en) * | 2014-02-17 | 2016-10-12 | 大众汽车有限公司 | User interface and method for switching from a first operating mode of a user interface to a 3D gesture mode |
WO2017035825A1 (en) * | 2015-09-05 | 2017-03-09 | 何兰 | Method and atm for prompting when displaying different information according to different hand gestures |
WO2017035824A1 (en) * | 2015-09-05 | 2017-03-09 | 何兰 | Method and atm for displaying different information according to different hand gestures |
CN107390881A (en) * | 2017-09-14 | 2017-11-24 | 西安领讯卓越信息技术有限公司 | A kind of gestural control method |
CN111813321A (en) * | 2020-08-12 | 2020-10-23 | Oppo广东移动通信有限公司 | Gesture control method and related device |
CN112328087A (en) * | 2020-11-19 | 2021-02-05 | 成都金都超星天文设备有限公司 | Method for controlling planetarium by gestures and control system |
CN113253847A (en) * | 2021-06-08 | 2021-08-13 | 北京字节跳动网络技术有限公司 | Terminal control method and device, terminal and storage medium |
CN113253847B (en) * | 2021-06-08 | 2024-04-30 | 北京字节跳动网络技术有限公司 | Terminal control method, device, terminal and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN102221891A (en) | Method and system for realizing optical image gesture recognition | |
US11249554B2 (en) | Method and apparatus for selecting between multiple gesture recognition systems | |
CN102937832B (en) | A kind of gesture method for catching of mobile terminal and device | |
US20190286320A1 (en) | Control method and apparatus for display screen | |
CN111787223B (en) | Video shooting method and device and electronic equipment | |
US9489951B2 (en) | Information processing system, information processing method, communication terminal, information processing apparatus, and control method and control program thereof | |
CN103124327A (en) | Method and apparatus for taking a self camera recording | |
CN105446607A (en) | Camera touch shooting method and touch terminal thereof | |
US9641743B2 (en) | System, method, and apparatus for controlling timer operations of a camera | |
CN104463119A (en) | Composite gesture recognition device based on ultrasound and vision and control method thereof | |
CN102830891A (en) | Non-contact gesture control equipment and locking and unlocking method thereof | |
CN104333793A (en) | Gesture remote control system | |
CN112788244B (en) | Shooting method, shooting device and electronic equipment | |
CN103345454A (en) | Peripheral device connection method of mobile terminal | |
CN112818825B (en) | Working state determining method and device | |
CN113780045B (en) | Method and apparatus for training distance prediction model | |
CN106407386B (en) | Method and device for improving topic searching efficiency | |
CN109726695A (en) | Optical finger print icon display method, electronic device and computer readable storage medium | |
CN104516566A (en) | Handwriting input method and device | |
CN113625867A (en) | Gesture control method, device, equipment and storage medium | |
CN113096193A (en) | Three-dimensional somatosensory operation identification method and device and electronic equipment | |
CN112905837A (en) | Video file processing method and device and electronic equipment | |
CN109213312B (en) | Electronic device and display control method thereof | |
CN104461174A (en) | Optical touch system and optical touch control method | |
CN103679124A (en) | Gesture recognition system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C53 | Correction of patent of invention or patent application | ||
CB02 | Change of applicant information |
Address after: No. 192 Kezhu Road, Science City, Guangzhou, Guangdong 510663 | Applicant after: Guangzhou Shiyuan Electronic Technology Company Limited | Address before: No. 192 Kezhu Road, Science City, Guangzhou, Guangdong 510663 | Applicant before: Guangzhou Shiyuan Electronic Technology Co., Ltd.
C12 | Rejection of a patent application after its publication | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 2011-10-19