CN111078002A - Suspended gesture recognition method and terminal equipment
- Publication number: CN111078002A
- Application number: CN201911143620.XA
- Authority: CN (China)
- Prior art keywords: area, recognized, user, determining, target
- Prior art date: 2019-11-20
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/107—Static hand or arm
- G06V40/113—Recognition of static hand signs
Abstract
An embodiment of the invention provides a hover gesture recognition method and a terminal device. The method comprises the following steps: receiving a hover gesture input of a user; in response to the hover gesture input, taking a photo to be recognized of the user's palm, where the photo to be recognized comprises at least a first area and a second area, the first area being the area where the thumb of the user's palm is located, the second area being the area where a target finger is located, and the second area being divided into at least two sub-areas by the knuckles of the target finger; determining a target sub-area in the second area of the photo to be recognized that overlaps the first area; and determining the preset content corresponding to the target sub-area as the recognition result of the hover gesture input. The embodiment of the invention addresses the drawbacks of prior-art input operations that require contact with the screen.
Description
Technical Field
The present invention relates to the technical field of mobile communications, and in particular to a hover gesture recognition method and a terminal device.
Background
With the rapid development of mobile communication technology, terminal devices such as smartphones have become indispensable tools in many aspects of people's lives. As the functions of terminal devices have gradually improved, they no longer serve only a communication role; they provide users with a variety of intelligent services and bring great convenience to users' work and lives.
When using a terminal device, a user usually inputs content through touch operations, which are typically performed with a finger or a stylus directly or indirectly contacting the screen (touch screen). However, input operations that directly or indirectly contact the screen have certain disadvantages; for example, touch marks are easily left on the screen, and sweat from the user's finger or traces left by the stylus mar the screen's appearance and interfere with viewing the content on it.
Disclosure of Invention
The embodiment of the invention provides a hover gesture recognition method and a terminal device, aiming to address the drawbacks of prior-art input operations that require contact with the screen.
To solve this technical problem, the invention is implemented as follows:
in a first aspect, an embodiment of the present invention provides a method for recognizing a hover gesture, where the method includes:
receiving a hover gesture input of a user;
in response to the hover gesture input, taking a photo to be recognized of the user's palm, where the photo to be recognized comprises at least a first area and a second area, the first area being the area where the thumb of the user's palm is located, the second area being the area where a target finger is located, and the second area being divided into at least two sub-areas by the knuckles of the target finger;
determining a target sub-area in the second area of the photo to be recognized that overlaps the first area; and
determining the preset content corresponding to the target sub-area as the recognition result of the hover gesture input.
In a second aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes:
an input receiving module, configured to receive a hover gesture input of a user;
a photo taking module, configured to take, in response to the hover gesture input, a photo to be recognized of the user's palm, where the photo to be recognized comprises at least a first area and a second area, the first area being the area where the thumb of the user's palm is located, the second area being the area where a target finger is located, and the second area being divided into at least two sub-areas by the knuckles of the target finger;
an area determination module, configured to determine a target sub-area in the second area of the photo to be recognized that overlaps the first area; and
a result identification module, configured to determine the preset content corresponding to the target sub-area as the recognition result of the hover gesture input.
In a third aspect, an embodiment of the present invention further provides a terminal device, where the terminal device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the steps in the hover gesture recognition method described above are implemented.
In a fourth aspect, the embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements the steps in the hover gesture recognition method described above.
In the embodiment of the invention, a hover gesture input of the user is received; in response to the hover gesture input, a photo to be recognized of the user's palm is taken; a target sub-area in the second area of the photo to be recognized that overlaps the first area is determined; and the preset content corresponding to the target sub-area is determined as the recognition result of the hover gesture input. This realizes hover input without touching the screen of the terminal device, avoiding the finger sweat or stylus traces that would mar the screen's appearance and interfere with viewing its content. Moreover, because the palm faces the terminal device during hover gesture input, the specific gesture cannot be discerned from behind the hand, which enhances input security.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a flowchart illustrating a method for recognizing a hover gesture according to an embodiment of the present invention;
FIG. 2 shows a schematic diagram of a first example of embodiment of the invention;
FIG. 3 shows a schematic diagram of a second example of embodiment of the invention;
FIG. 4 shows a flow chart of a third example of embodiment of the invention;
fig. 5 shows the first block diagram of a terminal device provided by an embodiment of the present invention;
fig. 6 shows the second block diagram of a terminal device provided by an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In various embodiments of the present invention, it should be understood that the sequence numbers of the following processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Referring to fig. 1, an embodiment of the present invention provides a method for recognizing a hover gesture, where the method includes:
Step 101: receiving a hover gesture input of a user.
The terminal device may be provided with a distance sensor, and a sensing area is formed within the distance range the sensor detects. After a sensing signal is detected in the sensing area, a camera of the terminal device is started to take a picture and confirm whether the object is the user's palm.
After the user's palm is identified, the user's hover gesture is received. In a hover gesture, the user's fingers do not contact the terminal device (e.g., its screen) during the input operation but hover above it; input is performed by changing between different gestures.
Step 102: in response to the hover gesture input, taking a photo to be recognized of the user's palm.
When it is detected that the user is performing a hover gesture input, a photo to be recognized that includes the user's palm is taken. Optionally, if the hover gesture appears in front of the screen, the front-facing camera may be called to take the photo to be recognized; if the hover gesture appears on the back of the terminal device, i.e., in front of the rear cover, the rear camera is called to take the photo.
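As an illustration of this flow, the following is a minimal Python sketch; the `distance_sensor`, `front_camera`, and `rear_camera` objects are hypothetical stand-ins, since the patent does not specify a device API.

```python
# A minimal sketch of the detect-then-capture flow described above.
# All device objects here are hypothetical stand-ins, not a real API.

def capture_photo_to_recognize(distance_sensor, front_camera, rear_camera):
    """Wait for a palm in the sensing area, then photograph it."""
    # A sensing signal inside the preset distance range suggests a
    # hover gesture (assumed polling interface on the sensor).
    if not distance_sensor.palm_in_range():
        return None
    # Call the camera on the side where the hover gesture appeared.
    camera = front_camera if distance_sensor.side == "front" else rear_camera
    return camera.take_photo()  # the photo to be recognized
```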
The photo to be recognized comprises at least a first area where the thumb of the user's palm is located and a second area where the target finger is located. The target finger is at least one of the index finger, middle finger, ring finger, and little finger; the hover gesture is formed by the first area overlapping the second area.
As a first example, as shown in fig. 2, S1 is the first area and S2 is the second area. Each finger in the second area is divided into three sub-areas by its knuckles; the index finger, for instance, is divided into sub-areas A, B, and C.
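The sub-area layout of fig. 2 maps naturally to a lookup table. Below is a minimal Python sketch; only the index finger's A/B/C assignment comes from the example, so the remaining letter assignments are an assumption for illustration.

```python
# Illustrative sub-area layout for fig. 2: each target finger is divided
# by its knuckles into three sub-areas, each bound to preset content.
# Only the index finger's A/B/C labels are given in the text; the other
# assignments are assumed here.
SUB_AREA_CONTENT = {
    "index":  {"top": "A", "middle": "B", "bottom": "C"},
    "middle": {"top": "D", "middle": "E", "bottom": "F"},
    "ring":   {"top": "G", "middle": "H", "bottom": "I"},
    "little": {"top": "J", "middle": "K", "bottom": "L"},
}

def preset_content(target_finger: str, target_sub_area: str) -> str:
    """Return the preset content of the sub-area covered by the thumb."""
    return SUB_AREA_CONTENT[target_finger][target_sub_area]
```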
Step 103: determining a target sub-area in the second area of the photo to be recognized that overlaps the first area.
The photo to be recognized is analyzed to detect the target sub-area in the second area that overlaps the first area, i.e., the sub-area of the target finger covered by the thumb.
It can be understood that if no target sub-area overlapping the first area exists in the photo to be recognized, the flow ends.
Step 104: determining the preset content corresponding to the target sub-area as the recognition result of the hover gesture input.
Each sub-area is assigned preset content, which may be characters, letters, punctuation marks, or other input content. Still referring to fig. 2, each sub-area in fig. 2 is assigned a letter.
The preset content of the target sub-area is taken as the recognition result of the hover gesture input, thereby realizing the hover input operation.
In the above embodiment of the present invention, a hover gesture input of the user is received; in response to the hover gesture input, a photo to be recognized of the user's palm is taken; a target sub-area in the second area of the photo to be recognized that overlaps the first area is determined; and the preset content corresponding to the target sub-area is determined as the recognition result of the hover gesture input. This realizes hover input without touching the screen of the terminal device, avoiding the finger sweat or stylus traces that would mar the screen's appearance and interfere with viewing its content. In addition, because the palm faces the terminal device during hover gesture input, the specific gesture cannot be discerned from behind the hand, which enhances input security. The embodiment of the invention thus addresses the drawbacks of prior-art input operations that require contact with the screen.
Optionally, in an embodiment of the present invention, determining the target sub-area in the second area of the photo to be recognized that overlaps the first area includes a first method or a second method. Specifically:
The first method:
determining, in the terminal device, a target sample picture that meets a preset similarity requirement with the photo to be recognized, and determining a preset area of the target finger in the target sample picture as the target sub-area of the photo to be recognized.
In the first method, the photo to be recognized is compared with sample pictures. The sample pictures are preset; for example, a picture of the user's gesture when inputting each content item is collected as a sample.
A target sample picture meeting the preset similarity requirement with the photo to be recognized is determined according to a preset similarity algorithm. In the similarity comparison, the gesture state is compared, and biometric features such as the size and shape of the palm may also be compared, preventing users other than those pre-authenticated by the terminal device from operating it and thus enhancing the device's security.
After the target sample picture is determined, its preset area is taken as the target sub-area of the photo to be recognized.
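A minimal sketch of the first method, in Python with OpenCV; normalized cross-correlation stands in for the unspecified "preset similarity algorithm", and the sample store (a list of equal-size grayscale images paired with their preset sub-areas) is an assumption.

```python
import cv2

# Sketch of method one: compare the photo to be recognized against stored
# sample pictures and return the preset sub-area of the best match.
def match_sample(photo, samples, threshold=0.8):
    """photo: grayscale image; samples: list of (image, sub_area) pairs,
    all resized to the same shape as `photo`."""
    best_score, best_sub_area = 0.0, None
    for sample_img, sub_area in samples:
        # With equal-size images the score map is 1x1: a single
        # normalized cross-correlation value in [-1, 1].
        score = cv2.matchTemplate(photo, sample_img,
                                  cv2.TM_CCOEFF_NORMED)[0][0]
        if score > best_score:
            best_score, best_sub_area = score, sub_area
    # Only accept matches that meet the preset similarity requirement.
    return best_sub_area if best_score >= threshold else None
```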
The second method:
performing edge detection on the photo to be recognized, and determining the sub-area of the target finger covered by the tip of the thumb as the target sub-area.
Edge detection is a fundamental problem in image processing and computer vision; its purpose is to identify points in a digital image where the brightness changes significantly, since significant changes in image attributes generally reflect important events and changes in those attributes. The outline of the palm in the photo to be recognized is obtained through edge detection, and the sub-area of the target finger covered by the thumb is then determined.
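A minimal sketch of the second method, again in Python with OpenCV; the Canny detector and contour extraction are standard, while the sub-area bounding boxes and the thumb-tip heuristic are assumptions, since the patent does not specify how the knuckle boundaries are located.

```python
import cv2

# Sketch of method two: outline the hand with Canny edge detection, take a
# crude thumb-tip estimate, and test which sub-area box contains it. The
# sub-area boxes are assumed to come from an earlier hand-landmark step.
def find_covered_sub_area(photo_gray, sub_area_boxes):
    """sub_area_boxes: {name: (x0, y0, x1, y1)} in pixel coordinates."""
    edges = cv2.Canny(photo_gray, 50, 150)  # points of significant brightness change
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)  # largest contour ~ palm outline
    # Crude thumb-tip proxy: leftmost contour point (assumes a right hand
    # with the palm facing the camera; purely illustrative).
    tip_x, tip_y = min((pt[0] for pt in hand), key=lambda p: p[0])
    for name, (x0, y0, x1, y1) in sub_area_boxes.items():
        if x0 <= tip_x <= x1 and y0 <= tip_y <= y1:
            return name  # the target sub-area
    return None
```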
Optionally, in an embodiment of the present invention, receiving the hover gesture input of the user includes:
when it is detected that the user's palm is within a preset distance range of the terminal device, determining that a hover gesture input of the user has been received.
A distance sensor, such as an infrared sensor, may be arranged in the terminal device; when the distance sensor detects the user's palm within the preset distance range, it is determined that a hover gesture input has been received.
Optionally, in an embodiment of the present invention, the method further includes:
when the gesture state of the hover gesture input is detected to change within a preset time interval, taking a photo to be recognized for each hover gesture input in turn, and ordering the target content corresponding to each hover gesture input by shooting sequence to obtain candidate content.
When the terminal device detects a hover gesture input whose gesture state changes, it takes a photo of the hover gesture in each state and then orders the content corresponding to each hover gesture by the shooting sequence of the photos to obtain continuous candidate content; a user can thus input multiple content items in succession by changing between gestures.
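A minimal sketch of this sequencing, assuming a `recognize_photo` function like the single-gesture pipeline sketched above:

```python
# Sketch of continuous input: photos taken in shooting order are recognized
# one by one and the results are concatenated into the candidate content.
def build_candidate_content(photos_in_shooting_order, recognize_photo):
    results = []
    for photo in photos_in_shooting_order:
        content = recognize_photo(photo)  # one hover gesture per photo
        if content is not None:           # skip frames with no thumb overlap
            results.append(content)
    return "".join(results)               # e.g. "ABCD"
```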
Optionally, as a second example and referring to fig. 3, the content shown in fig. 3 may be displayed on the screen of the terminal device. Each column on the screen in fig. 3 corresponds to one finger (for example, the column containing ABC corresponds to the index finger), each row within a column corresponds to a sub-area, and each row displays the preset content of that sub-area. Displaying this table on the screen while the user performs hover gesture input helps the user input quickly; if the user forgets the input content corresponding to a sub-area, the table can be consulted.
Optionally, in an embodiment of the present invention, after the photo to be recognized of the user's palm is taken, the method includes:
determining the operation instruction corresponding to the hover gesture input according to the angle between the length direction of the user's palm in the photo to be recognized and a preset reference direction of the terminal device;
and after the candidate content is obtained, the method includes:
determining the candidate content as the input content of the operation instruction.
Each hover input can correspond to a different operation instruction, and the operation instruction can be selected by adjusting the direction of the palm during input. For example, taking the length direction of the terminal device as the preset reference direction: if the angle between the length direction of the palm and the length direction of the terminal device is 90 degrees, the hover gesture input corresponds to a screen unlocking operation; if the angle is 30-60 degrees, the hover gesture input corresponds to a screen locking operation.
After the operation instruction is determined, the recognized candidate content is used as the input content under that instruction.
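A minimal sketch of the angle-to-instruction mapping, using the example bands from the text (90 degrees for unlocking, 30-60 degrees for locking); extracting the palm's length direction from the photo is assumed to happen upstream, and the 10-degree tolerance is an illustrative choice.

```python
import math

# Sketch: map the angle between the palm's length direction and the
# device's length direction (the preset reference) to an instruction.
def instruction_for_palm(palm_vec, device_vec=(0.0, 1.0), tol_deg=10.0):
    dot = palm_vec[0] * device_vec[0] + palm_vec[1] * device_vec[1]
    norm = math.hypot(*palm_vec) * math.hypot(*device_vec)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    if abs(angle - 90.0) <= tol_deg:
        return "unlock_screen"  # ~90 degrees: screen unlocking operation
    if 30.0 <= angle <= 60.0:
        return "lock_screen"    # 30-60 degrees: screen locking operation
    return None                 # no instruction for other angles
```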
As a third example, referring to fig. 4 and taking a screen unlocking operation as an example, an application of the hover gesture recognition method provided in the embodiment of the present invention mainly includes the following steps:
Step 401: detecting, acquiring, and recording the user's hover gesture input operation. Here the terminal device comprises an infrared module, a camera module, and a transmission module.
Specifically, the infrared module may initially set a preset distance range (a spatial range): a sector area with the center of the terminal device's screen as its vertex and a radius of X meters, within which input is recognized as hover input. The infrared module then detects whether the user's palm enters this preset distance range.
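As a geometric sketch, the sector test might be implemented as below; the radius X and the sector's half-angle are left as parameters, since the patent does not fix them.

```python
import math

# Sketch of the sensing-area test: a sector (a cone, in 3D) with the screen
# center as its vertex. Radius and half-angle are illustrative parameters.
def in_sensing_sector(point_xyz, radius_m, half_angle_deg=45.0):
    """point_xyz: palm position relative to the screen center, with the
    z axis pointing straight out of the screen."""
    x, y, z = point_xyz
    dist = math.sqrt(x * x + y * y + z * z)
    if dist == 0.0 or dist > radius_m:
        return False  # outside the X-meter radius
    # Angle between the palm direction and the screen normal.
    off_axis = math.degrees(math.acos(z / dist))
    return off_axis <= half_angle_deg
```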
When the infrared module detects that the user's palm has entered the preset spatial area, it sends an instruction to the camera module to start the front camera, which captures a photo to be recognized of the user's hover gesture operation within the preset distance range. The photo to be recognized is then passed through the transmission module to the algorithm module, which recognizes the user's gesture.
Step 402: recognizing the captured photo to be recognized to obtain the user's input content.
The user's sequential input content is acquired according to the order of the gestures. For example, if the user inputs ABCD when setting the screen lock password, the password is ABCD; during decryption the password may be entered in the same order, i.e., the user decrypts by inputting ABCD.
With reference to fig. 2, in password mode, X (the thumb) touches the A-M sub-areas in different orders to create a password. The number of times each sub-area is clicked is not limited, and clicks can be repeated, so users can set passwords of varying complexity according to their own habits, such as AABCFDA or ABCDFEHLM.
After the screen is locked, a preset gesture triggers the unlocking operation instruction and enters decryption mode.
Step 403: unlocking the screen according to the input content.
During decryption, X touches the A-M sub-areas in sequence. If the entered sequence is the same as the preset password, decryption/unlocking succeeds; if it differs, unlocking fails.
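A minimal sketch of the verification step; the recorded sub-area sequence plays the role of the password, and a constant-time comparison is used as a conservative choice (the patent does not prescribe one).

```python
import hmac

# Sketch of password-mode unlocking: the stored password and the entered
# hover-input sequence are both strings of sub-area contents, e.g. "AABCFDA".
def verify_unlock(stored_password: str, entered_sequence: str) -> bool:
    # compare_digest gives a constant-time comparison, avoiding timing leaks.
    return hmac.compare_digest(stored_password, entered_sequence)

# Usage: verify_unlock("ABCD", "ABCD") -> True (unlock succeeds);
#        verify_unlock("ABCD", "ABDC") -> False (unlock fails).
```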
In addition, different gesture hold durations can be mapped to different operation instructions; for example, a shorter hold may correspond to a password input instruction and a longer hold to an end-of-input instruction.
In the above embodiment of the present invention, a hover gesture input of the user is received; in response to the hover gesture input, a photo to be recognized of the user's palm is taken; a target sub-area in the second area of the photo to be recognized that overlaps the first area is determined; and the preset content corresponding to the target sub-area is determined as the recognition result of the hover gesture input. This realizes hover input without touching the screen of the terminal device, avoiding the finger sweat or stylus traces that would mar the screen's appearance and interfere with viewing its content. Moreover, because the palm faces the terminal device during hover gesture input, the specific gesture cannot be discerned from behind the hand, which enhances input security.
Having described the hover gesture recognition method according to the embodiment of the present invention, a terminal device according to the embodiment of the present invention will now be described with reference to the accompanying drawings.
Referring to fig. 5, an embodiment of the present invention further provides a terminal device 500, including:
the input receiving module 501 is configured to receive a hover gesture input of a user.
The terminal device 500 may be provided with a distance sensor, and a sensing area is formed within the distance range the sensor detects. After a sensing signal appears in the sensing area, a camera of the terminal device 500 is started to take a picture and confirm whether the object is the user's palm.
After the user's palm is identified, the user's hover gesture is received. In a hover gesture, the user's fingers do not contact the terminal device 500 (e.g., its screen) during the input operation but hover above it; input is performed by changing between different gestures.
A photo taking module 502, configured to take, in response to the hover gesture input, a photo to be recognized of the user's palm, where the photo to be recognized comprises at least a first area and a second area, the first area being the area where the thumb of the user's palm is located, the second area being the area where a target finger is located, and the second area being divided into at least two sub-areas by the knuckles of the target finger.
When it is detected that the user is performing a hover gesture input, a photo to be recognized that includes the user's palm is taken. Optionally, if the hover gesture appears in front of the screen, the front-facing camera may be called to take the photo to be recognized; if the hover gesture appears on the back of the terminal device 500, e.g., in front of the rear cover, the rear camera is called to take the photo.
The photo to be recognized comprises at least a first area where the thumb of the user's palm is located and a second area where the target finger is located. The target finger is at least one of the index finger, middle finger, ring finger, and little finger; the hover gesture is formed by the first area overlapping the second area.
An area determining module 503, configured to determine a target sub-area in the second area of the photo to be recognized that overlaps the first area.
The photo to be recognized is analyzed to detect the target sub-area in the second area that overlaps the first area, i.e., the sub-area of the target finger covered by the thumb.
It can be understood that if no target sub-area overlapping the first area exists in the photo to be recognized, the flow ends.
A result identification module 504, configured to determine the preset content corresponding to the target sub-area as the recognition result of the hover gesture input.
Each sub-area is assigned preset content, which may be characters, letters, punctuation marks, or other input content. Still referring to fig. 2, each sub-area in fig. 2 is assigned a letter.
The preset content of the target sub-area is taken as the recognition result of the hover gesture input, thereby realizing the hover input operation.
Optionally, in this embodiment of the present invention, the area determining module 503 includes:
a first determining submodule, configured to determine, in the terminal device, a target sample picture that meets a preset similarity requirement with the photo to be recognized, and to determine a preset area of the target finger in the target sample picture as the target sub-area of the photo to be recognized; or
a second determining submodule, configured to perform edge detection on the photo to be recognized and determine the sub-area of the target finger covered by the tip of the thumb as the target sub-area.
Optionally, in this embodiment of the present invention, the input receiving module 501 includes:
a detection submodule, configured to determine that a hover gesture input of the user has been received when the user's palm is detected within the preset distance range of the terminal device 500.
Optionally, in this embodiment of the present invention, the terminal device 500 further includes:
a dynamic input module, configured to take a photo to be recognized for each hover gesture input in turn when the gesture state of the hover gesture input is detected to change within a preset time interval, and to order the target content corresponding to each hover gesture input by shooting sequence to obtain candidate content.
Optionally, in this embodiment of the present invention, the terminal device 500 includes:
an instruction determining module, configured to determine the operation instruction corresponding to the hover gesture input according to the angle between the length direction of the user's palm in the photo to be recognized and a preset reference direction of the terminal device 500; and
a content determining module, configured to determine the candidate content as the input content of the operation instruction.
The terminal device 500 provided in the embodiment of the present invention can implement each process implemented by the terminal device 500 in the method embodiments of fig. 1 to fig. 4, and for avoiding repetition, details are not described here again.
In the embodiment of the present invention, the input receiving module 501 receives a hover gesture input of the user; in response to the hover gesture input, the photo taking module 502 takes a photo to be recognized of the user's palm; the area determining module 503 determines a target sub-area in the second area of the photo to be recognized that overlaps the first area; and the result identification module 504 determines the preset content corresponding to the target sub-area as the recognition result of the hover gesture input. This realizes hover input without touching the screen of the terminal device 500, avoiding the finger sweat or stylus traces that would mar the screen's appearance and interfere with viewing its content. In addition, because the palm faces the terminal device 500 during hover gesture input, the specific gesture cannot be discerned from behind the hand, which enhances input security. The embodiment of the present invention thus addresses the drawbacks of prior-art input operations that require contact with the screen.
Fig. 6 is a schematic diagram of a hardware structure of a terminal device for implementing various embodiments of the present invention;
the terminal device 600 includes but is not limited to: a radio frequency unit 601, a network module 602, an audio output unit 603, an input unit 604, a sensor 605, a display unit 606, a user input unit 607, an interface unit 608, a memory 609, a processor 610, and a power supply 611. Those skilled in the art will appreciate that the terminal device configuration shown in fig. 6 does not constitute a limitation of the terminal device, and that the terminal device may include more or fewer components than shown, or combine certain components, or a different arrangement of components. In the embodiment of the present invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a notebook computer, a palm computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
a radio frequency unit 601, configured to receive a hover gesture input of a user; and
a processor 610, configured to: take, in response to the hover gesture input, a photo to be recognized of the user's palm, where the photo to be recognized comprises at least a first area and a second area, the first area being the area where the thumb of the user's palm is located, the second area being the area where a target finger is located, and the second area being divided into at least two sub-areas by the knuckles of the target finger;
determine a target sub-area in the second area of the photo to be recognized that overlaps the first area; and
determine the preset content corresponding to the target sub-area as the recognition result of the hover gesture input.
In the embodiment of the invention, a hover gesture input of the user is received; in response to the hover gesture input, a photo to be recognized of the user's palm is taken; a target sub-area in the second area of the photo to be recognized that overlaps the first area is determined; and the preset content corresponding to the target sub-area is determined as the recognition result of the hover gesture input. This realizes hover input without touching the screen of the terminal device, avoiding the finger sweat or stylus traces that would mar the screen's appearance and interfere with viewing its content. In addition, because the palm faces the terminal device during hover gesture input, the specific gesture cannot be discerned from behind the hand, which enhances input security. The embodiment of the invention thus addresses the drawbacks of prior-art input operations that require contact with the screen.
It should be noted that, in this embodiment, the terminal device 600 may implement each process in the method embodiment of the present invention and achieve the same beneficial effects, and for avoiding repetition, details are not described here again.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 601 may be used to receive and send signals during messaging or a call; specifically, it receives downlink data from a base station and passes it to the processor 610 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 601 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 601 may also communicate with networks and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband internet access through the network module 602, such as helping the user send and receive e-mails, browse webpages, access streaming media, and the like.
The audio output unit 603 may convert audio data received by the radio frequency unit 601 or the network module 602 or stored in the memory 609 into an audio signal and output as sound. Also, the audio output unit 603 can also provide audio output related to a specific function performed by the terminal apparatus 600 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 603 includes a speaker, a buzzer, a receiver, and the like.
The input unit 604 is used to receive audio or video signals. The input unit 604 may include a graphics processing unit (GPU) 6041 and a microphone 6042; the graphics processor 6041 processes image data of still pictures or video obtained by an image capturing apparatus (such as a camera) in a video capture or image capture mode. The processed image frames may be displayed on the display unit 606. The image frames processed by the graphics processor 6041 may be stored in the memory 609 (or another storage medium) or transmitted via the radio frequency unit 601 or the network module 602. The microphone 6042 can receive sound and process it into audio data; in the phone call mode, the processed audio data may be converted into a format that can be transmitted to a mobile communication base station via the radio frequency unit 601, and then output.
The terminal device 600 further comprises at least one sensor 605, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the luminance of the display panel 6061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 6061 and/or the backlight when the terminal apparatus 600 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally three axes), detect the magnitude and direction of gravity when stationary, and can be used to identify the terminal device posture (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration identification related functions (such as pedometer, tapping), and the like; the sensors 605 may also include fingerprint sensors, pressure sensors, iris sensors, molecular sensors, gyroscopes, barometers, hygrometers, thermometers, infrared sensors, etc., which are not described in detail herein.
The display unit 606 is used to display information input by the user or information provided to the user. The Display unit 606 may include a Display panel 6061, and the Display panel 6061 may be configured by a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like.
The user input unit 607 may be used to receive input digital or contents information and generate key signal inputs related to user settings and function control of the terminal device. Specifically, the user input unit 607 includes a touch panel 6071 and other input devices 6072. Touch panel 6071, also referred to as a touch screen, may collect touch operations by a user on or near it (e.g., operations by a user on or near touch panel 6071 using a finger, stylus, or any suitable object or accessory). The touch panel 6071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 610, receives a command from the processor 610, and executes the command. In addition, the touch panel 6071 can be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The user input unit 607 may include other input devices 6072 in addition to the touch panel 6071. Specifically, the other input devices 6072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a track ball, a mouse, and a joystick, which are not described herein again.
Further, the touch panel 6071 can be overlaid on the display panel 6061, and when the touch panel 6071 detects a touch operation on or near the touch panel 6071, the touch operation is transmitted to the processor 610 to determine the type of the touch event, and then the processor 610 provides a corresponding visual output on the display panel 6061 according to the type of the touch event. Although in fig. 6, the touch panel 6071 and the display panel 6061 are two independent components to implement the input and output functions of the terminal device, in some embodiments, the touch panel 6071 and the display panel 6061 may be integrated to implement the input and output functions of the terminal device, and this is not limited here.
The interface unit 608 is an interface for connecting an external device to the terminal apparatus 600. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 608 may be used to receive input (e.g., data information, power, etc.) from an external device and transmit the received input to one or more elements within the terminal apparatus 600 or may be used to transmit data between the terminal apparatus 600 and an external device.
The memory 609 may be used to store software programs as well as various data. The memory 609 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 609 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 610 is a control center of the terminal device, connects various parts of the entire terminal device by using various interfaces and lines, and performs various functions of the terminal device and processes data by running or executing software programs and/or modules stored in the memory 609 and calling data stored in the memory 609, thereby performing overall monitoring of the terminal device. Processor 610 may include one or more processing units; preferably, the processor 610 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 610.
The terminal device 600 may further include a power supply 611 (such as a battery) for supplying power to various components, and preferably, the power supply 611 may be logically connected to the processor 610 through a power management system, so as to implement functions of managing charging, discharging, and power consumption through the power management system.
In addition, the terminal device 600 includes some functional modules that are not shown, and are not described in detail herein.
Preferably, an embodiment of the present invention further provides a terminal device, comprising a processor 610, a memory 609, and a computer program stored in the memory 609 and executable on the processor 610; when executed by the processor 610, the computer program implements each process of the above hover gesture recognition method embodiment and can achieve the same technical effect, which is not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the above hover gesture recognition method embodiment and can achieve the same technical effect, which is not repeated here to avoid repetition. The computer-readable storage medium may be a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. A hover gesture recognition method, the method comprising:
receiving a hover gesture input of a user;
in response to the hover gesture input, taking a photo to be recognized of the user's palm, where the photo to be recognized comprises at least a first area and a second area, the first area being the area where the thumb of the user's palm is located, the second area being the area where a target finger is located, and the second area being divided into at least two sub-areas by the knuckles of the target finger;
determining a target sub-area in the second area of the photo to be recognized that overlaps the first area; and
determining the preset content corresponding to the target sub-area as the recognition result of the hover gesture input.
2. The hover gesture recognition method of claim 1, wherein determining the target sub-area in the second area of the photo to be recognized that overlaps the first area comprises:
determining, in the terminal device, a target sample picture that meets a preset similarity requirement with the photo to be recognized, and determining a preset area of the target finger in the target sample picture as the target sub-area of the photo to be recognized; or
performing edge detection on the photo to be recognized, and determining the sub-area of the target finger covered by the tip of the thumb as the target sub-area.
3. The hover gesture recognition method of claim 1, wherein receiving the hover gesture input of the user comprises:
when it is detected that the user's palm is within a preset distance range of the terminal device, determining that a hover gesture input of the user has been received.
4. The hover gesture recognition method of claim 1, further comprising:
when the gesture state of the hover gesture input is detected to change within a preset time interval, taking a photo to be recognized for each hover gesture input in turn, and ordering the target content corresponding to each hover gesture input by shooting sequence to obtain candidate content.
5. The hover gesture recognition method of claim 4, wherein after taking the photo to be recognized of the user's palm, the method comprises:
determining the operation instruction corresponding to the hover gesture input according to the angle between the length direction of the user's palm in the photo to be recognized and a preset reference direction of the terminal device;
and after obtaining the candidate content, the method comprises:
determining the candidate content as the input content of the operation instruction.
6. A terminal device, characterized in that the terminal device comprises:
an input receiving module, configured to receive a hover gesture input of a user;
a photo taking module, configured to take, in response to the hover gesture input, a photo to be recognized of the user's palm, where the photo to be recognized comprises at least a first area and a second area, the first area being the area where the thumb of the user's palm is located, the second area being the area where a target finger is located, and the second area being divided into at least two sub-areas by the knuckles of the target finger;
an area determination module, configured to determine a target sub-area in the second area of the photo to be recognized that overlaps the first area; and
a result identification module, configured to determine the preset content corresponding to the target sub-area as the recognition result of the hover gesture input.
7. The terminal device of claim 6, wherein the region determining module comprises:
a first determining submodule, configured to determine, in the terminal device, a target sample picture that meets a preset similarity requirement with the photo to be recognized, and to determine a preset area of the target finger in the target sample picture as the target sub-area of the photo to be recognized; or
a second determining submodule, configured to perform edge detection on the photo to be recognized and determine the sub-area of the target finger covered by the tip of the thumb as the target sub-area.
8. The terminal device of claim 6, wherein the input receiving module comprises:
a detection submodule, configured to determine that a hover gesture input of the user has been received when the user's palm is detected within the preset distance range of the terminal device.
9. A terminal device comprising a processor, a memory and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the hover gesture recognition method of any of claims 1-5.
10. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the hover gesture recognition method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911143620.XA CN111078002A (en) | 2019-11-20 | 2019-11-20 | Suspended gesture recognition method and terminal equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911143620.XA CN111078002A (en) | 2019-11-20 | 2019-11-20 | Suspended gesture recognition method and terminal equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111078002A (en) | 2020-04-28
Family
ID=70311322
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911143620.XA Pending CN111078002A (en) | 2019-11-20 | 2019-11-20 | Suspended gesture recognition method and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111078002A (en) |
Events
- 2019-11-20: CN application CN201911143620.XA filed; published as CN111078002A; status: Pending
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100329509A1 (en) * | 2009-06-30 | 2010-12-30 | National Taiwan University Of Science And Technology | Method and system for gesture recognition |
US20150379238A1 (en) * | 2012-06-14 | 2015-12-31 | Medibotics Llc | Wearable Imaging Device for Monitoring Food Consumption Using Gesture Recognition |
CN105190477A (en) * | 2013-03-21 | 2015-12-23 | 索尼公司 | Head-mounted device for user interactions in an amplified reality environment |
US20160124514A1 (en) * | 2014-11-05 | 2016-05-05 | Samsung Electronics Co., Ltd. | Electronic device and method of controlling the same |
US20180329209A1 (en) * | 2016-11-24 | 2018-11-15 | Rohildev Nattukallingal | Methods and systems of smart eyeglasses |
US20180181294A1 (en) * | 2016-12-28 | 2018-06-28 | Amazon Technologies, Inc. | Feedback animation for touch-based interactions |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111652182A (en) * | 2020-06-17 | 2020-09-11 | 广东小天才科技有限公司 | Method and device for recognizing suspension gesture, electronic equipment and storage medium |
CN111652182B (en) * | 2020-06-17 | 2023-09-19 | 广东小天才科技有限公司 | Method and device for identifying suspension gesture, electronic equipment and storage medium |
CN113190109A (en) * | 2021-03-30 | 2021-07-30 | 青岛小鸟看看科技有限公司 | Input control method and device of head-mounted display equipment and head-mounted display equipment |
WO2023236052A1 (en) * | 2022-06-07 | 2023-12-14 | 北京小米移动软件有限公司 | Input information determination method and apparatus, and device and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |