CN111949132B - Gesture control method based on a touch-and-talk pen, and touch-and-talk pen - Google Patents
- Publication number: CN111949132B (application CN202010837741.0A)
- Authority
- CN
- China
- Prior art keywords
- stroke action
- preset
- action
- stroke
- gesture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0354—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
- G06F3/03545—Pens or stylus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/28—Recognition of hand or arm movements, e.g. recognition of deaf sign language
Abstract
The invention discloses a gesture control method based on a touch-and-talk pen, and a touch-and-talk pen. The method comprises: associating predetermined gestures with predetermined functions in advance and storing the correspondence, wherein the predetermined gestures include point-reading gestures and non-point-reading gestures, and the predetermined functions include point-reading functions and non-point-reading functions; detecting a stroke action performed by the touch-and-talk pen on a touch-and-talk book; judging whether the stroke action is a predetermined gesture, and if so, obtaining the predetermined function according to the gesture and the stored correspondence; and controlling the touch-and-talk pen to execute the predetermined function. With the invention, the user need not change the pen-holding posture or interrupt the current point-reading operation when performing a stroke action, so point-reading and non-point-reading actions combine naturally and the pen is convenient to use. In addition, the stroke action is combined with position judgment: the same operation performed at different positions on the touch-and-talk book realizes different functions.
Description
Technical Field
The invention relates to the technical field of reading devices, and in particular to a gesture control method based on a touch-and-talk pen, and a touch-and-talk pen.
Background
In the prior art, the touch-and-talk pen is a new generation of intelligent reading and learning tool, popular with young children and parents. Its basic principle is that the pen performs point-reading operations on matching touch-and-talk books and emits various sounds according to the position touched.
To accommodate various user needs, manufacturers add additional functions on top of the basic point-reading function, such as listening to music and stories on the pen, follow-along reading comparison, and spoken-language evaluation. To control the various functions on the pen, products currently on the market generally use one of the following approaches:
1. Keys on the pen (or keys plus a screen display): the user selects the appropriate function through various key operations. This mode supports many function options, but it requires the user to change the pen-holding posture to operate the keys, interfering with the normal point-reading flow.
2. Touch screen: this has problems similar to keys; moreover, the screen size is limited, making operation even less convenient, while enlarging the screen makes the pen bulkier.
3. Voice: the user presses a dedicated voice key and then speaks voice commands. This mode is convenient to operate, but suffers from problems such as low speech-recognition accuracy and susceptibility to environmental interference.
4. Paired mobile phone or tablet: the pen connects to a smart device such as a mobile phone or tablet via Bluetooth or Wi-Fi, and commands are then issued from the smart device. This requires operating the paired phone or tablet, which is also inconvenient.
5. Special command areas printed on the touch-and-talk book: tapping the corresponding area performs a specific operation. The set of commands this approach can implement is limited.
Disclosure of Invention
The invention aims to provide a gesture control method based on a touch-and-talk pen, and a touch-and-talk pen, in order to solve problems such as the inconvenient operation of conventional touch-and-talk pens.
In a first aspect, an embodiment of the present invention provides a gesture control method based on a touch-and-talk pen, including:
associating predetermined gestures with predetermined functions in advance and storing the correspondence, wherein the predetermined gestures include point-reading gestures and non-point-reading gestures, and the predetermined functions include point-reading functions and non-point-reading functions;
detecting a stroke action performed by the touch-and-talk pen on a touch-and-talk book;
judging whether the stroke action is a predetermined gesture, and if so, obtaining the predetermined function according to the gesture and the stored correspondence;
and controlling the touch-and-talk pen to execute the predetermined function.
In a second aspect, an embodiment of the present invention provides a touch-and-talk pen, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the gesture control method of the first aspect when executing the computer program.
In a third aspect, an embodiment of the present invention provides a touch-and-talk pen, including:
a storage unit, configured to associate predetermined gestures with predetermined functions in advance and store the correspondence, wherein the predetermined gestures include point-reading gestures and non-point-reading gestures, and the predetermined functions include point-reading functions and non-point-reading functions;
a detection unit, configured to detect a stroke action performed by the touch-and-talk pen on a touch-and-talk book;
a judging unit, configured to judge whether the stroke action is a predetermined gesture, and if so, obtain the predetermined function according to the gesture and the stored correspondence;
and an execution unit, configured to control the touch-and-talk pen to execute the predetermined function.
The embodiment of the invention provides a gesture control method based on a touch-and-talk pen, and a touch-and-talk pen. The method comprises: associating predetermined gestures with predetermined functions in advance and storing the correspondence, wherein the predetermined gestures include point-reading gestures and non-point-reading gestures, and the predetermined functions include point-reading functions and non-point-reading functions; detecting a stroke action performed by the touch-and-talk pen on a touch-and-talk book; judging whether the stroke action is a predetermined gesture, and if so, obtaining the predetermined function according to the gesture and the stored correspondence; and controlling the touch-and-talk pen to execute the predetermined function. With the method provided by the embodiment of the invention, the user need not change the pen-holding posture or interrupt the current point-reading operation when performing a stroke action; point-reading and non-point-reading actions combine naturally, which is convenient for the user. In addition, the stroke action is combined with position judgment: the same operation performed at different positions on the touch-and-talk book realizes different functions.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the invention, and a person skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a schematic flowchart of a gesture control method based on a touch-and-talk pen according to an embodiment of the present invention;
FIGS. 2a-2f are schematic diagrams illustrating a first type of stroke action according to embodiments of the present invention;
FIGS. 3a-3s are schematic diagrams illustrating a second type of stroke action according to embodiments of the present invention;
FIG. 4 is a schematic block diagram of a touch-and-talk pen according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are some, but not all, embodiments of the invention. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
It should be understood that the terms "comprises" and "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
Referring to FIG. 1, FIG. 1 shows a gesture control method based on a touch-and-talk pen according to an embodiment of the present invention, which includes steps S101 to S104:
S101, associating predetermined gestures with predetermined functions in advance and storing the correspondence, wherein the predetermined gestures include point-reading gestures and non-point-reading gestures, and the predetermined functions include point-reading functions and non-point-reading functions;
S102, detecting a stroke action performed by the touch-and-talk pen on a touch-and-talk book;
S103, judging whether the stroke action is a predetermined gesture, and if so, obtaining the predetermined function according to the gesture and the stored correspondence;
S104, controlling the touch-and-talk pen to execute the predetermined function.
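Steps S101 to S104 can be sketched in code as follows. This is only an illustrative sketch: the gesture names, function names, and lookup structure are assumptions for illustration, not part of the patent.

```python
# Hypothetical sketch of S101-S104. All gesture/function names are
# illustrative assumptions.

# S101: associate predetermined gestures with predetermined functions
# and store the correspondence. The point-reading gesture maps to the
# point-reading function; non-point-reading gestures map to others.
CORRESPONDENCE = {
    "tap": "read_aloud",           # point-reading gesture -> point-reading function
    "stroke_up": "volume_up",      # non-point-reading gestures
    "stroke_down": "volume_down",
    "check_mark": "confirm",
}

def recognize(stroke_action):
    """S103: judge whether the detected stroke action is a predetermined
    gesture; return the associated function name, or None otherwise."""
    return CORRESPONDENCE.get(stroke_action)

def handle_stroke(stroke_action):
    """S102-S104: given a detected stroke action, look up and 'execute'
    the predetermined function (here represented by its name)."""
    function = recognize(stroke_action)
    if function is None:
        return "ignored"   # not a predetermined gesture
    return function

print(handle_stroke("tap"))
print(handle_stroke("stroke_up"))
```

A stroke action that matches no predetermined gesture is simply ignored, which mirrors the "if so" condition in step S103.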
The method of the embodiment of the invention provides a simple and convenient user interaction control method. Compared with the function operation and switching methods of conventional touch-and-talk pens, the user does not need to change the pen-holding posture or interrupt the current point-reading operation when making a stroke action, and point-reading and non-point-reading actions combine naturally, which is convenient for the user. In other embodiments, when the user performs a stroke action on the touch-and-talk book, more intelligent functions can be realized according to the position of the stroke, the specific stroke action, and the like.
Specifically, in step S101, a predetermined gesture is associated with a predetermined function, so that when the stroke action of the user operating the pen is detected to be that predetermined gesture, the predetermined function can be obtained and executed.
In the embodiment of the invention, the predetermined gestures include point-reading gestures and non-point-reading gestures, and the predetermined functions include point-reading functions and non-point-reading functions. In short, the point-reading gesture is the tap gesture of an existing touch-and-talk pen, and the corresponding point-reading function is the point-reading function of an existing touch-and-talk pen.
In general, point-reading with an existing pen is performed by a tap operation: the gesture is to place the pen briefly on the book and then lift it, during which the pen tip hardly moves. The invention exploits this characteristic to add non-point-reading gestures that are distinguished from point-reading gestures. A non-point-reading gesture refers to a stroke action made while the pen is placed on the book, i.e., the pen tip moves across the touch-and-talk book.
In step S102, there are various ways to detect the stroke action performed by the pen on the touch-and-talk book.
In one embodiment, step S102 includes one of the following:
First: capturing stroke images of the pen on the book through a camera built into the pen, and recognizing the stroke images to obtain the stroke action;
Second: detecting the stroke action of the pen on the book through a motion sensor built into the pen.
For the first mode, a camera is built into the pen; it may be mounted at the pen tip, or at other positions such as the pen barrel as needed. The built-in camera continuously captures images of the book surface (or of the micro-dot pattern printed on it); by analyzing the content and relative changes across the series of captured images, the specific stroke image, the likely position of the stroke on the book, and the movement track of the stroke can be obtained, and the stroke action is then recognized from the image, position, and track. In the prior art, a camera is placed outside the pen and films the pen or the hand; in contrast, the first mode of this embodiment uses the pen itself to capture the stroke image or the changes of the micro-dot pattern on the book, and infers the stroke from them to obtain the gesture. The first mode is therefore completely different from the prior art: recognition is completed independently by the pen, which reduces recognition difficulty and improves accuracy. Moreover, the user can complete all operations with the pen alone, without carrying or setting up other equipment, which is more convenient.
In the second mode, a motion sensor, such as a gyroscope, is built into the pen; through it the pen can detect stroke actions on the book, such as lifting, dropping, and moving, thereby realizing the detection function.
In addition, the two modes can be combined: for example, the first mode can judge the contact position of the pen tip on the book, while the second mode can distinguish gestures such as the pen tip staying still while the pen body swings. Such stroke actions can correspond to preset functions, which are then executed.
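As a minimal sketch of the detection step, the earlier observation that the pen tip hardly moves during a tap suggests classifying a pen-down episode by total tip travel. The threshold value and the coordinate sampling are assumptions for illustration, not the patent's implementation.

```python
# Illustrative classifier for S102: tap (point-reading) vs. stroke
# (candidate non-point-reading gesture), from sampled tip positions.
import math

TAP_MAX_TRAVEL = 2.0   # mm; assumed upper bound on tip travel for a tap

def total_travel(points):
    """Sum of distances between successive sampled tip positions."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

def classify_contact(points):
    """Return 'tap' if the tip stayed nearly still while down,
    otherwise 'stroke'."""
    if total_travel(points) <= TAP_MAX_TRAVEL:
        return "tap"
    return "stroke"

print(classify_contact([(0, 0), (0.3, 0.1)]))       # barely moved
print(classify_contact([(0, 0), (5, 0), (10, 0)]))  # moved across the page
```

The positions could come either from the camera's micro-dot decoding (first mode) or from integrating motion-sensor data (second mode); the classifier itself is agnostic to the source.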
In step S103, it is judged whether the stroke action is a predetermined gesture, and if so, the predetermined function is obtained according to the gesture and the stored correspondence.
If the predetermined gesture is a point-reading gesture, the corresponding traditional point-reading function is obtained; if it is a non-point-reading gesture, the corresponding non-point-reading function is obtained.
In one embodiment, the non-point-reading gestures may be defined as relatively simple stroke actions, as shown in FIGS. 2a-2f (solid lines represent actual stroke actions and dashed lines represent the trend of the stroke): a left stroke, right stroke, up stroke, down stroke, and so on. Other stroke actions are also possible, as shown in FIGS. 3a-3s: left stroke plus fold line, right stroke plus fold line, up stroke plus fold line, down stroke plus fold line, a check mark, a "W"-shaped stroke, a ">"-shaped stroke, an "N"-shaped stroke, a stroke symmetrical to "N", an upper-right arc plus fold line, a lower-right arc plus fold line, an "X"-shaped figure made in one stroke, etc. Of course, other two-stroke or multi-stroke actions are also possible. In actual operation, one-stroke actions are easy for the user to understand and memorize, and also easy to recognize; for two-stroke and multi-stroke actions, the interval between strokes must be compared against a preset threshold: if the interval exceeds the threshold, the strokes are judged to be separate one-stroke actions; if the interval is below the threshold, they may be treated as a two-stroke or multi-stroke action.
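The interval-threshold rule for grouping strokes into multi-stroke actions can be sketched as follows; the threshold value and the time representation are assumptions for illustration.

```python
# Group successive strokes into one-stroke / multi-stroke actions by
# comparing inter-stroke gaps with a preset threshold (assumed value).
INTERVAL_THRESHOLD = 0.5  # seconds (assumption)

def group_strokes(stroke_times):
    """stroke_times: list of (start, end) times of successive strokes.
    Returns groups of stroke indices; each group is one action."""
    groups = [[0]] if stroke_times else []
    for i in range(1, len(stroke_times)):
        gap = stroke_times[i][0] - stroke_times[i - 1][1]
        if gap < INTERVAL_THRESHOLD:
            groups[-1].append(i)   # short gap: same multi-stroke action
        else:
            groups.append([i])     # long gap: a new action begins
    return groups

# Three strokes: the first two close together, the third after a pause.
print(group_strokes([(0.0, 0.2), (0.3, 0.5), (2.0, 2.2)]))
```

Here the first two strokes merge into a two-stroke action while the third stands alone, matching the threshold rule described above.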
Corresponding functions can be associated with these gestures. For example: a left stroke plus fold line may correspond to the left-key function; a right stroke plus fold line to the right-key function; an up stroke plus fold line to the up-key function; a down stroke plus fold line to the down-key function; a check mark to the confirm-key function; a "W"-shaped stroke to a double press of the confirm key; a ">"-shaped stroke to the play-key function; an "N"-shaped stroke to the pause-key function; a stroke symmetrical to "N" also to the pause-key function; an upper-right arc plus fold line to the volume-up key; a lower-right arc plus fold line to the volume-down key; and an "X"-shaped figure made in one stroke to the cancel-key function. It should be noted that these stroke actions are only examples; in other embodiments, other stroke actions not shown or described may implement the same or other functions.
Thus, when point-reading, the user can perform a stroke action without changing the current pen-holding posture, achieving the purpose of simulating function keys and executing a given function, which yields a natural and smooth user experience. For example, when a user is reading spoken language (foreign or Chinese) with the pen and wants to enter the spoken-language evaluation function it provides, an up stroke or down stroke (or a left stroke, right stroke, etc.) can be executed to enter the corresponding evaluation function.
In one embodiment, step S103 includes:
judging whether the stroke action is a predetermined gesture and whether the stroke action is within the range of a character;
if the stroke action is a predetermined gesture and is within the range of a character, obtaining a predetermined function for that character.
In this embodiment, both the stroke action and its position need to be sensed. If the stroke action is a predetermined gesture and lies within the range of a character, a predetermined function for that character is obtained.
Here, a stroke action being within a character's range may mean that the stroke encloses the character, for example a circle, a square, or another pattern, which may be a closed or a non-closed stroke. "Surrounding" a character can further be understood as enclosing either the entire character or a large part of its area. In addition, a stroke action within a character's range may carry other meanings, as explained in subsequent embodiments.
For example, a touch-and-talk book may provide multiple pronunciations, such as Chinese, English, male voice, female voice, or dialect, and these can be switched conveniently by embodiments of the invention: a traditional point-reading operation (i.e., a point-reading gesture) produces normal sound, while stroking a small circle around a character produces sound in the alternative mode. Besides alternative pronunciation, other environment-aware intelligent functions can be realized through different stroke actions and the positions where they occur.
That is, the stroke action of the invention is combined with position judgment, and the same stroke action operated at different positions on the touch-and-talk book realizes different functions. In other words, the invention can judge the reading position of the pen on the book, so that even the same gesture operation (stroke action) can express different meanings (i.e., realize different functions).
In an embodiment, obtaining a predetermined function for the character if the stroke action is a predetermined gesture within the character's range includes:
if the stroke action is a circle and it encloses a predetermined range of the character, obtaining a designated pronunciation function for the character;
if the stroke action is an underline and it is located below a character, obtaining a translation function for the character.
In the embodiment of the invention, the stroke action being within a character's range means either that the stroke encloses a predetermined range of the character, or that the stroke is located below the character.
If the stroke action is a circle and encloses a predetermined range of the character, a designated pronunciation function for the character is obtained, such as English pronunciation. Of course, the circle here is a broad concept: it may be a closed figure, or a figure close to closed, and so on. The function corresponding to the stroke action can also be adjusted.
From the foregoing embodiments, a left or right stroke by itself realizes the corresponding left- or right-key function, whereas in this embodiment the stroke action is combined with the touch position to realize the predetermined function. Specifically, although an underline is also a left or right stroke, it is additionally judged whether the stroke lies below a character at the same time; if so, the translation function for that character is obtained, and the character can be translated when the function is subsequently executed.
In the embodiment of the invention, after the stroke action is recognized and determined to be below a character, the character's position can be obtained and compared with position information edited in advance, so that the character content, such as a word, phrase, or sentence, is obtained, and the corresponding translation operation can then be executed.
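The position comparison described above can be sketched as a lookup against pre-edited layout information for the page. The page layout data, coordinates, and tolerance below are made-up example values, not the patent's actual data format.

```python
# Illustrative lookup: find which word an underline stroke sits beneath,
# using pre-edited bounding boxes for the page. Coordinates in mm,
# y increasing downward from the top of the page (assumption).
PAGE_LAYOUT = [
    # (x_min, y_min, x_max, y_max, word)
    (10, 10, 40, 20, "apple"),
    (50, 10, 90, 20, "banana"),
]

def word_under_stroke(x, y, tolerance=8):
    """Return the word whose box contains (x, y), allowing the point to
    sit slightly below the box (where an underline is drawn)."""
    for x0, y0, x1, y1, word in PAGE_LAYOUT:
        if x0 <= x <= x1 and y0 <= y <= y1 + tolerance:
            return word
    return None

print(word_under_stroke(25, 24))  # underline drawn just below "apple"
```

Once the word is recovered from the layout data, the translation function can be applied to it; no OCR of the page image is needed, matching the distinction from scanning pens drawn below.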
In the prior art, scanning pens with a translation function exist, but their principle is to slide-scan a word or sentence on the page, stitch the images captured by the tip camera, and perform OCR (optical character recognition) to obtain the scanned word or sentence and translate it.
The principle of the embodiment of the invention is completely different from that of a scanning pen: the scanning pen recognizes text in the stitched image via OCR and then translates it, whereas the embodiment of the invention recognizes the stroke action and its position on the touch-and-talk book, compares that position with position information edited in advance, and thereby looks up the corresponding character content to translate.
At the same time, compared with a scanning pen, the touch-and-talk pen of the embodiment of the invention can realize more flexible functions, for example more fine-grained operations such as circling words and check-marking words.
For example, a translation function may be set for a plain underline, while an underline plus a circle around a word may be defined as a separate stroke action: if the user performs it, it indicates special attention to that content, which can then be added to a list of key words or sent to a companion mobile phone app, where a dedicated word list stores the key content for study and memorization.
In addition, a stroke action may be defined as an underline plus a check mark, or a separate check mark: if the user performs it, it indicates the word has been memorized and understood, and the content can be added to a completed list or sent to the companion app, where a dedicated word book stores the memorized content for review.
In one embodiment, step S103 includes:
obtaining the last stroke action before the current stroke action;
judging whether the current stroke action is a predetermined gesture and whether the last stroke action and the current stroke action are related actions; if the current stroke action is a predetermined gesture and the two are related, obtaining a predetermined function related to the last stroke action according to the gesture and the stored correspondence; if the current stroke action is a predetermined gesture but the two are not related, obtaining a predetermined function unrelated to the last stroke action according to the gesture and the stored correspondence.
In this embodiment, if too many stroke actions are defined, the user must memorize both the stroke actions and their corresponding functions, which increases the user's burden. The embodiment of the invention can instead judge the intended operation from the context.
For example, suppose a check mark needs to be interpreted: if the previous stroke action was an up-down or left-right slide imitating a function-key operation, the check mark can be interpreted as confirmation of that slide; if there was no preceding slide and the check mark was made on the area of a character, it can be interpreted as the check mark's own function, such as translation or recording.
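The context-dependent interpretation of a check mark can be sketched as follows; the action names and the "translate" default are illustrative assumptions.

```python
# Illustrative context rule: a check mark confirms a preceding slide
# action, or acts on its own when made over a character area.
SLIDES = {"slide_up", "slide_down", "slide_left", "slide_right"}

def interpret_check(previous_action, on_character):
    """previous_action: name of the last stroke action, or None.
    on_character: whether the check mark lies on a character area."""
    if previous_action in SLIDES:
        return "confirm_" + previous_action   # confirms the prior slide
    if on_character:
        return "translate"                    # standalone check on a word
    return "ignored"

print(interpret_check("slide_up", on_character=False))
print(interpret_check(None, on_character=True))
```

The same gesture thus yields different functions depending on the preceding action and position, so fewer distinct gestures need to be memorized.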
In one embodiment, step S103 includes:
judging whether the stroke action is a double-click action according to the interval between taps, and if so, obtaining the function corresponding to the double-click action according to the double-click action and the stored correspondence.
In the embodiment of the invention, a double-click action generally carries a special meaning, so this embodiment singles it out among stroke actions: it determines whether the stroke action is a double-click, and corresponding functions are defined for the double-click in advance.
In this embodiment, whether the stroke action is a double-click is judged from the interval between single taps; for example, if the tap interval is smaller than a preset time threshold, the stroke action is judged to be a double-click, and the function corresponding to the double-click can then be obtained.
In one embodiment, the step S103 includes:
Judge, from the click time interval and the overlap ratio of the click positions, whether the stroke action is a double-click action; if so, acquire the function corresponding to the double-click action according to the double-click action and the corresponding relation.
The previous embodiment determines whether the stroke action is a double-click action from the time interval between the two click operations alone. This embodiment additionally requires the click positions to coincide: only when the click time interval is smaller than a preset time threshold and the overlap ratio of the click positions is larger than a preset overlap threshold is the stroke action determined to be a double-click action and the corresponding function acquired. This prevents misjudgment.
The double-click action can serve many functions. One that matches common usage habits emphasizes the importance of the clicked area and acts on it accordingly. For example, a single click may trigger the normal point-read operation, while a double click indicates that the text or picture at the click position matters to the user: the corresponding character content can be stored or sent to a paired mobile-phone app, and a dedicated word list can be set up in the app to hold that content.
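The combined test (time interval plus position overlap) can be sketched as below. The overlap ratio here is approximated from the distance between the two click centres relative to an assumed pen-tip contact radius; the concrete metric and the threshold values are illustrative assumptions, not specified by the patent.

```python
import math

TIME_THRESHOLD_MS = 300   # assumed preset time threshold
OVERLAP_THRESHOLD = 0.6   # assumed preset overlap-ratio threshold
CONTACT_RADIUS = 5.0      # assumed pen-tip contact radius (pixels)

def overlap_ratio(p1, p2, radius=CONTACT_RADIUS):
    """1.0 when the clicks coincide, falling to 0.0 at twice the contact radius."""
    d = math.dist(p1, p2)
    return max(0.0, 1.0 - d / (2 * radius))

def is_double_click(t1_ms, p1, t2_ms, p2):
    """Double click requires BOTH a short interval AND overlapping positions."""
    return ((t2_ms - t1_ms) < TIME_THRESHOLD_MS
            and overlap_ratio(p1, p2) > OVERLAP_THRESHOLD)

print(is_double_click(0, (10, 10), 200, (11, 10)))  # True: close in time and space
print(is_double_click(0, (10, 10), 200, (40, 10)))  # False: positions too far apart
```

Requiring both conditions is what prevents the misjudgment mentioned above: two quick taps on different words no longer count as one double click.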
In one embodiment, the step S103 includes:
Judge whether the stroke action is a preset gesture and whether the position of the stroke action is within a preset area; if both conditions hold, acquire the mode-switching function according to the preset gesture and the corresponding relation;
the predetermined area includes one or more of the upper-left, upper-right, lower-left, and lower-right corners of the point-reading book.
The content of point-reading books varies, but typically the upper-left, upper-right, lower-left, and lower-right corners of a page carry little practical or important content. The embodiment of the invention can therefore use one or more of these four positions as a special function area, preferably the upper-left corner. When the user performs a predetermined gesture there, such as a single click or a double click, the mode-switching function is triggered. For example, a double-click action in this area can switch between the normal text point-reading mode and a word-learning mode.
Of course, in the embodiment of the present invention, multiple groups of gestures and corresponding functions may be defined for different modes, and the gestures in different modes may be the same or different. The same stroke action executed in different modes can thus produce completely different functions. This avoids burdening the user with too many gestures: the user only needs to memorize a few common gestures, and the function achieved depends on the current mode.
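The per-mode gesture tables and the corner-region mode switch described above can be sketched as follows. The mode names, gesture names, function strings, and the corner-region test are all illustrative assumptions.

```python
PAGE_CORNER = 100  # assumed side length of the top-left special region (pixels)

# Separate gesture -> function tables per mode: the same gesture maps to
# different functions depending on the current mode.
MODE_GESTURES = {
    "point_read":    {"single_click": "read_aloud",      "circle": "spell_word"},
    "word_learning": {"single_click": "show_definition", "circle": "add_to_wordbook"},
}

def in_top_left_corner(x, y):
    return x < PAGE_CORNER and y < PAGE_CORNER

def handle_gesture(mode, gesture, x, y):
    """Return (new_mode, function): a double click in the corner switches modes."""
    if gesture == "double_click" and in_top_left_corner(x, y):
        new_mode = "word_learning" if mode == "point_read" else "point_read"
        return new_mode, "switch_mode"
    return mode, MODE_GESTURES[mode].get(gesture)

mode = "point_read"
mode, fn = handle_gesture(mode, "double_click", 50, 50)
print(mode, fn)                                        # word_learning switch_mode
print(handle_gesture(mode, "single_click", 500, 700))  # ('word_learning', 'show_definition')
```

Keeping one table per mode is what lets a small set of gestures cover many functions, as the paragraph above notes.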
Referring to fig. 4, an embodiment of the present invention further provides a stylus 400, which includes:
A storage unit 401, configured to associate a predetermined gesture with a predetermined function in advance and store a correspondence, where the predetermined gesture includes a point-read gesture and a non-point-read gesture, and the predetermined function includes a point-read function and a non-point-read function;
A detection unit 402, configured to detect a stroke action performed by the stylus on the reading book;
a judging unit 403, configured to judge whether the stroke action is a predetermined gesture, and if yes, acquire a predetermined function according to the predetermined gesture and the corresponding relationship;
and an execution unit 404, configured to control the stylus to execute the predetermined function.
Specific technical details of the above stylus are described in the method embodiments above and are not repeated here.
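A compact sketch of how the four units (storage unit 401, detection unit 402, judging unit 403, execution unit 404) might cooperate in software. All class, method, and gesture names are illustrative assumptions, not part of the patent.

```python
class StorageUnit:  # cf. storage unit 401
    def __init__(self):
        # Predetermined gesture -> predetermined function correspondence,
        # associated and stored in advance (example entries only).
        self.table = {"single_click": "read_aloud", "underline": "translate"}

class Stylus:
    def __init__(self, storage):
        self.storage = storage

    def detect(self, raw_event):   # cf. detection unit 402
        return raw_event["gesture"]

    def judge(self, gesture):      # cf. judging unit 403
        return self.storage.table.get(gesture)

    def execute(self, raw_event):  # cf. execution unit 404
        fn = self.judge(self.detect(raw_event))
        return fn or "ignore"

pen = Stylus(StorageUnit())
print(pen.execute({"gesture": "underline"}))  # translate
print(pen.execute({"gesture": "zigzag"}))     # ignore
```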
The embodiment of the invention also provides a stylus comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the gesture control method described above when executing the computer program.
In this description, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the others, and the parts that are the same or similar across embodiments may be referred to one another. It will be apparent to those skilled in the art that various modifications and adaptations of the invention can be made without departing from its principles, and these modifications and adaptations are intended to fall within the scope of the invention as defined by the following claims.
It should also be noted that in this specification, relational terms such as first and second are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between them. Moreover, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Claims (4)
1. A stylus-based gesture control method, characterized by comprising the following steps:
Associating a preset gesture with a preset function in advance and storing a corresponding relation, wherein the preset gesture comprises a point reading gesture and a non-point reading gesture, and the preset function comprises a point reading function and a non-point reading function;
detecting a stroke action executed by the stylus on a point-reading book;
Judging whether the stroke action is a preset gesture, if so, acquiring a preset function according to the preset gesture and the corresponding relation;
Controlling the touch pen to execute a preset function;
Judging whether the stroke action is a preset gesture or not, if so, acquiring a preset function according to the preset gesture and the corresponding relation, wherein the judging comprises the following steps:
Judging whether the stroke action is a preset gesture or not and whether the stroke action is in a character range or not;
If the stroke action is a preset gesture and the stroke action is in a character range, acquiring a preset function aiming at the character;
If the stroke action is a predetermined gesture and the stroke action is within a character range, acquiring a predetermined function for the character, including:
if the stroke action is a circle and the stroke action surrounds a preset range of the character, acquiring a designated pronunciation function aiming at the character;
If the stroke action is an underline and the stroke action is located below a character, acquiring a translation function for the character: identifying the stroke action and determining that it is located below the character, acquiring the position of the character, comparing the position information with position information edited in advance to acquire the content of the character, and executing the corresponding translation operation;
Judging whether the stroke action is a preset gesture, if so, acquiring a preset function according to the preset gesture and the corresponding relation, and further comprising:
acquiring the last stroke action before the current stroke action;
judging whether the current stroke action is a preset gesture and whether the last stroke action and the current stroke action are related actions, if the current stroke action is the preset gesture and the last stroke action and the current stroke action are related actions, acquiring a preset function related to the last stroke action according to the preset gesture and the corresponding relation, and if the current stroke action is the preset gesture and the last stroke action and the current stroke action are not related actions, acquiring a preset function not related to the last stroke action according to the preset gesture and the corresponding relation;
Judging whether the stroke action is a preset gesture, if so, acquiring a preset function according to the preset gesture and the corresponding relation, and further comprising: judging whether the stroke action is double-click action according to the click time interval and the coincidence degree of the click position, if the click time interval is smaller than a preset time threshold and the coincidence degree of the click position is larger than the preset coincidence degree threshold, determining that the stroke action is double-click action, and acquiring a function corresponding to the double-click action according to the double-click action and the corresponding relation;
The detecting of the stroke action executed by the stylus on the point-reading book comprises: comparing the interval time between strokes with a predetermined threshold; if the interval time is greater than the threshold, determining that the stroke action is a one-stroke action; if the interval time is smaller than the threshold, determining that the stroke action is a two-stroke or multi-stroke action;
Judging whether the stroke action is a preset gesture, and if so, acquiring a preset function according to the preset gesture and the corresponding relation, further comprises: judging whether the stroke action is a preset gesture and whether the position of the stroke action is in a preset area, and if both hold, acquiring the mode-switching function; and the preset area is the upper-left corner of the point-reading book, and the stroke action is a double-click action, so as to switch between the point-reading mode and the word-learning mode.
2. The stylus-based gesture control method according to claim 1, wherein detecting the stroke action performed by the stylus on the point-reading book comprises:
capturing, by a camera built into the stylus, a stroke image on the point-reading book, and recognizing the stroke image to acquire the stroke action;
and/or detecting, by a motion sensor built into the stylus, the stroke action of the stylus on the point-reading book.
3. A stylus, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the gesture control method of any one of claims 1 to 2 when executing the computer program.
4. A point-reading pen, characterized by comprising:
the storage unit is used for associating a preset gesture with a preset function in advance and storing a corresponding relation, wherein the preset gesture comprises a point-reading gesture and a non-point-reading gesture, and the preset function comprises a point-reading function and a non-point-reading function;
the detection unit is used for detecting the stroke action executed by the point reading pen on the point reading book;
The judging unit is used for judging whether the stroke action is a preset gesture, and if so, acquiring a preset function according to the preset gesture and the corresponding relation;
the executing unit is used for controlling the touch-and-talk pen to execute a preset function;
judging whether the stroke action is a preset gesture or not, if so, acquiring a preset function according to the preset gesture and the corresponding relation, wherein the judging comprises the following steps: judging whether the stroke action is a preset gesture or not and whether the stroke action is in a character range or not; if the stroke action is a preset gesture and the stroke action is in a character range, acquiring a preset function aiming at the character;
if the stroke action is a predetermined gesture and the stroke action is within a character range, acquiring a predetermined function for the character comprises: if the stroke action is a circle and the stroke action surrounds a predetermined range of the character, acquiring a designated pronunciation function for the character; if the stroke action is an underline and the stroke action is located below a character, acquiring a translation function for the character: identifying the stroke action and determining that it is located below the character, acquiring the position of the character, comparing the position information with position information edited in advance to acquire the content of the character, and executing the corresponding translation operation;
Judging whether the stroke action is a preset gesture, if so, acquiring a preset function according to the preset gesture and the corresponding relation, and further comprising: acquiring the last stroke action before the current stroke action;
judging whether the current stroke action is a preset gesture and whether the last stroke action and the current stroke action are related actions, if the current stroke action is the preset gesture and the last stroke action and the current stroke action are related actions, acquiring a preset function related to the last stroke action according to the preset gesture and the corresponding relation, and if the current stroke action is the preset gesture and the last stroke action and the current stroke action are not related actions, acquiring a preset function not related to the last stroke action according to the preset gesture and the corresponding relation;
Judging whether the stroke action is a preset gesture, if so, acquiring a preset function according to the preset gesture and the corresponding relation, and further comprising: judging whether the stroke action is a double-click action or not according to the click time interval, and if so, acquiring a function corresponding to the double-click action according to the double-click action and the corresponding relation; or judging whether the stroke action is double-click action according to the click time interval and the coincidence ratio of the click position, and if so, acquiring a function corresponding to the double-click action according to the double-click action and the corresponding relation;
The detecting of the stroke action executed by the point-reading pen on the point-reading book comprises: comparing the interval time between strokes with a predetermined threshold; if the interval time is greater than the threshold, determining that the stroke action is a one-stroke action; if the interval time is smaller than the threshold, determining that the stroke action is a two-stroke or multi-stroke action;
Judging whether the stroke action is a preset gesture, and if so, acquiring a preset function according to the preset gesture and the corresponding relation, further comprises: judging whether the stroke action is a preset gesture and whether the position of the stroke action is in a preset area, and if both hold, acquiring the mode-switching function; and the preset area is the upper-left corner of the point-reading book, and the stroke action is a double-click action, so as to switch between the point-reading mode and the word-learning mode.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010837741.0A CN111949132B (en) | 2020-08-19 | 2020-08-19 | Gesture control method based on touch pen and touch pen |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111949132A CN111949132A (en) | 2020-11-17 |
CN111949132B true CN111949132B (en) | 2024-09-13 |
Family
ID=73358510
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010837741.0A Active CN111949132B (en) | 2020-08-19 | 2020-08-19 | Gesture control method based on touch pen and touch pen |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111949132B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1955889A (en) * | 2005-10-25 | 2007-05-02 | 尤卫建 | Pen-type character input operation method and device |
CN103809791A (en) * | 2012-11-12 | 2014-05-21 | 广东小天才科技有限公司 | Multifunctional point reading method and system |
CN108052938A (en) * | 2017-12-28 | 2018-05-18 | 广州酷狗计算机科技有限公司 | A kind of point-of-reading device |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3486459B2 (en) * | 1994-06-21 | 2004-01-13 | キヤノン株式会社 | Electronic information equipment and control method thereof |
TWI301590B (en) * | 2005-12-30 | 2008-10-01 | Ibm | Handwriting input method, apparatus, system and computer recording medium with a program recorded thereon of capturing video data of real-time handwriting strokes for recognition |
CN102455869B (en) * | 2011-09-29 | 2014-10-22 | 北京壹人壹本信息科技有限公司 | Method and device for editing characters by using gestures |
CN103186268A (en) * | 2011-12-29 | 2013-07-03 | 盛乐信息技术(上海)有限公司 | Handwriting input method and system |
JP2014186691A (en) * | 2013-03-25 | 2014-10-02 | Toshiba Corp | Information display apparatus |
US10429954B2 (en) * | 2017-05-31 | 2019-10-01 | Microsoft Technology Licensing, Llc | Multi-stroke smart ink gesture language |
CN208216358U (en) * | 2018-02-05 | 2018-12-11 | 武汉商贸职业学院 | A kind of English teaching Multifunctional template ruler pen |
2020-08-19: CN application CN202010837741.0A — patent CN111949132B, status Active
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |