CN108536858A - Method and apparatus for writing - Google Patents
Method and apparatus for writing
- Publication number
- CN108536858A (publication number) · CN201810350093.9A (application number)
- Authority
- CN
- China
- Prior art keywords
- characters
- prompt
- written
- written content
- determining
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V30/00—Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
- G06V30/10—Character recognition
- G06V30/32—Digital ink
- G06V30/14—Image acquisition
- G06V30/142—Image acquisition using hand-held instruments; Constructional details of the instruments
- G06V30/1423—Image acquisition using hand-held instruments; the instrument generating sequences of position coordinates corresponding to handwriting
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present disclosure relates to a method and apparatus for writing, and belongs to the field of electronic technologies. The method includes: during writing, capturing, by a shooting component in a smart pen, an image containing the currently written content; processing the image to obtain the currently written content contained in the image; determining, according to the obtained currently written content, prompt text that matches the currently written content; and displaying the prompt text. With the present disclosure, writing efficiency can be improved.
Description
Technical Field
The present disclosure relates to the field of electronic technologies, and in particular, to a method and an apparatus for writing.
Background
In daily life, people often need to write on paper, and while writing they frequently forget how to write a particular character. In that case, a person typically looks up the character on a computer or mobile phone and then continues writing.
In the course of making the present disclosure, the inventors found that at least the following problem exists:
with the above approach, whenever someone forgets how to write a character, writing can only continue after the character has been looked up on a computer or mobile phone, so writing efficiency is low.
Disclosure of Invention
To overcome the problem of low writing efficiency in the related art, the present disclosure provides a method and apparatus for writing. The technical scheme is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided a method of writing, the method comprising:
in the writing process, shooting an image containing the current written content through a shooting component in the intelligent pen;
processing the image to obtain the current written content contained in the image;
determining prompt characters matched with the current written content according to the acquired current written content;
and displaying the prompt words.
Optionally, the determining, according to the obtained content of the current writing, a prompt text matched with the content of the current writing includes:
and when the currently written content is detected to be pinyin, determining prompt characters matched with the pinyin.
Optionally, the determining, according to the obtained content of the current writing, a prompt text matched with the content of the current writing includes:
and when the current written content is detected to be the wrongly written characters in the wrongly written character library, determining the correct characters corresponding to the current written wrongly written characters according to the corresponding relation between the prestored wrongly written characters and the correct characters, and using the correct characters as prompt characters matched with the current written content.
Optionally, the determining, according to the obtained content of the current writing, a prompt text matched with the content of the current writing includes:
and when the current written content is detected to be a part of the target character in the complex character library, determining the target character as the prompt character matched with the current written content.
Optionally, the determining, according to the obtained content of the current writing, a prompt text matched with the content of the current writing includes:
determining words containing the currently written characters according to the currently written characters;
and determining the characters behind the currently written characters in the words as prompt characters matched with the currently written contents.
Optionally, the displaying the prompt text includes:
controlling a projector to project the prompt words; or,
and displaying the prompt words through a screen in the intelligent pen.
According to a second aspect of embodiments of the present disclosure, there is provided an apparatus for writing, the apparatus comprising:
the shooting module is used for shooting an image containing the current written content through a shooting component in the intelligent pen in the writing process;
the acquisition module is used for processing the image and acquiring the current written content contained in the image;
the determining module is used for determining prompt characters matched with the current written content according to the obtained current written content;
and the display module is used for displaying the prompt words.
Optionally, the determining module is configured to:
and when the currently written content is detected to be pinyin, determining prompt characters matched with the pinyin.
Optionally, the determining module is configured to:
and when the current written content is detected to be the wrongly written characters in the wrongly written character library, determining the correct characters corresponding to the current written wrongly written characters according to the corresponding relation between the prestored wrongly written characters and the correct characters, and using the correct characters as prompt characters matched with the current written content.
Optionally, the determining module is configured to:
and when the current written content is detected to be a part of the target character in the complex character library, determining the target character as the prompt character matched with the current written content.
Optionally, the determining module is configured to:
determining words containing the currently written characters according to the currently written characters;
and determining the characters behind the currently written characters in the words as prompt characters matched with the currently written contents.
Optionally, the display module is configured to:
controlling a projector to project the prompt words; or,
and displaying the prompt words through a screen in the intelligent pen.
According to a third aspect of embodiments of the present disclosure, there is provided a terminal comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by the processor to implement the method of writing according to the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by a processor to implement the method of writing as described in the first aspect.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
In the embodiments of the present disclosure, during writing, the smart pen may acquire the currently written content, determine the prompt text matching that content based on what was acquired, and then display the determined prompt text. Therefore, when writing with the smart pen, the user can write on the basis of the displayed prompt text instead of looking up the forgotten character on a computer or mobile phone, so writing efficiency can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. In the drawings:
FIG. 1 is a system framework diagram according to an exemplary embodiment;
FIG. 2 is a flow chart illustrating a method of writing according to an exemplary embodiment;
FIG. 3 is a schematic structural diagram of a writing apparatus according to an exemplary embodiment;
FIG. 4 is a schematic structural diagram of a smart pen according to an exemplary embodiment.
With the foregoing drawings in mind, certain embodiments of the disclosure have been shown and described in more detail below. These drawings and written description are not intended to limit the scope of the disclosed concepts in any way, but rather to illustrate the concepts of the disclosure to those skilled in the art by reference to specific embodiments.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
An exemplary embodiment of the present disclosure provides a method of writing, which may be applied to a smart pen with a text-prompt function. As shown in FIG. 1, the smart pen may include a processor, a memory, a display component, a shooting component, and the like. The processor may be a CPU (Central Processing Unit) or the like and may be used to obtain the currently written content, determine prompt text matching the currently written content, and so on. The memory may be a RAM (Random Access Memory), a Flash memory, or the like, and may be used to store received data, data required by processing, and data generated during processing, such as a wrong-character library, a complex-character library, and the correspondence between wrongly written characters and correct characters. The display component may be a screen (for example a touch screen) used to display the prompt text, or a projector used to project the prompt text. The shooting component may be a camera or the like; a technician may position it according to its shooting range, that is, at a location from which the currently written content can be captured, for example near the pen tip of the smart pen or in the cap of the smart pen. The smart pen may further include a transceiver, an audio output component, an audio input component, and the like. The transceiver may be used for data transmission with other devices, for example communication with a computer or a mobile phone, and may include an antenna, a matching circuit, a modem, and the like. The audio output component may be a speaker, earphones, or the like, and the audio input component may be a microphone or the like.
The processing flow shown in FIG. 2 will be described in detail below in connection with specific embodiments; the content may be as follows:
in step 201, during writing, an image containing the current written content is shot by a shooting component in the smart pen.
In implementation, people often need to write on paper in daily life, and a user who wants to write on paper can do so with the smart pen. Specifically, the smart pen may be provided with a power button; when the user wants to use the smart pen, the user can turn on the power button to trigger the smart pen into its working state (when switched off, the pen may still be able to write but does not determine prompt text), and the user can then write with the smart pen. During writing, the shooting component in the smart pen can capture an image containing the currently written content; if the position of the shooting component is movable, the user can move it, before writing, to a position from which the currently written content can be captured. In addition, during writing, the shooting component may shoot in real time and continuously acquire images containing the currently written content. Alternatively, the shooting component of the smart pen may capture an image each time a preset capture period elapses during writing. Alternatively, the smart pen may capture an image through the shooting component whenever it detects a shooting trigger event during writing.
Optionally, based on different shooting trigger events, the processing manner of step 201 may be various, and several feasible processing manners are given below:
In the first mode, during writing, when a selection instruction for a shooting button is detected, an image containing the currently written content is captured by the shooting component in the smart pen.
In implementation, a shooting trigger button may be provided on the smart pen. When the user encounters a character he or she cannot write during writing, the user can press the shooting trigger button; the smart pen then detects the selection instruction for the shooting button and captures, through its shooting component, an image containing the currently written content.
In the second mode, during writing, when it is detected that writing has been continuously interrupted for a preset duration threshold, the shooting component in the smart pen captures an image containing the currently written content.
In implementation, a preset duration threshold may be pre-stored in the smart pen. During writing, the smart pen can detect its own attitude information and, from that information, determine whether it is being held. When the smart pen is being held and it detects that writing has been continuously interrupted for the preset duration threshold, it can capture an image containing the currently written content through its shooting component.
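The following is a minimal Python sketch of how the trigger conditions above could drive a capture loop. The `pen` object and its methods (`powered_on`, `stroke_detected`, `button_pressed`, `is_held`, `capture_image`) are hypothetical driver handles assumed for illustration only; they are not part of the disclosure.

```python
import time

CAPTURE_PERIOD_S = 0.5    # assumed periodic-capture interval
PAUSE_THRESHOLD_S = 3.0   # assumed "writing continuously interrupted" threshold


def capture_loop(pen, on_image):
    """Poll a (hypothetical) pen driver and capture an image whenever a trigger fires."""
    last_stroke = time.monotonic()
    last_capture = 0.0
    while pen.powered_on():
        now = time.monotonic()
        if pen.stroke_detected():               # nib is moving: writing continues
            last_stroke = now
        # Mode 1: explicit capture-button press
        button_trigger = pen.button_pressed("capture")
        # Mode 2: pen is held but writing has paused longer than the threshold
        pause_trigger = pen.is_held() and (now - last_stroke) >= PAUSE_THRESHOLD_S
        # Periodic capture is the further alternative mentioned in the text
        periodic_trigger = (now - last_capture) >= CAPTURE_PERIOD_S
        if button_trigger or pause_trigger or periodic_trigger:
            on_image(pen.capture_image())       # hand the frame to step 202
            last_capture = now
        time.sleep(0.05)
```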
In step 202, the image is processed to obtain the current written content contained in the image.
In implementation, whenever an image containing the currently written content is captured by the shooting component in the smart pen, the smart pen can perform image recognition on the image to obtain the currently written content it contains. The currently written content can take many forms: it may be pinyin, a foreign-language word, a Chinese character, or part of a Chinese character.
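The disclosure does not name a specific recognition algorithm, so the sketch below simply delegates to an off-the-shelf OCR engine as a stand-in (pytesseract with the Chinese language pack is assumed to be installed; an on-device, handwriting-specific recognizer would be a more realistic choice).

```python
from PIL import Image
import pytesseract  # assumes Tesseract with the chi_sim language pack is installed


def recognize_written_content(image_path: str) -> str:
    """Run OCR on the captured frame and return the recognized written content.

    The patent only states that the image is "processed" to obtain the written
    content; any handwriting-capable recognizer could be substituted here.
    """
    image = Image.open(image_path)
    text = pytesseract.image_to_string(image, lang="chi_sim")
    return text.strip()
```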
In step 203, according to the obtained current written content, determining a prompt character matched with the current written content.
In implementation, after the smart pen obtains the currently written content, it may first determine the language to which that content belongs. Specifically, the smart pen may pre-store the constituent elements of the characters of each language, for example the components and pinyin elements of Chinese, or the 26 letters of English. After obtaining the currently written content, the smart pen can determine the corresponding language from the constituent elements contained in that content, and then determine the prompt text matching the currently written content in the database corresponding to the determined language. For example, when the currently written content is "高" (rendered "high" in this translation), the smart pen may determine that the matching prompt text is "高兴" ("happy").
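A compact sketch of this two-step lookup: language detection from the written content, then a lookup in a per-language prompt database. The element check and the tiny dictionaries are illustrative assumptions standing in for the databases the pen would actually ship.

```python
# Hypothetical per-language prompt databases; real ones would be full dictionaries.
PROMPT_DB = {
    "zh": {"高": ["高兴"]},           # e.g. written 高 -> prompt 高兴 ("happy")
    "en": {"hel": ["hello", "help"]},
}


def detect_language(written: str) -> str:
    """Pick a language from the constituent elements of the written content.

    As a stand-in for the stored element tables (Chinese components / pinyin
    elements, the 26 English letters), any CJK codepoint is treated as Chinese
    and anything else as English.
    """
    if any("\u4e00" <= ch <= "\u9fff" for ch in written):
        return "zh"
    return "en"


def find_prompts(written: str) -> list[str]:
    """Look up prompt candidates in the database for the detected language."""
    return PROMPT_DB.get(detect_language(written), {}).get(written, [])


# find_prompts("高") -> ['高兴']
```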
Optionally, based on different current written contents, the processing manner of step 203 may be various, and several feasible processing manners are given as follows:
in the first mode, when the currently written content is detected to be pinyin, the prompt characters matched with the pinyin are determined.
In implementation, when the user forgets how to write a character while writing with the smart pen, the user can write the character's pinyin on the paper with the smart pen. The smart pen can then detect that the currently written content is pinyin and determine the prompt text matching that pinyin, for example by taking the characters corresponding to the pinyin as the prompt text. There may be multiple prompt candidates, and a candidate may be a single character or a word.
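A minimal reverse-lookup sketch for this mode: characters and words are indexed by their pinyin so that a written pinyin string maps back to candidate prompts. The `pypinyin` library and the tiny vocabulary are assumptions for illustration; any pinyin dictionary would serve.

```python
from collections import defaultdict

from pypinyin import lazy_pinyin  # assumed available; any pinyin library would do

# Tiny stand-in vocabulary; a real pen would ship a full dictionary.
VOCAB = ["高", "糕", "膏", "高兴", "蛋糕"]


def build_pinyin_index(vocab):
    """Map a pinyin string (e.g. 'gao') to all vocabulary entries it spells."""
    index = defaultdict(list)
    for word in vocab:
        index["".join(lazy_pinyin(word))].append(word)
    return index


PINYIN_INDEX = build_pinyin_index(VOCAB)


def prompts_for_pinyin(written_pinyin: str) -> list[str]:
    # Both single characters and words may be returned, as the text notes.
    return PINYIN_INDEX.get(written_pinyin.lower(), [])


# prompts_for_pinyin("gao") -> ['高', '糕', '膏']
```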
In the second mode, when the currently written content is detected to be a wrongly written character in the wrong-character library, the correct character corresponding to it is determined from the pre-stored correspondence between wrongly written characters and correct characters, and that correct character is used as the prompt text matching the currently written content.
In implementation, the smart pen may pre-store a wrong-character library and the correspondence between each wrongly written character and its correct form. The wrong-character library may contain single wrongly written characters and/or words containing a wrongly written character (for example a word such as 兴高采烈 mis-written with 彩 in place of 采 — rendered "happy and happy" and "colorful" in this translation), and the correct form corresponding to a wrongly written character may likewise be a single character or a word. When the smart pen detects that the currently written content is a wrongly written character in the library, it can look up the corresponding correct character in the pre-stored correspondence and determine that correct character as the prompt text for the currently written content. In addition, the smart pen can communicate with a server so that the correspondence can be updated during use.
In addition, during writing, after obtaining the currently written content, if the smart pen detects that a character has been completed but cannot determine which character it is, it can mark the written content as a wrongly written character. For example, if the user writes a character and the smart pen, after acquiring it, cannot match it to any known character, the smart pen may mark that written character as a wrongly written character. After a wrongly written character is identified, it can be sent to the server; the server can store it in the wrong-character library, determine the correct character corresponding to it, and add the wrongly written character and its correct character to the correspondence.
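A sketch of the lookup and the flag-and-upload path described above. The library contents (the common 采/彩 confusion) and the upload payload are illustrative assumptions; the disclosure does not fix a storage format or a server protocol.

```python
# Known mis-writings (single characters or words) and their corrections.
# The 兴高彩烈 -> 兴高采烈 pair is an illustrative common confusion, not taken
# verbatim from the disclosure.
WRONG_CHAR_LIBRARY = {"兴高彩烈"}
CORRECTIONS = {"兴高彩烈": "兴高采烈"}


def prompt_for_wrong_character(written: str) -> str | None:
    """Return the stored correct form if the written content is a known mis-writing."""
    if written in WRONG_CHAR_LIBRARY:
        return CORRECTIONS.get(written)
    return None


def report_unrecognized(written: str, send_to_server) -> None:
    """A completed character that matches no known character is flagged and uploaded.

    `send_to_server` stands in for the pen-to-server channel; the server is then
    expected to add the character and its correction to the correspondence.
    """
    send_to_server({"suspected_wrong_character": written})
```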
In the third mode, when the currently written content is detected to be part of a target character in the complex-character library, the target character is determined as the prompt text matching the currently written content.
In implementation, a complex-character library may be pre-stored in the smart pen. The library may contain single characters or words that are complicated to write, for example characters whose number of constituent elements exceeds a preset threshold. During writing, when the currently written content is detected to be part of a target character in the complex-character library, that target character can be determined as the prompt text matching the currently written content. In other words, when the smart pen detects that what has been written so far is part of some character, the character to which that partial content belongs can be determined as the prompt text. For example, when the user has written half of the character "囊" (rendered "capsule" in this translation), the smart pen can detect that the written part belongs to "囊" and determine "囊" as the prompt text matching the currently written content.
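A sketch of the partial-match check. Real matching would operate on stroke or component features extracted from the image; here each library entry is reduced to a set of hypothetical component labels so the subset test can be shown in isolation.

```python
# Each complex character is described by a set of hypothetical component labels;
# entries would be characters whose component count exceeds a preset threshold.
COMPLEX_LIBRARY = {
    "囊": {"top", "middle", "bottom"},
}


def prompt_for_partial(written_components: set[str]) -> str | None:
    """Return a complex character whose components include everything written so far."""
    for char, components in COMPLEX_LIBRARY.items():
        if written_components and written_components <= components:
            return char
    return None


# Writing only the upper part of 囊 yields e.g. {"top"} -> prompt "囊".
```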
In the fourth mode, words containing the currently written characters are determined from the currently written characters, and the characters that follow the currently written characters in those words are determined as the prompt text matching the currently written content.
In implementation, during writing the smart pen can also match common collocations for the currently written characters. Specifically, after the currently written characters are obtained, words containing them can be determined, and the characters following the currently written characters in those words can be determined as the prompt text matching the currently written content. For example, if the currently written text is "高" ("high"), the words determined by the smart pen may include "高兴" ("happy"), and the remaining character of that word can be determined as the prompt text matching the currently written content; likewise, if the currently written characters begin a longer word or idiom, the remaining characters of that word can be determined as the prompt text.
In addition, during writing, the shooting component can capture images containing the currently written content in real time, that is, the smart pen can acquire the currently written content in real time and thereby record the characters that have been written with it. Over time, the smart pen can count how often each word has historically been written with it and treat words written more than a preset number of times as frequently used words. In that case, after obtaining the currently written content, the smart pen can judge whether it matches a frequently used word; if it does, the characters following the currently written characters in that word can be determined as the prompt text matching the currently written content.
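A small sketch of this collocation/completion lookup, including the usage counter built from the pen's own writing history. The word list, the counter threshold, and the helper names are assumptions for illustration.

```python
from collections import Counter

# Static word list plus a usage counter fed by the pen's own writing history.
WORD_LIST = ["高兴", "高山", "兴高采烈"]
usage_counter: Counter = Counter()
FREQUENT_THRESHOLD = 5  # assumed "written more than N times" cutoff


def record_written_word(word: str) -> None:
    """Each recognized written word feeds the pen's own usage history."""
    usage_counter[word] += 1


def completion_prompts(written: str) -> list[str]:
    """Return the remainder of any known word that starts with what has been written."""
    frequent = {w for w, n in usage_counter.items() if n > FREQUENT_THRESHOLD}
    candidates = [w for w in WORD_LIST if w.startswith(written) and w != written]
    # Words the user writes often (per the pen's history) are also eligible.
    candidates += [w for w in frequent
                   if w.startswith(written) and w != written and w not in candidates]
    return [w[len(written):] for w in candidates]


# completion_prompts("高") -> ['兴', '山']  (the characters after 高 in 高兴 and 高山)
```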
In step 204, prompt text is displayed.
In implementation, after the smart pen has determined the prompt text matching the currently written content, it can display that prompt text, so that during writing the user can write on the basis of the displayed prompt instead of looking up the character on a computer or mobile phone.
In addition, the smart pen can store the number of times each word has been used; in that case, when multiple prompt candidates are determined, the smart pen can display them in descending order of use count.
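A one-function sketch of the ordering rule: candidates are sorted by a stored per-word use count, most used first. The counts shown are made-up illustrative values, not data from the disclosure.

```python
# Per-word use counts are assumed to be persisted on the pen (here, a plain dict).
USE_COUNTS = {"高兴": 12, "高山": 3, "高粱": 7}


def order_prompts(prompts: list[str]) -> list[str]:
    """Sort candidate prompts by descending stored use count."""
    return sorted(prompts, key=lambda w: USE_COUNTS.get(w, 0), reverse=True)


# order_prompts(["高山", "高粱", "高兴"]) -> ['高兴', '高粱', '高山']
```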
Optionally, based on different display modes, the processing mode of step 204 may be various, and several feasible processing modes are given below:
In the first mode, a projector is controlled to project the prompt text.
In implementation, for the case where the smart pen is provided with a projector whose projection direction can be adjusted, the user can, before writing, adjust the projection direction to one in which the projected prompt text is easy to view. In that case, after determining the prompt text, the smart pen can control the projector to project it.
In the second mode, the prompt text is displayed on a screen in the smart pen.
In implementation, a screen may also be provided in the smart pen; in that case, after determining the prompt text, the smart pen can display the determined prompt text on the screen.
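A sketch of how the two display modes might be dispatched at runtime. The `projector` and `screen` attributes are hypothetical driver handles, since the disclosure only states that one of the two display modes is used.

```python
def show_prompt(pen, text: str) -> None:
    """Route the prompt text to whichever display component the pen has."""
    if getattr(pen, "projector", None) is not None:
        pen.projector.project(text)   # mode 1: project the prompt text
    elif getattr(pen, "screen", None) is not None:
        pen.screen.display(text)      # mode 2: show on the pen's built-in screen
    else:
        raise RuntimeError("no display component available on this pen")
```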
In the embodiments of the present disclosure, during writing, the smart pen may acquire the currently written content, determine the prompt text matching that content based on what was acquired, and then display the determined prompt text. Therefore, when writing with the smart pen, the user can write on the basis of the displayed prompt text instead of looking up the forgotten character on a computer or mobile phone, so writing efficiency can be improved.
Yet another exemplary embodiment of the present disclosure provides a writing apparatus, which may be the above-mentioned smart pen, as shown in fig. 3, and includes:
the shooting module 310 is used for shooting an image containing the current written content through a shooting component in the smart pen in the writing process;
an obtaining module 320, configured to process the image, and obtain currently written content included in the image;
the determining module 330 is configured to determine, according to the obtained content of the current writing, a prompt text matched with the content of the current writing;
and the display module 340 is configured to display the prompt text.
Optionally, the determining module 330 is configured to:
and when the currently written content is detected to be pinyin, determining prompt characters matched with the pinyin.
Optionally, the determining module 330 is configured to:
and when the current written content is detected to be the wrongly written characters in the wrongly written character library, determining the correct characters corresponding to the current written wrongly written characters according to the corresponding relation between the prestored wrongly written characters and the correct characters, and using the correct characters as prompt characters matched with the current written content.
Optionally, the determining module 330 is configured to:
and when the current written content is detected to be a part of the target character in the complex character library, determining the target character as the prompt character matched with the current written content.
Optionally, the determining module 330 is configured to:
determining words containing the currently written characters according to the currently written characters;
and determining the characters behind the currently written characters in the words as prompt characters matched with the currently written contents.
Optionally, the display module 340 is configured to:
controlling a projector to project the prompt words; or,
and displaying the prompt words through a screen in the intelligent pen.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
In the embodiments of the present disclosure, during writing, the smart pen may acquire the currently written content, determine the prompt text matching that content based on what was acquired, and then display the determined prompt text. Therefore, when writing with the smart pen, the user can write on the basis of the displayed prompt text instead of looking up the forgotten character on a computer or mobile phone, so writing efficiency can be improved.
It should be noted that: in the writing device provided in the above embodiment, only the division of the functional modules is taken as an example for illustration, and in practical applications, the function distribution may be completed by different functional modules as needed, that is, the internal structure of the smart pen may be divided into different functional modules to complete all or part of the functions described above. In addition, the device for writing and the method for writing provided by the above embodiment belong to the same concept, and the specific implementation process is described in the method embodiment, which is not described herein again.
The embodiment of the disclosure also shows a structural schematic diagram of a terminal. The terminal may be a smart pen or the like.
Referring to fig. 4, the terminal 400 may include one or more of the following components: processing components 402, memory 404, power components 406, multimedia components 408, audio components 410, input/output (I/O) interfaces 412, sensor components 414, and communication components 416.
The processing component 402 generally controls overall operation of the terminal 400, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing element 402 may include one or more processors 420 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 402 can include one or more modules that facilitate interaction between the processing component 402 and other components. For example, the processing component 402 can include a multimedia module to facilitate interaction between the multimedia component 408 and the processing component 402.
The memory 404 is configured to store various types of data to support operations at the terminal 400. Examples of such data include instructions for any application or method operating on the terminal 400, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 404 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power components 406 provide power to the various components of the terminal 400. The power components 406 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for the terminal 400.
The multimedia component 408 comprises a screen providing an output interface between the terminal 400 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 408 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the terminal 400 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 410 is configured to output and/or input audio signals. For example, the audio component 410 includes a Microphone (MIC) configured to receive external audio signals when the terminal 400 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 404 or transmitted via the communication component 416.
The I/O interface 412 provides an interface between the processing component 402 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 414 includes one or more sensors for providing status assessments of various aspects of the terminal 400. For example, the sensor component 414 can detect an open/closed state of the terminal 400 and the relative positioning of components such as its display and keypad; it can also detect a change in position of the terminal 400 or of one of its components, the presence or absence of user contact with the terminal 400, the orientation or acceleration/deceleration of the terminal 400, and a change in its temperature. The sensor component 414 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 414 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 416 is configured to facilitate communications between the terminal 400 and other devices in a wired or wireless manner. The terminal 400 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 416 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 416 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the terminal 400 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 404 comprising instructions, executable by the processor 420 of the terminal 400 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer readable storage medium in which instructions, when executed by a processor of a smart pen, enable the smart pen to perform a method of writing, the method comprising:
in the writing process, shooting an image containing the current written content through a shooting component in the intelligent pen;
processing the image to obtain the current written content contained in the image;
determining prompt characters matched with the current written content according to the acquired current written content;
and displaying the prompt words.
Optionally, the determining, according to the obtained content of the current writing, a prompt text matched with the content of the current writing includes:
and when the currently written content is detected to be pinyin, determining prompt characters matched with the pinyin.
Optionally, the determining, according to the obtained content of the current writing, a prompt text matched with the content of the current writing includes:
and when the current written content is detected to be the wrongly written characters in the wrongly written character library, determining the correct characters corresponding to the current written wrongly written characters according to the corresponding relation between the prestored wrongly written characters and the correct characters, and using the correct characters as prompt characters matched with the current written content.
Optionally, the determining, according to the obtained content of the current writing, a prompt text matched with the content of the current writing includes:
and when the current written content is detected to be a part of the target character in the complex character library, determining the target character as the prompt character matched with the current written content.
Optionally, the determining, according to the obtained content of the current writing, a prompt text matched with the content of the current writing includes:
determining words containing the currently written characters according to the currently written characters;
and determining the characters behind the currently written characters in the words as prompt characters matched with the currently written contents.
Optionally, the displaying the prompt text includes:
controlling a projector to project the prompt words; or,
and displaying the prompt words through a screen in the intelligent pen.
In the embodiments of the present disclosure, during writing, the smart pen may acquire the currently written content, determine the prompt text matching that content based on what was acquired, and then display the determined prompt text. Therefore, when writing with the smart pen, the user can write on the basis of the displayed prompt text instead of looking up the forgotten character on a computer or mobile phone, so writing efficiency can be improved.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. A method of writing, wherein the method is for a smart pen, the method comprising:
in the writing process, shooting an image containing the current written content through a shooting component in the intelligent pen;
processing the image to obtain the current written content contained in the image;
determining prompt characters matched with the current written content according to the acquired current written content;
and displaying the prompt words.
2. The method according to claim 1, wherein the determining, according to the obtained currently written content, a prompt text matching the currently written content comprises:
and when the currently written content is detected to be pinyin, determining prompt characters matched with the pinyin.
3. The method according to claim 1, wherein the determining, according to the obtained currently written content, a prompt text matching the currently written content comprises:
and when the current written content is detected to be the wrongly written characters in the wrongly written character library, determining the correct characters corresponding to the current written wrongly written characters according to the corresponding relation between the prestored wrongly written characters and the correct characters, and using the correct characters as prompt characters matched with the current written content.
4. The method according to claim 1, wherein the determining, according to the obtained currently written content, a prompt text matching the currently written content comprises:
and when the current written content is detected to be a part of the target character in the complex character library, determining the target character as the prompt character matched with the current written content.
5. The method according to claim 1, wherein the determining, according to the obtained currently written content, a prompt text matching the currently written content comprises:
determining words containing the currently written characters according to the currently written characters;
and determining the characters behind the currently written characters in the words as prompt characters matched with the currently written contents.
6. The method according to any one of claims 1-5, wherein the displaying the prompt text comprises:
controlling a projector to project the prompt words; or,
and displaying the prompt words through a screen in the intelligent pen.
7. A device for writing, the device being for a smart pen, the device comprising:
the shooting module is used for shooting an image containing the current written content through a shooting component in the intelligent pen in the writing process;
the acquisition module is used for processing the image and acquiring the current written content contained in the image;
the determining module is used for determining prompt characters matched with the current written content according to the obtained current written content;
and the display module is used for displaying the prompt words.
8. The apparatus of claim 7, wherein the determining module is configured to:
and when the currently written content is detected to be pinyin, determining prompt characters matched with the pinyin.
9. A smart pen comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes or set of instructions, the at least one instruction, the at least one program, set of codes or set of instructions being loaded and executed by the processor to implement a method of writing as claimed in any one of claims 1 to 6.
10. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement a method of writing as claimed in any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810350093.9A CN108536858A (en) | 2018-04-18 | 2018-04-18 | Method and apparatus for writing |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810350093.9A CN108536858A (en) | 2018-04-18 | 2018-04-18 | Method and apparatus for writing |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108536858A true CN108536858A (en) | 2018-09-14 |
Family
ID=63477792
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810350093.9A Pending CN108536858A (en) | 2018-04-18 | 2018-04-18 | Method and apparatus for writing |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108536858A (en) |
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120099147A1 (en) * | 2010-10-21 | 2012-04-26 | Yoshinori Tanaka | Image Forming Apparatus, Data Processing Program, Data Processing Method, And Electronic Pen |
CN103576989A (en) * | 2013-11-01 | 2014-02-12 | 北京汉神科创文化发展有限公司 | Writing based human-computer interaction display system and method |
CN103646582A (en) * | 2013-12-04 | 2014-03-19 | 广东小天才科技有限公司 | Method and device for prompting writing errors |
CN103778818A (en) * | 2014-01-20 | 2014-05-07 | 广东小天才科技有限公司 | Method and device for prompting writing errors |
CN103903491A (en) * | 2014-02-14 | 2014-07-02 | 广东小天才科技有限公司 | Method and device for realizing writing check |
CN104866216A (en) * | 2014-02-24 | 2015-08-26 | 联想(北京)有限公司 | Information processing method and intelligent pen |
CN107885345A (en) * | 2017-10-17 | 2018-04-06 | 深圳市金立通信设备有限公司 | A kind of method, terminal and computer-readable medium for aiding in amendment word |
CN107798322A (en) * | 2017-11-17 | 2018-03-13 | 深圳市极联信息科技有限公司 | A kind of smart pen |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111081105A (en) * | 2019-07-17 | 2020-04-28 | 广东小天才科技有限公司 | Dictation detection method in black screen standby state and electronic equipment |
CN111081105B (en) * | 2019-07-17 | 2022-07-08 | 广东小天才科技有限公司 | Dictation detection method in black screen standby state and electronic equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108040360B (en) | Method and device for controlling screen display | |
CN110796988B (en) | Backlight adjusting method and device | |
US10292004B2 (en) | Method, device and medium for acquiring location information | |
CN109992946A (en) | Solve the method, apparatus and computer readable storage medium of locked application | |
CN107562349B (en) | Method and device for executing processing | |
CN105426094B (en) | Information pasting method and device | |
CN105446616A (en) | Screen display control method, apparatus and device | |
CN108040213B (en) | Method and apparatus for photographing image and computer-readable storage medium | |
CN105376412A (en) | Information processing method and device | |
CN104850643B (en) | Picture comparison method and device | |
CN105205093B (en) | The method and device that picture is handled in picture library | |
CN109862169B (en) | Electronic equipment control method, device and storage medium | |
CN107132983B (en) | Split-screen window operation method and device | |
EP3226524B1 (en) | Searching and displaying name information of a caller if said information is not stored in the address book | |
CN107656616B (en) | Input interface display method and device and electronic equipment | |
CN108319899B (en) | Fingerprint identification method and device | |
CN107885464B (en) | Data storage method, device and computer readable storage medium | |
CN108536858A (en) | Method and apparatus for writing | |
CN105677406A (en) | Application operating method and device | |
CN112486604B (en) | Toolbar setting method and device for setting toolbar | |
CN104317480B (en) | Character keys display methods, device and terminal | |
CN107728909B (en) | Information processing method and device | |
CN107679123B (en) | Picture naming method and device | |
CN110417987B (en) | Operation response method, device, equipment and readable storage medium | |
CN107682623B (en) | Photographing method and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180914 |