
CN103295008A - Character recognition method and user terminal - Google Patents


Info

Publication number
CN103295008A
CN103295008A (application CN2013101934767A / CN201310193476A; granted publication CN103295008B)
Authority
CN
China
Prior art keywords
user terminal
rectangle
described user
value
straight line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2013101934767A
Other languages
Chinese (zh)
Other versions
CN103295008B (en)
Inventor
李昌竹
汪运斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Device Co Ltd
Original Assignee
Huawei Device Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Device Co Ltd filed Critical Huawei Device Co Ltd
Priority to CN201710142064.9A priority Critical patent/CN107066999A/en
Priority to CN201310193476.7A priority patent/CN103295008B/en
Priority to CN201710142076.1A priority patent/CN107103319A/en
Publication of CN103295008A publication Critical patent/CN103295008A/en
Application granted granted Critical
Publication of CN103295008B publication Critical patent/CN103295008B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/20: Image preprocessing
    • G06V10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V30/00: Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10: Character recognition
    • G06V30/14: Image acquisition
    • G06V30/148: Segmentation of character regions
    • G06V30/153: Segmentation of character regions using recognition of characters or words

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Character Input (AREA)

Abstract

An embodiment of the invention discloses a character recognition method and a user terminal. In the method, the user terminal photographs a text that carries a mark made by a user and generates an image, recognizes the image, determines the marked region on the image that corresponds to the user's mark, and performs optical character recognition on the marked content within the marked region. The user terminal therefore recognizes only the marked content in the marked region, which improves the user experience.

Description

Character recognition method and user terminal
Technical field
The present invention relates to the field of communications, and in particular to a character recognition method and a user terminal.
Background technology
People commonly mark content that they find interesting or important when reading books or newspapers, and with the development of communications, more and more people want to share the content they are interested in with others over a network.
Existing optical character recognition (OCR, Optical Character Recognition) technology captures text content through an optical instrument, such as an image scanner, a fax machine, or any photographic device, transfers the resulting image to a terminal such as a computer or a mobile phone, and then recognizes the text content and displays it on the terminal.
However, OCR technology can only recognize the text content of a whole image, or individual characters and words; it cannot recognize the content of a local region marked by the user, which degrades the user experience.
Summary of the invention
The present invention provides a character recognition method and a user terminal, so that the user terminal recognizes only the marked content within a marked region, thereby improving the user experience.
A first aspect of the embodiments of the invention provides a character recognition method, including: photographing, by a user terminal, a text and generating an image, where the text carries a mark made by a user;
recognizing, by the user terminal, the image, and determining a marked region on the image that corresponds to the mark made by the user; and
performing, by the user terminal, optical character recognition on the marked content within the marked region.
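The three claimed steps can be sketched as a minimal pipeline. Everything below is an illustrative sketch, not the patent's implementation; the helper names (`Region`, `locate_marked_region`, `ocr`) and the fixed half-page region are assumptions made for this example:

```python
from dataclasses import dataclass

# Hypothetical sketch of the claimed three-step pipeline:
# photograph -> locate the marked region -> OCR only that region.

@dataclass
class Region:
    top: int
    bottom: int
    left: int
    right: int

def locate_marked_region(image):
    """Placeholder: a real terminal would run Hough-transform (or chain-code,
    moment-invariant, Fourier-descriptor, autoregressive-model) detection."""
    h, w = len(image), len(image[0])
    return Region(0, h // 2, 0, w)   # pretend the mark covers the top half

def ocr(image, region):
    """Placeholder: crop the region and hand only that crop to an OCR engine."""
    crop = [row[region.left:region.right] for row in image[region.top:region.bottom]]
    return crop  # a real engine would return recognized text here

image = [[0] * 8 for _ in range(4)]          # stand-in for the photographed text
region = locate_marked_region(image)
recognized = ocr(image, region)
print(len(recognized), len(recognized[0]))   # → 2 8 (only marked rows processed)
```

The key design point of the claim is visible in `ocr`: the recognition engine never sees pixels outside the marked region.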
With reference to the first aspect of the embodiments of the invention, in a first implementation of the first aspect, the mark made by the user is a straight line, a curve, an ellipse, a rectangle, or a circle.
With reference to the first aspect or the first implementation of the first aspect, in a second implementation of the first aspect, the technique by which the user terminal recognizes the image is the Hough transform, a chain-code technique, a moment-invariant technique, a Fourier-descriptor technique, or an autoregressive-model technique.
With reference to the first aspect or the first implementation of the first aspect, in a third implementation of the first aspect, recognizing the image and determining the marked region that corresponds to the mark made by the user specifically includes: detecting and locating, by the user terminal, the mark made by the user in the image by means of the Hough transform; and
determining, by the user terminal, the marked region according to the result of the detection and location.
With reference to the third implementation of the first aspect, in a fourth implementation of the first aspect, when the mark made by the user is a straight line, detecting and locating the mark in the image by means of the Hough transform specifically includes: converting, by the user terminal, the formula y = ax + b of the straight line into the polar-coordinate formula ρ = x·cosθ + y·sinθ, where a point in the x-y space corresponds to a sinusoid in the polar parameter space;
selecting, by the user terminal, N points in the x-y coordinate system, discretizing ρ into N ρ parameter values and θ into N θ parameter values, and computing, according to the N selected points, N ρ values and the N θ values corresponding to the ρ values;
obtaining, by the user terminal, a peak point (ρ0, θ0) from the computed N ρ values and corresponding N θ values by means of accumulation counting; and
detecting and locating, by the user terminal, according to the peak point (ρ0, θ0), the corresponding straight line in the x-y coordinate system, where the straight line is the mark made by the user.
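The four steps above can be sketched with a NumPy accumulator. This is an illustrative reconstruction, not the patent's code; the choice N = 180, the sample line y = x + 5, and the binning scheme are assumptions of this example:

```python
import numpy as np

# Hough line detection: each point (x, y) maps to the sinusoid
# rho = x*cos(theta) + y*sin(theta); collinear points vote for the
# same (rho, theta) cell, and the accumulator peak recovers the line.
N = 180
xs = np.arange(50, dtype=float)
ys = xs + 5.0                                  # the marked line: y = 1*x + 5
thetas = np.linspace(0, np.pi, N, endpoint=False)
rho_max = float(np.hypot(xs.max(), ys.max())) + 1.0
rho_bins = np.linspace(-rho_max, rho_max, N)

acc = np.zeros((N, N), dtype=int)              # (theta, rho) accumulator
for x, y in zip(xs, ys):
    rhos = x * np.cos(thetas) + y * np.sin(thetas)
    acc[np.arange(N), np.digitize(rhos, rho_bins) - 1] += 1

ti, ri = np.unravel_index(acc.argmax(), acc.shape)   # peak (theta0, rho0)
theta0 = thetas[ti]
slope = -np.cos(theta0) / np.sin(theta0)       # recover a from y = ax + b
print(round(slope, 2), int(acc.max()))         # → 1.0 50
```

In practice the votes would come from edge pixels of the photographed image rather than ideal samples of the line, but the accumulation-and-peak logic is the same.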
With reference to the fourth implementation of the first aspect, in a fifth implementation of the first aspect, determining the marked region according to the result of the detection and location specifically includes: determining, by the user terminal, according to the detected and located straight line, that the text region above the straight line is the marked region.
With reference to the third implementation of the first aspect, in a sixth implementation of the first aspect, when the mark made by the user is a rectangle, detecting and locating the mark in the image by means of the Hough transform specifically includes: converting, by the user terminal, the formula y = ax + b of each side of the rectangle into the polar-coordinate formula ρ = x·cosθ + y·sinθ, where the rectangle has four sides, the x-y coordinate space of each side corresponds to its own polar parameter space, and a point in the x-y coordinate space corresponds to a sinusoid in the polar parameter space;
selecting, by the user terminal, M points in the x-y coordinate system for each side, discretizing ρ into M ρ parameter values and θ into M θ parameter values, and computing, according to the M points selected for each side, the ρ values and the θ values corresponding to the ρ values, where one side of the rectangle yields one group of M ρ values and M corresponding θ values;
using, by the user terminal, the four computed groups of M ρ values and M corresponding θ values as four accumulation arrays, and obtaining one peak point in each accumulation array by means of accumulation counting, where each peak point corresponds to one straight line in the x-y coordinate system and the four straight lines are the four sides of the rectangle;
searching, by the user terminal, the accumulation arrays for the four vertices of the rectangle according to the features of a rectangle, namely that adjacent sides meet at an angle of 90° and that opposite sides are of equal length; and
detecting and locating, by the user terminal, the rectangle according to its four sides and four vertices, where the rectangle is the mark made by the user.
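The vertex search described above can be illustrated by intersecting the four peak lines. This is a sketch under the assumption of an axis-aligned rectangle with hand-picked (ρ, θ) peaks: solving x·cosθ + y·sinθ = ρ simultaneously for two adjacent sides gives a vertex, after which the 90° angle and equal-opposite-side features can be checked.

```python
import numpy as np

# Each accumulator peak (rho_i, theta_i) is one side of the rectangle.
# The four pairs below describe an axis-aligned 6 x 4 rectangle.
sides = [(2.0, 0.0),          # left:   x = 2
         (8.0, 0.0),          # right:  x = 8
         (3.0, np.pi / 2),    # bottom: y = 3
         (7.0, np.pi / 2)]    # top:    y = 7

def intersect(s1, s2):
    """Vertex = solution of x*cos(t) + y*sin(t) = rho for both sides."""
    (r1, t1), (r2, t2) = s1, s2
    A = np.array([[np.cos(t1), np.sin(t1)], [np.cos(t2), np.sin(t2)]])
    return np.linalg.solve(A, np.array([r1, r2]))

# Vertices in order around the rectangle; adjacent sides meet at 90 degrees,
# and opposite sides come out equal in length, as the claim requires.
verts = [intersect(sides[0], sides[2]), intersect(sides[1], sides[2]),
         intersect(sides[1], sides[3]), intersect(sides[0], sides[3])]
width = np.linalg.norm(verts[1] - verts[0])
height = np.linalg.norm(verts[2] - verts[1])
print([np.round(v, 6).tolist() for v in verts], width, height)
```

A real detector would additionally reject peak quadruples whose intersections fail the rectangle test (angles far from 90°, unequal opposite sides).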
With reference to the sixth implementation of the first aspect, in a seventh implementation of the first aspect, determining the marked region according to the result of the detection and location specifically includes: determining, by the user terminal, according to the detected and located rectangle, that the text region within the rectangle is the marked region.
With reference to the first aspect or the first implementation of the first aspect, in an eighth implementation of the first aspect, after performing optical character recognition on the marked content in the marked region, the method further includes: extracting, by the user terminal, the marked content obtained by the optical character recognition; and
re-typesetting, by the user terminal, the extracted marked content, and saving and displaying the re-typeset marked content.
A second aspect of the embodiments of the invention provides a user terminal, including: a photographing unit, configured to photograph a text and generate an image, where the text carries a mark made by a user;
an image recognition unit, configured to recognize the image and determine a marked region on the image that corresponds to the mark made by the user; and
an optical character recognition unit, configured to perform optical character recognition on the marked content within the marked region.
With reference to the second aspect, in a first implementation of the second aspect, the mark made by the user is a straight line, a curve, an ellipse, a rectangle, or a circle.
With reference to the second aspect or the first implementation of the second aspect, in a second implementation of the second aspect, the technique by which the user terminal recognizes the image is the Hough transform, a chain-code technique, a moment-invariant technique, a Fourier-descriptor technique, or an autoregressive-model technique.
With reference to the second aspect or the first implementation of the second aspect, in a third implementation of the second aspect, the image recognition unit includes:
a detection module, configured to detect and locate the mark made by the user in the image by means of the Hough transform; and
a determination module, configured to determine the marked region according to the result of the detection and location.
With reference to the third implementation of the second aspect, in a fourth implementation of the second aspect, when the mark made by the user is a straight line, the detection module includes:
a first conversion module, configured to convert the formula y = ax + b of the straight line into the polar-coordinate formula ρ = x·cosθ + y·sinθ, where a point in the x-y space corresponds to a sinusoid in the polar parameter space;
a first computation module, configured to select N points in the x-y coordinate system, discretize ρ into N ρ parameter values and θ into N θ parameter values, and compute, according to the N selected points, N ρ values and the N θ values corresponding to the ρ values;
a first accumulation-counting module, configured to obtain a peak point (ρ0, θ0) from the computed N ρ values and corresponding N θ values by means of accumulation counting; and
a first detection module, configured to detect and locate, according to the peak point (ρ0, θ0), the corresponding straight line in the x-y coordinate system, where the straight line is the mark made by the user.
With reference to the fourth implementation of the second aspect, in a fifth implementation of the second aspect, the determination module includes:
a first determination module, configured to determine, according to the detected and located straight line, that the text region above the straight line is the marked region.
With reference to the third implementation of the second aspect, in a sixth implementation of the second aspect, when the mark made by the user is a rectangle, the detection module includes:
a second conversion module, configured to convert the formula y = ax + b of each side of the rectangle into the polar-coordinate formula ρ = x·cosθ + y·sinθ, where the rectangle has four sides, the x-y coordinate space of each side corresponds to its own polar parameter space, and a point in the x-y coordinate space corresponds to a sinusoid in the polar parameter space;
a second computation module, configured to select M points in the x-y coordinate system for each side, discretize ρ into M ρ parameter values and θ into M θ parameter values, and compute, according to the M points selected for each side, the ρ values and the θ values corresponding to the ρ values, where one side of the rectangle yields one group of M ρ values and M corresponding θ values;
a second accumulation-counting module, configured to use the four computed groups of M ρ values and M corresponding θ values as four accumulation arrays, and obtain one peak point in each accumulation array by means of accumulation counting, where each peak point corresponds to one straight line in the x-y coordinate system and the four straight lines are the four sides of the rectangle;
a search module, configured to search the accumulation arrays for the four vertices of the rectangle according to the features of a rectangle, namely that adjacent sides meet at an angle of 90° and that opposite sides are of equal length; and
a second detection module, configured to detect and locate the rectangle according to its four sides and four vertices, where the rectangle is the mark made by the user.
With reference to the sixth implementation of the second aspect, in a seventh implementation of the second aspect, the determination module includes:
a second determination module, configured to determine, according to the detected and located rectangle, that the text region within the rectangle is the marked region.
With reference to the second aspect or the first implementation of the second aspect, in an eighth implementation of the second aspect, the user terminal further includes:
an extraction unit, configured to extract the marked content obtained by the optical character recognition; and
a display unit, configured to typeset the extracted marked content, and to save and display the typeset marked content.
As can be seen from the above technical solutions, the embodiments of the invention have the following advantage:
in the embodiments of the invention, the user terminal photographs a text that carries a mark made by a user and generates an image, recognizes the image, determines the marked region on the image that corresponds to the mark, and performs optical character recognition on the marked content within that region. The user terminal therefore recognizes only the marked content in the marked region, which improves the user experience.
Brief description of the drawings
Fig. 1 is a schematic diagram of an embodiment of the character recognition method according to an embodiment of the invention;
Fig. 2 is a schematic diagram of another embodiment of the character recognition method according to an embodiment of the invention;
Fig. 3 is a schematic diagram of another embodiment of the character recognition method according to an embodiment of the invention;
Fig. 4 is a schematic diagram of another embodiment of the character recognition method according to an embodiment of the invention;
Fig. 5 is a structural diagram of an embodiment of the user terminal according to an embodiment of the invention;
Fig. 6 is a structural diagram of another embodiment of the user terminal according to an embodiment of the invention;
Fig. 7 is a structural diagram of another embodiment of the user terminal according to an embodiment of the invention;
Fig. 8 is a structural diagram of another embodiment of the user terminal according to an embodiment of the invention.
Detailed description
The embodiments of the invention provide a character recognition method and a user terminal that can recognize the content marked by a user, thereby improving the user experience.
Referring to Fig. 1, an embodiment of the character recognition method in the embodiments of the invention includes the following steps.
101. The user terminal photographs a text and generates an image.
In this embodiment, the text carries a mark made by the user to indicate the text content that the user is interested in; the user terminal photographs the marked text and generates the image.
It should be noted that the user may make the mark on the text with a pencil, a pen, or a marker, which is not limited herein.
102. The user terminal recognizes the image and determines the marked region on the image that corresponds to the mark made by the user.
In this embodiment, the user terminal recognizes the image that carries the user's mark and then determines the marked region on the image that corresponds to the mark.
103. The user terminal performs optical character recognition on the marked content within the marked region.
In this embodiment, the user terminal recognizes only the marked content within the marked region by means of OCR. OCR technology examines characters printed on paper, detects patterns of dark and bright pixels to determine the shape of each character, and then translates the shapes into computer text by a character recognition method; the specific implementation of OCR is a known technique and is not described in detail herein.
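As a toy illustration of the dark/bright-pattern principle just described (not the patent's OCR engine, whose implementation is a known technique), a glyph can be classified by the binary template it overlaps best:

```python
import numpy as np

# Each character is a small dark/bright (1/0) pattern; an unknown glyph is
# classified by counting matching cells against each stored template.
templates = {
    "I": np.array([[0, 1, 0],
                   [0, 1, 0],
                   [0, 1, 0]]),
    "L": np.array([[1, 0, 0],
                   [1, 0, 0],
                   [1, 1, 1]]),
    "T": np.array([[1, 1, 1],
                   [0, 1, 0],
                   [0, 1, 0]]),
}

def classify(glyph):
    # Score = number of cells where the glyph's dark/bright value matches.
    scores = {ch: int((glyph == t).sum()) for ch, t in templates.items()}
    return max(scores, key=scores.get)

scanned = np.array([[1, 1, 1],
                    [0, 1, 0],
                    [0, 1, 0]])
print(classify(scanned))   # → T
```

Production OCR engines are far more elaborate (segmentation, feature extraction, language models), but the shape-to-text translation step rests on the same idea.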
In this embodiment, the user terminal photographs a text carrying a mark made by the user and generates an image, recognizes the image, determines the marked region on the image that corresponds to the mark, and performs optical character recognition on the marked content within that region. The user terminal thus recognizes only the marked content in the marked region, which improves the user experience.
For ease of understanding, the character recognition method in the embodiments of the invention is described below with a specific example. Referring to Fig. 2, another embodiment of the character recognition method includes the following steps.
201. The user terminal photographs a text and generates an image.
In this embodiment, the text carries a mark made by the user; the user may make the mark with a pencil, a pen, or a marker, which is not limited herein.
The mark made by the user may be a straight line, a curve, a rectangle, a circle, or an ellipse, which is not limited herein. The user may mark the text content of interest according to his or her own habits, for example by drawing a straight line below the content of interest or by enclosing it in a rectangle; the user terminal then photographs the marked text and generates the image.
202. The user terminal detects and locates the mark made by the user in the image by means of the Hough transform.
In this embodiment, the technique by which the user terminal recognizes the image is not limited herein; in practice it may be the Hough transform, a chain-code technique, a moment-invariant technique, a Fourier-descriptor technique, an autoregressive-model technique, or the like. The Hough transform is used here only as an example: the user terminal detects the mark made by the user by means of the Hough transform and locates the detected mark.
203. The user terminal determines the marked region according to the result of the detection and location.
In this embodiment, the user terminal determines the marked region according to the result of the detection and location in step 202; for example, if the detected and located mark is a circle, the marked region is the region inside the circle.
204. The user terminal performs optical character recognition on the marked content within the marked region.
In this embodiment, the user terminal recognizes only the marked content within the marked region by means of OCR. OCR technology examines characters printed on paper, detects patterns of dark and bright pixels to determine the shape of each character, and then translates the shapes into computer text by a character recognition method; the specific implementation of OCR is a known technique and is not described in detail herein.
205. The user terminal extracts the marked content obtained by the optical character recognition.
206. The user terminal re-typesets the extracted marked content, and saves and displays the re-typeset marked content.
In this embodiment, the user terminal extracts the marked content obtained in step 204, re-typesets the extracted content, and then saves the re-typeset content and displays it to the user.
In this embodiment, the user terminal photographs a text carrying a mark made by the user and generates an image, detects and locates the mark in the image by means of the Hough transform, determines the marked region according to the result of the detection and location, performs optical character recognition on the marked content within the marked region, extracts and re-typesets the recognized content, and saves and displays the re-typeset content. The user terminal thus recognizes only the marked content in the marked region, the user can view the marked content more intuitively, and the content saved on the user terminal can be shared with others at any time, which further improves the user experience.
For ease of understanding, the character recognition method in the embodiments of the invention is described below with a specific embodiment in which the mark made by the user is a straight line. Referring to Fig. 3, another embodiment of the character recognition method includes the following steps.
301. The user terminal photographs a text and generates an image.
In this embodiment, the text carries a mark made by the user; the user may make the mark with a pencil, a pen, or a marker, which is not limited herein.
The mark made by the user may be a straight line, a curve, a rectangle, a circle, or an ellipse, which is not limited herein. The user may mark the text content of interest according to his or her own habits, for example by drawing a straight line below the content of interest or by enclosing it in a rectangle; the user terminal then photographs the marked text and generates the image. A straight line is used here as an example.
302. The user terminal converts the formula y = ax + b of the straight line into the polar-coordinate formula ρ = x·cosθ + y·sinθ.
In this embodiment, when the mark made by the user is a straight line, the x-y coordinate space is established, and the formula y = ax + b of the straight line in the x-y coordinate space is converted into the polar-coordinate formula ρ = x·cosθ + y·sinθ, where a point in the x-y coordinate space corresponds to a sinusoid in the polar parameter space.
303. The user terminal selects N points in the x-y coordinate system, discretizes ρ into N ρ parameter values and θ into N θ parameter values, and computes, according to the N selected points, N ρ values and the N θ values corresponding to the ρ values.
In this embodiment, the user terminal selects N points in the x-y coordinate system, discretizes ρ and θ to obtain N ρ parameter values and N θ parameter values respectively, and computes, according to the N selected points, N ρ values and the N θ values corresponding to the ρ values, where each of the N points corresponds to one ρ value and one θ value.
304. The user terminal obtains a peak point (ρ0, θ0) from the computed N ρ values and corresponding N θ values by means of accumulation counting.
In this embodiment, obtaining a peak point by means of accumulation counting is a technical means commonly used by those skilled in the art and is not described in detail herein.
305. The user terminal detects and locates, according to the peak point (ρ0, θ0), the corresponding straight line in the x-y coordinate system.
In this embodiment, because a point in the x-y space corresponds to a sinusoid in the polar parameter space, a point in the polar parameter space correspondingly defines a straight line in the x-y space; the peak point (ρ0, θ0) obtained in step 304 therefore defines a straight line in the x-y coordinate system, and this straight line is the mark made by the user.
306. The user terminal determines, according to the detected and located straight line, that the text region above the straight line is the marked region.
In this embodiment, when the user terminal detects and locates a straight line in the image, it determines the text content above the straight line according to the detected and located straight line.
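The region determination of step 306 can be sketched as a simple crop above the located underline row. The fixed band height is a simplifying assumption of this example; the patent does not specify how far above the line the region extends:

```python
import numpy as np

# Once the underline is located at row y0, the text band immediately above it
# (here a fixed-height band, an assumption for illustration) is taken as the
# marked region handed to OCR.
def region_above_line(image, y0, band_height=3):
    top = max(0, y0 - band_height)
    return image[top:y0, :]          # rows above the underline only

page = np.zeros((10, 12), dtype=np.uint8)
page[3:6, 2:9] = 1                    # pretend these dark pixels are the words
underline_row = 6                     # row where the Hough step located the mark
roi = region_above_line(page, underline_row)
print(roi.shape, int(roi.sum()))      # → (3, 12) 21
```

A more complete implementation would grow the band upward until a blank text line is reached, so that exactly one line of words is captured.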
307. The user terminal performs optical character recognition on the marked content within the marked region.
In this embodiment, the user terminal takes the text in the region above the straight line as the marked content and then performs optical character recognition on the marked content to recognize the text above the straight line; the specific implementation of OCR is a known technique and is not described in detail herein.
308. The user terminal extracts the marked content obtained by the optical character recognition.
309. The user terminal re-typesets the extracted marked content, and saves and displays the re-typeset marked content.
In this embodiment, the user terminal extracts the marked content, re-typesets the extracted content, and then saves the re-typeset content and displays it to the user.
In this embodiment, the user terminal photographs a text carrying a mark made by the user and generates an image; when the mark is a straight line, the user terminal detects and locates the straight line in the image by means of the Hough transform, determines the text region above the straight line as the marked region, performs optical character recognition on the marked content within the marked region, extracts and re-typesets the recognized content, and saves and displays the re-typeset content. The user terminal thus recognizes only the marked content in the marked region, the user can view the marked content more intuitively, and the content saved on the user terminal can be shared with others at any time, which further improves the user experience.
For the ease of understanding, following specific embodiment for do as the user be labeled as rectangle the time, the character recognition method in the embodiment of the invention is described, see also Fig. 3, another embodiment of character recognition method comprises in the embodiment of the invention:
401. The user terminal photographs a text and generates an image.
In this embodiment, the text bears a mark made by the user. The user may make the mark on the text with a pencil, with a pen, or with a marker pen; no restriction is imposed here.
The mark made by the user may be a straight line, a curve, a rectangle, a circle, or an ellipse; no restriction is imposed here. The user may mark word content of interest on the text according to his or her own habits, for example by drawing a straight line below the word content of interest or by circling it with a rectangle. The user terminal then photographs the text bearing the user's mark and generates an image. A rectangle is taken as an example for description here.
402. The user terminal converts the formula y=ax+b corresponding to each side of the rectangle into the polar coordinate formula ρ=xcosθ+ysinθ.
In this embodiment, when the mark made by the user is a rectangle, an x, y coordinate space is established. The rectangle comprises four sides, and each side corresponds to a straight line whose formula is y=ax+b. The formula y=ax+b is converted into the polar coordinate formula ρ=xcosθ+ysinθ, where a point in the x, y coordinate space corresponds to a sinusoid in the polar coordinate parameter space, and the x, y coordinate space of each side corresponds to a polar coordinate parameter space.
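The conversion in this step can be checked numerically: a line y = ax + b corresponds in normal (polar) form to a single (ρ, θ) pair, and every point on the line satisfies ρ = xcosθ + ysinθ. A small sketch follows; the values of a and b are arbitrary illustrations, not values from the embodiment:

```python
import math

a, b = 2.0, 3.0                      # the line y = a*x + b
theta = math.atan2(1.0, -a)          # normal angle: (cos t, sin t) proportional to (-a, 1)
rho = b / math.sqrt(a * a + 1.0)     # signed distance of the line from the origin

# Every point (x, a*x + b) on the line satisfies rho = x cos(theta) + y sin(theta):
for x in (-5.0, 0.0, 7.5):
    y = a * x + b
    assert abs(x * math.cos(theta) + y * math.sin(theta) - rho) < 1e-9
```

This is exactly why collinear points vote into the same accumulator cell: their sinusoids in the (ρ, θ) parameter space all pass through this one (ρ, θ) pair.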
403. The user terminal chooses M points in the x, y coordinate system for each side, discretizes ρ into M ρ parameter spaces and θ into M θ parameter spaces, and calculates the ρ values and the corresponding θ values from the M points chosen for each side.
In this embodiment, the user terminal chooses M points in the x, y coordinate system for each side. Since the rectangle comprises four sides, four groups of points are chosen, each group comprising M points. ρ and θ are then discretized into M ρ parameter spaces and M θ parameter spaces respectively, and for each group of M points a group of M ρ values and M corresponding θ values is calculated, so that each side of the rectangle yields one group of M ρ values and M corresponding θ values.
404. The user terminal takes the four calculated groups of M ρ values and M corresponding θ values as four accumulator arrays, and obtains one peak point in each accumulator array by means of accumulated counting.
In this embodiment, each peak point corresponds to a straight line in the x, y coordinate system, so the user terminal obtains four straight lines from the four accumulator arrays, and these four straight lines are the four sides of the rectangle.
405. The user terminal searches the accumulator arrays for the vertices of the rectangle according to the features of the rectangle, and detects and locates the rectangle from its four sides and four vertices.
In this embodiment, the features of the rectangle are its inherent geometric properties, namely that adjacent sides meet at an angle of 90° and that opposite sides are equal in length. According to these features, the user terminal can find the four vertices of the rectangle in the accumulator arrays; searching for the vertices of a rectangle by its rectangular features within the Hough transform is a conventional technical means for those skilled in the art and is not elaborated here.
Once the user terminal has detected the four sides and the four vertices of the rectangle, it can detect and locate the complete rectangle, which is the mark made by the user.
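The vertex search described above can be made concrete: once the four sides are available as (ρ, θ) peaks, each vertex is the intersection of two adjacent, mutually perpendicular sides, obtained by solving a 2×2 linear system. A minimal sketch with an illustrative axis-aligned rectangle (the numeric sides are made-up examples, not values from the embodiment):

```python
import math

def intersect(rho1, th1, rho2, th2):
    """Solve the 2x2 system
         x cos(th1) + y sin(th1) = rho1
         x cos(th2) + y sin(th2) = rho2
    for the intersection point of two Hough lines (Cramer's rule)."""
    d = math.cos(th1) * math.sin(th2) - math.sin(th1) * math.cos(th2)
    x = (rho1 * math.sin(th2) - rho2 * math.sin(th1)) / d
    y = (rho2 * math.cos(th1) - rho1 * math.cos(th2)) / d
    return x, y

# An axis-aligned rectangle: vertical sides at x = 10 and x = 40
# (theta = 0), horizontal sides at y = 20 and y = 35 (theta = 90 degrees).
sides = [(10, 0.0), (40, 0.0), (20, math.pi / 2), (35, math.pi / 2)]
corners = [intersect(*sides[i], *sides[j]) for i in (0, 1) for j in (2, 3)]
print(corners)  # approximately (10, 20), (10, 35), (40, 20), (40, 35)
```

For a rotated rectangle the same solver applies; the two θ peaks simply differ by 90° at some other orientation.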
406. The user terminal determines, according to the detected and located rectangle, that the text region inside the rectangle is the marked region.
In this embodiment, the user terminal detects and locates the rectangle in the image, and the text region inside this rectangle is the marked region.
407. The user terminal performs optical character recognition on the marked content in the marked region.
In this embodiment, the user terminal takes the text in the text region inside the rectangle as the marked content, and then performs optical character recognition on that content to recognize the word content inside the rectangle. The concrete implementation of OCR is a known technique and is not described in detail here.
408. The user terminal extracts the marked content obtained by optical character recognition.
409. The user terminal re-typesets the extracted marked content, and saves and displays the re-typeset marked content.
In this embodiment, the user terminal extracts the marked content, re-typesets it, and then saves the re-typeset marked content and displays it to the user.
In this embodiment, the user terminal photographs a text bearing a mark made by the user and generates an image. When the mark made by the user is a rectangle, the user terminal detects and locates the rectangle in the image by means of the Hough transform, determines the text region inside the detected and located rectangle as the marked region, performs optical character recognition on the marked content in the marked region, extracts the marked content obtained by the recognition, re-typesets it, and saves and displays the re-typeset content. In this way, the user terminal recognizes only the marked content in the marked region, the user can view the marked content more intuitively, and because the marked content is saved to the user terminal it can be shared with others at any time, further improving the user experience.
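Determining the marked region from the located rectangle then reduces to cropping the image to the rectangle's bounding box before it is handed to the OCR engine. A toy sketch in which a list of character rows stands in for the image (all names and values here are illustrative, not part of the patented method):

```python
def crop_marked_region(image, corners):
    """Crop the text region inside a detected rectangle, where `image`
    is a list of pixel rows and `corners` the four (x, y) vertices."""
    xs = [x for x, _ in corners]
    ys = [y for _, y in corners]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]

# A 6x8 "image" of characters; the rectangle covers columns 2..5, rows 1..3.
image = ["abcdefgh", "ijklmnop", "qrstuvwx", "yzABCDEF", "GHIJKLMN", "OPQRSTUV"]
region = crop_marked_region(image, [(2, 1), (5, 1), (2, 3), (5, 3)])
print(region)  # ['klmn', 'stuv', 'ABCD']
```

For a rotated rectangle, a real implementation would first rectify the region (e.g. with a perspective transform) rather than take the axis-aligned bounding box.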
The user terminal of the embodiment of the present invention for carrying out the above character recognition method is described below. For its basic logical structure, refer to Fig. 5. An embodiment of the user terminal in the embodiment of the present invention comprises:
an image capturing unit 501, an image recognition unit 502, and an optical character recognition unit 503;
the image capturing unit 501 is configured to photograph a text and generate an image, the text bearing a mark made by the user;
the image recognition unit 502 is configured to recognize the image and determine the marked region on the image corresponding to the mark made by the user;
the optical character recognition unit 503 is configured to perform optical character recognition on the marked content in the marked region.
In this embodiment, the image capturing unit 501 photographs a text bearing a mark made by the user and generates an image; the image recognition unit 502 then recognizes the image and determines the marked region on the image corresponding to the mark made by the user; and the optical character recognition unit 503 performs optical character recognition on the marked content in the marked region. In this way, the user terminal recognizes only the marked content in the marked region, thereby improving the user experience.
For ease of understanding, the user terminal in the embodiment of the present invention is described below with a specific example. Referring to Fig. 6, another embodiment of the user terminal in the embodiment of the present invention comprises:
an image capturing unit 601, a detection module 602, a determination module 603, an optical character recognition unit 604, an extraction unit 605 and a display unit 606;
the image capturing unit 601 is configured to photograph a text and generate an image, the text bearing a mark made by the user;
the detection module 602 is configured to detect and locate, by means of the Hough transform, the mark made by the user in the image;
the determination module 603 is configured to determine the marked region according to the result of the detection and location;
the optical character recognition unit 604 is configured to perform optical character recognition on the marked content in the marked region;
the extraction unit 605 is configured to extract the marked content obtained by optical character recognition;
the display unit 606 is configured to re-typeset the extracted marked content, and to save and display the re-typeset marked content.
In this embodiment, the image capturing unit 601 photographs a text bearing a mark made by the user and generates an image; the detection module 602 then detects and locates the mark made by the user in the image by means of the Hough transform; the determination module 603 determines the marked region according to the result of the detection and location; the optical character recognition unit 604 performs optical character recognition on the marked content in the marked region; the extraction unit 605 extracts and re-typesets the marked content obtained by the recognition; and the display unit 606 saves and displays the re-typeset marked content. In this way, only the marked content in the marked region is recognized, the user can view the marked content more intuitively, and because the marked content is saved it can be shared with others at any time, further improving the user experience.
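The division of labour among the units of Fig. 6 can be mirrored in an illustrative skeleton. The injected `detect_mark` and `ocr` callables below are placeholders standing in for the Hough-transform modules and the OCR unit, not an implementation of them; every name here is an assumption made for illustration:

```python
class MarkedTextRecognizer:
    """Illustrative skeleton mirroring the units of Fig. 6: capture the
    image, detect/locate the mark and determine the region, run OCR,
    then extract and re-typeset the result."""

    def __init__(self, detect_mark, ocr):
        self.detect_mark = detect_mark   # e.g. a Hough-transform detector
        self.ocr = ocr                   # e.g. an OCR engine wrapper

    def process(self, image):
        region = self.detect_mark(image)   # detection module + determination module
        text = self.ocr(image, region)     # optical character recognition unit
        return text.strip()                # extraction unit / re-typesetting

# Stub detector and OCR stand in for the real modules:
recognizer = MarkedTextRecognizer(
    detect_mark=lambda img: (0, 0, len(img[0]), len(img)),
    ocr=lambda img, region: "  marked text  ",
)
print(recognizer.process(["...."]))  # prints "marked text"
```

The point of the dependency-injected design is that the straight-line and rectangle detectors of the later figures can be swapped in without changing the surrounding pipeline.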
For a better understanding of the above embodiments, the data interaction within the user terminal is described below, for the case where the mark made by the user is a straight line, in terms of the interaction between the modules and units comprised in the user terminal. Referring further to Fig. 7, the user terminal comprises:
an image capturing unit 701, a first conversion module 702, a first calculation module 703, a first accumulated counting module 704, a first detection module 705, a first determination module 706, an optical character recognition unit 707, an extraction unit 708, and a display unit 709.
The image capturing unit 701 photographs a text bearing a mark made by the user, generates an image, and then sends the image bearing the user's mark to the first conversion module 702.
After the first conversion module 702 receives the image bearing the user's mark, when the mark made by the user is a straight line, it converts the x, y coordinate formula y=ax+b corresponding to the straight line into the polar coordinate formula ρ=xcosθ+ysinθ, and sends a conversion message to the first calculation module 703, where a point in the x, y coordinate space corresponds to a sinusoid in the polar coordinate parameter space.
After the first calculation module 703 receives the conversion message, it chooses N points in the x, y coordinate system, discretizes ρ into N ρ parameter spaces and θ into N θ parameter spaces, then calculates N ρ values and N corresponding θ values from the N chosen points, and sends the N ρ values and the N corresponding θ values to the first accumulated counting module 704.
The first accumulated counting module 704 obtains a peak point (ρ0, θ0) from the calculated N ρ values and N corresponding θ values by means of accumulated counting, and sends the peak point (ρ0, θ0) to the first detection module 705.
The first detection module 705 detects and locates, according to the peak point (ρ0, θ0), the corresponding straight line in the x, y coordinate system, and sends a message comprising the detected and located straight line to the first determination module 706, where this straight line is the mark made by the user.
The first determination module 706 determines, according to the straight line detected and located by the first detection module 705, that the text region above the straight line is the marked region, and sends this marked region to the optical character recognition unit 707.
The optical character recognition unit 707 takes, according to the received marked region, the text in the text region above the straight line as the marked content, then performs optical character recognition on the marked content, and sends the recognized marked content to the extraction unit 708.
The extraction unit 708 extracts the received marked content and sends the extracted marked content to the display unit 709.
The display unit 709 re-typesets the received marked content, and saves the re-typeset marked content and displays it to the user.
In this embodiment, the image capturing unit 701 photographs a text bearing a mark made by the user and generates an image. When the mark made by the user is a straight line, the first conversion module 702, the first calculation module 703, the first accumulated counting module 704 and the first detection module 705 detect and locate the straight line in the image by means of the Hough transform; the first determination module 706 then determines the text region above the detected and located straight line as the marked region; the optical character recognition unit 707 performs optical character recognition on the marked content in the marked region; the extraction unit 708 extracts and re-typesets the marked content; and the display unit 709 saves and displays the re-typeset marked content. In this way, the user terminal recognizes only the marked content in the marked region, the user can view the marked content more intuitively, and because the marked content is saved it can be shared with others at any time, further improving the user experience.
For a better understanding of the above embodiments, the data interaction within the user terminal is described below, for the case where the mark made by the user is a rectangle, in terms of the interaction between the modules and units comprised in the user terminal. Referring further to Fig. 8, the user terminal comprises:
an image capturing unit 801, a second conversion module 802, a second calculation module 803, a second accumulated counting module 804, a search module 805, a second detection module 806, a second determination module 807, an optical character recognition unit 808, an extraction unit 809, and a display unit 810.
The image capturing unit 801 photographs a text bearing a mark made by the user, generates an image, and then sends the image bearing the user's mark to the second conversion module 802.
After the second conversion module 802 receives the image bearing the user's mark, when the mark made by the user is a rectangle, it converts the x, y coordinate formula y=ax+b corresponding to each side of the rectangle into the polar coordinate formula ρ=xcosθ+ysinθ, where the rectangle comprises four sides and each side corresponds to a straight line whose formula is y=ax+b; it then sends a conversion message to the second calculation module 803, where a point in the x, y coordinate space corresponds to a sinusoid in the polar coordinate parameter space.
After the second calculation module 803 receives the conversion message, it chooses M points in the x, y coordinate system for each side, discretizes ρ into M ρ parameter spaces and θ into M θ parameter spaces, and calculates the ρ values and the corresponding θ values from the M points chosen for each side, so that each side of the rectangle yields one group of M ρ values and M corresponding θ values; it then sends each group of M ρ values and M corresponding θ values to the second accumulated counting module 804.
The second accumulated counting module 804 takes the four calculated groups of M ρ values and M corresponding θ values as four accumulator arrays, and obtains one peak point in each accumulator array by means of accumulated counting; each peak point corresponds to a straight line in the x, y coordinate system, so the four accumulator arrays yield four straight lines, which are the four sides of the rectangle; it then sends a message comprising the four accumulator arrays and the four sides of the rectangle to the search module 805.
The search module 805 searches the accumulator arrays for the four vertices of the rectangle according to the features of the rectangle, where the features of the rectangle are that adjacent sides meet at an angle of 90° and that opposite sides are equal in length, and sends a message comprising the four vertices and the four sides of the rectangle to the second detection module 806.
The second detection module 806 detects and locates the rectangle from its four sides and four vertices, this rectangle being the mark made by the user, and sends a message comprising the detected and located rectangle to the second determination module 807.
The second determination module 807 determines, according to the rectangle detected and located by the second detection module 806, that the text region inside the rectangle is the marked region, and sends this marked region to the optical character recognition unit 808.
The optical character recognition unit 808 takes, according to the received marked region, the text in the text region inside the rectangle as the marked content, then performs optical character recognition on the marked content, and sends the recognized marked content to the extraction unit 809.
The extraction unit 809 extracts the received marked content and sends the extracted marked content to the display unit 810.
The display unit 810 re-typesets the received marked content, and saves the re-typeset marked content and displays it to the user.
In this embodiment, the image capturing unit 801 photographs a text bearing a mark made by the user and generates an image. When the mark made by the user is a rectangle, the second conversion module 802, the second calculation module 803, the second accumulated counting module 804, the search module 805 and the second detection module 806 detect and locate the rectangle in the image by means of the Hough transform; the second determination module 807 then determines the text region inside the detected and located rectangle as the marked region; the optical character recognition unit 808 performs optical character recognition on the marked content in the marked region; the extraction unit 809 extracts and re-typesets the marked content; and the display unit 810 saves and displays the re-typeset marked content. In this way, the user terminal recognizes only the marked content in the marked region, the user can view the marked content more intuitively, and because the marked content is saved it can be shared with others at any time, further improving the user experience.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, apparatus and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in this application, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely schematic. For example, the division of the units is merely a logical function division, and there may be other division manners in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over a plurality of network elements. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
When the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention essentially, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and comprises a number of instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium comprises various media capable of storing program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
The above embodiments are merely intended to describe the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements to some of the technical features therein, and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (18)

1. A character recognition method, characterized by comprising:
photographing, by a user terminal, a text and generating an image, the text bearing a mark made by a user;
recognizing, by the user terminal, the image, and determining a marked region on the image corresponding to the mark made by the user; and
performing, by the user terminal, optical character recognition on marked content in the marked region.
2. The method according to claim 1, characterized in that the mark made by the user is a straight line, a curve, an ellipse, a rectangle or a circle.
3. according to the method for claim 1 or 2, it is characterized in that the technology that described user terminal is identified described image is Hough transformation or chain code technology or not displacement technology or Fourier descriptor technology or autoregression pattern technology.
4. The method according to claim 1 or 2, characterized in that the specific steps of recognizing, by the user terminal, the image and determining the marked region on the image corresponding to the mark made by the user comprise:
detecting and locating, by the user terminal, the mark made by the user in the image by means of the Hough transform; and
determining, by the user terminal, the marked region according to a result of the detecting and locating.
5. The method according to claim 4, characterized in that, when the mark made by the user is a straight line, the specific steps of detecting and locating, by the user terminal, the mark in the image by means of the Hough transform comprise:
converting, by the user terminal, the formula y=ax+b corresponding to the straight line into the polar coordinate formula ρ=xcosθ+ysinθ, wherein a point in the x, y space corresponds to a sinusoid in the polar coordinate parameter space;
choosing, by the user terminal, N points in the x, y coordinate system, discretizing ρ into N ρ parameter spaces and θ into N θ parameter spaces, and calculating N ρ values and N corresponding θ values from the N chosen points;
obtaining, by the user terminal, a peak point (ρ0, θ0) from the calculated N ρ values and N corresponding θ values by means of accumulated counting; and
detecting and locating, by the user terminal, according to the peak point (ρ0, θ0), the corresponding straight line in the x, y coordinate system, the straight line being the mark made by the user.
6. The method according to claim 5, characterized in that the specific step of determining, by the user terminal, the marked region according to the result of the detecting and locating comprises:
determining, by the user terminal, according to the detected and located straight line, that a text region above the straight line is the marked region.
7. The method according to claim 4, characterized in that, when the mark made by the user is a rectangle, the specific steps of detecting and locating, by the user terminal, the mark in the image by means of the Hough transform comprise:
converting, by the user terminal, the formula y=ax+b corresponding to each side of the rectangle into the polar coordinate formula ρ=xcosθ+ysinθ, wherein the rectangle comprises four sides, the x, y coordinate space of each side corresponds to a polar coordinate parameter space, and a point in the x, y coordinate space corresponds to a sinusoid in the polar coordinate parameter space;
choosing, by the user terminal, M points in the x, y coordinate system for each side, discretizing ρ into M ρ parameter spaces and θ into M θ parameter spaces, and calculating the ρ values and the corresponding θ values from the M points chosen for each side in the x, y coordinate system, each side of the rectangle yielding one group of M ρ values and M corresponding θ values;
taking, by the user terminal, the four calculated groups of M ρ values and M corresponding θ values as four accumulator arrays, and obtaining one peak point in each accumulator array by means of accumulated counting, each peak point corresponding to a straight line in the x, y coordinate system, the four straight lines being the four sides of the rectangle;
searching, by the user terminal, the accumulator arrays for the four vertices of the rectangle according to features of the rectangle, wherein the features of the rectangle are that adjacent sides of the rectangle meet at an angle of 90° and that opposite sides of the rectangle are equal in length; and
detecting and locating, by the user terminal, the rectangle from the four sides and the four vertices of the rectangle, the rectangle being the mark made by the user.
8. The method according to claim 7, characterized in that the specific step of determining, by the user terminal, the marked region according to the result of the detecting and locating comprises:
determining, by the user terminal, according to the detected and located rectangle, that a text region inside the rectangle is the marked region.
9. The method according to claim 1 or 2, characterized in that, after the user terminal performs optical character recognition on the marked content in the marked region, the method further comprises:
extracting, by the user terminal, the marked content obtained by optical character recognition; and
re-typesetting, by the user terminal, the extracted marked content, and saving and displaying the re-typeset marked content.
10. A user terminal, characterized in that the user terminal comprises:
an image capturing unit, configured to photograph a text and generate an image, the text bearing a mark made by a user;
an image recognition unit, configured to recognize the image and determine a marked region on the image corresponding to the mark made by the user; and
an optical character recognition unit, configured to perform optical character recognition on marked content in the marked region.
11. The user terminal according to claim 10, characterized in that the mark made by the user is a straight line, a curve, an ellipse, a rectangle or a circle.
12. The user terminal according to claim 10 or 11, characterized in that the technique by which the user terminal recognizes the image is the Hough transform, a chain code technique, a moment invariants technique, a Fourier descriptor technique or an autoregressive model technique.
13. The user terminal according to claim 10 or 11, characterized in that the image recognition unit comprises:
a detection module, configured to detect and locate, by means of the Hough transform, the mark made by the user in the image; and
a determination module, configured to determine the marked region according to a result of the detection and location.
14. The user terminal according to claim 13, characterized in that, when the mark made by the user is a straight line, the detection module comprises:
a first conversion module, configured to convert the formula y=ax+b corresponding to the straight line into the polar coordinate formula ρ=xcosθ+ysinθ, wherein a point in the x, y space corresponds to a sinusoid in the polar coordinate parameter space;
a first calculation module, configured to choose N points in the x, y coordinate system, discretize ρ into N ρ parameter spaces and θ into N θ parameter spaces, and calculate N ρ values and N corresponding θ values from the N chosen points;
a first accumulated counting module, configured to obtain a peak point (ρ0, θ0) from the calculated N ρ values and N corresponding θ values by means of accumulated counting; and
a first detection module, configured to detect and locate, according to the peak point (ρ0, θ0), the corresponding straight line in the x, y coordinate system, the straight line being the mark made by the user.
15. The user terminal according to claim 14, characterized in that the determination module comprises:
a first determination module, configured to determine, according to the detected and located straight line, that a text region above the straight line is the marked region.
16. The user terminal according to claim 13, characterized in that, when the mark made by the user is a rectangle, the detection module comprises:
a second conversion module, configured to convert the formula y=ax+b corresponding to each side of the rectangle into the polar coordinate formula ρ=xcosθ+ysinθ, wherein the rectangle comprises four sides, the x, y coordinate space of each side corresponds to a polar coordinate parameter space, and a point in the x, y coordinate space corresponds to a sinusoid in the polar coordinate parameter space;
a second calculation module, configured to choose M points in the x, y coordinate system for each side, discretize ρ into M ρ parameter spaces and θ into M θ parameter spaces, and calculate the ρ values and the corresponding θ values from the M points chosen for each side in the x, y coordinate system, each side of the rectangle yielding one group of M ρ values and M corresponding θ values;
a second accumulated counting module, configured to take the four calculated groups of M ρ values and M corresponding θ values as four accumulator arrays, and obtain one peak point in each accumulator array by means of accumulated counting, each peak point corresponding to a straight line in the x, y coordinate system, the four straight lines being the four sides of the rectangle;
a search module, configured to search the accumulator arrays for the four vertices of the rectangle according to features of the rectangle, wherein the features of the rectangle are that adjacent sides of the rectangle meet at an angle of 90° and that opposite sides of the rectangle are equal in length; and
a second detection module, configured to detect and locate the rectangle from the four sides and the four vertices of the rectangle, the rectangle being the mark made by the user.
17. The user terminal according to claim 16, characterized in that the determination module comprises:
a second determination module, configured to determine, according to the detected and located rectangle, that the character region inside the rectangle is the marked region.
18. The user terminal according to claim 10 or 11, characterized in that the user terminal further comprises:
an extraction unit, configured to extract the marked content obtained through optical character recognition; and
a display unit, configured to typeset the extracted marked content, and to save and display the typeset marked content.
CN201310193476.7A 2013-05-22 2013-05-22 A kind of character recognition method and user terminal Active CN103295008B (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201710142064.9A CN107066999A (en) 2013-05-22 2013-05-22 A kind of character recognition method and user terminal
CN201310193476.7A CN103295008B (en) 2013-05-22 2013-05-22 A kind of character recognition method and user terminal
CN201710142076.1A CN107103319A (en) 2013-05-22 2013-05-22 A kind of character recognition method and user terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310193476.7A CN103295008B (en) 2013-05-22 2013-05-22 A kind of character recognition method and user terminal

Related Child Applications (2)

Application Number Title Priority Date Filing Date
CN201710142076.1A Division CN107103319A (en) 2013-05-22 2013-05-22 A kind of character recognition method and user terminal
CN201710142064.9A Division CN107066999A (en) 2013-05-22 2013-05-22 A kind of character recognition method and user terminal

Publications (2)

Publication Number Publication Date
CN103295008A true CN103295008A (en) 2013-09-11
CN103295008B CN103295008B (en) 2017-04-05

Family

ID=49095839

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201710142076.1A Pending CN107103319A (en) 2013-05-22 2013-05-22 A kind of character recognition method and user terminal
CN201710142064.9A Pending CN107066999A (en) 2013-05-22 2013-05-22 A kind of character recognition method and user terminal
CN201310193476.7A Active CN103295008B (en) 2013-05-22 2013-05-22 A kind of character recognition method and user terminal

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201710142076.1A Pending CN107103319A (en) 2013-05-22 2013-05-22 A kind of character recognition method and user terminal
CN201710142064.9A Pending CN107066999A (en) 2013-05-22 2013-05-22 A kind of character recognition method and user terminal

Country Status (1)

Country Link
CN (3) CN107103319A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105571583A (en) * 2014-10-16 2016-05-11 华为技术有限公司 User location positioning method and server
WO2017032190A1 (en) * 2015-08-24 2017-03-02 广州视睿电子科技有限公司 Image identification method and apparatus
CN107209862A (en) * 2015-01-21 2017-09-26 国立大学法人东京农工大学 Program, information storage medium and identifying device
CN107426456A (en) * 2016-04-28 2017-12-01 京瓷办公信息系统株式会社 Image processing apparatus and image processing system
CN107610138A (en) * 2017-10-20 2018-01-19 四川长虹电器股份有限公司 A kind of bill seal regional sequence dividing method
CN110175652A (en) * 2019-05-29 2019-08-27 广东小天才科技有限公司 Information classification method, device, equipment and storage medium
CN111094912A (en) * 2017-08-09 2020-05-01 株式会社DSi Weighing system, electronic scale, and marker for electronic scale
WO2020133442A1 (en) * 2018-12-29 2020-07-02 华为技术有限公司 Text recognition method and terminal device

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109871848B (en) * 2017-12-01 2022-01-25 北京搜狗科技发展有限公司 Character recognition method and device for mobile terminal
CN109635805B (en) * 2018-12-11 2022-01-11 上海智臻智能网络科技股份有限公司 Image text positioning method and device and image text identification method and device
CN111079759B (en) * 2019-07-17 2023-12-22 广东小天才科技有限公司 Dictation content generation method, electronic equipment and system
CN111079760B (en) * 2019-08-02 2023-11-28 广东小天才科技有限公司 Character recognition method and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060161578A1 (en) * 2005-01-19 2006-07-20 Siegel Hilliard B Method and system for providing annotations of a digital work
JP3840206B2 (en) * 2003-06-23 2006-11-01 株式会社東芝 Translation method and program in copying machine
EP2067102A2 (en) * 2006-09-15 2009-06-10 Exbiblio B.V. Capture and display of annotations in paper and electronic documents
CN101661465A (en) * 2008-08-28 2010-03-03 富士施乐株式会社 Image processing apparatus, image processing method and image processing program
CN101882384A (en) * 2010-06-29 2010-11-10 汉王科技股份有限公司 Method for note management on electronic book and electronic book equipment
CN102117269A (en) * 2010-01-06 2011-07-06 佳能株式会社 Apparatus and method for digitizing documents
CN102289322A (en) * 2011-08-25 2011-12-21 盛乐信息技术(上海)有限公司 Method and system for processing handwriting
CN102446274A (en) * 2010-09-30 2012-05-09 汉王科技股份有限公司 Underlined text image preprocessing method and device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7564999B2 (en) * 2005-07-25 2009-07-21 Carestream Health, Inc. Method for identifying markers in radiographic images
CN101620595A (en) * 2009-08-11 2010-01-06 上海合合信息科技发展有限公司 Method and system for translating text of electronic equipment
CN102201051A (en) * 2010-03-25 2011-09-28 汉王科技股份有限公司 Text excerpting device, method and system

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3840206B2 (en) * 2003-06-23 2006-11-01 株式会社東芝 Translation method and program in copying machine
US20060161578A1 (en) * 2005-01-19 2006-07-20 Siegel Hilliard B Method and system for providing annotations of a digital work
EP2067102A2 (en) * 2006-09-15 2009-06-10 Exbiblio B.V. Capture and display of annotations in paper and electronic documents
CN101661465A (en) * 2008-08-28 2010-03-03 富士施乐株式会社 Image processing apparatus, image processing method and image processing program
CN102117269A (en) * 2010-01-06 2011-07-06 佳能株式会社 Apparatus and method for digitizing documents
CN101882384A (en) * 2010-06-29 2010-11-10 汉王科技股份有限公司 Method for note management on electronic book and electronic book equipment
CN102446274A (en) * 2010-09-30 2012-05-09 汉王科技股份有限公司 Underlined text image preprocessing method and device
CN102289322A (en) * 2011-08-25 2011-12-21 盛乐信息技术(上海)有限公司 Method and system for processing handwriting

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Qiangbing, Liu Wenyu: "Fast Rectangle Detection Algorithm Based on Hough Transform", Microcomputer Information (《微计算机信息》) *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105571583A (en) * 2014-10-16 2016-05-11 华为技术有限公司 User location positioning method and server
CN105571583B (en) * 2014-10-16 2020-02-21 华为技术有限公司 User position positioning method and server
CN107209862A (en) * 2015-01-21 2017-09-26 国立大学法人东京农工大学 Program, information storage medium and identifying device
WO2017032190A1 (en) * 2015-08-24 2017-03-02 广州视睿电子科技有限公司 Image identification method and apparatus
CN107426456A (en) * 2016-04-28 2017-12-01 京瓷办公信息系统株式会社 Image processing apparatus and image processing system
CN107426456B (en) * 2016-04-28 2019-06-11 京瓷办公信息系统株式会社 Image processing apparatus and image processing system
US11460340B2 (en) 2017-08-09 2022-10-04 Dsi Corporation Weighing system, electronic scale, and electronic scale marker for performing inventory management
CN111094912A (en) * 2017-08-09 2020-05-01 株式会社DSi Weighing system, electronic scale, and marker for electronic scale
CN107610138A (en) * 2017-10-20 2018-01-19 四川长虹电器股份有限公司 A kind of bill seal regional sequence dividing method
WO2020133442A1 (en) * 2018-12-29 2020-07-02 华为技术有限公司 Text recognition method and terminal device
CN112041851A (en) * 2018-12-29 2020-12-04 华为技术有限公司 Text recognition method and terminal equipment
US12125303B2 (en) 2018-12-29 2024-10-22 Huawei Technologies Co., Ltd. Text recognition method and terminal device
CN110175652A (en) * 2019-05-29 2019-08-27 广东小天才科技有限公司 Information classification method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN107066999A (en) 2017-08-18
CN107103319A (en) 2017-08-29
CN103295008B (en) 2017-04-05

Similar Documents

Publication Publication Date Title
CN103295008A (en) Character recognition method and user terminal
EP2883158B1 (en) Identifying textual terms in response to a visual query
JP6381002B2 (en) Search recommendation method and apparatus
US12038977B2 (en) Visual recognition using user tap locations
US20180276896A1 (en) System and method for augmented reality annotations
EP2015224A1 (en) Invisible junction features for patch recognition
US9239961B1 (en) Text recognition near an edge
CN107111618B (en) Linking thumbnails of images to web pages
CN110263792B (en) Image recognizing and reading and data processing method, intelligent pen, system and storage medium
CN105807957A (en) Input method and intelligent pen
CN104978577B (en) Information processing method, device and electronic equipment
CN110263793A (en) Article tag recognition methods and device
CN104834467A (en) Handwriting sharing method and system in paper page
KR20130054116A (en) Method and system for digitizing and utilizing paper documents through transparent display.
CN111783786B (en) Picture identification method, system, electronic device and storage medium
CN111695372B (en) Click-to-read method and click-to-read data processing method
KR101515162B1 (en) Information providing apparatus using electronic pen and information providing method of the same
CN113486171B (en) Image processing method and device and electronic equipment
CN104063449B (en) A kind of generation of electronic book in mobile terminal label and localization method and its system
CN105975193A (en) Rapid search method and device applied to mobile terminal
KR102547386B1 (en) Method and system for answer processing
KR101594590B1 (en) Advertising apparatus using electronic pen and advertising method of the same
CN114627471A (en) Subject identification method and device, terminal device and readable storage medium
Sarin et al. Joint Equal Contribution of Global and Local Features for Image Annotation.
US20170024417A1 (en) Information processing system and information processing method thereof

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20171106

Address after: 523808 Southern Factory Building (Phase I) Project B2 Production Plant-5, New Town Avenue, Songshan Lake High-tech Industrial Development Zone, Dongguan City, Guangdong Province

Patentee after: HUAWEI terminal (Dongguan) Co., Ltd.

Address before: 518129 Building 2, Area B, Huawei Base, Bantian, Longgang District, Guangdong Province

Patentee before: Huawei Device Co., Ltd.

TR01 Transfer of patent right
CP01 Change in the name or title of a patent holder

Address after: 523808 Southern Factory Building (Phase I) Project B2 Production Plant-5, New Town Avenue, Songshan Lake High-tech Industrial Development Zone, Dongguan City, Guangdong Province

Patentee after: Huawei Device Co., Ltd.

Address before: 523808 Southern Factory Building (Phase I) Project B2 Production Plant-5, New Town Avenue, Songshan Lake High-tech Industrial Development Zone, Dongguan City, Guangdong Province

Patentee before: HUAWEI terminal (Dongguan) Co., Ltd.

CP01 Change in the name or title of a patent holder