CN112396050A - Image processing method, device and storage medium - Google Patents

Image processing method, device and storage medium

Info

Publication number
CN112396050A
Authority
CN
China
Prior art keywords
image
target object
feature
preset
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202011401669.3A
Other languages
Chinese (zh)
Other versions
CN112396050B (en)
Inventor
万阳春
杨青
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Du Xiaoman Technology Beijing Co Ltd
Original Assignee
Shanghai Youyang New Media Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Youyang New Media Information Technology Co ltd
Priority to CN202011401669.3A
Publication of CN112396050A
Application granted
Publication of CN112396050B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/22: Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/24: Classification techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/20: Image preprocessing
    • G06V 10/26: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267: Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/40: Extraction of image or video features
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The method includes: acquiring at least one image feature of an image to be recognized, wherein the image contains a target object and each image feature characterizes either the imaging quality of the image or the completeness of the target object; determining, based on the at least one image feature, whether the image satisfies a recognition condition; and, when the image satisfies the recognition condition, recognizing the target object in the image to obtain information of the target object, so that the target object in the image is recognized accurately.

Description

Image processing method, device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
At present, certificates uploaded by users are collected and recognized over the internet in order to determine identity and authority.
In the prior art, when a certificate is recognized, content such as the text in the certificate is recognized directly, without first confirming whether the certificate can actually be recognized. For images with poor imaging quality, the recognized content is therefore inaccurate and unreliable, which in turn compromises the accuracy of identity and authority determination and introduces significant security risks into subsequent internet activity.
Disclosure of Invention
The present application provides an image processing method, an image processing device, and a storage medium, which enable accurate recognition of an image.
In a first aspect, an embodiment of the present application provides an image processing method, including:
acquiring at least one image feature of an image to be recognized, wherein the image contains a target object, and the image feature is used to characterize the imaging quality of the image or the completeness of the target object;
determining, based on the at least one image feature, whether the image satisfies a recognition condition;
and, when the image satisfies the recognition condition, recognizing the target object in the image to obtain information of the target object.
In a second aspect, an embodiment of the present application provides an electronic device, including:
an acquisition unit, configured to acquire at least one image feature of an image to be recognized, wherein the image contains a target object, and the image feature is used to characterize the imaging quality of the image or the completeness of the target object;
a processing unit, configured to determine, based on the at least one image feature, whether the image satisfies a recognition condition;
the processing unit being further configured to recognize the target object in the image when the image satisfies the recognition condition, so as to obtain information of the target object.
In a third aspect, an embodiment of the present application provides an electronic device, including: a memory and a processor;
the memory stores computer-executable instructions;
the processor executes the computer-executable instructions stored by the memory, causing the processor to perform the method of the first aspect or embodiments thereof.
In a fourth aspect, an embodiment of the present application provides a storage medium, including: a computer-readable storage medium and a computer program, the computer program being used to implement the method of the first aspect or its implementations.
According to the image processing method and device of the present application, whether an image satisfies the recognition condition, in terms of imaging quality and/or the completeness of the target object, is determined based on at least one image feature of the image to be recognized, and the target object in the image is recognized only when the image satisfies the recognition condition, so that an accurate recognition result is obtained.
Drawings
In order to illustrate the embodiments of the present application or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. The drawings described below show some embodiments of the present application; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure;
fig. 2 is a flowchart illustrating an image processing method 200 according to an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating an image processing method 300 according to an embodiment of the present disclosure;
fig. 4 is a flowchart illustrating an image processing method 400 according to an embodiment of the present disclosure;
fig. 5 is a flowchart illustrating an image processing method 500 according to an embodiment of the present disclosure;
fig. 6 is a flowchart illustrating an image processing method 600 according to an embodiment of the present disclosure;
FIGS. 7a and 7b are schematic diagrams of the intersection and union of images provided by an embodiment of the present application;
fig. 8 is a flowchart illustrating an image processing method 800 according to an embodiment of the present disclosure;
fig. 9 is a flowchart illustrating an image processing method 900 according to an embodiment of the present disclosure;
FIG. 10 is a schematic diagram of a process 1000 for perspective transformation provided by an embodiment of the present application;
fig. 11 is a flowchart illustrating an image processing method 1100 according to an embodiment of the present disclosure;
fig. 12 is a schematic structural diagram of an electronic device 1200 according to an embodiment of the present disclosure;
fig. 13 is a schematic hardware structure diagram of an electronic device 1300 according to an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
In various fields, before a user performs an action such as a transaction, an investment, or a rating, the user uploads an image containing a target object, typically a certificate, from which the user's identity, authority, capability, and the like are determined.
At present, the certificate in an image uploaded by a user is recognized to obtain content such as the text in the certificate. However, the quality of user-uploaded images often fails to meet the recognition requirements, for example because of low definition, low brightness, overexposure, or an incomplete certificate. If an image that does not meet the recognition requirements is recognized directly, the accuracy of the recognition result cannot be guaranteed.
In view of the above problems, in the embodiments of the present application, image features of the image to be recognized are acquired, whether the image satisfies the recognition condition in terms of imaging quality or completeness of the target object is determined based on those features, and only after the image is confirmed to satisfy the recognition condition is the text in the image recognized to obtain the text information.
As an example and not by way of limitation, it is also confirmed whether the image to be recognized is a copy, contains an invalid object (e.g., an incorrect certificate or no certificate at all), is in the correct orientation, and so on; the text in the image is recognized only when the state of the image is confirmed to be normal and the image satisfies the recognition condition.
The technical solutions of the embodiments of the present application can be applied to various electronic devices to accurately recognize the image to be recognized. The electronic device may be a terminal device, such as a mobile phone, a tablet computer (Pad), a computer, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, or a terminal device in industrial control, autonomous driving (self driving), telemedicine (remote medical), a smart city (smart city), or a smart home (smart home). The terminal device in the embodiments of the present application may also be a wearable device, also called a wearable smart device: a general term for devices designed and developed with wearable technology for everyday wear, such as glasses, gloves, watches, clothing, and shoes. A wearable device is a portable device worn directly on the body or integrated into the user's clothing or accessories. The terminal device may be fixed or mobile.
For example, the electronic device in the embodiment of the present application may also be a server, and when the electronic device is the server, the electronic device may receive an image acquired by a terminal device, and perform image processing on the image, so as to achieve accurate identification of a target object.
Fig. 1 is a schematic structural diagram of an electronic device 100 according to an embodiment of the present disclosure. As shown in fig. 1, the electronic device 100 includes: the image recognition system comprises an image acquisition unit 101, an image determination unit 102 and an image recognition unit 103, wherein the image determination unit 102 is respectively connected with the image acquisition unit 101 and the image recognition unit 103.
The image acquisition unit 101 is configured to acquire an image to be recognized, which contains, for example, a certificate object. The image may be captured by an image acquisition device, transmitted by another device, or input by the user; the embodiments of the present application do not limit this.
The image determination unit 102 receives the image to be recognized from the image acquisition unit 101 and evaluates the imaging quality of the image and/or the completeness of the target object in the image. When the image satisfies the recognition condition, that is, when the imaging quality satisfies the quality condition and/or the completeness of the target object satisfies the completeness condition, the image determination unit 102 sends the image to the image recognition unit 103. Conversely, if the imaging quality does not satisfy the recognition condition, or the completeness of the target object does not, or neither does, the image is not recognized.
In some embodiments, the electronic device 100 further includes an information sending unit 104 connected to the image determination unit 102. When the image determination unit 102 determines that the image does not satisfy the recognition condition, it sends an instruction to the information sending unit 104, which generates indication information accordingly. The indication information indicates that the image does not meet the requirements, optionally indicates which requirement is not met, and prompts the user to provide a new image to be recognized.
After receiving the image to be recognized, the image recognition unit 103 recognizes the target object in it to obtain the information of the target object. Generally, the obtained information is text information; in some embodiments, image information of the target object may also be obtained, for example the personal photograph when the target object is an identification card.
It should be understood that the electronic device 100 further includes a storage unit (not shown in the figure) for storing the information of recognized target objects. Illustratively, the information of each target object is stored in structured form: for example, the field "Name: Li" on an identification card is stored in [key, value] form as [name, Li].
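As an illustration only (the patent does not specify an implementation), such structured storage can be sketched in Python as a simple mapping; the field names here are hypothetical:

```python
# Minimal sketch of structured [key, value] storage; the field names are
# hypothetical and not taken from the patent.
recognized_fields = {}

def store_field(key: str, value: str) -> None:
    """Store one recognized field of the target object as [key, value]."""
    recognized_fields[key] = value

store_field("name", "Li")             # from the "Name" line of an ID card
store_field("date_of_birth", "...")   # other recognized lines stored likewise
```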
The present application is specifically illustrated by the following examples.
Fig. 2 is a flowchart illustrating an image processing method 200 according to an embodiment of the present disclosure.
In order to recognize the target object in an image accurately, the embodiment of the present application first evaluates the imaging quality of the image and the completeness of the target object before recognition, and recognizes the target object only when the image is confirmed, on that basis, to satisfy the recognition condition.
As shown in fig. 2, the image processing method provided in the embodiment of the present application includes:
s201: at least one image feature of an image to be identified is acquired.
It should be understood that the image to be recognized contains a target object, which may be a certificate object, such as an identification card, a license, or a certificate, or may be a document object.
Wherein the image features are used to characterize the imaging quality of the image or the completeness of the target object in the image.
Illustratively, the image features include at least one first feature characterizing the imaging quality of the image, and/or a second feature characterizing the completeness of the target object. Correspondingly, acquiring at least one image feature of the image to be recognized includes: acquiring at least one first feature of the image, and/or acquiring a second feature of the image. It should be understood that only the first feature(s), only the second feature, or both may be acquired.
Optionally, the first feature may be any one of a variance feature, a mean feature, a first pixel number feature, or a second pixel number feature. The first pixel number is the number of adjacent pixels with pixel values larger than a first preset pixel value, the second pixel number is the number of adjacent pixels with pixel values smaller than a second preset pixel value, and optionally, the first preset pixel value is larger than the second preset pixel value.
S202: based on the at least one image feature, it is determined whether the image satisfies the recognition condition.
In this step, whether the image satisfies the recognition condition is determined from the acquired image features taken together. For example, whether the definition (sharpness) of the image meets the requirement is determined based on the variance feature, and the image is determined to satisfy the recognition condition when the definition meets the requirement. As another example, whether the definition meets the requirement is determined based on the variance feature and whether the brightness meets the requirement is determined based on the mean feature, and the image is determined to satisfy the recognition condition only when both the definition and the brightness meet the requirements.
S203: and when the image meets the identification condition, identifying the target object in the image to obtain the information of the target object.
An image that satisfies the recognition condition can be recognized accurately; that is, recognizing only such images yields a more reliable recognition result. Further, the information of the target object is obtained by recognizing the target object in the image. Generally, this information includes text information, such as the name, address, and date of birth on an identification card; in some embodiments it also includes image information, such as the personal photograph on an identification card.
Illustratively, the resulting information of the target object is stored for subsequent querying or use.
In the embodiments of the present application, whether an image satisfies the recognition condition, in terms of imaging quality and/or completeness of the target object, is determined based on at least one image feature of the image to be recognized, and the target object is recognized only when the image satisfies the recognition condition, so as to obtain an accurate recognition result.
Fig. 3 is a flowchart illustrating an image processing method 300 according to an embodiment of the present disclosure.
In order to confirm accurately whether an image satisfies the recognition condition, an embodiment of the present application proposes the implementation shown in fig. 3, including:
S301: for each image feature in the at least one image feature, an evaluation result of whether the image feature satisfies the corresponding preset condition is obtained based on the image feature and the corresponding threshold.
For example, if the image feature is the variance of the image, the image feature is determined to satisfy the corresponding preset condition when the variance is greater than a definition threshold. It should be understood that the variance of the image is calculated from the pixel value of each pixel. The larger the variance, the wider the frequency response range of the image, indicating accurate focus and thus high definition (sharpness); the smaller the variance, the narrower the frequency response range, indicating few edges and thus low definition. The definition threshold is the preset variance value at which the definition requirement is met.
If the image feature is the mean of the image, the image feature is determined to satisfy the corresponding preset condition when the mean is greater than a brightness threshold. It should be understood that the mean of the image is calculated from the pixel value of each pixel: the larger the mean, the brighter the image, and the smaller the mean, the darker the image. The brightness threshold is the mean value at which the brightness requirement is met.
If the image feature is the first pixel number, the image feature is determined to satisfy the corresponding preset condition when the first pixel number is smaller than a first number threshold, where the first pixel number is the number of adjacent pixels whose pixel values are greater than a first preset pixel value. The number of adjacent pixels greater than the first preset pixel value, i.e. the first pixel number, is determined first; when it is smaller than the first number threshold, there is no bright spot (also called a light spot) in the image.
If the image feature is the second pixel number, the image feature is determined to satisfy the corresponding preset condition when the second pixel number is smaller than a second number threshold, where the second pixel number is the number of adjacent pixels whose pixel values are smaller than a second preset pixel value. The number of adjacent pixels smaller than the second preset pixel value, i.e. the second pixel number, is determined first; when it is smaller than the second number threshold, there is no shadow (dark region) in the image.
It should be noted that the first preset pixel value is greater than the second preset pixel value; the first number threshold and the second number threshold may be the same or different, which the present application does not limit.
If the image feature is the intersection-over-union ratio, the image feature is determined to satisfy the corresponding preset condition when the ratio is greater than an intersection-ratio threshold. The intersection-over-union ratio is the ratio of the intersection to the union of the foreground image and the preset image, where the foreground image is obtained by segmenting the image and contains the target object while the background image does not; generally, the foreground image contains only the target object. It should be understood that when the ratio is 1, the target object in the image is complete; when the ratio is less than 1, the target object has missing edges, missing corners, or occlusion, and the smaller the ratio, the more severe the defect. The intersection-ratio threshold distinguishes, according to the acceptable degree of incompleteness, whether the target object satisfies the preset condition.
S302: based on the evaluation result of each image feature, it is determined whether the image satisfies the recognition condition.
In a practical application scenario, the evaluation results of the image features may be combined by a weighted operation to determine whether the image satisfies the recognition condition; or the image may be determined to satisfy the recognition condition when more than half of the evaluation results indicate that the corresponding image features satisfy their preset conditions, and otherwise not; or the image may be determined to satisfy the recognition condition only when every evaluation result indicates that the corresponding image feature satisfies its preset condition, and determined not to satisfy it when any evaluation result indicates otherwise.
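By way of illustration, the per-feature checks of S301 and the strictest combination rule of S302 can be sketched as follows; all threshold values are assumptions chosen for the example, since the patent does not fix them:

```python
# Sketch of S301/S302. Thresholds are illustrative assumptions; the patent
# only requires that such thresholds exist.
DEFINITION_THRESHOLD = 100.0   # minimum variance (definition/sharpness)
BRIGHTNESS_THRESHOLD = 40.0    # minimum mean pixel value (brightness)
FIRST_NUMBER_THRESHOLD = 500   # maximum first pixel number (bright spots)
SECOND_NUMBER_THRESHOLD = 500  # maximum second pixel number (shadows)
IOU_THRESHOLD = 0.9            # minimum intersection-over-union (completeness)

def satisfies_recognition_condition(variance, mean, first_n, second_n, iou):
    """Evaluate each image feature (S301), then combine the results (S302)."""
    evaluations = {
        "definition": variance > DEFINITION_THRESHOLD,
        "brightness": mean > BRIGHTNESS_THRESHOLD,
        "no_bright_spot": first_n < FIRST_NUMBER_THRESHOLD,
        "no_shadow": second_n < SECOND_NUMBER_THRESHOLD,
        "completeness": iou > IOU_THRESHOLD,
    }
    # Strictest S302 variant: every feature must satisfy its condition.
    return all(evaluations.values()), evaluations
```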
Fig. 4 is a flowchart illustrating an image processing method 400 according to an embodiment of the present disclosure.
On the basis of any of the above embodiments, the present application will now describe how to acquire at least one first feature of an image with reference to fig. 4.
As shown in fig. 4, the method includes:
s401: the image is converted into a grayscale image.
Generally, the image to be recognized is a color image, such as an RGB image, and in this step the color image is converted into a grayscale image through color space conversion. Optionally, the pixel value of each pixel in the grayscale image lies between 0 and 255.
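As a minimal sketch of S401 (assuming OpenCV, which the patent does not name):

```python
import cv2

# S401 sketch: convert the color image (BGR in OpenCV) to grayscale;
# each pixel value then lies in [0, 255]. The file path is illustrative.
image = cv2.imread("image_to_be_recognized.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```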
S402: based on the gray scale image, at least one first feature of the image is determined.
At least one first feature of the image is obtained based on the pixel value of each pixel point in the gray image, such as the variance of the image, the mean of the image, the first pixel number or the second pixel number, and the like.
With reference to fig. 5, a possible implementation is provided for determining at least one first feature of an image based on a gray-scale image.
S501: and converting the gray level image into a Laplace image through a Laplace algorithm.
It should be appreciated that the Laplacian is a differential operator; applying it enhances the regions of abrupt gray-level change in the grayscale image and suppresses the regions where the gray level varies slowly.
In this step, the grayscale image is converted into a laplacian image by a laplacian algorithm, and an operation can be performed based on an arbitrary laplacian operator.
Illustratively, the gray image is convolved through a preset laplacian mask to obtain a laplacian image.
The laplacian mask is a preset convolution template, and preferably, the laplacian mask may be set to a 3-by-3 mask as shown in table 1.
     0   1   0
     1  -4   1
     0   1   0

TABLE 1
S502: based on the laplacian image, at least one first feature of the image is determined.
Illustratively, at least one of the variance, the mean, the first pixel number, and the second pixel number of the laplacian image is calculated based on the pixel value of each pixel point in the laplacian image.
The first pixel number is the number of adjacent pixels with pixel values larger than a first preset pixel value, the second pixel number is the number of adjacent pixels with pixel values smaller than a second preset pixel value, and the first preset pixel value is larger than the second preset pixel value.
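Illustratively, S501 and S502 can be sketched as follows, again assuming OpenCV. The "adjacent pixel" counts are read here as the size of the largest connected region of qualifying pixels (one plausible interpretation of the patent's wording), and all preset pixel values are assumptions:

```python
import cv2
import numpy as np

# S501/S502 sketch. The 3x3 Laplacian mask matches Table 1.
LAPLACIAN_MASK = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]], dtype=np.float32)

def largest_connected_region(mask: np.ndarray) -> int:
    """Pixel count of the largest connected region in a binary mask."""
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask.astype(np.uint8))
    return int(stats[1:, cv2.CC_STAT_AREA].max()) if n > 1 else 0

def first_features(gray: np.ndarray) -> dict:
    # S501: convolve the grayscale image with the Laplacian mask.
    lap = cv2.filter2D(gray.astype(np.float32), -1, LAPLACIAN_MASK)
    # Illustrative preset pixel values; the patent only requires the first
    # to exceed the second. The spot/shadow counts are taken on the
    # grayscale image here, a pragmatic reading of S502.
    first_preset, second_preset = 200, 30
    return {
        "variance": float(lap.var()),   # definition (sharpness) indicator
        "mean": float(gray.mean()),     # brightness indicator
        "first_pixel_number": largest_connected_region(gray > first_preset),
        "second_pixel_number": largest_connected_region(gray < second_preset),
    }
```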
Fig. 6 is a flowchart illustrating an image processing method 600 according to an embodiment of the present application.
On the basis of any of the above embodiments, the embodiments of the present application will describe how to acquire the second feature of the image with reference to fig. 6.
As shown in fig. 6, the method includes:
s601: and carrying out image segmentation on the image through a segmentation algorithm to obtain a foreground image and a background image.
In this step, the image is segmented by a segmentation algorithm, for example, a GrabCut segmentation algorithm, to obtain a foreground image containing the target object and a background image not containing the target object.
S602: based on the foreground image and the preset image, an Intersection-over-Union (IoU) of the foreground image and the preset image is calculated.
Note that the preset image is an image having the same aspect ratio as the target object. As one example, when the user captures an image of the target object, the preset image or its outline is displayed in the viewfinder or preview frame, so that the user can capture an image containing the target object aligned with the preset image. As another example, after the image to be recognized is acquired, the target object in the image is calibrated against the preset image, for example by aligning the center point of the target object with the center point of the preset image and scaling the target object to the size of the preset image.
In this step, the intersection of the foreground image and the preset image, i.e. the area of quadrilateral ABCD in fig. 7a, is divided by their union, i.e. the area of the irregular polygon EFBGHD in fig. 7a, to obtain the intersection-over-union ratio, which represents the ratio of the intersection to the union of the foreground image and the preset image. Referring to fig. 7b, if quadrilateral A'B'C'D' is an occluded area, that area belongs to the background image, and the intersection ratio becomes the area of quadrilateral ABCD minus quadrilateral A'B'C'D', divided by the area of the irregular polygon EFBGHD.
It is to be understood that the second feature includes the cross-over ratio.
Further, when the intersection ratio is larger than the intersection ratio threshold value, the image feature is determined to meet the corresponding preset condition.
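By way of example, S601 and S602 can be sketched as follows, assuming OpenCV's GrabCut for the segmentation and binary masks of equal size for the ratio; the initialization rectangle and iteration count are illustrative:

```python
import cv2
import numpy as np

# S601 sketch: GrabCut segmentation initialized with a rectangle (assumed to
# come from the preview frame); pixels marked as (probable) foreground form
# the foreground mask containing the target object.
def foreground_mask(image: np.ndarray, rect: tuple) -> np.ndarray:
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd, fgd = np.zeros((1, 65), np.float64), np.zeros((1, 65), np.float64)
    cv2.grabCut(image, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD))

# S602 sketch: intersection-over-union between the foreground mask and the
# mask of the preset image region. An occluded region (A'B'C'D' in fig. 7b)
# is simply absent from the foreground mask, so it lowers the ratio.
def intersection_over_union(fg: np.ndarray, preset: np.ndarray) -> float:
    union = np.logical_or(fg, preset).sum()
    return float(np.logical_and(fg, preset).sum()) / union if union else 0.0
```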
In this embodiment, whether the target object in the image is complete is determined by computing the intersection-over-union ratio, so that target objects with missing edges, missing corners, or occlusion are screened out before the image is confirmed to satisfy the recognition condition.
Fig. 8 is a flowchart illustrating an image processing method 800 according to an embodiment of the present disclosure.
On the basis of any of the above embodiments, with reference to fig. 8, the image processing method further includes:
s801: and inputting the image to be identified into the image classification model to obtain the classification result of the image.
The image classification model is trained based on a first network model, for example an Inception-series network model; preferably, Inception v3 may be used as the backbone network.
The classification result indicates whether the state of the target object is normal or abnormal. Optionally, the abnormal state includes at least one of a copy or an invalid object. Taking an identification card as the target object, an invalid object covers the cases where the target object is a temporary identification card, where the image contains no identification card, or where the target object is not an identification card. Optionally, the abnormal state further covers whether the required side of the target object is shown: for example, when the front of the identification card must be uploaded, an image showing the back of the card is in an abnormal state, while an image showing the front is in a normal state.
For example, the image to be recognized may be input into the image classification model, or the image to be recognized may be preprocessed and the processed image may be input into the image classification model, for example, the image to be recognized may be converted into a grayscale image.
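As an illustration only, a classification model of this kind can be sketched with an Inception v3 backbone in Keras; the number of classes and the classification head are assumptions, since the patent only names the Inception series:

```python
import tensorflow as tf

# Sketch of the image classification model of S801: Inception v3 backbone
# plus a small softmax head. NUM_CLASSES and the head layout are assumed.
NUM_CLASSES = 4  # e.g. normal, copy, invalid object, wrong side (illustrative)

backbone = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
features = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(features)
classifier = tf.keras.Model(backbone.input, outputs)
```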
S802: and when the classification result indicates that the state of the target object is a normal state and the image meets the identification condition, identifying the target object in the image to obtain the information of the target object.
It should be noted that this embodiment does not limit the execution order of determining, based on the classification result, whether the state of the target object is normal and of determining whether the image satisfies the recognition condition; the former may be executed before, after, or simultaneously with the latter.
In this embodiment, the target object in the image is additionally classified, and the target object is recognized only when it is determined to be in a normal state, which prevents misrecognition of the target object and errors in the obtained information.
Fig. 9 is a flowchart illustrating an image processing method 900 according to an embodiment of the present application.
On the basis of any of the above embodiments, and with reference to fig. 9, the following implementation is provided for recognizing the target object in an image that satisfies the recognition condition, so as to obtain the information of the target object:
s901: at least one text line image of the target object is acquired.
In this step, the text line of the target object in the image is identified to obtain at least one text line image.
For example, edge detection may be performed on each text line of the target object by any image segmentation algorithm, and the binary mask of each text line may be extracted by morphological operations (opening and closing operations) combined with connected-component analysis, so as to obtain the text line images.
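A minimal sketch of such text line extraction, assuming OpenCV and with the threshold method and kernel size as illustrative choices:

```python
import cv2
import numpy as np

# S901 sketch: binarize the grayscale target-object image, merge the
# characters of each line into one connected region with a wide closing
# kernel, then take the bounding box of each region as a text line.
def text_line_boxes(card_gray: np.ndarray, min_area: int = 100):
    _, binary = cv2.threshold(card_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (25, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    n, _, stats, _ = cv2.connectedComponentsWithStats(closed)
    boxes = [tuple(int(v) for v in stats[i][:4])         # (x, y, w, h)
             for i in range(1, n)
             if stats[i][cv2.CC_STAT_AREA] >= min_area]  # drop noise blobs
    return sorted(boxes, key=lambda b: b[1])             # top-to-bottom order
```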
S902: and inputting at least one text line image into the text recognition model to obtain text information of the target object.
And the text recognition model is obtained based on the second network model training.
Optionally, the second network model is a network model built from a convolutional neural network (CNN) combined with connectionist temporal classification (CTC). Compared with a traditional network model containing a recurrent neural network (RNN), it can recognize the content of a text line accurately while improving recognition speed. CTC overcomes the mismatch between the lengths of the output sequence and the input sequence: blanks are inserted to form a new sequence, and the blanks are removed by a fixed rule during decoding. Optionally, the backbone network of the second network model may be a DenseNet.
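The decoding rule just described (collapse repeated labels, then remove blanks) can be sketched as a greedy CTC decoder; treating class 0 as the blank is an assumption:

```python
import numpy as np

# Greedy CTC decoding sketch: pick the most likely class per time step,
# collapse consecutive repeats, then drop the blank symbol. `logits` has
# shape (time_steps, num_classes); blank = class 0 is an assumption.
def ctc_greedy_decode(logits: np.ndarray, blank: int = 0) -> list:
    best = logits.argmax(axis=1)
    return [int(c) for i, c in enumerate(best)
            if c != blank and (i == 0 or c != best[i - 1])]
```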
It should be understood that, for each text line image, the text recognition model in this step outputs the text information of the target object as structured information, that is, in [key, value] form.
On the basis of the embodiment shown in fig. 9, as an example, the image to be recognized needs to be preprocessed before acquiring at least one text line image of the target object.
Illustratively, during image capture there is some relative angle between the image acquisition device (for example, the camera lens) and the target object, so the target object exhibits some degree of perspective distortion. As shown in fig. 10, the target object in fig. 10-(a) is the quadrilateral with vertices 1, 2, 3, 4, which is a trapezoid; it therefore needs to be converted into the regular quadrilateral with vertices 1, 2, 3, 4 shown in fig. 10-(b).
For example, the image of the target object may be extracted from the image to be recognized by any edge detection algorithm: edge detection is performed on the target object with an image segmentation algorithm, the binary boundary (also called the binary mask) of the target object is extracted by morphological operations (opening and closing operations) combined with connected-component analysis, the maximum bounding rectangle is taken from the binary boundary, and the area ratio between the target-object region and the whole image is used to rule out false detections. Further, the four vertices of the quadrilateral are obtained through connected-component analysis, the target object within the corrected regular quadrilateral is obtained by perspective transformation based on the position coordinates of the four vertices, and recognition is performed on the image containing the target object to obtain its information.
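Illustratively, the perspective correction of fig. 10 can be sketched as follows, assuming OpenCV; the output size (a standard ID-1 card at roughly 10 pixels per millimetre) is an assumption:

```python
import cv2
import numpy as np

# Map the four detected vertices of the document quadrilateral (ordered
# top-left, top-right, bottom-right, bottom-left, matching the numbering
# 1-2-3-4 in fig. 10) onto an upright rectangle.
def rectify(image: np.ndarray, quad: np.ndarray,
            out_w: int = 856, out_h: int = 540) -> np.ndarray:
    src = np.float32(quad)
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    matrix = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(image, matrix, (out_w, out_h))
```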
Fig. 11 is a flowchart illustrating an image processing method 1100 according to an embodiment of the present disclosure.
On the basis of any of the above embodiments, this embodiment provides a possible implementation manner, which specifically includes:
First, an image is captured. In one application scenario, when the user captures an image of the target object with a handheld image acquisition device, the electronic device provides an image preview frame through its display. In some embodiments, a preset image outline with the same aspect ratio as the target object is displayed in the preview frame; the user aligns the target object with the outline, places it within the outline, and captures the image.
Next, imaging quality evaluation is performed on the image to be recognized containing the target object, for example determining whether its definition and brightness meet the preset conditions and whether bright spots or shadows exist. If the image passes the imaging quality evaluation, completeness detection continues; if not, the image is captured again.
Completeness detection is then performed on the image whose imaging quality is qualified, determining whether the target object has missing edges, missing corners, or occlusion. If the completeness result is qualified, the next step, risk type evaluation, is performed; if not, the image is captured again.
Then, risk type evaluation is performed on the image to be recognized by the pre-trained image classification model to obtain a classification result. When the classification result indicates that the state of the target object is normal, the target object is detected; when it indicates an abnormal state, the image is captured again.
When the image to be recognized passes all of the above evaluations and detections, it can be recognized accurately. Further, this embodiment detects the target object in the image, for example obtaining the image of the target object through an image segmentation algorithm, then performs text line detection on it, for example obtaining at least one text line image through an image segmentation algorithm, and finally inputs the text line images into the pre-trained text recognition model, which outputs structured text information.
Fig. 12 is a schematic structural diagram of an electronic device 1200 according to an embodiment of the present application, and as shown in fig. 12, the electronic device 1200 includes:
an obtaining unit 1210, configured to obtain at least one image feature of an image to be identified, where the image includes a target object, and the image feature is used to represent imaging quality of the image or a completeness of the target object;
a processing unit 1220 for determining whether the image satisfies the recognition condition based on at least one image feature;
the processing unit 1220 is further configured to identify a target object in the image when the image satisfies the identification condition, and obtain information of the target object.
The electronic device 1200 provided by the present embodiment includes an obtaining unit 1210 and a processing unit 1220, and determines whether an image satisfies a recognition condition in terms of imaging quality and/or integrity of a target object based on at least one image feature of the obtained image to be recognized, and recognizes the target object in the image when the image satisfies the recognition condition, so as to obtain an accurate recognition result.
In one possible design, the obtaining unit 1210 is specifically configured to:
acquiring at least one first characteristic of the image, wherein the first characteristic is used for representing the imaging quality of the image;
and/or,
and acquiring a second characteristic of the image, wherein the second characteristic is used for representing the integrity degree of the target object.
In one possible design, the processing unit 1220 is specifically configured to:
for each image feature in at least one image feature, obtaining an evaluation result of whether the image feature meets a corresponding preset condition based on the image feature and a corresponding threshold;
based on the evaluation result of each image feature, it is determined whether the image satisfies the recognition condition.
In one possible design, the processing unit 1220 is specifically configured to:
and when the evaluation result of each image feature indicates that the image feature meets the corresponding preset condition, determining that the image meets the identification condition.
In one possible design, the obtaining unit 1210 is specifically configured to:
converting the image into a gray scale image;
based on the gray scale image, at least one first feature of the image is determined.
In one possible design, the obtaining unit 1210 is specifically configured to:
converting the gray level image into a Laplace image through a Laplace algorithm;
based on the laplacian image, at least one first feature of the image is determined.
In one possible design, the obtaining unit 1210 is specifically configured to:
and performing convolution operation on the gray image through a preset Laplace mask to obtain a Laplace image.
In one possible design, the obtaining unit 1210 is specifically configured to:
calculating to obtain at least one of variance, mean, first pixel quantity or second pixel quantity of the Laplace image based on the pixel value of each pixel point in the Laplace image;
the first pixel number is the number of adjacent pixels with pixel values larger than a first preset pixel value, the second pixel number is the number of adjacent pixels with pixel values smaller than a second preset pixel value, and the first preset pixel value is larger than the second preset pixel value.
In one possible design, the processing unit 1220 is specifically configured to:
if the image characteristics are the variance of the image, determining that the image characteristics meet corresponding preset conditions when the variance is larger than a definition threshold;
if the image characteristics are the average value of the image, determining that the image characteristics meet corresponding preset conditions when the average value is larger than a brightness threshold;
if the image features are the first pixel number, when the first pixel number is smaller than a first number threshold, determining that the image features meet corresponding preset conditions, wherein the first pixel number is the number of adjacent pixels of which the pixel values are larger than a first preset pixel value;
if the image feature is the second pixel number, when the second pixel number is smaller than a second number threshold, it is determined that the image feature satisfies the corresponding preset condition, and the second pixel number is the number of adjacent pixels of which the pixel values are smaller than a second preset pixel value.
In one possible design, the obtaining unit 1210 is specifically configured to:
carrying out image segmentation on the image through a segmentation algorithm to obtain a foreground image and a background image, wherein the foreground image comprises a target object, and the background image does not comprise the target object;
and calculating to obtain an intersection and union ratio of the foreground image and the preset image based on the foreground image and the preset image, wherein the intersection and union ratio is used for representing the ratio of the intersection and the union of the foreground image and the preset image.
In one possible design, the processing unit 1220 is specifically configured to:
and when the intersection ratio is greater than the intersection ratio threshold value, determining that the image characteristics meet corresponding preset conditions, wherein the intersection ratio is used for representing the ratio of the intersection and the union of the foreground image and the preset image, and the foreground image is an image which is obtained by carrying out image segmentation on the image and contains a target object.
In one possible design, the obtaining unit 1210 is further configured to: input the image to be recognized into an image classification model to obtain a classification result of the image, wherein the image classification model is trained based on a first network model, the classification result indicates whether the state of the target object is normal or abnormal, and the abnormal state includes at least one of a copy or an invalid object;
the processing unit 1220 is further configured to, when the classification result indicates that the state of the target object is a normal state, perform a step of identifying the target object in the image to obtain information of the target object when the image satisfies the identification condition.
In one possible design, the processing unit is specifically configured to:
acquiring at least one text line image of a target object;
and inputting at least one text line image into a text recognition model to obtain text information of the target object, wherein the text recognition model is obtained based on the second network model training.
The electronic device provided in this embodiment can be used to implement the method in any of the above embodiments, and the implementation effect is similar to that of the method embodiment, and is not described here again.
Fig. 13 is a schematic hardware structure diagram of an electronic device 1300 according to an embodiment of the present disclosure. As shown in fig. 13, in general, an electronic device 1300 includes: a processor 1310 and a memory 1320.
Processor 1310 may include one or more processing cores, such as a 4-core or an 8-core processor. The processor 1310 may be implemented in at least one of the hardware forms DSP (Digital Signal Processing), FPGA (Field-Programmable Gate Array), and PLA (Programmable Logic Array). The processor 1310 may also include a main processor and a coprocessor: the main processor, also called a Central Processing Unit (CPU), processes data in the awake state; the coprocessor is a low-power processor for processing data in the standby state. In some embodiments, the processor 1310 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content to be displayed on the display screen. In some embodiments, processor 1310 may also include an AI (Artificial Intelligence) processor for computational operations related to machine learning.
Memory 1320 may include one or more computer-readable storage media, which may be non-transitory. Memory 1320 may also include high speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in the memory 1320 is used to store at least one instruction for execution by the processor 1310 to implement the methods provided by the method embodiments herein.
Optionally, as shown in fig. 13, the electronic device 1300 may further include a transceiver 1330, and the processor 1310 may control the transceiver 1330 to communicate with other devices, specifically to send information or data to other devices or to receive information or data sent by other devices.
The transceiver 1330 may include a transmitter and a receiver, among others. The transceiver 1330 can further include one or more antennas.
Optionally, the electronic device 1300 may implement corresponding processes in the methods of the embodiments of the present application, and for brevity, details are not described here again.
Those skilled in the art will appreciate that the configuration shown in fig. 13 does not constitute a limitation of the electronic device 1300, which may include more or fewer components than shown, combine certain components, or adopt a different arrangement of components.
Embodiments of the present application also provide a non-transitory computer-readable storage medium, where instructions in the storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the method provided by the above embodiments.
The computer-readable storage medium in this embodiment may be any available medium that can be accessed by a computer or a data storage device such as a server, a data center, etc. that is integrated with one or more available media, and the available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., SSDs), etc.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The embodiment of the present application also provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the method provided by the above embodiment.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (15)

1. A method of processing an image, the method comprising:
acquiring at least one image feature of an image to be identified, wherein the image comprises a target object, and the image feature is used for representing the imaging quality of the image or the integrity degree of the target object;
determining whether the image satisfies an identification condition based on the at least one image feature;
and when the image meets the identification condition, identifying a target object in the image to obtain the information of the target object.
2. The method of claim 1, wherein the obtaining at least one image feature of the image to be identified comprises:
acquiring at least one first feature of the image, wherein the first feature is used for representing the imaging quality of the image;
and/or,
and acquiring a second characteristic of the image, wherein the second characteristic is used for representing the integrity degree of the target object.
3. The method of claim 1 or 2, wherein the determining whether the image satisfies an identification condition based on the at least one image feature comprises:
for each image feature in the at least one image feature, obtaining an evaluation result of whether the image feature meets a corresponding preset condition based on the image feature and a corresponding threshold;
determining whether the image satisfies the recognition condition based on the evaluation result of each image feature.
4. The method according to claim 3, wherein the determining whether the image satisfies an identification condition based on the evaluation result of each image feature comprises:
and when the evaluation result of each image feature indicates that the image feature meets the corresponding preset condition, determining that the image meets the identification condition.
5. The method of claim 2, wherein said acquiring at least one first feature of said image comprises:
converting the image into a grayscale image;
based on the grayscale image, at least one first feature of the image is determined.
6. The method of claim 5, wherein the determining at least one first feature of the image based on the grayscale image comprises:
converting the grayscale image into a Laplacian image through a Laplacian algorithm;
and determining at least one first feature of the image based on the Laplacian image.
7. The method of claim 6, wherein the converting the grayscale image into a Laplacian image through a Laplacian algorithm comprises:
performing a convolution operation on the grayscale image with a preset Laplacian mask to obtain the Laplacian image.
8. The method according to claim 6 or 7, wherein the determining at least one first feature of the image based on the Laplacian image comprises:
calculating at least one of a variance, a mean, a first pixel number, or a second pixel number of the Laplacian image based on the pixel value of each pixel point in the Laplacian image;
wherein the first pixel number is the number of adjacent pixels whose pixel values are larger than a first preset pixel value, the second pixel number is the number of adjacent pixels whose pixel values are smaller than a second preset pixel value, and the first preset pixel value is larger than the second preset pixel value.
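By way of illustration only, a minimal Python (OpenCV/NumPy) sketch of how the first features of claims 5 to 8 might be computed is given below. The 3×3 mask and the reading of "adjacent pixels" as horizontally adjacent pairs are assumptions made for the sketch; the patent fixes neither the preset mask nor the adjacency rule, and the preset pixel values are placeholders.

```python
import cv2
import numpy as np

def first_features(image_bgr, first_preset=200.0, second_preset=10.0):
    """Sketch of claims 5-8: Laplacian-based "first features".
    first_preset / second_preset are illustrative placeholder values."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # One common 3x3 Laplacian mask; the patent only says "a preset mask".
    mask = np.array([[0,  1, 0],
                     [1, -4, 1],
                     [0,  1, 0]], dtype=np.float32)
    laplacian = cv2.filter2D(gray.astype(np.float32), -1, mask)
    variance = float(laplacian.var())  # classic variance-of-Laplacian sharpness score
    mean = float(laplacian.mean())     # compared to a brightness threshold in claim 9
    # "Adjacent pixels" is read here as horizontally adjacent pairs that both
    # clear the preset value -- one plausible interpretation, not the patent's.
    hi = laplacian > first_preset
    lo = laplacian < second_preset
    first_pixel_number = int(np.count_nonzero(hi[:, :-1] & hi[:, 1:]))
    second_pixel_number = int(np.count_nonzero(lo[:, :-1] & lo[:, 1:]))
    return variance, mean, first_pixel_number, second_pixel_number
```

The variance of the Laplacian is a widely used sharpness measure: a blurred image produces weak edge responses and therefore a low variance.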
9. The method according to claim 3, wherein the obtaining an evaluation result of whether the image feature satisfies the corresponding preset condition based on the image feature and the corresponding threshold comprises:
if the image feature is the variance of the image, determining that the image feature satisfies the corresponding preset condition when the variance is larger than a sharpness threshold;
if the image feature is the mean of the image, determining that the image feature satisfies the corresponding preset condition when the mean is larger than a brightness threshold;
if the image feature is the first pixel number, determining that the image feature satisfies the corresponding preset condition when the first pixel number is smaller than a first number threshold, wherein the first pixel number is the number of adjacent pixels whose pixel values are larger than a first preset pixel value;
and if the image feature is the second pixel number, determining that the image feature satisfies the corresponding preset condition when the second pixel number is smaller than a second number threshold, wherein the second pixel number is the number of adjacent pixels whose pixel values are smaller than a second preset pixel value.
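Continuing the sketch, the per-feature evaluation of claim 9 combined with the claim-4 rule (the image qualifies only when every feature passes) might look as follows; all four threshold values are invented placeholders, not values taken from the patent.

```python
def satisfies_recognition_condition(variance, mean, first_pixel_number,
                                    second_pixel_number,
                                    sharpness_thr=100.0, brightness_thr=0.0,
                                    first_number_thr=5000,
                                    second_number_thr=200000):
    """Sketch of claims 4 and 9: each feature is checked against its own
    threshold, and the image qualifies only if every evaluation passes."""
    evaluations = (
        variance > sharpness_thr,                 # sharp enough (not blurred)
        mean > brightness_thr,                    # bright enough (not underexposed)
        first_pixel_number < first_number_thr,    # not too many over-bright runs (glare)
        second_pixel_number < second_number_thr,  # not too many flat, featureless runs
    )
    return all(evaluations)
```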
10. The method of claim 2, wherein the acquiring the second feature of the image comprises:
performing image segmentation on the image through a segmentation algorithm to obtain a foreground image and a background image, wherein the foreground image comprises the target object and the background image does not comprise the target object;
and calculating an intersection ratio of the foreground image and a preset image based on the foreground image and the preset image, wherein the intersection ratio is used for representing the ratio of the intersection to the union of the foreground image and the preset image.
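The intersection ratio of claim 10 is the familiar intersection-over-union (IoU) between the segmented foreground and a preset template region. A minimal sketch, assuming the segmentation step (left by the claim to "a segmentation algorithm") has already produced a boolean foreground mask of the same shape as the preset mask:

```python
import numpy as np

def intersection_ratio(foreground_mask, preset_mask):
    """Sketch of claim 10: IoU of the foreground and the preset region.
    Both inputs are boolean arrays of identical shape."""
    intersection = np.logical_and(foreground_mask, preset_mask).sum()
    union = np.logical_or(foreground_mask, preset_mask).sum()
    return float(intersection) / float(union) if union else 0.0
```

Per claim 11, this ratio would then be compared against an intersection ratio threshold; a low value suggests the target object is cropped, occluded, or poorly framed, i.e., incomplete.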
11. The method according to claim 3, wherein the obtaining an evaluation result of whether the image feature satisfies the corresponding preset condition based on the image feature and the corresponding threshold comprises:
when the intersection ratio is greater than an intersection ratio threshold, determining that the image feature satisfies the corresponding preset condition, wherein the intersection ratio is used for representing the ratio of the intersection to the union of a foreground image and a preset image, and the foreground image is an image containing the target object obtained by performing image segmentation on the image.
12. The method according to claim 1 or 2, wherein the method further comprises:
inputting the image to be recognized into an image classification model to obtain a classification result of the image, wherein the image classification model is obtained by training based on a first network model, the classification result is used for representing that the state of the target object is a normal state or an abnormal state, and the abnormal state comprises at least one of a recaptured object, a photocopied object, or an invalid object;
and when the classification result indicates that the state of the target object is a normal state, executing the step of recognizing the target object in the image to obtain the information of the target object when the image satisfies the recognition condition.
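A minimal sketch of the claim-12 gate, assuming the trained classification model (the "first network model") is exposed as a callable returning a label index; the label scheme and both names are hypothetical, not from the patent.

```python
NORMAL, RECAPTURED, PHOTOCOPIED, INVALID = range(4)  # assumed label scheme

def state_is_normal(image, classifier):
    """Sketch of claim 12: run the image classification model and only
    proceed to recognition when the target object's state is normal."""
    return classifier(image) == NORMAL
```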
13. The method according to claim 1 or 2, wherein the recognizing the target object in the image when the image satisfies the recognition condition, to obtain the information of the target object, comprises:
acquiring at least one text line image of the target object;
and inputting the at least one text line image into a text recognition model to obtain text information of the target object, wherein the text recognition model is obtained by training based on a second network model.
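A minimal sketch of the claim-13 pipeline, assuming text line bounding boxes are already available (e.g., from a detection stage) and that `text_recognition_model` stands in for the trained second network model; both names are hypothetical.

```python
def recognize_target_object(image, line_boxes, text_recognition_model):
    """Sketch of claim 13: crop each text line of the target object and
    run the text recognition model on it, collecting the text information."""
    line_images = [image[y0:y1, x0:x1] for (x0, y0, x1, y1) in line_boxes]
    return [text_recognition_model(line) for line in line_images]
```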
14. An electronic device, comprising:
an acquisition unit, configured to acquire at least one image feature of an image to be recognized, wherein the image comprises a target object, and the image feature is used for representing the imaging quality of the image or the degree of completeness of the target object;
a processing unit, configured to determine whether the image satisfies a recognition condition based on the at least one image feature;
wherein the processing unit is further configured to recognize the target object in the image when the image satisfies the recognition condition, so as to obtain the information of the target object.
15. An electronic device, comprising: a memory and a processor;
the memory stores computer-executable instructions;
the processor executes computer-executable instructions stored by the memory, causing the processor to perform the method of any of claims 1 to 13.
CN202011401669.3A 2020-12-02 2020-12-02 Image processing method, device and storage medium Active CN112396050B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011401669.3A CN112396050B (en) 2020-12-02 2020-12-02 Image processing method, device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011401669.3A CN112396050B (en) 2020-12-02 2020-12-02 Image processing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN112396050A (en) 2021-02-23
CN112396050B CN112396050B (en) 2023-09-15

Family

ID=74604142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011401669.3A Active CN112396050B (en) 2020-12-02 2020-12-02 Image processing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN112396050B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130308866A1 (en) * 2012-05-15 2013-11-21 National Chung Cheng University Method for estimating blur degree of image and method for evaluating image quality
CN105069783A (en) * 2015-07-23 2015-11-18 北京金山安全软件有限公司 Fuzzy picture identification method and device
EP3232371A1 (en) * 2016-04-15 2017-10-18 Ricoh Company, Ltd. Object recognition method, object recognition device, and classifier training method
CN106846011A (en) * 2016-12-30 2017-06-13 金蝶软件(中国)有限公司 Business license recognition methods and device
CN107481238A (en) * 2017-09-20 2017-12-15 众安信息技术服务有限公司 Image quality measure method and device
CN110335232A (en) * 2018-03-29 2019-10-15 住友化学株式会社 Image processing apparatus, foreign body detecting device and image processing method
CN108830186A (en) * 2018-05-28 2018-11-16 腾讯科技(深圳)有限公司 Method for extracting content, device, equipment and the storage medium of text image
CN108830197A (en) * 2018-05-31 2018-11-16 平安医疗科技有限公司 Image processing method, device, computer equipment and storage medium
CN109948625A (en) * 2019-03-07 2019-06-28 上海汽车集团股份有限公司 Definition of text images appraisal procedure and system, computer readable storage medium
CN110363753A (en) * 2019-07-11 2019-10-22 北京字节跳动网络技术有限公司 Image quality measure method, apparatus and electronic equipment
CN110717871A (en) * 2019-09-30 2020-01-21 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110798627A (en) * 2019-10-12 2020-02-14 深圳酷派技术有限公司 Shooting method, shooting device, storage medium and terminal
CN111291753A (en) * 2020-01-22 2020-06-16 平安科技(深圳)有限公司 Image-based text recognition method and device and storage medium
CN111461097A (en) * 2020-03-18 2020-07-28 北京大米未来科技有限公司 Method, apparatus, electronic device and medium for recognizing image information

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
A. EL HARRAJ et al.: "OCR accuracy improvement on document images through a novel pre-processing approach", Signal & Image Processing: An International Journal, vol. 06, no. 04, pages 1-18 *
康鑫 et al.: "Detection and recognition of water meter readings in complex scenes", Journal of Computer Applications, vol. 39, no. 2, pages 63-67 *
曾凡锋 et al.: "Region-based correction method for text images with uneven illumination", Computer Engineering and Design, vol. 35, no. 12, pages 4233-4237 *
李峰 et al.: "Image sharpness detection method", Computer Engineering and Design, no. 09, pages 1545-1546 *
杨彬: "Research on text detection and recognition in images", China Master's Theses Full-text Database (Information Science and Technology), no. 2018, pages 138-2722 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113538809A (en) * 2021-06-11 2021-10-22 深圳怡化电脑科技有限公司 Data processing method and device based on self-service equipment
CN113538809B (en) * 2021-06-11 2023-08-04 深圳怡化电脑科技有限公司 Data processing method and device based on self-service equipment
CN113407774A (en) * 2021-06-30 2021-09-17 广州酷狗计算机科技有限公司 Cover determining method and device, computer equipment and storage medium
CN117237440A (en) * 2023-10-10 2023-12-15 北京惠朗时代科技有限公司 Image calibration method for printing control instrument
CN117237440B (en) * 2023-10-10 2024-03-15 北京惠朗时代科技有限公司 Image calibration method for printing control instrument

Also Published As

Publication number Publication date
CN112396050B (en) 2023-09-15

Similar Documents

Publication Publication Date Title
CN107220640B (en) Character recognition method, character recognition device, computer equipment and computer-readable storage medium
AU2011250829B2 (en) Image processing apparatus, image processing method, and program
US7123754B2 (en) Face detection device, face pose detection device, partial image extraction device, and methods for said devices
CN110232713B (en) Image target positioning correction method and related equipment
CN113139445A (en) Table recognition method, apparatus and computer-readable storage medium
CN112396050B (en) Image processing method, device and storage medium
AU2011250827B2 (en) Image processing apparatus, image processing method, and program
CN108090511B (en) Image classification method and device, electronic equipment and readable storage medium
TWI435288B (en) Image processing apparatus and method, and program product
CN111461070B (en) Text recognition method, device, electronic equipment and storage medium
CN111626163B (en) Human face living body detection method and device and computer equipment
CN113627428A (en) Document image correction method and device, storage medium and intelligent terminal device
CN101983507A (en) Automatic redeye detection
CN110059607B (en) Living body multiplex detection method, living body multiplex detection device, computer equipment and storage medium
CN114037992A (en) Instrument reading identification method and device, electronic equipment and storage medium
CN113487473A (en) Method and device for adding image watermark, electronic equipment and storage medium
CN113947613B (en) Target area detection method, device, equipment and storage medium
JP5201184B2 (en) Image processing apparatus and program
CN113989336A (en) Visible light image and infrared image registration method and device
JP2015176252A (en) Image processor and image processing method
CN108205641A (en) Images of gestures processing method and processing device
CN113837020B (en) Cosmetic progress detection method, device, equipment and storage medium
JP2008084109A (en) Eye opening/closing determination device and eye opening/closing determination method
CN114663418A (en) Image processing method and device, storage medium and electronic equipment
JP4796165B2 (en) Image processing device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 401121 b7-7-2, Yuxing Plaza, No.5 Huangyang Road, Yubei District, Chongqing

Applicant after: Chongqing duxiaoman Youyang Technology Co.,Ltd.

Address before: Room 3075, building 815, Jiayuan district, Shanghai

Applicant before: SHANGHAI YOUYANG NEW MEDIA INFORMATION TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20211224

Address after: Room 606, 6 / F, building 4, courtyard 10, Xibeiwang Road, Haidian District, Beijing 100085

Applicant after: Du Xiaoman Technology (Beijing) Co.,Ltd.

Address before: 401121 b7-7-2, Yuxing Plaza, No.5 Huangyang Road, Yubei District, Chongqing

Applicant before: Chongqing duxiaoman Youyang Technology Co.,Ltd.

GR01 Patent grant