CN115862022A - Image correction method and device, equipment, storage medium and product thereof - Google Patents
- Publication number
- CN115862022A (application CN202310046897.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- target
- deflection angle
- target image
- angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The application discloses an image correction method, an image correction device, computer equipment and a storage medium. The method comprises the following steps: acquiring document data to be processed; inputting the document data into a preset image recognition model, wherein the image recognition model is a neural network model trained in advance to a convergence state for recognizing a target image; reading image classification information output by the image recognition model, and extracting a target image from the document data according to the image classification information; measuring a target deflection angle of the target image according to the image type represented by the image classification information; and performing image correction on the target image according to the target deflection angle. Because the deflection angle of the target image is measured according to the type of the target image, the inaccuracy that arises when different image types are measured with the same method is avoided, and the accuracy of image angle regression is improved.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to an image correction method, an image correction apparatus, an electronic device, and a storage medium.
Background
With the digital technology revolution, paperless office work has become widespread, and large volumes of retained paper documents need to be digitized, archived and put to use. When digitizing massive amounts of unstructured data, besides converting paper files into electronic copies, skewed images in those files also need to be corrected.
When a paper document is archived, the seal image or other skewed images in the document need to be corrected. A common image correction approach is to train a neural network model on labeled data, and then recognize and correct the deflection angle of an image with the trained model.
However, the inventors found in their research that when a skewed image is recognized by a neural network model, differences between image types lead to low accuracy in the model's angle recognition, so the corrected image does not regress to the ideal angle.
Disclosure of Invention
The invention provides an image correction method, an image correction device, an electronic device and a storage medium, and aims to improve the recognition accuracy of the deflection angle of a skewed image and the accuracy of angle regression.
In a first aspect, an embodiment of the present invention provides an image rectification method, including:
acquiring document data to be processed;
inputting the document data into a preset image recognition model, wherein the image recognition model is a neural network model trained in advance to a convergence state for recognizing a target image;
reading image classification information output by the image recognition model, and extracting a target image in the document data according to the image classification information;
measuring a target deflection angle of the target image according to the image type represented by the image classification information;
and carrying out image correction on the target image according to the target deflection angle.
Optionally, the image type includes a first image type, and the measuring the target deflection angle of the target image according to the image type characterized by the image classification information includes:
determining the image type of the target image as a first image type according to the image classification information;
carrying out binarization processing on the target image according to the first image type to generate a binary image;
calculating a minimum circumscribed rectangle of the binary image, and acquiring a first deflection angle of the minimum circumscribed rectangle;
and determining a target deflection angle of the target image according to the first deflection angle.
Optionally, the image type includes a second image type, and the measuring the target deflection angle of the target image according to the image type characterized by the image classification information includes:
determining the image type of the target image as a second image type according to the image classification information;
inputting the target image into a preset optical character model according to the second image type, wherein the optical character model is a neural network model which is trained to a convergence state in advance and used for recognizing characters;
extracting a character outline in the target image according to the character classification information output by the optical character model;
and measuring a target deflection angle of the target image according to the character outline.
Optionally, the measuring a target deflection angle of the target image according to the character outline includes:
calculating the minimum bounding rectangle of the character outline;
acquiring a second deflection angle of the minimum circumscribed rectangle and the direction information of the minimum circumscribed rectangle;
and generating a target deflection angle of the target image according to the second deflection angle and the direction information.
Optionally, the calculating a minimum bounding rectangle of the character outline includes:
acquiring a first transverse pole and a second transverse pole of the character outline in the transverse length direction, and a first vertical pole and a second vertical pole of the character outline in the vertical length direction, wherein the distance from the second transverse pole to a preset origin is greater than the distance from the first transverse pole to the preset origin, and the distance from the second vertical pole to the preset origin is greater than the distance from the first vertical pole to the preset origin;
respectively generating a first tangent, a second tangent, a third tangent and a fourth tangent of the first transverse pole, the second transverse pole, the first vertical pole and the second vertical pole;
adjusting the angles of the first tangent line, the second tangent line, the third tangent line and the fourth tangent line according to a preset angle threshold value until any one of the first tangent line, the second tangent line, the third tangent line and the fourth tangent line is superposed with any one edge of the character outline, and reading a rectangle surrounded by the first tangent line, the second tangent line, the third tangent line and the fourth tangent line;
and determining the rectangle as the minimum bounding rectangle of the character outline.
Optionally, the image rectification of the target image according to the target deflection angle includes:
generating a compensation angle required by the target image correction based on the target deflection angle;
and carrying out rotation processing on the target image according to the compensation angle so as to correct the target image.
Optionally, after performing image rectification on the target image according to the target deflection angle, the method includes:
generating a standard coordinate system based on the document pictures represented by the document data;
reading a base line of any continuous character in the document data in the horizontal length direction;
calculating a baseline angle of the baseline in the standard coordinate system;
and carrying out secondary image correction on the target image according to the baseline angle and the target deflection angle.
In a second aspect, an embodiment of the present invention provides an image correction apparatus, including:
the acquisition module is used for acquiring document data to be processed;
the identification module is used for inputting the document data into a preset image identification model, wherein the image identification model is a neural network model trained in advance to a convergence state for identifying a target image;
the processing module is used for reading the image classification information output by the image recognition model and extracting a target image in the document data according to the image classification information;
the measuring module is used for measuring a target deflection angle of the target image according to the image type represented by the image classification information;
and the execution module is used for carrying out image rectification on the target image according to the target deflection angle.
Optionally, the image type comprises a first image type, and the measurement module is further configured to:
determining the image type of the target image as a first image type according to the image classification information;
carrying out binarization processing on the target image according to the first image type to generate a binary image;
calculating a minimum circumscribed rectangle of the binary image, and acquiring a first deflection angle of the minimum circumscribed rectangle;
and determining a target deflection angle of the target image according to the first deflection angle.
Optionally, the image type comprises a second image type, and the measurement module is further configured to:
determining the image type of the target image as a second image type according to the image classification information;
inputting the target image into a preset optical character model according to the second image type, wherein the optical character model is a neural network model which is trained to a convergence state in advance and used for recognizing characters;
extracting a character outline in the target image according to the character classification information output by the optical character model;
and measuring the target deflection angle of the target image according to the character outline.
Optionally, the measurement module is further configured to:
calculating the minimum circumscribed rectangle of the character outline;
acquiring a second deflection angle of the minimum circumscribed rectangle and the direction information of the minimum circumscribed rectangle;
and generating a target deflection angle of the target image according to the second deflection angle and the direction information.
Optionally, the measurement module is further configured to:
acquiring a first transverse pole and a second transverse pole of the character outline in the transverse length direction, and a first vertical pole and a second vertical pole of the character outline in the vertical length direction, wherein the distance from the second transverse pole to a preset origin is greater than that from the first transverse pole to the preset origin, and the distance from the second vertical pole to the preset origin is greater than that from the first vertical pole to the preset origin;
respectively generating a first tangent, a second tangent, a third tangent and a fourth tangent of the first transverse pole, the second transverse pole, the first vertical pole and the second vertical pole;
adjusting the angles of the first tangent line, the second tangent line, the third tangent line and the fourth tangent line according to a preset angle threshold value until any one of the first tangent line, the second tangent line, the third tangent line and the fourth tangent line is superposed with any one edge of the character outline, and reading a rectangle surrounded by the first tangent line, the second tangent line, the third tangent line and the fourth tangent line;
and determining the rectangle as the minimum bounding rectangle of the character outline.
Optionally, the execution module is further configured to:
generating a compensation angle required by the target image correction based on the target deflection angle;
and carrying out rotation processing on the target image according to the compensation angle so as to correct the target image.
Optionally, the execution module is further configured to:
generating a standard coordinate system based on the document picture represented by the document data;
reading a base line of any continuous character in the document data in the horizontal length direction;
calculating a baseline angle of the baseline in the standard coordinate system;
and carrying out secondary image correction on the target image according to the baseline angle and the target deflection angle.
In a third aspect, an embodiment of the present invention provides an electronic device, where the electronic device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the image rectification method when executing the computer program.
In a fourth aspect, an embodiment of the present invention provides a non-transitory computer-readable storage medium, which includes a computer program, and when the computer program is executed by a processor, the computer program implements the image rectification method described above.
The beneficial effects of the embodiment of the application are that: after the document data is read, the target image that needs image correction (or the image types present in the document data) is quickly identified, and the type of the target image is determined. The deflection angle of the target image is then measured according to the identified image type, and the target image is corrected according to that deflection angle. Because the deflection angle of the target image is measured according to its type, the inaccuracy that arises when different image types are measured with the same method is avoided, and the accuracy of image angle regression is improved.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of an image rectification method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a basic structure of an image rectification device according to an embodiment of the present application;
fig. 3 is a block diagram of a basic structure of a computer device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It will be understood by those within the art that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the prior art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
As used herein, a "terminal" includes both wireless signal receiver devices, which include only wireless signal receiver devices without transmit capability, and receiving and transmitting hardware devices, which include receiving and transmitting hardware devices capable of performing two-way communication over a two-way communication link, as will be understood by those skilled in the art. Such a device may include: a cellular or other communication device having a single line display or a multi-line display or a cellular or other communication device without a multi-line display; PCS (personal communications Service), which may combine voice, data processing, facsimile and/or data communications capabilities; a PDA (Personal Digital Assistant) which may include a radio frequency receiver, a pager, internet/intranet access, a web browser, a notepad, a calendar and/or a GPS (global positioning System) receiver; a conventional laptop and/or palmtop computer or other device having and/or including a radio frequency receiver. As used herein, a "terminal" may be portable, transportable, installed in a vehicle (aeronautical, maritime, and/or land-based), or situated and/or configured to operate locally and/or in a distributed fashion at any other location(s) on earth and/or in space. The "terminal" used herein may also be a communication terminal, a Internet access terminal, and a music/video playing terminal, and may be, for example, a PDA, an MID (Mobile Internet Device), and/or a Mobile phone with music/video playing function, and may also be a smart television, a set-top box, and other devices.
The hardware referred to by the names "server", "client", "service node", etc. is essentially an electronic device with the performance of a personal computer, and is a hardware device having necessary components disclosed by the von neumann principle such as a central processing unit (including an arithmetic unit and a controller), a memory, an input device, an output device, etc., a computer program is stored in the memory, and the central processing unit calls a program stored in an external memory into the internal memory to run, executes instructions in the program, and interacts with the input and output devices, thereby completing a specific function.
It should be noted that the concept of "server" in the present application can be extended to the case of server cluster. According to the network deployment principle understood by those skilled in the art, the servers should be logically divided, and in physical space, the servers may be independent from each other but can be called through an interface, or may be integrated into one physical computer or a set of computer clusters. Those skilled in the art should understand this variation and should not be so constrained as to implement the network deployment of the present application.
One or more technical features of the present application, unless expressly specified otherwise, may be deployed on a server so that a client accesses them by remotely invoking an online service interface provided by the server, or may be deployed and run directly on a client.
Unless specified in clear text, the neural network model referred to or possibly referred to in the application can be deployed in a remote server and used for remote call at a client, and can also be deployed in a client with qualified equipment capability for direct call.
Various data referred to in the present application may be stored in a server remotely or in a local terminal device unless specified in the clear text, as long as the data is suitable for being called by the technical solution of the present application.
The person skilled in the art will know this: although the various methods of the present application are described based on the same concept so as to be common to each other, they may be independently performed unless otherwise specified. In the same way, for each embodiment disclosed in the present application, it is proposed based on the same inventive concept, and therefore, concepts of the same expression and concepts of which expressions are different but are appropriately changed only for convenience should be equally understood.
The embodiments to be disclosed herein can be flexibly constructed by cross-linking related technical features of the embodiments unless the mutual exclusion relationship between the related technical features is stated in the clear text, as long as the combination does not depart from the inventive spirit of the present application and can meet the needs of the prior art or solve the deficiencies of the prior art. Those skilled in the art will appreciate variations therefrom.
Referring to fig. 1, fig. 1 is a basic flowchart illustrating an image correction method according to the present embodiment.
As shown in fig. 1, an image rectification method includes:
s1100, acquiring document data to be processed;
The document data to be processed in this embodiment can be in a picture format or an editable document format.
The scenario in which the document data is acquired can be: scanning a paper document to generate the document data, reading local storage to obtain the document data, receiving the document data sent by other terminals over a network, or requesting the document data from the server, and so on. The acquisition scenario is not limited to these; any scenario that can produce the document data required by the image correction method of this embodiment may be used according to the specific application.
In some embodiments, since the document data is required to be in a picture format in the subsequent processing, when the acquired document data is in a non-picture format, the document data needs to be subjected to image conversion.
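By way of illustration only, and not as the claimed implementation, the format check and conversion step might look like the following sketch. It assumes PDF input and the third-party pdf2image package (which wraps poppler); both are assumptions outside the original description.

```python
# Illustrative sketch: convert non-picture document data to images before further
# processing. Assumes PDF input and the "pdf2image" package; a 200 dpi rendering
# resolution is an arbitrary choice for the sketch.
from pdf2image import convert_from_path
import numpy as np
import cv2

def load_document_pages(path: str) -> list:
    """Return the document as a list of BGR images, converting if necessary."""
    if path.lower().endswith((".png", ".jpg", ".jpeg", ".bmp")):
        return [cv2.imread(path)]
    # Editable or paged formats are rasterized page by page.
    pages = convert_from_path(path, dpi=200)
    return [cv2.cvtColor(np.array(p), cv2.COLOR_RGB2BGR) for p in pages]
```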
S1200, inputting the document data into a preset image recognition model, wherein the image recognition model is a neural network model trained in advance to a convergence state for recognizing a target image;
The document data is input into a preset image recognition model, which is a neural network model trained in advance to a convergence state for recognizing the target image. The image recognition model in this embodiment is trained to segment the target image in the document data.
The target image refers to an image having the same image attribute as the preset image type, for example, the target image can be a stamp image, and the image correction method of the present embodiment is used for correcting the stamp image in document data. However, the type of target image is not limited to this, and in some embodiments, the target image can be: signature, artwork, table, or other image type.
In this embodiment, the image recognition model is a YOLO model (a target detection algorithm), including but not limited to YOLO1, YOLO2 or YOLO3. The model type of the image recognition model is not limited to this; depending on the specific application scenario, in some embodiments the image recognition model can be a multimodal model formed by combining any one or more of a convolutional neural network model for image recognition, a deep convolutional neural network model, a recurrent neural network model and variants thereof.
The image recognition model can accurately recognize and extract the target image only after being trained in advance. The training methods of the image recognition model include supervised training and unsupervised training: through training on a large number of samples, the initial model is brought to a convergence state so that it can recognize and extract the target image in document data.
S1300, reading image classification information output by the image recognition model, and extracting a target image in the document data according to the image classification information;
The document data is input into the image recognition model, which performs feature extraction on the document data and recognizes and extracts the target image according to the extracted features. Specifically, the image recognition model performs contour extraction and cropping on the target image in the document data, and then classifies the target image to obtain its image classification information.
The image classification information describes the type of the target image. When the target image is a stamp image, the image classification information may be shape information of the stamp image, including (but not limited to) a circular stamp, a square stamp, an oval stamp, and so on. The classification type is not limited to the shape of the stamp image; depending on the application scenario, in some embodiments the classification may also be based on the font of the image or the content of an illustration.
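As a minimal sketch of this detection-and-cropping step, the following assumes a YOLO-family detector. The patent names YOLO1 to YOLO3; the modern ultralytics package, the weight file "seal_detector.pt" and the class names used here are illustration-only assumptions.

```python
# Illustrative sketch: run a YOLO-family detector over the document picture, crop
# each detected stamp region, and keep its class label as the image classification
# information. Weight file and class names are assumptions for illustration.
from ultralytics import YOLO
import cv2

model = YOLO("seal_detector.pt")  # a detector assumed to be fine-tuned on stamps

def extract_target_images(document_img):
    results = model(document_img)[0]
    targets = []
    for box in results.boxes:
        x1, y1, x2, y2 = map(int, box.xyxy[0].tolist())
        label = results.names[int(box.cls[0])]   # image classification information
        crop = document_img[y1:y2, x1:x2]        # extracted target image
        targets.append((label, crop))
    return targets
```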
S1400, measuring a target deflection angle of the target image according to the image type represented by the image classification information;
after the image type represented by the image classification information of the target image is obtained, angle measurement strategies corresponding to different images are obtained according to the image type, and the target deflection angle of the target image is measured according to the angle measurement strategies corresponding to the target image. For example, in some embodiments, the target image is a stamp image, the image recognition model is a neural network model that recognizes and extracts the stamp image, and the image classification information describes the image type of the stamp image: square, circular, or oval.
When the stamp image is identified to be square, an angle measurement strategy corresponding to the square stamp image is determined, the angle measurement strategy is pre-specified and is deployed in a local storage space, and when the stamp image is identified to be square, the angle measurement strategy is directly called through the image type.
The angle measurement strategy for a square stamp image comprises the following steps. The stamp image is binarized according to the strategy and converted into a binary image with pixel values of 0 and 255. The minimum circumscribed rectangle of the binary image is then computed with a minimum-circumscribed-rectangle algorithm, placed in a standard coordinate system, and its included angle with the X-axis direction is measured; this included angle is the first deflection angle of the minimum circumscribed rectangle, and it characterizes the deflection of the minimum circumscribed rectangle in the horizontal direction. Because the deflection angle of the square stamp image is the same as that of its minimum circumscribed rectangle, the target deflection angle of the square stamp image can be determined once the first deflection angle is measured. The target deflection angle characterizes the deflection of the square stamp image in the horizontal direction.
When the stamp image is recognized to be circular or elliptical, the angle measurement strategy corresponding to a circular or elliptical stamp image is determined. This strategy is pre-specified and deployed in a local storage space, and once the stamp image is recognized as circular or elliptical, the strategy can be called directly through the image type. In theory, a circle or an ellipse has two or more minimum circumscribed rectangles, so for a circular or elliptical stamp image the minimum circumscribed rectangle is not unique, and the deflection angle cannot be obtained simply by finding the minimum circumscribed rectangle of the circle or ellipse.
The angle measurement strategy for such a stamp image comprises the following steps. The image classification information is read, and the image type of the target image is determined to be the second image type according to the text content recorded in the image classification information, the second image type meaning that the stamp image is circular or elliptical. An optical character model corresponding to the second image type is called; the optical character model is a neural network model trained in advance to a convergence state for recognizing and processing characters. In this embodiment the optical character model is a DBNet algorithm model, but the model type is not limited to this: depending on the application scenario, in some embodiments the optical character model can be a multimodal model formed by combining one or more of a convolutional neural network model for character segmentation, a deep convolutional neural network model, a recurrent neural network model and variants thereof. The target image is processed by the optical character model, the character classification information in the target image is identified, and the character outline in the target image is extracted according to the character classification information. Because the characters in the target image are usually arranged in an arc, after the optical character model identifies the position and outline of each character, the outlines of adjacent characters are connected to obtain the overall character outline of the target image. Since the characters are arranged along an arc, the character outline extracted by the optical character model is also an arc-shaped outline.
After extracting the character outline of the character in the obtained target image, the deflection angle of the character outline is measured, and since the deflection angle of the character outline is the same as the target deflection angle of the target image, the target deflection angle of the target image can be obtained after the deflection angle of the character outline is obtained.
The deflection angle of the character outline is also calculated with the minimum-circumscribed-rectangle method: the minimum circumscribed rectangle of the character outline is computed with a minimum-circumscribed-rectangle algorithm, placed in a standard coordinate system, and its included angle with the X-axis direction is measured; this included angle is the second deflection angle of the minimum circumscribed rectangle, and it characterizes the deflection of the minimum circumscribed rectangle in the horizontal direction. It should be noted that, in practice, the second deflection angle obtained from the minimum circumscribed rectangle only covers the range 0-180°, whereas the actual deflection angle of the character outline ranges from 0 to 360°, so the second deflection angle needs to be further corrected by the formula: target deflection angle = ±n × 90° + second deflection angle, where n is the constant 1 and the sign is determined by the direction information of the minimum circumscribed rectangle. Specifically, if the convex position in the middle of the arc within the minimum circumscribed rectangle points to the left, the sign is positive; if it points to the right, the sign is negative. The reference line for judging this direction information is the perpendicular from the center line of the minimum circumscribed rectangle to the X axis. Since target deflection angle = ±n × 90° + second deflection angle, the target deflection angle of the target image can be generated from the second deflection angle and the direction information.
S1500, image rectification is carried out on the target image according to the target deflection angle.
After the target deflection angle of the target image is calculated, it is negated; for example, if the target deflection angle is 30 degrees, the negated value is -30 degrees, and this negated value is the compensation angle required for correcting the target image. After the compensation angle is generated, the target image is rotated by the compensation angle to complete the image correction. It should be noted that the rotation of the target image is performed about the center point of the minimum circumscribed rectangle of the target image.
In the above embodiment, after the document data is read, the target image that needs image rectification (or the image types present in the document data) is quickly recognized, and the type of the target image is determined. The deflection angle of the target image is then measured according to the identified image type, and the target image is corrected according to that deflection angle. Because the deflection angle of the target image is measured according to its type, the inaccuracy that arises when different image types are measured with the same method is avoided, and the accuracy of image angle regression is improved.
In some embodiments, the image type includes a first image type, and the target deflection angle of the target image is measured according to a detection strategy corresponding to the first image type. Specifically, S1400 includes:
s1411, determining the image type of the target image to be a first image type according to the image classification information;
the first image type is a square stamp, when the stamp image is identified to be square, an angle measurement strategy corresponding to the square stamp image is determined, the angle measurement strategy is pre-specified and deployed in a local storage space, and when the stamp image is identified to be square, the angle measurement strategy is directly called through the image type.
S1412, performing binarization processing on the target image according to the first image type to generate a binary image;
and performing binarization processing on the stamp image according to the angle measurement strategy, and converting the stamp image into a binary image with pixel values of 0 and 255.
S1413, calculating a minimum circumscribed rectangle of the binary image, and acquiring a first deflection angle of the minimum circumscribed rectangle;
calculating to obtain a minimum circumscribed rectangle of the binary image through a minimum circumscribed rectangle algorithm, after the minimum circumscribed rectangle is obtained through calculation, placing the minimum circumscribed rectangle into a standard coordinate system, and measuring an included angle of the minimum circumscribed rectangle in the X-axis direction, wherein the included angle is a first deflection angle of the minimum circumscribed rectangle. The first deflection angle is used to characterize the deflection angle of the minimum bounding rectangle in the horizontal direction.
And S1414, determining a target deflection angle of the target image according to the first deflection angle.
Because the deflection angle of the square seal image is the same as the deflection angle of the corresponding minimum circumscribed rectangle, the target deflection angle of the square seal image can be determined after the first deflection angle of the minimum circumscribed rectangle is obtained by measurement. The target deflection angle is used for representing the deflection angle of the square stamp image in the horizontal direction.
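By way of illustration only, a minimal OpenCV sketch of steps S1411 to S1414 might look as follows. The Otsu threshold and OpenCV's minAreaRect angle convention are assumptions outside the original description.

```python
# Illustrative sketch of S1411-S1414 (not the patent's code): binarize the square
# stamp crop, take the minimum circumscribed rectangle of the foreground pixels,
# and read its angle to the X axis as the first deflection angle.
import cv2

def square_stamp_deflection(stamp_bgr):
    gray = cv2.cvtColor(stamp_bgr, cv2.COLOR_BGR2GRAY)
    # Binary image with pixel values 0 and 255 (Otsu picks the threshold).
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    points = cv2.findNonZero(binary)                   # foreground pixel coordinates
    (cx, cy), (w, h), angle = cv2.minAreaRect(points)  # minimum circumscribed rectangle
    first_deflection_angle = angle                     # included angle with the X axis
    return first_deflection_angle, (cx, cy)
```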
In some embodiments, the image type includes a second image type, and the target deflection angle of the target image is measured according to a detection strategy corresponding to the second image type. Specifically, S1400 includes:
s1421, determining the image type of the target image to be a second image type according to the image classification information;
in the present embodiment, the second image type is a stamp image having a circular or elliptical shape. When the seal image is recognized to be circular or elliptical, an angle measurement strategy corresponding to the circular or elliptical seal image is determined, the angle measurement strategy is pre-specified and deployed in a local storage space, and when the seal image is recognized to be circular or elliptical, the angle measurement strategy can be directly called through the image type.
S1422, inputting the target image to a preset optical character model according to the second image type, wherein the optical character model is a neural network model which is trained to a convergence state in advance and used for recognizing characters;
and reading the image classification information, and determining the image type of the target image as a second image type according to the character content recorded in the image classification information, wherein the second image type means that the stamp image is circular or elliptical. And calling an optical character model corresponding to the second image type, wherein the optical character model is a neural network model which is trained to a convergence state in advance and used for recognizing and processing characters. The optical character model is a DBNET algorithm model, but the model type of the optical character model is not limited thereto, and according to different application scenarios, in some embodiments, the optical character model can be: and the multi-mode model is formed by combining one or more of a convolutional neural network model, a deep convolutional neural network model, a cyclic neural network model and a variant model thereof for character segmentation. And performing image processing on the target image through an optical character model, identifying character classification information in the target image, and extracting a character outline in the target image according to the character classification information. Because the characters in the target image are usually arc-shaped, after the optical character model identifies the position and the outline of each character, the character outlines between adjacent characters are connected to obtain the integral character outline of the target image. The arrangement mode of the characters in the target image is arc-shaped, so that the character outline extracted by the optical character model is also an arc-shaped outline.
S1423, extracting a character outline in the target image according to the character classification information output by the optical character model;
after extracting the character outline of the character in the obtained target image, the deflection angle of the character outline is measured, and since the deflection angle of the character outline is the same as the target deflection angle of the target image, the target deflection angle of the target image can be obtained after the deflection angle of the character outline is obtained.
S1424, measuring a target deflection angle of the target image according to the character outline.
The deflection angle of the character outline is also calculated with the minimum-circumscribed-rectangle method: the minimum circumscribed rectangle of the character outline is computed with a minimum-circumscribed-rectangle algorithm, placed in a standard coordinate system, and its included angle with the X-axis direction is measured; this included angle is the second deflection angle of the minimum circumscribed rectangle, and it characterizes the deflection of the minimum circumscribed rectangle in the horizontal direction. It should be noted that, in practice, the second deflection angle obtained from the minimum circumscribed rectangle only covers the range 0-180°, whereas the actual deflection angle of the character outline ranges from 0 to 360°, so the second deflection angle needs to be further corrected by the formula: target deflection angle = ±n × 90° + second deflection angle, where n is the constant 1 and the sign is determined by the direction information of the minimum circumscribed rectangle. Specifically, if the convex position in the middle of the arc within the minimum circumscribed rectangle points to the left, the sign is positive; if it points to the right, the sign is negative. The reference line for judging this direction information is the perpendicular from the center line of the minimum circumscribed rectangle to the X axis. Since target deflection angle = ±n × 90° + second deflection angle, the target deflection angle of the target image can be generated from the second deflection angle and the direction information.
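As a simplified sketch of how the per-character detections can be merged into one overall outline, the following assumes the optical character model's output is already available as a list of per-character polygons; taking the convex hull of the combined points is a simplification introduced here, which does not change the minimum circumscribed rectangle computed afterwards (that rectangle depends only on the convex hull).

```python
# Illustrative sketch: merge per-character polygons (assumed detector output, each
# an (N_i, 2) point array) into a single overall character outline.
import numpy as np
import cv2

def merge_character_outlines(char_polygons):
    all_points = np.vstack(char_polygons).astype(np.int32)
    character_outline = cv2.convexHull(all_points)  # overall character outline
    return character_outline
```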
In some embodiments, it is desirable to measure the target deflection angle of the target image by a minimum bounding rectangle. Specifically, S1424 includes:
s1431, calculating a minimum circumscribed rectangle of the character outline;
and calculating to obtain a minimum circumscribed rectangle of the character outline by a minimum circumscribed rectangle algorithm, placing the minimum circumscribed rectangle into a standard coordinate system, and measuring an included angle of the minimum circumscribed rectangle in the X-axis direction, wherein the included angle is a second deflection angle of the minimum circumscribed rectangle. The second deflection angle is used to characterize the deflection angle of the minimum bounding rectangle in the horizontal direction.
S1432, acquiring a second deflection angle of the minimum circumscribed rectangle and direction information of the minimum circumscribed rectangle;
In practice, it is found that the second deflection angle obtained from the minimum circumscribed rectangle only covers the range 0-180°, whereas the actual deflection angle of the character outline ranges from 0 to 360°. The second deflection angle therefore needs to be further corrected by the formula: target deflection angle = ±n × 90° + second deflection angle, where n is the constant 1 and the sign is determined by the direction information of the minimum circumscribed rectangle. Specifically, if the convex position in the middle of the arc within the minimum circumscribed rectangle points to the left, the sign is positive; if it points to the right, the sign is negative. The reference line for judging this direction information is the perpendicular from the center line of the minimum circumscribed rectangle to the X axis.
And S1433, generating a target deflection angle of the target image according to the second deflection angle and the direction information.
The target deflection angle = ± n × 90 ° + the second deflection angle, and therefore, the target deflection angle of the target image can be generated based on the second deflection angle and the direction information.
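A minimal sketch of steps S1431 to S1433 is given below. The way the direction of the convex side is decided (comparing the mean x-coordinate of the outline with the rectangle centre) is an assumption of this sketch, since the description only names the reference line; OpenCV's angle convention is likewise assumed.

```python
# Illustrative sketch of S1431-S1433: second deflection angle from the minimum
# circumscribed rectangle, sign n from the direction information, then
# target angle = (+/-)n * 90 degrees + second deflection angle, with n = 1.
import cv2
import numpy as np

def target_angle_from_outline(character_outline):
    (cx, cy), (w, h), second_deflection_angle = cv2.minAreaRect(character_outline)
    # Direction information: is the convex side of the arc left or right of the
    # vertical through the rectangle centre? Approximated here by comparing the
    # mean x of the outline points with cx (an assumption of this sketch).
    mean_x = float(np.mean(character_outline.reshape(-1, 2)[:, 0]))
    n = 1 if mean_x < cx else -1   # convex side left -> +1, right -> -1
    target_deflection_angle = n * 90.0 + second_deflection_angle
    return target_deflection_angle
```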
In some embodiments, the minimum bounding rectangle is calculated by exhaustively taking point sets on the character outline, from which the minimum bounding rectangle of the figure can be obtained. In the implementation of this application, the character outline is a regular arc-shaped body, so its minimum circumscribed rectangle can be obtained with a single round of value-taking. Specifically, S1431 includes:
s1441, acquiring a first transverse pole and a second transverse pole of the character outline in the transverse length direction, and a first vertical pole and a second vertical pole of the character outline in the vertical length direction, wherein the distance from the second transverse pole to a preset origin is greater than the distance from the first transverse pole to the preset origin, and the distance from the second vertical pole to the preset origin is greater than the distance from the first vertical pole to the preset origin;
and acquiring a first transverse pole and a second transverse pole of the character outline in the transverse length direction, wherein the transverse length direction refers to the poles of the character outline in the X direction and is respectively marked as Xmin and Xmax. And acquiring a first vertical pole and a second vertical pole of the character outline in the vertical length direction, wherein the vertical length direction refers to the poles of the character outline in the Y direction and is respectively marked as Ymin and Ymax. The distance from the second transverse pole to the preset origin is greater than that from the first transverse pole to the preset origin, and the distance from the second vertical pole to the preset origin is greater than that from the first vertical pole to the preset origin.
S1442, generating a first tangent, a second tangent, a third tangent, and a fourth tangent of the first transverse pole, the second transverse pole, the first vertical pole, and the second vertical pole, respectively;
and respectively extending a straight line to two ends of the first transverse pole, wherein the straight line is a first tangent of the character image. And respectively extending a straight line to two ends of the second transverse pole, wherein the straight line is a second tangent line of the character image. And respectively extending a straight line to two ends of the third transverse pole, wherein the straight line is a third tangent of the character image. And respectively extending a straight line to two ends of the fourth transverse pole, wherein the straight line is a fourth tangent of the character image.
S1443, adjusting the angles of the first tangent line, the second tangent line, the third tangent line and the fourth tangent line according to a preset angle threshold value until any one of the first tangent line, the second tangent line, the third tangent line and the fourth tangent line is overlapped with any one edge of the character outline, and reading a rectangle formed by the first tangent line, the second tangent line, the third tangent line and the fourth tangent line;
according to the synchronous first tangent line, the second tangent line, third tangent line and fourth tangent line of going on the syntropy rotation according to predetermined angle threshold value, because, the holistic shape of character profile is the arc, consequently, have and only one limit can coincide with the base of arc image in first tangent line, the second tangent line, among third tangent line and the fourth tangent line, and remaining tangent line can only be tangent with the arc limit, when arbitrary one in first tangent line, the second tangent line, third tangent line and the fourth tangent line coincides with arbitrary one limit of character profile, just can obtain the minimum external rectangle of character profile.
S1444, determining the rectangle to be the minimum circumscribed rectangle of the character outline.
By taking extreme values in this way, the computer can determine the minimum circumscribed rectangle of the character outline with a single rotation of the extreme-point tangents, which greatly reduces the computation required to obtain the minimum circumscribed rectangle and improves calculation efficiency.
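For comparison, a short sketch of the exhaustive rotating-search baseline that S1441 to S1444 improve on is given below: rotate the outline in steps of the angle threshold and keep the rotation whose box bounded by the four extreme-point tangents has the smallest area. The patent's single-pass early stop (stopping as soon as one tangent coincides with a straight edge) is deliberately not reproduced here.

```python
# Illustrative sketch of the exhaustive baseline: at each rotation step, the
# axis-aligned box is exactly the box bounded by the four extreme-point tangents.
import numpy as np

def min_bounding_rect_by_rotation(points, angle_step_deg=0.5):
    pts = np.asarray(points, dtype=np.float64).reshape(-1, 2)
    best = None
    for deg in np.arange(0.0, 90.0, angle_step_deg):
        theta = np.deg2rad(deg)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        rotated = pts @ rot.T
        # Extreme points Xmin/Xmax/Ymin/Ymax define the four tangent lines.
        (xmin, ymin), (xmax, ymax) = rotated.min(axis=0), rotated.max(axis=0)
        area = (xmax - xmin) * (ymax - ymin)
        if best is None or area < best[0]:
            best = (area, deg, (xmin, ymin, xmax, ymax))
    return best   # (area, rotation angle, box in rotated coordinates)
```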
In some embodiments, after the target deflection angle is calculated, the target image needs to be rotationally corrected based on the target deflection angle. Specifically, S1500 includes:
s1511, generating a compensation angle required by the target image correction based on the target deflection angle;
After the target deflection angle of the target image is calculated, it is negated; for example, if the target deflection angle is 30 degrees, the negated value is -30 degrees, and this negated value is the compensation angle required for correcting the target image. It should be noted that the target deflection angle is not limited to this example; depending on the application scenario it can be any value within 0-360°, and the compensation angle is the corresponding negated value.
And S1512, performing rotation processing on the target image according to the compensation angle to correct the target image.
After the compensation angle is generated, the target image is rotated by the compensation angle to complete the image correction. It should be noted that the rotation of the target image is performed about the center point of the minimum circumscribed rectangle of the target image.
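A minimal OpenCV sketch of steps S1511 and S1512 follows; the white border fill and the handedness of the angle are assumptions, since the sign convention depends on how the deflection angle was measured upstream.

```python
# Illustrative sketch of S1511-S1512: negate the target deflection angle to get the
# compensation angle, then rotate the target image about the centre of its minimum
# circumscribed rectangle.
import cv2

def rectify(target_img, target_deflection_angle, rect_center):
    compensation_angle = -target_deflection_angle   # negation of the deflection
    h, w = target_img.shape[:2]
    m = cv2.getRotationMatrix2D(rect_center, compensation_angle, 1.0)
    return cv2.warpAffine(target_img, m, (w, h), borderValue=(255, 255, 255))
```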
In some embodiments, after the target image is corrected, in order to keep the angles of the characters in the document data and the target image consistent, that is, to ensure the angles of the character part and the stamp part in the document data are consistent, the target image needs to be corrected for the second time. After S1500, comprising:
s1611, generating a standard coordinate system based on the document pictures represented by the document data;
A standard coordinate system is generated from the document picture of the document data. Specifically, a two-dimensional standard coordinate system is constructed by taking the corner point at the upper left of the document picture as the origin and the edges of the document picture as the X axis and Y axis of the standard coordinate system.
S1612, reading a base line of any continuous characters in the document data in the horizontal length direction;
Any row of continuous characters in the document data is read, and its base line with respect to the X axis of the standard coordinate system is determined. The base line can be determined by calculating the minimum circumscribed rectangle of the continuous characters with a minimum-circumscribed-rectangle algorithm; the side of that minimum circumscribed rectangle closest to the X axis is the base line of the continuous characters in the horizontal length direction.
S1613, calculating a base line angle of the base line in the standard coordinate system;
and an included angle between the base line and the X axis is a base line angle in a standard coordinate system, and reflects the deflection angle of the characters in the document picture.
And S1614, performing secondary image rectification on the target image according to the baseline angle and the target deflection angle.
The target image is corrected a second time according to the calculated baseline angle and the target deflection angle. The correction is based on the angle difference between the baseline angle and the deflection angle of the target image: when the difference is 0, the character angles in the target image and the document picture are consistent and no adjustment is needed; when the difference is not 0, the character angles are inconsistent, and the target image is rotated by the value represented by the angle difference. The secondary rotation of the target image is again performed about the center point of the minimum circumscribed rectangle of the target image.
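A minimal sketch of steps S1611 to S1614 is given below. How the point set of the continuous characters is obtained (for example, from the OCR model's boxes) and the minAreaRect angle convention are assumptions of this sketch.

```python
# Illustrative sketch of S1611-S1614: baseline angle of a text line from its minimum
# circumscribed rectangle, compared with the target deflection angle; the target
# image is rotated again by the difference if it is non-zero.
import cv2

def secondary_rectify(target_img, target_deflection_angle, text_line_points, rect_center):
    # Baseline angle: angle of the minimum circumscribed rectangle of the text line
    # (its side closest to the X axis of the document coordinate system).
    _, _, baseline_angle = cv2.minAreaRect(text_line_points)
    angle_diff = baseline_angle - target_deflection_angle
    if abs(angle_diff) < 1e-3:
        return target_img                 # character angles already consistent
    h, w = target_img.shape[:2]
    m = cv2.getRotationMatrix2D(rect_center, angle_diff, 1.0)
    return cv2.warpAffine(target_img, m, (w, h), borderValue=(255, 255, 255))
```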
Referring to fig. 2 in detail, fig. 2 is a schematic diagram of a basic structure of the image correction device according to the present embodiment.
As shown in fig. 2, an image rectification apparatus includes: an acquisition module 1100, an identification module 1200, a processing module 1300, a measurement module 1400, and an execution module 1500. The acquiring module 1100 is configured to acquire document data to be processed; the recognition module 1200 is configured to input the document data into a preset image recognition model, where the image recognition model is a neural network model trained in advance to a convergence state for recognizing a target image; the processing module 1300 is configured to read the image classification information output by the image recognition model and extract a target image from the document data according to the image classification information; the measuring module 1400 is configured to measure a target deflection angle of the target image according to the image type represented by the image classification information; the execution module 1500 is configured to perform image rectification on the target image according to the target deflection angle.
After reading the document data, the image rectification device quickly identifies the target image that needs image rectification (or the image types present in the document data) and determines the type of the target image. The deflection angle of the target image is then measured according to the identified image type, and the target image is corrected according to that deflection angle. Because the deflection angle of the target image is measured according to its type, the inaccuracy that arises when different image types are measured with the same method is avoided, and the accuracy of image angle regression is improved.
Optionally, the image type comprises a first image type, and the measurement module is further configured to:
determining the image type of the target image as a first image type according to the image classification information;
carrying out binarization processing on the target image according to the first image type to generate a binary image;
calculating a minimum circumscribed rectangle of the binary image, and acquiring a first deflection angle of the minimum circumscribed rectangle;
and determining a target deflection angle of the target image according to the first deflection angle.
Optionally, the image type comprises a second image type, and the measurement module is further configured to:
determining the image type of the target image as a second image type according to the image classification information;
inputting the target image into a preset optical character model according to the second image type, wherein the optical character model is a neural network model which is trained to a convergence state in advance and used for recognizing characters;
extracting a character outline in the target image according to the character classification information output by the optical character model;
and measuring a target deflection angle of the target image according to the character outline.
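For the second image type, the sketch below substitutes an off-the-shelf OCR engine (pytesseract) for the preset optical character model, purely to show how recognized character boxes can be collected into a character outline; the patent's own model is a pre-trained neural network that this example does not reproduce, and the function name is hypothetical.

```python
import numpy as np
import pytesseract

def character_outline_points(target_img):
    """Collect the corner points of every recognized character box as a rough
    character outline for later deflection-angle measurement."""
    h = target_img.shape[0]
    boxes = pytesseract.image_to_boxes(target_img)  # "char x1 y1 x2 y2 page" per line
    points = []
    for line in boxes.splitlines():
        parts = line.split()
        if len(parts) < 5:
            continue
        x1, y1, x2, y2 = map(int, parts[1:5])
        # tesseract uses a bottom-left origin; flip to image coordinates
        points.extend([(x1, h - y1), (x2, h - y2)])
    return np.array(points, dtype=np.int32)
```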
Optionally, the measurement module is further configured to:
calculating the minimum bounding rectangle of the character outline;
acquiring a second deflection angle of the minimum circumscribed rectangle and the direction information of the minimum circumscribed rectangle;
and generating a target deflection angle of the target image according to the second deflection angle and the direction information.
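Continuing the sketch, the second deflection angle and the direction information can be derived from the outline's minimum circumscribed rectangle, here assuming OpenCV's minAreaRect conventions (the reported angle range depends on the OpenCV version); the folding rule shown is an assumption for illustration.

```python
import cv2
import numpy as np

def angle_from_outline(outline_points):
    pts = np.asarray(outline_points, dtype=np.float32)
    (cx, cy), (w, h), angle = cv2.minAreaRect(pts)   # second deflection angle
    # Direction information: text lines are taken to run along the longer side,
    # so fold the raw angle accordingly (OpenCV >= 4.5 angle convention assumed)
    if w < h:
        angle -= 90
    return angle  # target deflection angle of the target image
```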
Optionally, the measurement module is further configured to:
acquiring a first transverse pole and a second transverse pole of the character outline in the transverse length direction, and a first vertical pole and a second vertical pole of the character outline in the vertical length direction, wherein the distance from the second transverse pole to a preset origin is greater than that from the first transverse pole to the preset origin, and the distance from the second vertical pole to the preset origin is greater than that from the first vertical pole to the preset origin;
respectively generating a first tangent, a second tangent, a third tangent and a fourth tangent of the first transverse pole, the second transverse pole, the first vertical pole and the second vertical pole;
adjusting the angles of the first tangent line, the second tangent line, the third tangent line and the fourth tangent line according to a preset angle threshold until any one of the first tangent line, the second tangent line, the third tangent line and the fourth tangent line coincides with an edge of the character outline, and reading the rectangle enclosed by the first tangent line, the second tangent line, the third tangent line and the fourth tangent line;
and determining the rectangle as the minimum bounding rectangle of the character outline.
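A brute-force sketch that approximates the tangent-adjustment procedure above: the tangent directions are swept in small angle steps and the tightest enclosing rectangle is kept. The minimum-area stopping criterion used here is an assumption that stands in for the coincides-with-an-edge condition described above.

```python
import numpy as np

def min_bounding_rect(points, step_deg=0.5):
    pts = np.asarray(points, dtype=np.float64)
    best = None
    for deg in np.arange(0.0, 90.0, step_deg):
        theta = np.deg2rad(deg)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        rotated = pts @ rot.T
        # The four tangents at the rotated extremes enclose this candidate rectangle
        (min_x, min_y), (max_x, max_y) = rotated.min(axis=0), rotated.max(axis=0)
        area = (max_x - min_x) * (max_y - min_y)
        if best is None or area < best[0]:
            best = (area, deg, (max_x - min_x, max_y - min_y))
    return best  # (area, rotation angle of the tightest fit, (width, height))
```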
Optionally, the execution module is further configured to:
generating a compensation angle required by the target image correction based on the target deflection angle;
and carrying out rotation processing on the target image according to the compensation angle so as to correct the target image.
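The execution step can be sketched as follows, assuming OpenCV and taking the compensation angle simply as the negative of the measured target deflection angle; both the sign convention and the function name are assumptions of this illustration.

```python
import cv2

def rectify(target_img, target_angle):
    compensation = -target_angle                 # rotate back by the measured deflection
    h, w = target_img.shape[:2]
    m = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), compensation, 1.0)
    return cv2.warpAffine(target_img, m, (w, h), borderMode=cv2.BORDER_REPLICATE)
```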
Optionally, the execution module is further configured to:
generating a standard coordinate system based on the document pictures represented by the document data;
reading a base line of any continuous character in the document data in the horizontal length direction;
calculating a baseline angle of the baseline in the standard coordinate system;
and carrying out secondary image correction on the target image according to the baseline angle and the target deflection angle.
In order to solve the technical problem, an embodiment of the present application further provides a computer device. Referring to fig. 3, fig. 3 is a block diagram of a basic structure of a computer device according to the present embodiment.
As shown in fig. 3, the internal structure of the computer device is schematically illustrated. The computer device includes a processor, a non-volatile storage medium, a memory, and a network interface connected by a system bus. The non-volatile storage medium of the computer device stores an operating system, a database in which a sequence of control information is stored, and computer readable instructions that, when executed by the processor, cause the processor to implement an image rectification method. The processor of the computer device provides computing and control capabilities and supports the operation of the entire computer device. The memory of the computer device may store computer readable instructions that, when executed by the processor, cause the processor to perform an image rectification method. The network interface of the computer device is used for connecting to and communicating with a terminal. Those skilled in the art will appreciate that the architecture shown in fig. 3 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In this embodiment, the processor is configured to execute the specific functions of the acquisition module 1100, the recognition module 1200, the processing module 1300, the measurement module 1400 and the execution module 1500 in fig. 2, and the memory stores the program codes and the various data required for executing these modules. The network interface is used for data transmission to and from a user terminal or a server. The memory in the present embodiment stores the program codes and data necessary for executing all the sub-modules of the image correction device, and the computer device can call these program codes and data to execute the functions of all the sub-modules.
After reading the document data, the computer device quickly identifies the target image that requires rectification within the document data and determines its image type. The deflection angle of the target image is then measured according to the determined image type, and image correction is performed on the target image according to that deflection angle. Because the deflection angle of the target image is measured according to its image type, the inaccuracy that arises when different image types are measured with the same method is avoided, and the accuracy of image angle regression is improved.
The present application also provides a storage medium storing computer-readable instructions which, when executed by one or more processors, cause the one or more processors to perform the steps of any of the embodiments of the image correction method described above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the computer program is executed. The storage medium may be a non-volatile storage medium such as a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
Those of skill in the art will appreciate that the various operations, methods, steps in the processes, acts, or solutions discussed in this application can be interchanged, modified, combined, or eliminated. Further, other steps, measures, or schemes in various operations, methods, or flows that have been discussed in this application can be alternated, altered, rearranged, broken down, combined, or deleted. Further, steps, measures, schemes in the prior art having various operations, methods, procedures disclosed in the present application may also be alternated, modified, rearranged, decomposed, combined, or deleted.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several modifications and refinements without departing from the principle of the present application, and these modifications and refinements should also be regarded as falling within the protection scope of the present application.
Claims (10)
1. An image rectification method, comprising:
acquiring document data to be processed;
inputting the document data into a preset image recognition model, wherein the image recognition model is a neural network model trained in advance to a converged state and used for recognizing a target image;
reading image classification information output by the image recognition model, and extracting a target image in the document data according to the image classification information;
measuring a target deflection angle of the target image according to the image type represented by the image classification information;
and carrying out image correction on the target image according to the target deflection angle.
2. The image rectification method according to claim 1, wherein the image type includes a first image type, and the measuring a target deflection angle of the target image according to the image type characterized by the image classification information includes:
determining the image type of the target image as a first image type according to the image classification information;
carrying out binarization processing on the target image according to the first image type to generate a binary image;
calculating a minimum circumscribed rectangle of the binary image, and acquiring a first deflection angle of the minimum circumscribed rectangle;
and determining a target deflection angle of the target image according to the first deflection angle.
3. The image rectification method according to claim 1, wherein the image type includes a second image type, and the measuring the target deflection angle of the target image according to the image type characterized by the image classification information includes:
determining the image type of the target image as a second image type according to the image classification information;
inputting the target image into a preset optical character model according to the second image type, wherein the optical character model is a neural network model which is trained to a convergence state in advance and used for recognizing characters;
extracting a character outline in the target image according to the character classification information output by the optical character model;
and measuring the target deflection angle of the target image according to the character outline.
4. The image rectification method according to claim 3, wherein the measuring a target deflection angle of the target image from the character outline includes:
calculating the minimum bounding rectangle of the character outline;
acquiring a second deflection angle of the minimum circumscribed rectangle and the direction information of the minimum circumscribed rectangle;
and generating a target deflection angle of the target image according to the second deflection angle and the direction information.
5. The image rectification method according to claim 4, wherein the calculating of the minimum bounding rectangle of the character outline includes:
acquiring a first transverse pole and a second transverse pole of the character outline in the transverse length direction, and a first vertical pole and a second vertical pole of the character outline in the vertical length direction, wherein the distance from the second transverse pole to a preset origin is greater than the distance from the first transverse pole to the preset origin, and the distance from the second vertical pole to the preset origin is greater than the distance from the first vertical pole to the preset origin;
respectively generating a first tangent, a second tangent, a third tangent and a fourth tangent of the first transverse pole, the second transverse pole, the first vertical pole and the second vertical pole;
adjusting the angles of the first tangent line, the second tangent line, the third tangent line and the fourth tangent line according to a preset angle threshold until any one of the first tangent line, the second tangent line, the third tangent line and the fourth tangent line coincides with an edge of the character outline, and reading the rectangle enclosed by the first tangent line, the second tangent line, the third tangent line and the fourth tangent line;
and determining the rectangle as the minimum bounding rectangle of the character outline.
6. The image rectification method according to claim 1, wherein the image rectification of the target image according to the target deflection angle includes:
generating a compensation angle required by the target image correction based on the target deflection angle;
and carrying out rotation processing on the target image according to the compensation angle so as to correct the target image.
7. The image rectification method according to claim 1, wherein after the image rectification of the target image according to the target deflection angle, the method comprises:
generating a standard coordinate system based on the document pictures represented by the document data;
reading a base line of any continuous character in the document data in the horizontal length direction;
calculating a baseline angle of the baseline in the standard coordinate system;
and carrying out secondary image correction on the target image according to the baseline angle and the target deflection angle.
8. An image rectification apparatus, characterized by comprising:
the acquisition module is used for acquiring document data to be processed;
the identification module is used for inputting the document data into a preset image identification model, wherein the image identification model is a neural network model trained in advance to a converged state and used for identifying a target image;
the processing module is used for reading the image classification information output by the image recognition model and extracting a target image in the document data according to the image classification information;
the measuring module is used for measuring a target deflection angle of the target image according to the image type represented by the image classification information;
and the execution module is used for carrying out image rectification on the target image according to the target deflection angle.
9. An electronic device comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the image rectification method according to any one of claims 1 to 7 when executing the computer program.
10. A non-transitory computer readable storage medium comprising a computer program, wherein the computer program when executed by a processor implements the image rectification method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310046897.0A CN115862022B (en) | 2023-01-31 | 2023-01-31 | Image correction method and device, equipment, storage medium and product thereof |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115862022A | 2023-03-28
CN115862022B CN115862022B (en) | 2023-07-14 |
Family
ID=85657404
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310046897.0A Active CN115862022B (en) | 2023-01-31 | 2023-01-31 | Image correction method and device, equipment, storage medium and product thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115862022B (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110443773A (en) * | 2019-08-20 | 2019-11-12 | 江西博微新技术有限公司 | File and picture denoising method, server and storage medium based on seal identification |
CN112037077A (en) * | 2020-09-03 | 2020-12-04 | 平安健康保险股份有限公司 | Seal identification method, device, equipment and storage medium based on artificial intelligence |
US20200401834A1 (en) * | 2018-02-26 | 2020-12-24 | Videonetics Technology Private Limited | A system for real-time automated segmentation and recognition of vehicle's license plates characters from vehicle's image and a method thereof. |
CN112132142A (en) * | 2020-09-27 | 2020-12-25 | 平安医疗健康管理股份有限公司 | Text region determination method, text region determination device, computer equipment and storage medium |
CN113177899A (en) * | 2021-05-25 | 2021-07-27 | 上海海事大学 | Method for correcting text tilt of medical photocopy, electronic device and readable storage medium |
CN113436080A (en) * | 2021-06-30 | 2021-09-24 | 平安科技(深圳)有限公司 | Seal image processing method, device, equipment and storage medium |
CN114092941A (en) * | 2020-08-03 | 2022-02-25 | 中移物联网有限公司 | Number identification method and device, electronic equipment and readable storage medium |
CN114549928A (en) * | 2022-02-21 | 2022-05-27 | 平安科技(深圳)有限公司 | Image enhancement processing method and device, computer equipment and storage medium |
CN114549835A (en) * | 2022-02-15 | 2022-05-27 | 中国人民解放军海军工程大学 | Pointer instrument correction identification method and device based on deep learning |
CN115100660A (en) * | 2022-06-27 | 2022-09-23 | 平安银行股份有限公司 | Method and device for correcting inclination of document image |
Non-Patent Citations (2)
Title |
---|
唐群群 et al.: "Skew Correction of Scanned Uyghur Document Pages" (维吾尔文扫描页的倾斜校正), Application Research of Computers (计算机应用研究), vol. 30, no. 05, pages 1551-1553 *
飞狐进招: "Minimum Circumscribed Rectangle of an Object Contour" (物体轮廓最小外接矩形), pages 1-9 *
Also Published As
Publication number | Publication date |
---|---|
CN115862022B (en) | 2023-07-14 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |