CN113392819B - Batch academic image automatic segmentation and labeling device and method - Google Patents
Abstract
The invention discloses a device and method for automatic segmentation and labeling of academic images in batches, comprising: an image acquisition module for reading an image; a threshold processing module for converting the image into a gray-scale image and further converting the gray-scale image into a binary image according to a set threshold; an edge extraction module for finding closed contours on the binary image to obtain initial contour information; an edge filtering module for constructing a circumscribed rectangle for each closed contour and screening the contours by circumscribed-rectangle area to obtain the contours that meet the requirements; an edge repairing module for integrating the selected contours into the final suitable contours and determining the final segmentation areas; an image segmentation module for segmenting the image according to the finally determined segmentation areas, outputting the position information of each segmentation area, and forming and saving an annotation file; and a class labeling module for feeding each segmentation area into a convolutional neural network for automatic classification and generating class labels, thereby completing the annotation file.
Description
Technical Field
The invention relates to a method for automatically segmenting and labeling the academic images in a thesis according to their content. In particular, it relates to an automatic image segmentation and labeling method that can batch-process massive numbers of academic images, which are characterized by irregular internal layout and diverse sub-image types; the method removes interference factors such as text as far as possible, preserves only the sub-images, and classifies the sub-images according to their content.
Background
Image segmentation is a key technology in the fields of digital image processing and computer vision, and a crucial preprocessing step in image analysis and image recognition tasks. Owing to the visual characteristics of the human eye, the study and application of images is often concerned with only a certain part or certain specific regions of an image, and these regions of interest need to be extracted to facilitate identification and analysis. Image segmentation divides an image into several geometrically disjoint areas according to characteristics such as gray scale, color, spatial texture, and geometric shape, so that these characteristics are consistent or similar within the same area and differ markedly between different areas. However, no unified technical specification has yet formed in the development of image segmentation technology, and the specific mode of segmentation must be adjusted for different practical requirements and application scenarios, so image segmentation still needs continued research.
Current image segmentation work is mostly based on natural images, while the academic images appearing in papers have not been studied sufficiently. Unlike natural images, which contain rich information such as color, shape, and texture, academic images are carriers of research results. In the field of biomedicine in particular, paper authors often integrate several images of different types into one composite image by combining, splicing, and arranging them, which produces a large number of mismatches in later image-matching analysis, that is, a large amount of interference among sub-images of different types. To avoid this, it is critical to segment each sub-image out of the composite image.
Although conventional image segmentation methods such as threshold segmentation and edge-detection segmentation are feasible for this problem, for the special case of academic images these methods have certain limitations and struggle to achieve satisfactory results in practice.
First, the threshold for an academic image is difficult to determine. An image is generally converted into a gray-scale image before it is operated on, which facilitates subsequent processing. In the application context of academic images, the areas of the image that contain content must be identified first, and the identified areas are then analyzed and labeled. To increase the distinction between content areas and background, the academic image must be binarized, but a common empirical threshold cannot distinguish the content areas from the background well. For example, for the strip charts that frequently appear in biomedical papers, the background of some strips is very light and rather close to the background of the whole image, so the threshold for academic images needs to be determined anew.
Second, the adaptability of existing methods is not strong. An academic image may contain several sub-images, and their layout follows no regular pattern. Although the content areas of the image can be detected by applying an edge-detection method, academic images do not always have high resolution; that is, some academic images may be unclear, and noise points degrade image quality. An edge-detection method detects all areas with content, including noise points and scattered fragments, and this interference reduces the segmentation quality. The detected edge contours therefore need to be screened reasonably, so that noise and other factors that interfere with the effective content areas are filtered out, improving the quality and accuracy of academic image segmentation. In addition, annotated data sets of academic images are currently lacking, and automatic classification of the segmented images also helps to construct, and supplement, an academic image data set.
Disclosure of Invention
The invention aims to provide a method for automatic segmentation and labeling of academic images in batches, which solves the problems of irregular sub-image layout and low image quality that are widespread in academic images, and provides corresponding improvements for the missed detections and false detections that conventional edge-detection methods produce on academic images. The specific technical scheme is as follows:
an automatic segmentation and annotation method for batch academic images comprises the following steps:
s101: reading an image, if the image reading fails, converting the image format into a uniform format, and then reading;
s102: converting the image into a gray scale image, and further converting into a binary image according to a set threshold;
s103: searching a closed contour on the binary image so as to obtain initial contour information;
s104: making an external rectangle for each closed contour, and screening the contours according to the area of the external rectangle to obtain contour information meeting the requirements;
s105: integrating the selected contours to obtain a final proper contour so as to determine a final segmentation area;
s106: segmenting the image according to the finally determined segmentation areas, outputting the position information of each segmentation area, forming an annotation file and storing the annotation file;
s107: and inputting each segmentation area into a convolutional neural network for automatic classification to generate a class label, thereby perfecting the labeled file.
Further, in the step S101, if the image is a non-three-channel RGB image, the image needs to be expanded into a standard three-channel RGB image.
Further, in the step S102, the original image is converted into a gray-scale image by using a weighted average method, and the specific formula is as follows:
I(x,y) = 0.299 × I_R(x,y) + 0.587 × I_G(x,y) + 0.114 × I_B(x,y)
where I(x,y) represents the pixel value of the gray-scale image at (x,y), and I_R(x,y), I_G(x,y) and I_B(x,y) respectively represent the values of the R, G and B channels of the original image; the leading coefficients are weights chosen from the perspective of human physiology.
Further, in the step S102, the gray-scale image is converted into a binary image according to a set threshold, and the specific thresholding formula is as follows:
I(x,y) = 0 (black),   if I(x,y) ≥ σ
I(x,y) = 255 (white), if I(x,y) < σ
where I(x,y) represents the pixel value of the gray-scale image at (x,y) and σ is the set segmentation threshold; when the gray value is greater than or equal to σ the pixel is set to 0, i.e. black, and when the gray value is less than σ it is set to 255, i.e. white.
Further, in the step S103, the contours are scanned on the binary image in order from top to bottom and from left to right; when a boundary starting point is found, the contour type is determined, then the current point is continuously updated, and the next point is found by rotating counterclockwise around the current point while the pixel values are continuously updated. When a contour is stored, only the information of its inflection points is kept: elements along the horizontal, vertical and diagonal directions are compressed, and only the key coordinates in each direction are retained.
Further, in the step S104, the outline is screened according to the area of the circumscribed rectangle, including filtering and screening according to the maximum area and the minimum area of the circumscribed rectangle.
Further, in the step S105, the circumscribed rectangle of each selected contour is drawn on a solid-color fill map of the same size as the original image, the interior of each circumscribed rectangle is filled to produce a mask, and the connected regions are limited to rectangles, so that the final segmentation areas corresponding to the original image are obtained.
Further, in the step S106, the pixels of the original image are scanned according to the mask; only the framed areas of the mask are retained and all other areas are set to black, so that the whole image is divided into sub-images. At the same time, the position coordinates of each area are recorded, saved as a JSON-format annotation file, and output together.
An automatic segmentation and annotation device for batch academic images comprises:
the image acquisition module is used for reading images, if the image reading fails, the format of the images needs to be converted, and the images are read after being converted into a uniform format;
the threshold processing module is used for converting the image into a gray-scale image and further converting the gray-scale image into a binary image according to a set threshold;
the edge extraction module is used for searching a closed contour on the binary image so as to obtain initial contour information;
the edge filtering module is used for making a circumscribed rectangle for each closed contour, and screening the contours according to the area of the circumscribed rectangle to obtain contour information meeting the requirements;
the edge repairing module is used for integrating the selected contour to obtain a final proper contour so as to determine a final segmentation area;
the image segmentation module is used for segmenting the image according to the finally determined segmentation areas, outputting the position information of each segmentation area, forming an annotation file and storing the annotation file;
and the class labeling module is used for inputting each segmentation area into the convolutional neural network for automatic classification to generate class labels, so that labeled files are perfected.
The invention mainly addresses the problems of applying traditional image segmentation methods to the segmentation of academic images. First, a threshold better suited to academic images is determined by adjusting parameters. Since academic images generally use pure white as the background, and the background of some hard-to-distinguish strip charts is light after conversion to gray scale and easily confused with that background, a value close to white is taken as the binarization threshold; this ensures that no content area is missed while leaving some margin, and the effectiveness of this threshold setting is verified by experiments. Second, a screening strategy for edge contours is adopted. While identifying the content areas, an edge-detection algorithm also picks up interference factors such as image noise and unwanted text. Analysis shows that the effective content of an image is usually displayed prominently, so it occupies a large proportion of the whole image, while most text consists of annotations of the effective content areas and occupies only a small part of them; applying this proportion filters the annotations out, so that only the effective content areas, i.e. the sub-image areas, remain. Screening the contours effectively improves the accuracy of image segmentation, which is verified by experiments. In addition, the automatic classification of the images added at the end makes the whole processing pipeline more complete and the annotation information more comprehensive, providing convenience for further analysis and research in the future.
Drawings
Fig. 1 is a flow chart of image data processing in performing academic image segmentation according to the present invention.
Fig. 2 is an original image.
Fig. 3 is a binary diagram.
FIG. 4 is an unscreened profile.
FIG. 5 is a filtered contour map.
Fig. 6 is a mask diagram.
Fig. 7 is a diagram of the final segmentation result.
Fig. 8 is a schematic structural diagram of an automatic segmentation and annotation device for batch academic images according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail below with reference to specific embodiments and the accompanying drawings.
The invention is implemented in python and relies mainly on an open-source software library in the field of computer vision; the segmentation and labeling tasks for academic images are completed through a series of implementation strategies. Since an academic image usually comprises several sub-images, the simplest rectangular bounding box is sufficient for segmenting them, so rectangular boxes are also used to determine the segmentation and the labels in this implementation. In addition, segmentation and labeling of academic images is at present still done purely by hand; the number of academic images is large, and manual segmentation and labeling wastes a great deal of time. The method of the invention obtains a satisfactory segmentation effect without any pre-training, and a large batch of academic images can be input at one time, which is convenient and fast. As shown in fig. 1, an automatic segmentation and annotation method for batch academic images includes the following steps:
s101, reading an image, if the image reading fails, converting the image format into a uniform format, and then reading; as shown in fig. 2, the original image of the academic image includes a picture, a serial number, and a description of alphanumeric symbols;
Specifically, the academic images processed by the method may come in many file formats, such as the common jpg, jpeg and png formats, as well as less common formats such as tiff. When the program reads an image, however, a three-channel RGB image is taken as the standard, where R, G and B represent the red, green and blue channels respectively, and the three channels combined give a color image. Some academic images are not three-channel RGB images in the usual sense but single-channel images; these images need to be expanded into standard three-channel images before being processed.
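As a sketch of this expansion step (the function name `to_three_channel` is illustrative, not from the patent), a single-channel array can be replicated across three channels, assuming the image is held as a NumPy array as is typical with computer-vision libraries:

```python
import numpy as np

def to_three_channel(img: np.ndarray) -> np.ndarray:
    """Expand a single-channel image into a standard three-channel image
    by replicating the gray values; pass three-channel images through."""
    if img.ndim == 2:                          # (H, W) single-channel
        return np.stack([img] * 3, axis=-1)
    if img.ndim == 3 and img.shape[-1] == 1:   # (H, W, 1) single-channel
        return np.repeat(img, 3, axis=-1)
    return img                                 # already (H, W, 3)

gray = np.array([[0, 128], [255, 64]], dtype=np.uint8)
rgb = to_three_channel(gray)                   # shape becomes (2, 2, 3)
```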
S102, converting the image into a gray-scale image, and further converting the gray-scale image into a binary image according to a set threshold; as shown in fig. 3, a binary map obtained for academic image conversion;
Specifically, the original image is converted into a gray-scale image by using a weighted average method; the specific formula is as follows:
I(x,y) = 0.299 × I_R(x,y) + 0.587 × I_G(x,y) + 0.114 × I_B(x,y)
where I(x,y) represents the pixel value of the gray-scale image at (x,y), and I_R(x,y), I_G(x,y) and I_B(x,y) respectively represent the values of the R, G and B channels of the original image; the leading coefficients are weights chosen from the perspective of human physiology (the human eye is most sensitive to green and least sensitive to blue);
A segmentation threshold is set according to the idea of the threshold segmentation method, and the gray-scale image is binarized to obtain clearer edge boundaries. The specific thresholding formula is as follows:
I(x,y) = 0 (black),   if I(x,y) ≥ σ
I(x,y) = 255 (white), if I(x,y) < σ
where I(x,y) represents the pixel value of the gray-scale image at (x,y) and σ is the set segmentation threshold, taken as 200 in this example. When the gray value is greater than or equal to σ the pixel is set to 0, i.e. black; when the gray value is less than σ it is set to 255, i.e. white. As a result, an image with an originally white background is converted into an image with a black background, because the subsequent contour search is more accurate in a darker environment than in a lighter one.
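A minimal sketch of this gray-scale conversion and inverse binarization, assuming uint8 RGB input and the σ = 200 of this example (the function names are illustrative):

```python
import numpy as np

def to_gray(rgb: np.ndarray) -> np.ndarray:
    """Weighted-average gray-scale conversion; green carries the largest
    weight because the eye is most sensitive to it."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)

def binarize(gray: np.ndarray, sigma: int = 200) -> np.ndarray:
    """Pixels >= sigma become 0 (black), pixels < sigma become 255 (white),
    turning a white-background image into a black-background one."""
    return np.where(gray >= sigma, 0, 255).astype(np.uint8)

rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[0, 0] = (255, 255, 255)    # white background pixel
rgb[1, 1] = (10, 10, 10)       # dark content pixel
binary = binarize(to_gray(rgb))
```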
S103: searching a closed contour on the binary image so as to obtain initial contour information;
Specifically, the contours are scanned on the binary image according to the idea of the edge-detection segmentation method, in order from top to bottom and from left to right. When a boundary starting point is found, the contour type (outer contour or inner hole) is judged; in this example only outer contours are searched for. The current point is then continuously updated, and the next point is found by rotating counterclockwise around the current point while the pixel values are continuously updated. When a contour is stored, only the information of its inflection points is kept: elements along the horizontal, vertical and diagonal directions are compressed and only the key coordinates in each direction are retained, so only four points are needed to store the rectangular contours adopted in this embodiment.
S104: a circumscribed rectangle is made for each closed contour; fig. 4 shows the contour map of an academic image with all rectangles selected. The contours are then screened according to the area of the circumscribed rectangle to obtain the contours that meet the requirements; as shown in fig. 5, the rectangular boxes around the serial numbers and alphanumeric descriptions are filtered out, leaving only the rectangular boxes containing pictures;
Specifically, whether a contour corresponds to a required sub-image depends on the area of its circumscribed rectangle: after all, each sub-image is content that the whole image is meant to display, and as important content it occupies a certain area of the whole image. Based on this filtering idea, this step can be subdivided into two sub-steps: judging the maximum area of the sub-images' circumscribed rectangles, and then judging their minimum area.
The judgment of the maximum circumscribed-rectangle area addresses schematic diagrams: molecular formulas, structure diagrams and the like drawn with software are not continuous at the pixel level, so a single whole is wrongly segmented into pieces of roughly equal size and in huge number. The maximum circumscribed-rectangle area among the sub-images is therefore required to reach a limit, set in this example to 1/81 of the whole image; otherwise the subsequent segmentation and labeling are not performed.
The judgment of the minimum circumscribed-rectangle area is relative, and aims at the interference factors in the whole image, such as noise points caused by low definition, the serial numbers of the sub-images, and the related alphanumeric descriptions. These interferences appear as very small fragments or points among the contours, so a relative area ratio is determined on the basis of the maximum circumscribed-rectangle area of the sub-images: in this example, 1/8 of that maximum area is the smallest acceptable circumscribed-rectangle area, and only contours within this range are selected, which greatly reduces the influence of the interference factors.
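The two-step area screening above can be sketched in plain Python (the function name and the `(x, y, w, h)` rectangle representation are illustrative):

```python
def screen_rects(rects, img_w, img_h, max_frac=1/81, min_ratio=1/8):
    """Keep circumscribed rectangles by area, following the two-step rule:
    the largest rectangle must reach max_frac of the whole image, and each
    kept rectangle must reach min_ratio of that largest area."""
    if not rects:
        return []
    areas = [w * h for (x, y, w, h) in rects]
    largest = max(areas)
    # Step 1: if even the largest rectangle is tiny (the fragmented
    # schematic-diagram case), skip segmentation and labeling entirely.
    if largest < img_w * img_h * max_frac:
        return []
    # Step 2: drop noise points and text annotations, which are small
    # relative to the largest sub-image rectangle.
    return [r for r, a in zip(rects, areas) if a >= largest * min_ratio]

# Two sub-images and one small serial-number label.
rects = [(0, 0, 300, 200), (320, 0, 280, 210), (10, 220, 12, 8)]
kept = screen_rects(rects, img_w=640, img_h=480)
```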
S105: integrating the selected contours to obtain a final proper contour so as to determine a final segmentation area;
Specifically, on a solid-color fill map of the same size as the original image, the circumscribed rectangle of each selected contour is drawn and its interior is filled, producing a mask as shown in fig. 6; the connected regions are thereby limited to rectangles, and the final segmentation areas corresponding to the original image are obtained.
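A minimal sketch of building such a mask with NumPy, assuming the kept rectangles are `(x, y, w, h)` tuples (names illustrative):

```python
import numpy as np

def build_mask(shape, rects):
    """Solid fill map the size of the original image: the inside of every
    selected circumscribed rectangle is filled white, the rest stays black,
    so each connected region of the mask is a rectangle."""
    mask = np.zeros(shape, dtype=np.uint8)
    for x, y, w, h in rects:
        mask[y:y + h, x:x + w] = 255
    return mask

mask = build_mask((100, 100), [(10, 20, 30, 40)])
```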
S106: and segmenting the image according to the finally determined segmentation areas, outputting the position information of each segmentation area, forming an annotation file and storing the annotation file.
Specifically, as shown in fig. 7, the pixels of the original image are scanned according to the mask; only the framed areas of the mask are retained and all other areas are set to black, thus dividing the whole academic image into sub-images. At the same time, the position coordinates of each area are recorded, saved as a JSON-format annotation file, and output together.
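Applying the mask and writing the position information as JSON might look like the following sketch (the annotation schema — keys such as `"regions"` — is illustrative, not specified by the patent):

```python
import json
import numpy as np

def segment_and_annotate(image, rects):
    """Black out everything outside the selected rectangles and record
    each region's position coordinates for the JSON annotation file."""
    out = np.zeros_like(image)
    for x, y, w, h in rects:
        out[y:y + h, x:x + w] = image[y:y + h, x:x + w]
    annotation = {"regions": [{"x": x, "y": y, "w": w, "h": h}
                              for x, y, w, h in rects]}
    return out, json.dumps(annotation)

img = np.full((50, 50), 200, dtype=np.uint8)
segmented, ann = segment_and_annotate(img, [(5, 5, 10, 10)])
```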
S107: and inputting each segmentation area (namely subgraph) into a convolutional neural network for automatic classification to generate a class label, thereby perfecting the labeled file.
Specifically, a Convolutional Neural Network (CNN) is used to complete the classification task of each sub-graph category, and the currently defined categories include seven categories, which are respectively a statistical chart, a real-object chart, a staining chart, a strip chart, a schematic diagram, a contrast chart and others. The image difference of the classes is obvious, good characteristics can be learned through CNN, a classification model is trained through a self-established small data set, a good result is finally obtained, and the class information of the labeled file is perfected together.
Referring to fig. 8, a schematic structural diagram of an automatic batch academic image segmentation and annotation apparatus according to an embodiment of the present invention is shown, in this embodiment, the apparatus includes:
the image acquisition module 101 is configured to read an image, and if the image reading fails, the image format needs to be converted into a uniform format and then the image is read;
a threshold processing module 102, configured to convert the image into a grayscale image, and further convert the grayscale image into a binary image according to a set threshold;
an edge extraction module 103, configured to find a closed contour on the binary image, so as to obtain initial contour information;
the edge filtering module 104 is used for making a circumscribed rectangle for each closed contour, and screening the contours according to the area of the circumscribed rectangle to obtain contour information meeting the requirements;
an edge patch module 105, configured to integrate the selected contours to obtain a final suitable contour, so as to determine a final segmented region;
and the image segmentation module 106 is configured to segment the image according to the finally determined segmentation area, output position information of each segmentation area, and form and store an annotation file.
And the class labeling module 107 is configured to input each segmented region (i.e., sub-graph) into a convolutional neural network for automatic classification, and generate a class label, so as to perfect a labeled file.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.
Claims (6)
1. An automatic segmentation and annotation method for batch academic images is characterized by comprising the following steps:
s101: reading an image, if the image reading fails, converting the image format into a uniform format, and then reading;
s102: converting the image into a gray scale image, and further converting into a binary image according to a set threshold;
s103: searching a closed contour on the binary image so as to obtain initial contour information;
s104: making an external rectangle for each closed contour, and screening the contours according to the area of the external rectangle to obtain contour information meeting the requirements;
s105: integrating the selected contours to obtain a final proper contour so as to determine a final segmentation area;
s106: segmenting the image according to the finally determined segmentation areas, outputting the position information of each segmentation area, forming an annotation file and storing the annotation file;
s107: inputting each segmentation area into a convolutional neural network for automatic classification, and generating a class label so as to perfect a labeled file;
and in the step S102, the gray-scale image is converted into a binary image according to the set threshold, wherein the specific thresholding formula is as follows:
I(x,y) = 0 (black),   if I(x,y) ≥ σ
I(x,y) = 255 (white), if I(x,y) < σ
where I(x,y) represents the pixel value of the gray-scale image at (x,y) and σ is the set segmentation threshold, a value close to white, σ = 200; when the gray value is greater than or equal to σ the pixel is set to 0, i.e. black, and when the gray value is less than σ it is set to 255, i.e. white;
in the step S104, the contours are screened according to the area of the circumscribed rectangle, including filtering and screening according to the maximum area and the minimum area of the circumscribed rectangle; specifically, the maximum circumscribed-rectangle area is judged by limiting the area of the sub-images' circumscribed rectangles, the maximum circumscribed-rectangle area being required to reach 1/81 of the whole image, otherwise the subsequent segmentation and labeling are not performed; the minimum circumscribed-rectangle area is judged by determining a relative area ratio on the basis of the maximum circumscribed-rectangle area of the sub-images, 1/8 of that maximum area being set as the smallest acceptable contour circumscribed-rectangle area, and only contours within this range are selected;
and in the step S105, the circumscribed rectangle of each selected contour is drawn on a solid-color fill map of the same size as the original image, the interior of each circumscribed rectangle is filled to produce a mask, and the connected regions are limited to rectangles, so as to obtain the final segmentation areas corresponding to the original image.
2. The method according to claim 1, wherein in step S101, if the image is a non-three-channel RGB image, it needs to be expanded into a standard three-channel RGB image.
3. The method as claimed in claim 1, wherein in step S102 the original image is converted into a gray-scale image by using a weighted average method, the specific formula being as follows:
I(x,y) = 0.299 × I_R(x,y) + 0.587 × I_G(x,y) + 0.114 × I_B(x,y)
where I(x,y) represents the pixel value of the gray-scale image at (x,y), I_R(x,y), I_G(x,y) and I_B(x,y) respectively represent the values of the R, G and B channels of the original image, and the leading coefficients are weights chosen from the perspective of human physiology.
4. The method according to claim 1, wherein in the step S103 the contours are scanned on the binary image in order from top to bottom and from left to right; the contour type is judged when a boundary starting point is found, the current point is then continuously updated, and the next point is found by rotating counterclockwise around the current point while the pixel values are continuously updated; when a contour is stored, only the information of its inflection points is kept, elements along the horizontal, vertical and diagonal directions are compressed and only the key coordinates in each direction are retained, i.e. only the four vertices of the circumscribed rectangle are kept.
5. The method according to claim 1, wherein in step S106 the pixels of the original image are scanned according to the mask, only the framed areas of the mask are retained and all other areas are set to black, so that the whole image is divided into sub-images; the position coordinates of each area are recorded, saved as a JSON-format annotation file, and output together.
6. An automatic segmentation and labeling device for batch academic images is characterized by comprising:
the image acquisition module is used for reading images, if the image reading fails, the format of the images needs to be converted, and the images are read after being converted into a uniform format;
the threshold processing module is used for converting the image into a gray-scale image and further converting the gray-scale image into a binary image according to a set threshold;
the edge extraction module is used for searching a closed contour on the binary image so as to obtain initial contour information;
the edge filtering module is used for making a circumscribed rectangle for each closed contour, and screening the contours according to the area of the circumscribed rectangle to obtain contour information meeting the requirements;
the edge repairing module is used for integrating the selected contour to obtain a final proper contour so as to determine a final segmentation area;
the image segmentation module is used for segmenting the image according to the finally determined segmentation areas, outputting the position information of each segmentation area, forming an annotation file and storing the annotation file;
the class marking module is used for inputting each segmentation area into a convolutional neural network for automatic classification to generate class labels, thereby completing the annotation file;
the threshold processing module converts the gray-scale image into a binary image according to a set threshold, with the thresholding formula:

B(x, y) = 0 if I(x, y) ≥ σ; B(x, y) = 255 if I(x, y) < σ

where I(x, y) is the pixel value of the gray-scale image at (x, y) and σ is the set segmentation threshold, chosen close to white (σ = 200): a pixel whose gray value is greater than or equal to σ is set to 0, i.e. black, and a pixel whose gray value is less than σ is set to 255, i.e. white;
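The inverted thresholding described here (near-white background to black, content to white) is straightforward in NumPy; a minimal sketch, with σ = 200 as stated:

```python
import numpy as np

SIGMA = 200  # segmentation threshold, a value close to white

def threshold_inverted(gray: np.ndarray, sigma: int = SIGMA) -> np.ndarray:
    """Pixels >= sigma (near-white background) become 0 (black);
    pixels < sigma (content) become 255 (white)."""
    return np.where(gray >= sigma, 0, 255).astype(np.uint8)

gray = np.array([[255, 210],
                 [199,   0]], dtype=np.uint8)
print(threshold_inverted(gray))
# [[  0   0]
#  [255 255]]
```

The same effect could be obtained with OpenCV's `cv2.threshold(gray, 199, 255, cv2.THRESH_BINARY_INV)`, up to the boundary convention at exactly σ.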
the edge filtering module is used for carrying out outline screening according to the area of an external rectangle, and comprises the step of filtering and screening according to the maximum area and the minimum area of the external rectangle; specifically, the maximum area of the circumscribed rectangle is judged, the area of the circumscribed rectangle of the sub-graph is limited, the maximum area ratio of the circumscribed rectangle of the sub-graph is set to 1/81 of the whole graph, otherwise, the following segmentation and labeling are not carried out, the minimum area of the circumscribed rectangle is judged, the relative area ratio is determined based on the maximum area of the circumscribed rectangle of the sub-graph, the 1/8 of the maximum area of the circumscribed rectangle of the sub-graph is set to be the acceptable minimum outline circumscribed rectangle area, and only the outline within the range is selected;
and the edge repairing module draws the circumscribed rectangle of each selected contour on a solid-color canvas of the same size as the original image and fills each rectangle's interior to produce a mask, restricting every connected area to a rectangle and thereby obtaining the final segmentation areas corresponding to the original image.
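The mask construction described for the edge repairing module amounts to filling each selected circumscribed rectangle on a blank canvas; a minimal NumPy sketch, assuming `(x, y, w, h)` boxes:

```python
import numpy as np

def make_mask(shape, boxes):
    """Solid-color canvas the size of the original image; each selected
    contour's circumscribed rectangle is filled in, so every connected
    region of the mask is a rectangle."""
    mask = np.zeros(shape, dtype=np.uint8)
    for (x, y, w, h) in boxes:
        mask[y:y + h, x:x + w] = 255   # filled circumscribed rectangle
    return mask

mask = make_mask((100, 100), [(10, 10, 30, 20)])
```

The segmentation module then keeps only the pixels of the original image where `mask == 255`, which yields the final rectangular segmentation areas.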
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110940037.2A CN113392819B (en) | 2021-08-17 | 2021-08-17 | Batch academic image automatic segmentation and labeling device and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110940037.2A CN113392819B (en) | 2021-08-17 | 2021-08-17 | Batch academic image automatic segmentation and labeling device and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113392819A CN113392819A (en) | 2021-09-14 |
CN113392819B true CN113392819B (en) | 2022-03-08 |
Family
ID=77622771
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110940037.2A Active CN113392819B (en) | 2021-08-17 | 2021-08-17 | Batch academic image automatic segmentation and labeling device and method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113392819B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114121179B (en) * | 2022-01-28 | 2022-12-13 | 药渡经纬信息科技(北京)有限公司 | Extraction method and extraction device of chemical structural formula |
CN114820478A (en) * | 2022-04-12 | 2022-07-29 | 江西裕丰智能农业科技有限公司 | Navel orange fruit disease image labeling method and device and computer equipment |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107180230A (en) * | 2017-05-08 | 2017-09-19 | 上海理工大学 | General licence plate recognition method |
CN113191361A (en) * | 2021-04-19 | 2021-07-30 | 苏州大学 | Shape recognition method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5987173A (en) * | 1995-03-27 | 1999-11-16 | Nippon Steel Corporation | Interactive drawing recognition processing method and apparatus thereof |
CN110838105B (en) * | 2019-10-30 | 2023-09-15 | 南京大学 | Business process model image recognition and reconstruction method |
CN111179289B (en) * | 2019-12-31 | 2023-05-19 | 重庆邮电大学 | Image segmentation method suitable for webpage length graph and width graph |
- 2021-08-17: CN application CN202110940037.2A granted as patent CN113392819B (active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107180230A (en) * | 2017-05-08 | 2017-09-19 | 上海理工大学 | General licence plate recognition method |
CN113191361A (en) * | 2021-04-19 | 2021-07-30 | 苏州大学 | Shape recognition method |
Also Published As
Publication number | Publication date |
---|---|
CN113392819A (en) | 2021-09-14 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107545239B (en) | Fake plate detection method based on license plate recognition and vehicle characteristic matching | |
US6839466B2 (en) | Detecting overlapping images in an automatic image segmentation device with the presence of severe bleeding | |
CN103971126B (en) | A kind of traffic sign recognition method and device | |
CN111915704A (en) | Apple hierarchical identification method based on deep learning | |
CN108596166A (en) | A kind of container number identification method based on convolutional neural networks classification | |
US5892854A (en) | Automatic image registration using binary moments | |
US6704456B1 (en) | Automatic image segmentation in the presence of severe background bleeding | |
CN110766017B (en) | Mobile terminal text recognition method and system based on deep learning | |
US11151402B2 (en) | Method of character recognition in written document | |
CN112434544A (en) | Cigarette carton code detection and identification method and device | |
CN113158977B (en) | Image character editing method for improving FANnet generation network | |
CN113392819B (en) | Batch academic image automatic segmentation and labeling device and method | |
CN110598566A (en) | Image processing method, device, terminal and computer readable storage medium | |
CN111680690A (en) | Character recognition method and device | |
CN111461133A (en) | Express delivery surface single item name identification method, device, equipment and storage medium | |
CN113688838B (en) | Red handwriting extraction method and system, readable storage medium and computer equipment | |
CN113283405A (en) | Mask detection method and device, computer equipment and storage medium | |
CN113609984A (en) | Pointer instrument reading identification method and device and electronic equipment | |
CN115082776A (en) | Electric energy meter automatic detection system and method based on image recognition | |
CN112580383A (en) | Two-dimensional code identification method and device, electronic equipment and storage medium | |
CN112861861A (en) | Method and device for identifying nixie tube text and electronic equipment | |
CN115588208A (en) | Full-line table structure identification method based on digital image processing technology | |
CN107145888A (en) | Video caption real time translating method | |
CN111126266A (en) | Text processing method, text processing system, device, and medium | |
CN110619331A (en) | Color distance-based color image field positioning method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||