CN113936195B - Sensitive image recognition model training method and device and electronic equipment - Google Patents
- Publication number: CN113936195B
- Application number: CN202111536956.XA
- Authority
- CN
- China
- Prior art keywords
- image
- sensitive
- training
- recognition model
- image set
- Prior art date
- Legal status: Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a training method and device for a sensitive image recognition model, and an electronic device. The method comprises the following steps: acquiring an image dataset from a network, preprocessing it to obtain a first image set, and coarsely labeling the first image set; acquiring a second image set; combining the first image set and the second image set into a training image set; training the sensitive image recognition model with the training image set to obtain a first sensitive image recognition model; forming a third image set from the training images coarsely labeled as sensitive, calculating a saliency map for each third image in the third image set, and determining a fine label for the third image according to its saliency map; and training the first sensitive image recognition model with the third image set to obtain a second sensitive image recognition model. The method reduces the cost of obtaining a high-accuracy recognition model and yields a model with high recognition precision.
Description
Technical Field
The embodiment of the invention relates to the technical field of computers, in particular to a training method and a training device for a sensitive image recognition model and electronic equipment.
Background
Currently, training recognition algorithms for sensitive images (e.g., gambling-related, pornography-related, pyramid-scheme-related, or terrorism-related images) requires a large amount of image data: high prediction accuracy of the trained neural network can only be ensured with a large volume of images, so the time, labor, and material cost of completing model training is huge.
With existing training methods, a sensitive image recognition model can only perform coarse-grained recognition of an image, not fine-grained recognition, and its recognition precision is low. Here, coarse granularity means judging whether an image is sensitive; fine granularity means identifying the specific sensitive type to which the image belongs.
Disclosure of Invention
The embodiment of the invention provides a training method of a sensitive image recognition model, which is used for solving the problems that the cost for obtaining a high-accuracy recognition model by using the existing training method is high, and the recognition precision of the obtained recognition model is low.
In order to solve the technical problem, the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides a method for training a sensitive image recognition model, where the method includes:
acquiring an image data set from a network, preprocessing the image data set to obtain a first image set, and performing rough marking on the first image set, wherein the rough marking is used for indicating whether a first image in the first image set is a sensitive image;
acquiring a second image set, wherein a second image in the second image set has a rough label;
combining the first and second image sets into a training image set;
training a sensitive image recognition model by adopting the training image set to obtain a first sensitive image recognition model;
forming a third image set by a training image set which is coarsely labeled as a sensitive image in the training image set, calculating a saliency map of the third image in the third image set, and determining a fine label of the third image according to the saliency map, wherein the fine label is used for indicating that the third image belongs to one of a plurality of preset sensitive types;
and training the first sensitive image recognition model by adopting the third image set to obtain a second sensitive image recognition model.
Optionally, the sensitive image recognition model employs a residual neural network model.
Optionally, the sensitive image recognition model adopts an EfficientNet model.
Optionally, the pre-processing comprises at least one of: data cleansing, data deduplication, data enhancement, and image segmentation.
Optionally, the data enhancement is for at least one of: horizontal flipping, vertical flipping, rotation, horizontal translation, vertical translation, cropping, zoom in and zoom out, and color transformation.
Optionally, the method comprises:
extracting a sensitive area indicated by the saliency map on the third image as a sensitive area map, wherein the sensitive area is used for determining that the third image belongs to one of a plurality of preset sensitive types;
increasing a length value and/or a width value of the sensitive region map;
and forming a fourth image set by the sensitive region image set, carrying out the fine labeling on a fourth image in the fourth image set, and training the second sensitive image recognition model by adopting the fourth image set to obtain a third sensitive image recognition model.
In a second aspect, an embodiment of the present invention provides a training apparatus for a sensitive image recognition model, including:
an acquisition module for acquiring an image dataset from a network;
the preprocessing module is used for preprocessing the image data set to obtain a first image set;
the rough labeling module is used for performing rough labeling on the first image set, and the rough labeling is used for indicating whether a first image in the first image set is a sensitive image or not;
the synthesis module is used for acquiring a second image set, wherein a second image in the second image set has a rough label, and combining the first image set and the second image set into a training image set;
the first fine labeling module is used for forming a third image set by a training image set which is coarsely labeled as a sensitive image in the training image set, calculating a saliency map of the third image in the third image set, and determining a fine label of the third image according to the saliency map, wherein the fine label is used for indicating that the third image belongs to one of a plurality of preset sensitive types;
and the first training module is used for training a sensitive image recognition model by adopting the training image set and training the sensitive image recognition model by adopting the third image set.
Optionally, the training device further comprises:
the extraction module is used for extracting a sensitive area indicated by the saliency map on the third image as a sensitive area map, and the sensitive area is used for determining that the third image belongs to one of a plurality of preset sensitive types;
the amplifying module is used for increasing the length value and/or the width value of the sensitive area map;
the second fine labeling module is used for forming the sensitive region image set into a fourth image set and performing fine labeling on a fourth image in the fourth image set;
and the second training module is used for training the sensitive image recognition model by adopting the fourth image set.
In a third aspect, an embodiment of the present invention provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps in the method for training the sensitive image recognition model according to the first aspect.
In a fourth aspect, the embodiment of the present invention provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps in the method for training the sensitive image recognition model according to the first aspect.
In the embodiment of the invention, labeled image data from existing similar tasks is used to assist the training of the sensitive image recognition model, which reduces the cost of obtaining a high-accuracy sensitive image recognition model; in addition, the embodiment of the invention uses saliency maps to finely label the training images whose coarse label is sensitive, and then uses the finely labeled image set for sensitive image recognition model training, so that a sensitive image recognition model with high recognition precision can be obtained.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
fig. 1 is a schematic flowchart of a method for training a sensitive image recognition model according to an embodiment of the present invention;
FIG. 2 is a second flowchart illustrating a method for training a sensitive image recognition model according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of the internal structure of the EfficientNet model;
FIG. 4 is a schematic diagram of the internal structure of MBConv3 corresponding to FIG. 3;
FIG. 5 is a schematic diagram of the internal structure corresponding to MBConv6 in FIG. 3;
FIG. 6 is a schematic diagram of the internal structure corresponding to SepConv (separable convolution) in FIG. 3;
FIG. 7 is a third flowchart illustrating a method for training a sensitive image recognition model according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of the internal structure of U-Net;
FIG. 9 is a fourth flowchart illustrating a method for training a sensitive image recognition model according to an embodiment of the present invention;
FIG. 10 is a schematic structural diagram of a training apparatus according to an embodiment of the present invention;
fig. 11 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a schematic flow chart of a training method of a sensitive image recognition model according to an embodiment of the present invention, where the training method includes:
step 11: acquiring an image data set from a network, preprocessing the image data set to obtain a first image set, and performing rough marking on the first image set, wherein the rough marking is used for indicating whether a first image in the first image set is a sensitive image;
step 12: acquiring a second image set, wherein a second image in the second image set has a rough label;
step 13: combining the first image set and the second image set into a training image set;
step 14: training the sensitive image recognition model by adopting a training image set to obtain a first sensitive image recognition model;
step 15: forming a third image set from the training images that are coarsely labeled as sensitive in the training image set, calculating a saliency map of the third image in the third image set, and determining a fine label of the third image according to the saliency map, wherein the fine label is used for indicating that the third image belongs to one of a plurality of preset sensitive types;
step 16: and training the first sensitive image recognition model by adopting the third image set to obtain a second sensitive image recognition model.
In the embodiment of the invention, labeled image data from existing similar tasks is used to assist the training of the sensitive image recognition model, which reduces the cost of obtaining a high-accuracy sensitive image recognition model; in addition, the embodiment of the invention uses saliency maps to finely label the training images whose coarse label is sensitive, and then uses the finely labeled image set for sensitive image recognition model training, so that a sensitive image recognition model with high recognition precision can be obtained.
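The data flow of steps 11 to 16 — combining the two coarsely labeled sets and then selecting the coarsely-sensitive subset for saliency-based fine labeling — can be sketched as follows. This is a minimal illustration; the set names, identifiers, and numeric label codes are hypothetical, not part of the claimed method.

```python
# Hypothetical label codes for the sketch (not specified by the patent).
COARSE = {"normal": 0, "sensitive": 1}
FINE = {"gambling": 0, "pornography": 1, "pyramid_scheme": 2, "terrorism": 3}

def build_training_set(first_set, second_set):
    """Step 13: combine the crawled, coarsely labeled set with the reused labeled set."""
    return list(first_set) + list(second_set)

def select_third_set(training_set):
    """Step 15 (selection part): keep only images coarsely labeled as sensitive."""
    return [item for item in training_set if item["coarse"] == COARSE["sensitive"]]

first = [{"id": "web_001", "coarse": 1}, {"id": "web_002", "coarse": 0}]
second = [{"id": "open_001", "coarse": 1}]
training = build_training_set(first, second)
third = select_third_set(training)  # these go on to saliency-based fine labeling
```

The third set is strictly a subset of the training set, which is why the second training stage (step 16) refines rather than replaces the first model.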
Referring to fig. 2, fig. 2 is a second flowchart illustrating a method for training a sensitive image recognition model according to an embodiment of the present invention, where the second image set includes: an image dataset 21 in an open source project; and/or an internal training image dataset 22.
"Open source" is a certification mark registered by the Open Source Initiative, a non-profit software organization in the United States, and is formally defined to describe software whose source code is available to the public and whose use, modification, and distribution are not restricted by license. An open source project does not belong to any particular organization or individual. Subject to the open source license, users can customize an open source product into a personalized product of their own by modifying its code. Specifically, the image dataset 21 in the open source project refers to an image dataset that the public can obtain openly and that is likewise unrestricted by license in its use, modification, and distribution.
The internal training image dataset 22 is an image dataset used by an enterprise, individual or public service group of the relevant industry to train a neural network model during project development, and is used or circulated within the enterprise, individual or public service group without being disclosed externally.
In the embodiment of the present invention, the image dataset 21 in the open source project and the internal training image dataset 22 are combined into the second image set 23, and the second image set 23 is labeled image data in an existing similar task, where the similar task is a task for sensitive image recognition. The first image set 24 and the second image set 23 are then combined into a training image set 25.
The arrangement is beneficial to training the sensitive image recognition model with high accuracy by using the labeled image data in the existing similar task under the conditions of less sample image quantity, short development time or manpower shortage, and the development cost of the sensitive image recognition model is reduced.
The image dataset 21 in the open source project may be the ImageNet image dataset. ImageNet is a large visual database for visual object recognition software research. More than 14 million image URLs have been manually annotated by ImageNet to indicate the objects in the pictures; bounding boxes are also provided for at least one million images. ImageNet contains more than 20,000 categories; a typical category, such as "balloon" or "strawberry", contains several hundred images. The annotation database of third-party image URLs is freely available directly from ImageNet.
In some embodiments of the present invention, optionally, the sensitive image recognition model employs a residual neural network model. The residual network (ResNet) is a convolutional neural network proposed by four researchers from Microsoft Research; it won the image classification and object detection tasks of the 2015 ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The residual network is easy to optimize and can improve accuracy by adding considerable depth.
The embodiment of the invention is favorable for improving the training accuracy and relieving the gradient disappearance problem caused by increasing the depth in the deep neural network by adopting the residual neural network model.
In some embodiments of the present invention, optionally, the sensitive image recognition model adopts an EfficientNet model. Referring to fig. 3, fig. 3 is a schematic diagram of the internal structure of the EfficientNet model. The EfficientNet network structure includes 1 SepConv (separable convolution) block, 12 3×3 MBConv6 convolution blocks, 2 5×5 MBConv3 convolution blocks, 2 5×5 MBConv6 convolution blocks, and 1 Softmax (normalized exponential function) classifier; the network further includes 8 SE (Squeeze-and-Excitation) modules. Of course, in other embodiments of the present invention, the structure of the sensitive image recognition model is not so limited.
In one embodiment, referring to fig. 4 to 6: fig. 4 is a schematic diagram of the internal structure of the MBConv3 convolution block, which includes 2 1×1 Conv blocks, 1 SE module, and 1 5×5 DWConv block; fig. 5 is a schematic diagram of the internal structure of the MBConv6 convolution block, which includes 2 1×1 Conv blocks and 1 3×3 DWConv block; fig. 6 is a schematic diagram of the internal structure of the SepConv convolution block, which includes 1 1×1 Conv block and 1 3×3 DWConv block.
By adopting the EfficientNet model, the embodiment of the invention helps balance the three dimensions of resolution, width, and depth, scaling the network to the best trade-off; its use of structures such as depthwise convolution, SE modules, BN (batch normalization) modules, and ResNet-style residual modules extracts richer features and improves the recognition rate.
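For context, the balance of resolution, width, and depth mentioned above comes from EfficientNet's compound scaling, in which a single coefficient phi scales all three dimensions under a fixed-FLOPs constraint. The coefficients below are the published EfficientNet-B0 values from the original EfficientNet paper, not values stated in this patent:

```python
# Published EfficientNet-B0 base coefficients (Tan & Le, 2019) -- from the
# EfficientNet paper, not from this patent.
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # depth, width, resolution multipliers

def compound_scale(phi):
    """Scale depth, width and resolution jointly by one coefficient phi."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

# The constraint alpha * beta^2 * gamma^2 ~= 2 keeps FLOPs roughly doubling
# for each unit increase of phi (FLOPs scale with depth * width^2 * resolution^2).
flops_factor = ALPHA * BETA ** 2 * GAMMA ** 2
```

With phi = 0 the base network is recovered unchanged; larger phi yields the deeper, wider, higher-resolution B1-B7 variants.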
In some embodiments of the present invention, optionally, referring to fig. 7, fig. 7 is a third flowchart of a method for training a sensitive image recognition model according to an embodiment of the present invention, where the preprocessing includes at least one of: data cleansing 71, data deduplication 72, data enhancement 73, and image segmentation 74.
Here: data cleaning 71 filters out image data with low resolution or mismatched content; data deduplication 72 deletes duplicate image data; data enhancement 73 increases the sample size of the image data; image segmentation 74 crops image data to reduce the computational load and improve training efficiency.
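As an illustration of the data deduplication step, exact duplicates can be dropped by hashing image bytes. This is a simple sketch; real pipelines often additionally use perceptual hashing to catch near-duplicates, which this helper does not attempt:

```python
import hashlib

def deduplicate(images):
    """Drop exact byte-level duplicates, keeping the first occurrence of each image."""
    seen, unique = set(), []
    for data in images:
        digest = hashlib.sha256(data).hexdigest()  # content fingerprint
        if digest not in seen:
            seen.add(digest)
            unique.append(data)
    return unique
```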
Image segmentation 74 can be implemented with U-Net. U-Net was published in 2015 and is a variant of the FCN (Fully Convolutional Network). Although U-Net was originally designed for biomedical image problems, its strong results have led to wide use across semantic segmentation tasks such as satellite image segmentation and industrial defect detection.
Referring to fig. 8, fig. 8 is a schematic diagram of an internal structure of U-Net, wherein:
1) The left side of the U-Net network is the feature extraction stage, also called the encoder. Its structure is similar to VGG: two 3×3 convolution operations extract features, each convolution is followed by a ReLU (Rectified Linear Unit) activation, and a 2×2 max pooling then reduces the size of the feature map. These operations are performed four times, so the feature map is reduced to 1/16 of its original side length.
2) The subsequent stage, as in the feature extraction stage, uses two 3×3 convolution operations for feature extraction, each followed by a ReLU activation.
3) In the image output stage, a Softmax activation function is applied for the multi-class sensitive image recognition task, and the segmentation map is output.
By using U-Net, the embodiment of the invention exploits the ability of skip connections to fully fuse high-level semantic information with low-level positional information, thereby improving the accuracy of image segmentation.
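The encoder arithmetic above can be checked directly: each 2×2 max-pooling step halves the feature-map side, and four such steps give a factor of 1/16 overall. A small sketch:

```python
def encoder_output_side(side, pools=4):
    """Side length after the U-Net encoder: each 2x2 max-pool halves it."""
    for _ in range(pools):
        side //= 2
    return side
```

For a 512×512 input the encoder bottom is 32×32; for 224×224 it is 14×14.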
In some embodiments of the invention, optionally, data enhancement 73 is used to perform at least one of the following operations on the image data in the image dataset: horizontal flipping, vertical flipping, rotation, horizontal translation, vertical translation, cropping, zoom in and zoom out, and color transformation.
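A few of these augmentation operations can be sketched on a plain nested-list "image" for illustration; production code would use an image library or tensor framework rather than these toy helpers:

```python
def horizontal_flip(img):
    """Mirror each row left-to-right."""
    return [row[::-1] for row in img]

def vertical_flip(img):
    """Reverse the order of the rows (top-to-bottom mirror)."""
    return img[::-1]

def rotate_90(img):
    """Rotate the image 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]
```

Each operation yields a new labeled sample from an existing one, which is how data enhancement 73 increases the sample size without collecting new images.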
In some embodiments of the present invention, optionally, referring to fig. 9, fig. 9 is a fourth flowchart of a training method for a sensitive image recognition model provided in an embodiment of the present invention, where the training method further includes:
step 91: extracting a sensitive area indicated by the saliency map on the third image as a sensitive area map, wherein the sensitive area is used for determining that the third image belongs to one of a plurality of preset sensitive types;
step 92: increasing the length value and/or the width value of the sensitive area map;
step 93: and forming a fourth image set by the sensitive area image set, performing fine labeling on a fourth image in the fourth image set, and training the second sensitive image recognition model by adopting the fourth image set to obtain a third sensitive image recognition model.
Saliency maps are a way to study the interpretability of convolutional networks, i.e., to reveal which regions or patterns of the input the model relies on to predict the current classification result. The main idea of the saliency map is as follows: given a training image I0 with true label c, input it into a deep convolutional neural network and let Sc(I) denote the output score for class c. The influence that changing each pixel of the original image I0 has on the score Sc(I0) reflects how important that pixel is to class c, so the relatively important pixels (regions) can be determined.
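The per-pixel influence on the class score described above can be approximated numerically by perturbing one pixel at a time. This finite-difference sketch is for illustration only; practical systems compute the gradient of Sc with respect to the input in a single backward pass instead of one forward pass per pixel:

```python
def saliency_map(score_fn, image, eps=1e-3):
    """Approximate |dSc/dI| per pixel of a 2-D image by finite differences."""
    base = score_fn(image)
    sal = []
    for i, row in enumerate(image):
        sal_row = []
        for j in range(len(row)):
            bumped = [r[:] for r in image]  # copy, then nudge one pixel
            bumped[i][j] += eps
            sal_row.append(abs(score_fn(bumped) - base) / eps)
        sal.append(sal_row)
    return sal

# Toy "network": a linear score, so the saliency equals the weights exactly.
score = lambda im: im[0][0] + 2 * im[1][1]
sal = saliency_map(score, [[0.0, 0.0], [0.0, 0.0]])
```

Pixels with large saliency values are the ones the score depends on most, and in the method above they delimit the sensitive region used for fine labeling.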
The preset plurality of sensitive types may include at least one of: gambling-related, pornography-related, pyramid-scheme-related, and terrorism-related. The sensitive area is the region of the third image from which, via the saliency map, the image can be judged to belong to one of these types; this region is extracted as the sensitive area map.
The length value and/or the width value of the sensitive area map are increased so that the enlarged sensitive area map keeps its length and width in proportion to those of the original, un-enlarged sensitive area map.
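The proportional enlargement of the sensitive region can be sketched as scaling its bounding box about the center and clamping to the image bounds. This is an illustrative helper; the (x0, y0, x1, y1) box format and the scale factor are assumptions, not specified by the patent:

```python
def enlarge_box(box, scale, bounds):
    """Scale a (x0, y0, x1, y1) box about its center, clamped to (W, H) bounds."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2        # box center stays fixed
    w, h = (x1 - x0) * scale, (y1 - y0) * scale  # both sides scaled equally
    W, H = bounds
    return (max(0, cx - w / 2), max(0, cy - h / 2),
            min(W, cx + w / 2), min(H, cy + h / 2))
```

Scaling both sides by the same factor preserves the aspect ratio, except where clamping at the image border truncates the box.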
Through the arrangement, the third sensitive image recognition model can realize fine-grained recognition of the image and distinguish whether the sensitive image belongs to one of a plurality of preset sensitive types.
Based on the same inventive concept, an embodiment of the present invention provides a training apparatus 100 for a sensitive image recognition model, referring to fig. 10, where fig. 10 is one of schematic structural diagrams of the training apparatus 100 provided in an embodiment of the present invention, and the training apparatus includes:
an acquisition module 101 for acquiring an image data set from a network;
the preprocessing module 102 is configured to preprocess the image data set to obtain a first image set;
the rough labeling module 103 is used for performing rough labeling on the first image set, wherein the rough labeling is used for indicating whether a first image in the first image set is a sensitive image;
a synthesis module 104, configured to obtain a second image set, where a second image in the second image set has a rough label, and combine the first image set and the second image set into a training image set;
the first fine labeling module 105 is configured to combine a training image set, which is coarsely labeled as a sensitive image, in the training image set into a third image set, calculate a saliency map of the third image in the third image set, determine a fine label of the third image according to the saliency map, where the fine label is used to indicate that the third image belongs to one of a plurality of preset sensitive types;
the first training module 106 is configured to train the sensitive image recognition model using the training image set, and train the sensitive image recognition model using the third image set.
In some embodiments of the present invention, optionally, the training device further includes:
an extracting module 107, configured to extract a sensitive region indicated by the saliency map on the third image as a sensitive region map, where the sensitive region is used to determine that the third image belongs to one of a plurality of preset sensitive types;
the amplifying module 108 is used for increasing the length value and/or the width value of the sensitive area map;
the second fine labeling module 109 is configured to combine the sensitive region image sets into a fourth image set, and perform fine labeling on a fourth image in the fourth image set;
and the second training module 1010 is used for training the sensitive image recognition model by adopting the fourth image set.
The training device for the sensitive image recognition model provided by the embodiment of the application can realize each process realized by the method embodiments of fig. 1 to 9, achieve the same technical effect, and is not repeated here to avoid repetition.
An embodiment of the present invention provides an electronic device 110, referring to fig. 11, where fig. 11 is a schematic structural diagram of the electronic device 110 provided in the embodiment of the present invention, the electronic device includes a processor 111, a memory 112, and a program or an instruction stored on the memory 112 and executable on the processor 111, and when the program or the instruction is executed by the processor, the step in the training method for the sensitive image recognition model of the present invention is implemented.
The embodiment of the present invention provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the method for training a sensitive image recognition model, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (8)
1. A training method of a sensitive image recognition model, characterized by comprising the following steps:
acquiring an image data set from a network, preprocessing the image data set to obtain a first image set, and performing rough marking on the first image set, wherein the rough marking is used for indicating whether a first image in the first image set is a sensitive image;
acquiring a second image set, wherein a second image in the second image set has a rough label;
combining the first and second image sets into a training image set;
training a sensitive image recognition model by adopting the training image set to obtain a first sensitive image recognition model;
forming a third image set by a training image set which is coarsely labeled as a sensitive image in the training image set, calculating a saliency map of the third image in the third image set, and determining a fine label of the third image according to the saliency map, wherein the fine label is used for indicating that the third image belongs to one of a plurality of preset sensitive types;
training the first sensitive image recognition model by adopting the third image set to obtain a second sensitive image recognition model;
extracting a sensitive area indicated by the saliency map on the third image as a sensitive area map, wherein the sensitive area is used for determining that the third image belongs to one of a plurality of preset sensitive types;
increasing a length value and/or a width value of the sensitive region map;
and forming a fourth image set by the sensitive region image set, carrying out the fine labeling on a fourth image in the fourth image set, and training the second sensitive image recognition model by adopting the fourth image set to obtain a third sensitive image recognition model.
2. The method for training the sensitive image recognition model according to claim 1, wherein: the sensitive image recognition model adopts a residual error neural network model.
3. The method for training the sensitive image recognition model according to claim 1, wherein: the sensitive image recognition model adopts an EfficientNet model.
4. The method for training the sensitive image recognition model according to claim 1, wherein: the pre-treatment comprises at least one of: data cleansing, data deduplication, data enhancement, and image segmentation.
5. The method for training the sensitive image recognition model according to claim 4, wherein: the data enhancement is for at least one of: horizontal flipping, vertical flipping, rotation, horizontal translation, vertical translation, cropping, zoom in and zoom out, and color transformation.
6. A training device for a sensitive image recognition model, characterized by comprising:
an acquisition module for acquiring an image dataset from a network;
the preprocessing module is used for preprocessing the image data set to obtain a first image set;
the rough labeling module is used for performing rough labeling on the first image set, and the rough labeling is used for indicating whether a first image in the first image set is a sensitive image or not;
the synthesis module is used for acquiring a second image set, wherein a second image in the second image set has a rough label, and combining the first image set and the second image set into a training image set;
the first fine labeling module is used for forming a third image set by a training image set which is coarsely labeled as a sensitive image in the training image set, calculating a saliency map of the third image in the third image set, and determining a fine label of the third image according to the saliency map, wherein the fine label is used for indicating that the third image belongs to one of a plurality of preset sensitive types;
the first training module is used for training a sensitive image recognition model using the training image set, and for training the sensitive image recognition model using the third image set;
the extraction module is used for extracting a sensitive area indicated by the saliency map on the third image as a sensitive area map, and the sensitive area is used for determining that the third image belongs to one of a plurality of preset sensitive types;
the amplifying module is used for increasing the length value and/or the width value of the sensitive area map;
the second fine labeling module is used for forming a fourth image set from the set of sensitive region maps and performing the fine labeling on each fourth image in the fourth image set;
and the second training module is used for training the sensitive image recognition model by adopting the fourth image set.
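The coarse-then-fine scheme that the modules above orchestrate can be sketched end to end. This stands in for the patent's pipeline only in shape: `TinyClassifier` is a toy softmax model replacing the ResNet/EfficientNet backbone, the features and labels are synthetic, and the fine labels are given directly rather than derived from saliency maps.

```python
import numpy as np

class TinyClassifier:
    """Minimal softmax classifier trained by gradient descent
    (a stand-in for the patent's recognition model)."""
    def __init__(self, dim, classes):
        self.w = np.zeros((dim, classes))

    def fit(self, X, y, lr=0.5, epochs=200):
        onehot = np.eye(self.w.shape[1])[y]
        for _ in range(epochs):
            logits = X @ self.w
            p = np.exp(logits - logits.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            self.w -= lr * X.T @ (p - onehot) / len(X)
        return self

    def predict(self, X):
        return (X @ self.w).argmax(axis=1)

# Stage 1: coarse binary labels (sensitive vs. benign) on all images.
X = np.array([[1.0, 1.0], [2.0, -1.0], [-1.0, 0.5], [-2.0, -0.5]])
coarse = np.array([1, 1, 0, 0])          # 1 = sensitive
stage1 = TinyClassifier(2, 2).fit(X, coarse)

# Stage 2: fine multi-class labels on the sensitive subset only.
mask = stage1.predict(X) == 1
fine = np.array([0, 1])                  # preset sensitive types
stage2 = TinyClassifier(2, 2).fit(X[mask], fine)
```

The design point the claims make is that cheap coarse labels cover the whole web-scraped set, while expensive fine labels are only needed for the (much smaller) sensitive subset.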
7. An electronic device, characterized by comprising a processor, a memory, and a program or instructions stored on the memory and executable on the processor, wherein the program or instructions, when executed by the processor, implement the steps of the method for training a sensitive image recognition model according to any one of claims 1 to 5.
8. A readable storage medium, characterized in that the readable storage medium stores a program or instructions which, when executed by a processor, implement the steps of the method for training a sensitive image recognition model according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111536956.XA CN113936195B (en) | 2021-12-16 | 2021-12-16 | Sensitive image recognition model training method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113936195A CN113936195A (en) | 2022-01-14 |
CN113936195B true CN113936195B (en) | 2022-02-25 |
Family
ID=79289116
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111536956.XA Active CN113936195B (en) | 2021-12-16 | 2021-12-16 | Sensitive image recognition model training method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113936195B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115035510B (en) * | 2022-08-11 | 2022-11-15 | 深圳前海环融联易信息科技服务有限公司 | Text recognition model training method, text recognition device, and medium |
CN116524327B (en) * | 2023-06-25 | 2023-08-25 | 云账户技术(天津)有限公司 | Training method and device of face recognition model, electronic equipment and storage medium |
CN117593596B (en) * | 2024-01-19 | 2024-04-16 | 四川封面传媒科技有限责任公司 | Sensitive information detection method, system, electronic equipment and medium |
CN118196539B (en) * | 2024-05-13 | 2024-08-20 | 云账户技术(天津)有限公司 | Training method, device, equipment and storage medium of sensitive image detection model |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107871314B (en) * | 2016-09-23 | 2022-02-18 | 商汤集团有限公司 | Sensitive image identification method and device |
CN107122806B (en) * | 2017-05-16 | 2019-12-31 | 北京京东尚科信息技术有限公司 | Sensitive image identification method and device |
CN107992764B (en) * | 2017-11-28 | 2021-07-23 | 国网河南省电力公司电力科学研究院 | Sensitive webpage identification and detection method and device |
CN109145979B (en) * | 2018-08-15 | 2022-06-21 | 上海嵩恒网络科技股份有限公司 | Sensitive image identification method and terminal system |
CN111104538A (en) * | 2019-12-06 | 2020-05-05 | 深圳久凌软件技术有限公司 | Fine-grained vehicle image retrieval method and device based on multi-scale constraint |
CN111680698A (en) * | 2020-04-21 | 2020-09-18 | 北京三快在线科技有限公司 | Image recognition method and device and training method and device of image recognition model |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113936195B (en) | Sensitive image recognition model training method and device and electronic equipment | |
CN110175613A (en) | Street view image semantic segmentation method based on Analysis On Multi-scale Features and codec models | |
CN109960742B (en) | Local information searching method and device | |
CN112818975B (en) | Text detection model training method and device, text detection method and device | |
CN107784288B (en) | Iterative positioning type face detection method based on deep neural network | |
CN115424282A (en) | Unstructured text table identification method and system | |
CN112381175A (en) | Circuit board identification and analysis method based on image processing | |
CN113033321A (en) | Training method of target pedestrian attribute identification model and pedestrian attribute identification method | |
CN117197763A (en) | Road crack detection method and system based on cross attention guide feature alignment network | |
CN112883926B (en) | Identification method and device for form medical images | |
CN111461121A (en) | Electric meter number identification method based on YOLOv3 network | |
CN112037239B (en) | Text guidance image segmentation method based on multi-level explicit relation selection | |
CN116597270A (en) | Road damage target detection method based on attention mechanism integrated learning network | |
CN116189162A (en) | Ship plate detection and identification method and device, electronic equipment and storage medium | |
CN104463091A (en) | Face image recognition method based on LGBP feature subvectors of image | |
CN112465821A (en) | Multi-scale pest image detection method based on boundary key point perception | |
CN116824274A (en) | Small sample fine granularity image classification method and system | |
CN115205877A (en) | Irregular typesetting invoice document layout prediction method and device and storage medium | |
CN111914863A (en) | Target detection method and device, terminal equipment and computer readable storage medium | |
CN116110066A (en) | Information extraction method, device and equipment of bill text and storage medium | |
CN111738088B (en) | Pedestrian distance prediction method based on monocular camera | |
Akhter et al. | Semantic segmentation of printed text from marathi document images using deep learning methods | |
WO2024092968A1 (en) | Pavement crack detection method, medium, and system | |
CN113971745B (en) | Method and device for identifying entry-exit check stamp based on deep neural network | |
CN118172787B (en) | Lightweight document layout analysis method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||