
CN113420801A - Network model generation method, device, terminal and storage medium - Google Patents


Info

Publication number: CN113420801A
Application number: CN202110662865.4A
Authority: CN (China)
Prior art keywords: network model, to be classified, sample, sample image, image group
Legal status: Pending (assumed status; not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 谭乔予, 谷湘煜, 彭志远, 余亚玲, 鲜开义
Current Assignee: Shenzhen Launch Digital Technology Co Ltd
Original Assignee: Shenzhen Launch Digital Technology Co Ltd
Application filed by Shenzhen Launch Digital Technology Co Ltd
Priority: CN202110662865.4A

Classifications

    • G PHYSICS; G06 COMPUTING; CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING; G06F18/00 Pattern recognition; G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G06F18/21 Design or setup of recognition systems or techniques; G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 Classification techniques
    • G06F18/25 Fusion techniques; G06F18/253 Fusion techniques of extracted features

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the field of computers, and provides a network model generation method, apparatus, terminal and storage medium. The method for generating the network model comprises the following steps: acquiring a plurality of sample image groups, each comprising a first sample image and a second sample image; training a network model to be trained with the plurality of sample image groups to obtain an initial network model that accurately outputs a first classification confidence corresponding to each sample image group; and reconstructing an output layer of the initial network model to obtain a target network model. The target network model processes an input first image group to be classified and outputs a first change detection thermodynamic diagram and a second classification confidence corresponding to that image group. Embodiments of the application can improve the generation efficiency of end-to-end models.

Description

Network model generation method, device, terminal and storage medium
Technical Field
The present application belongs to the field of computers, and in particular, to a method, an apparatus, a terminal, and a storage medium for generating a network model.
Background
Processing images with a network model is highly practical. When two images need to be classified and identified, a network model can process the two input images to obtain a classification result for the pair.
At present, training an end-to-end network model is a complex process. A network model generation method is therefore needed that yields an end-to-end network model while reducing the complexity of the training process.
Disclosure of Invention
The embodiment of the application provides a method, a device, a terminal and a storage medium for generating a network model, which can reduce the complexity of an end-to-end network model training process while obtaining the end-to-end network model.
A first aspect of an embodiment of the present application provides a method for generating a network model, including:
acquiring a plurality of sample image groups, wherein each sample image group comprises a first sample image and a second sample image;
training a network model to be trained by utilizing the plurality of sample image groups to obtain an initial network model capable of accurately outputting a first classification confidence corresponding to each sample image group; the first classification confidence represents the similarity or difference degree between the first sample image and the second sample image in the corresponding sample image group;
reconstructing an output layer of the initial network model to obtain a target network model; the target network model is used for processing an input first image group to be classified and outputting a first change detection thermodynamic diagram corresponding to the first image group to be classified and a second classification confidence corresponding to the first image group to be classified.
A second aspect of the embodiments of the present application provides a method for identifying a device state, including:
acquiring a first image group to be classified, wherein the first image group to be classified comprises a reference image and an image to be identified of equipment;
inputting the first image group to be classified into an equipment state recognition network model, and acquiring a first change detection thermodynamic diagram corresponding to the first image group to be classified output by the equipment state recognition network model and a second classification confidence corresponding to the first image group to be classified; wherein, the device state identification network model is the target network model provided in the first aspect of the embodiment of the present application;
and determining the target equipment state of the equipment according to the first change detection thermodynamic diagram and/or the second classification confidence level and the reference equipment state corresponding to the reference image.
A third aspect of the embodiments of the present application provides an apparatus for generating a network model, including:
a first acquisition unit configured to acquire a plurality of sample image groups, each sample image group including a first sample image and a second sample image;
the network model training unit is used for training a network model to be trained by utilizing the plurality of sample image groups to obtain an initial network model capable of accurately outputting a first classification confidence coefficient corresponding to each sample image group; the first classification confidence represents the similarity or difference degree between the first sample image and the second sample image in the corresponding sample image group;
the network model generating unit is used for reconstructing an output layer of the initial network model to obtain a target network model; the target network model is used for processing the input first image group to be classified and outputting a first change detection thermodynamic diagram corresponding to the first image group to be classified and a second classification confidence coefficient corresponding to the first image group to be classified.
A fourth aspect of the present application provides an apparatus for identifying a device status, including:
the second acquisition unit is used for acquiring a first image group to be classified, wherein the first image group to be classified comprises a reference image and an image to be identified of equipment;
a third obtaining unit, configured to input the first image group to be classified into an equipment state identification network model, and obtain a first change detection thermodynamic diagram corresponding to the first image group to be classified output by the equipment state identification network model and a second classification confidence corresponding to the first image group to be classified; the device state identification network model is a target network model provided by the first aspect of the embodiment of the application;
and the device state identification unit is used for determining the target device state of the device according to the first change detection thermodynamic diagram and/or the second classification confidence level and the reference device state corresponding to the reference image.
A fifth aspect of the embodiments of the present application provides a terminal, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the method when executing the computer program.
A sixth aspect of embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and the computer program, when executed by a processor, implements the steps of the above method.
A seventh aspect of embodiments of the present application provides a computer program product, which when run on a terminal, causes the terminal to perform the steps of the method.
In the implementations of the present application, a non-end-to-end network model to be trained is trained to obtain a trained, non-end-to-end initial network model, and the output layer of that initial network model is then reconstructed to obtain a trained end-to-end target network model. Because only the simpler non-end-to-end model is trained directly, the end-to-end model is obtained with reduced training complexity.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic implementation flow diagram of a method for generating a network model according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a specific implementation of step S101 provided in an embodiment of the present application;
fig. 3 is a schematic flowchart of a specific implementation of step S102 according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a first specific implementation of generating a second change detection thermodynamic diagram according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a second specific implementation of generating a second change detection thermodynamic diagram according to an embodiment of the present application;
FIG. 6 is a schematic structural diagram of an initial network model provided by an embodiment of the present application;
fig. 7 is a schematic implementation flow chart of a method for identifying a device status according to an embodiment of the present application;
FIG. 8 is a schematic diagram illustrating an effect of identifying a device status according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of an apparatus for generating a network model according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of an apparatus for identifying a device status according to an embodiment of the present application;
fig. 11 is a schematic structural diagram of a terminal according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present application without making any creative effort, shall be protected by the present application.
Processing images with a network model is highly practical. When two images need to be classified and identified, a network model can process the two input images to obtain a classification result for the pair.
For example, in an industrial scene inspection process, it is often necessary to determine whether a state change occurs in the state of a certain device or area according to the recorded normal state of the device or area. At this time, the network model may be used to process the input image to be recognized and the reference image, and output the similar information or the difference information between the two images, thereby obtaining the device status of the device.
The training process of current end-to-end network models is highly complex, so the present application provides a network model generation method that reduces the complexity of the training process while still obtaining an end-to-end network model.
In some embodiments of the present application, the end-to-end network model obtained based on the above generation method may be applied in a patrol scenario. The generated network model is used for processing the input image to be recognized and the reference image, the terminal can acquire the change detection thermodynamic diagram and the classification confidence coefficient between the two images output by the generated network model, and then recognition of the equipment state is achieved according to the change detection thermodynamic diagram and the classification confidence coefficient.
In order to explain the technical means of the present application, the following description will be given by way of specific examples.
Fig. 1 shows a schematic implementation flow diagram of a method for generating a network model according to an embodiment of the present application. The method may be applied on a terminal, in situations where an end-to-end network model needs to be generated while keeping the complexity of training low. The terminal may be a server, a computer, or a similar device.
Specifically, the method for generating the network model may include the following steps S101 to S103.
In step S101, a plurality of sample image groups are acquired.
In an embodiment of the application, the sample image groups are used for training a network model to be trained, and each sample image group includes one first sample image and one second sample image.
Existing network model training approaches generally require balanced positive and negative samples; when the trained network model is applied to device state recognition, the positive and negative samples correspond, respectively, to sample images of the normal device state and of the abnormal device state. In practice, however, images of the abnormal state can be captured only under specific conditions, which means that often only images corresponding to the normal device state are available during sample collection.
Based on this, in some embodiments of the present application, as shown in fig. 2, the above-mentioned acquisition of the sample image group may include the following steps S201 to S203.
In step S201, first sample images obtained by photographing a target object from a plurality of point locations are acquired.
The target object can be adjusted according to the specific application scene of the network model after training is completed. For example, when the network model is applied to device state recognition after training is completed, the target object may be a device whose device state needs to be recognized.
In some embodiments of the present application, different first sample images can be obtained by photographing the target object at different point locations, and the image contents of the first sample images corresponding to different point locations differ from each other.
Step S202, noise enhancement is carried out on the first sample image corresponding to each point location in the plurality of point locations respectively, and a second sample image corresponding to each point location in the plurality of point locations respectively is obtained.
Specifically, noise enhancement refers to introducing interference factors such as nonlinear light, noise, and light-and-shade changes to simulate environmental variation during image acquisition, while ensuring that the actual content of the image is not modified, so that the enhanced second sample image contains no region of interest that has changed relative to the corresponding first sample image.
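As a concrete illustration, the following is a minimal sketch of such noise enhancement in Python with NumPy; the specific transforms and parameter ranges are assumptions, since the patent does not prescribe them:

```python
import numpy as np

def noise_enhance(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Simulate environmental change (nonlinear light, noise, light-and-shade
    variation) without altering the image content, so that the enhanced copy
    contains no changed region of interest relative to the original.
    The transform choices and ranges below are illustrative assumptions."""
    img = image.astype(np.float32)
    # Nonlinear light: gamma correction with a random exponent.
    gamma = rng.uniform(0.7, 1.4)
    img = 255.0 * (img / 255.0) ** gamma
    # Sensor noise: additive Gaussian noise.
    img = img + rng.normal(0.0, 5.0, size=img.shape)
    # Light-and-shade change: global brightness offset.
    img = img + rng.uniform(-20.0, 20.0)
    return np.clip(img, 0.0, 255.0).astype(np.uint8)
```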
Step S203, two first target sample images are selected from the first sample image and the second sample image to be used as a positive sample image group, two second target sample images are selected from the first sample image and the second sample image to be used as a negative sample image group, and a plurality of sample image groups are obtained.
The point locations corresponding to the two first target sample images are the same, and the point locations corresponding to the two second target sample images are different.
That is to say, in the present application, any two sample images selected from the first sample image and second sample images of the same point location form a positive sample image group; an image (first or second sample image) from one point location paired with an image (first or second sample image) from a different point location forms a negative sample image group. By analogy, a plurality of positive sample image groups and a plurality of negative sample image groups can be obtained, and each of them is taken as a sample image group, finally yielding a plurality of sample image groups; a sketch of this pairing rule is given below.
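The following sketch is our own illustration; the data layout and the label convention (0 for positive groups, 1 for negative groups, to match the classification confidence defined later) are assumptions:

```python
import itertools
import random

def build_sample_groups(images_by_point: dict, num_negative: int) -> list:
    """images_by_point maps a point location id to the list of its sample
    images (the first sample image plus its noise-enhanced second sample
    images). Returns (image_a, image_b, label) triples, where label 0 marks
    a positive sample image group (same point location) and label 1 a
    negative sample image group (different point locations)."""
    groups = []
    # Positive groups: any two sample images from the same point location.
    for imgs in images_by_point.values():
        for a, b in itertools.combinations(imgs, 2):
            groups.append((a, b, 0))
    # Negative groups: one sample image from each of two different point
    # locations; num_negative is chosen to keep the proportions balanced.
    points = list(images_by_point)
    for _ in range(num_negative):
        p, q = random.sample(points, 2)
        groups.append((random.choice(images_by_point[p]),
                       random.choice(images_by_point[q]), 1))
    random.shuffle(groups)
    return groups
```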
In order to give the finally trained network model stronger generality, in some embodiments of the present application the number of point locations should be as large as possible, as should the number of distinct sample image groups composed from them; at the same time, the proportions of positive and negative sample image groups among the obtained sample image groups should be kept relatively balanced.
In the embodiments of the present application, sample images corresponding to the same point location form a positive sample image group, and sample images corresponding to different point locations form a negative sample image group. On the one hand, this means that in practical applications there is no need to specially acquire images of the corresponding states; for example, when the method provided by the present application is applied to device state detection, there is no need to photograph devices in abnormal states, which makes sample collection more convenient and the method more practical. It also means that positive and negative samples can be determined without manually labeling the images. On the other hand, in practical applications, images captured at different point locations can be accurately distinguished; when the method is applied to inspection, this ensures consistency between the point locations of the image to be identified and of the reference image.
Moreover, by means of noise enhancement, interference factors such as nonlinear illumination change can be prevented from influencing the accuracy of the trained network model, and the accuracy and universality of the network model are improved.
Step S102, training the network model to be trained by utilizing a plurality of sample image groups to obtain an initial network model capable of accurately outputting a first classification confidence corresponding to each sample image group.
Wherein the classification confidence represents the degree of similarity or difference between the two images. The first classification confidence is the classification confidence between the first sample image and the second sample image output by the initial network model.
That is, in the embodiment of the present application, based on a plurality of sample image groups, a network model to be trained may be trained, so that the network model to be trained converges, and an initial network model is obtained, where the initial network model may output an accurate first classification confidence between a first sample image and a second sample image based on the first sample image and the second sample image included in a single sample image group.
Specifically, in some embodiments of the present application, as shown in fig. 3, the training of the network model to be trained may include the following steps S301 to S303.
Step S301, selecting a target training sample from the sample image set.
Each target training sample is associated with a sample positivity or negativity, which identifies whether the target training sample belongs to a positive sample image group or to a negative sample image group.
Step S302, each target training sample is respectively input into the network model to be trained, a third classification confidence corresponding to the target training sample output by the network model to be trained is obtained, and the accuracy of the network model to be trained is counted according to the third classification confidence corresponding to each target training sample and the positive and negative of the sample associated with each target training sample.
In some embodiments of the present application, the third classification confidence may be the confidence value output by a binary classifier: a third classification confidence of 0 indicates that the two images in the target training sample are the same, and a third classification confidence of 1 indicates that they are not the same.
Specifically, in some embodiments of the present application, a first target training sample may be input into the network model to be trained and the third classification confidence output for it obtained; whether that output is accurate can then be determined from the sample positivity or negativity associated with the first target training sample together with its third classification confidence. That is, the output is accurate when the target training sample belongs to a positive sample image group and the third classification confidence is 0, or when it belongs to a negative sample image group and the third classification confidence is 1. Conversely, the output is inaccurate when the sample belongs to a positive sample image group and the confidence is 1, or belongs to a negative sample image group and the confidence is 0. By analogy, after every target training sample has been processed, the accuracy of the network model to be trained can be obtained by statistics.
Step S303, if the accuracy of the network model to be trained is smaller than the accuracy threshold, adjusting parameters in the network model to be trained, and re-executing the step of selecting the target training sample from the sample image set and the subsequent steps until the accuracy of the network model to be trained is larger than or equal to the accuracy threshold, so as to obtain an initial network model.
The accuracy threshold is the lowest accuracy required by the network model when the network model to be trained can output accurate classification confidence.
In some embodiments of the present application, an accuracy below the accuracy threshold indicates that the network model to be trained has not yet converged and cannot output accurate classification confidences; the parameters of the network model are then adjusted, and the step of selecting target training samples from the sample image set and the subsequent steps are re-executed. Once the accuracy reaches or exceeds the accuracy threshold, the network model to be trained has converged and can output accurate classification confidences, and the initial network model is thereby obtained. A sketch of this loop is given below.
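The following PyTorch sketch renders this loop under our own assumptions; the patent specifies neither the loss, the optimizer, nor the threshold value:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

def train_to_threshold(model: nn.Module, loader: DataLoader,
                       accuracy_threshold: float = 0.95,
                       max_epochs: int = 100) -> nn.Module:
    """Steps S301-S303: train until the fraction of target training samples
    whose predicted confidence matches their sample positivity/negativity
    reaches the accuracy threshold. Loss and optimizer are assumptions."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.BCELoss()   # confidence in [0, 1]: 0 = same, 1 = different

    for _ in range(max_epochs):
        correct, total = 0, 0
        for image_a, image_b, label in loader:   # label: 0 positive, 1 negative
            confidence = model(image_a, image_b)  # third classification confidence
            loss = criterion(confidence, label.float())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()   # "adjust parameters in the network model"
            correct += ((confidence > 0.5).long() == label).sum().item()
            total += label.numel()
        if correct / total >= accuracy_threshold:
            break   # converged: the model outputs accurate confidences
    return model   # the initial network model
```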
In the embodiment of the application, the initial network model capable of accurately outputting the first classification confidence corresponding to each sample image group can be obtained by training the network model to be trained by using the target training sample and the sample positivity and negativity of the target training sample.
And step S103, reconstructing an output layer of the initial network model to obtain a target network model.
The target network model is used for processing the input first image group to be classified and outputting a first change detection thermodynamic diagram corresponding to the first image group to be classified and a second classification confidence coefficient corresponding to the first image group to be classified.
The first change detection thermodynamic diagram (a heat map highlighting changed regions) is used to represent the difference between the two images in the first image group to be classified, and the second classification confidence is the classification confidence between those two images output by the target network model.
In the implementation mode of the application, the non-end-to-end network model to be trained is trained to obtain a trained non-end-to-end initial network model, and then the output layer of the initial network model is reconstructed to obtain a trained end-to-end target network model.
Specifically, in some embodiments of the present application, the initial network model includes a change detection thermodynamic diagram generation layer and an output layer.
The change detection thermodynamic diagram generation layer is used for processing an input second image group to be classified and outputting a second change detection thermodynamic diagram corresponding to the second image group to be classified; and the output layer is used for classifying according to the second change detection thermodynamic diagram to obtain a first classification confidence corresponding to the second image group to be classified.
Specifically, in some embodiments of the present application, as shown in fig. 4, the processing performed by the above-described change detection thermodynamic diagram generation layer includes the following steps S401 to S404.
Step S401, respectively performing feature extraction on two images to be classified included in the second image group to be classified, to obtain a plurality of first feature maps corresponding to the two images to be classified.
The plurality of first feature maps corresponding to the same image to be classified have different sizes.
Specifically, in some embodiments of the present application, assuming the two images to be classified are a first image to be classified and a second image to be classified, a feature extraction network may be used to perform feature extraction at different scales on the first image to be classified, obtaining first feature maps of different sizes corresponding to it; the same feature extraction network is then used to perform feature extraction at different scales on the second image to be classified, obtaining first feature maps of different sizes corresponding to it.
Step S402, a plurality of first feature maps respectively corresponding to two images to be classified are respectively subjected to up-sampling, and a second feature map corresponding to each first feature map is obtained.
The second feature maps corresponding to the individual first feature maps all have the same size.
In some embodiments of the present application, in order to fuse feature maps of respective scales, a second feature map with the same size corresponding to each first feature map may be obtained by means of upsampling.
And S403, splicing the second feature maps corresponding to the same image to be classified in the direction of the feature channel to obtain a third feature map.
In other words, in the channel direction, the second feature maps corresponding to the first image to be classified may be spliced to obtain a third feature map containing the scale features corresponding to the first image to be classified; similarly, the second feature maps corresponding to the second image to be classified may be spliced in the channel direction to obtain a third feature map containing the scale features corresponding to the second image to be classified.
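To make steps S401 to S403 concrete, here is a sketch of a multi-scale extractor for one image; the backbone, channel counts, and the choice of bilinear upsampling are all our assumptions. The same shared-weight module is applied to both images to be classified:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleExtractor(nn.Module):
    """Steps S401-S403 for one image: extract first feature maps at several
    scales, upsample each to a common size (the second feature maps), and
    splice them in the channel direction into the third feature map. The
    three-stage backbone below is an illustrative assumption."""

    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, 2, 1), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f1 = self.stage1(x)            # first feature maps, decreasing sizes
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        size = x.shape[-2:]            # common target size for upsampling
        seconds = [F.interpolate(f, size=size, mode="bilinear",
                                 align_corners=False) for f in (f1, f2, f3)]
        return torch.cat(seconds, dim=1)   # third feature map: 224 channels
```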
And S404, generating a second change detection thermodynamic diagram corresponding to the second image group to be classified according to the third feature diagrams respectively corresponding to the two images to be classified.
Specifically, as shown in fig. 5, in some embodiments of the present application, the step S404 may include the following steps S501 to S503.
And S501, performing difference on the third feature maps corresponding to the two images to be classified respectively to obtain a difference map between the third feature maps corresponding to the two images to be classified respectively.
In this case, the difference map may substantially reflect the difference between the two images to be classified in the corresponding regions.
In step S502, different weights are assigned to each feature channel in the difference map by using an attention mechanism.
In some embodiments of the present application, attention mechanisms may be utilized to assign higher weights to feature channels of interest in the difference map and lower weights to feature channels of no interest in the difference map.
And S503, fusing the difference image in the direction of the characteristic channel according to the weight of each characteristic channel in the difference image to obtain a difference fusion image, and normalizing the difference fusion image to obtain a second change detection thermodynamic diagram corresponding to the second image group to be classified.
Specifically, in some embodiments of the present application, a sum of squares may be computed in the channel direction, and the result then normalized by image normalization and a sigmoid activation function to form a second change detection thermodynamic diagram with the same size as the images to be classified. In the resulting diagram, the closer a pixel region's value is to 1, the greater the degree of difference between the two images to be classified in that region; the closer it is to 0, the smaller the difference.
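A sketch of steps S501 to S503 under the same assumptions follows; the patent names only "an attention mechanism", so a squeeze-and-excitation-style channel attention is assumed here:

```python
import torch
import torch.nn as nn

class ChangeHeatmapLayer(nn.Module):
    """Steps S501-S503: difference of the two third feature maps, channel
    attention weighting, squared-sum fusion in the channel direction, image
    normalization, and sigmoid activation into the change detection
    thermodynamic diagram."""

    def __init__(self, channels: int = 224):
        super().__init__()
        # Channel attention (squeeze-and-excitation style; an assumption).
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 8, 1), nn.ReLU(),
            nn.Conv2d(channels // 8, channels, 1), nn.Sigmoid(),
        )

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        diff = feat_a - feat_b                        # S501: difference map
        diff = diff * self.attention(diff)            # S502: per-channel weights
        fused = (diff ** 2).sum(dim=1, keepdim=True)  # S503: squared-sum fusion
        fused = (fused - fused.mean(dim=(2, 3), keepdim=True)) / (
            fused.std(dim=(2, 3), keepdim=True) + 1e-6)   # image normalization
        return torch.sigmoid(fused)                   # values in (0, 1)
```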
In the embodiment of the application, a second change detection thermodynamic diagram for representing the difference degree or the similarity degree between two images to be classified can be generated by performing feature extraction, feature diagram splicing, difference making, fusion and normalization on the two images to be classified.
Consider that if the two images in the second image group to be classified originate from the same point location, they are highly similar and the pixel values of the second change detection thermodynamic diagram approach 0; conversely, if they originate from different point locations, two real captures are very unlikely to coincide, so some pixel values of the second change detection thermodynamic diagram tend toward 1. Therefore, in some embodiments of the present application, the change detection thermodynamic diagram may be flattened and connected in series with a value-based binary classifier, so that the output layer of the network model can classify the second change detection thermodynamic diagram and obtain the first classification confidence corresponding to the second image group to be classified. This is equivalent to introducing a signal that can be used for supervised learning while directly outputting the similarity of the two images in the second image group to be classified.
FIG. 6 illustrates the structure of an initial network model provided in an embodiment of the present application. Images A and B correspond to two images, for example the first sample image and second sample image of a sample image group; after feature extraction, feature map splicing, differencing, fusion and normalization, a second change detection thermodynamic diagram representing the degree of difference or similarity between the two images is generated. After flattening and classification by the binary classifier, the first classification confidence between the two images is obtained.
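Combining the sketches above, the following shows, again as our own assumption-laden composition rather than the patent's exact implementation, both the initial network model of fig. 6 and the output-layer reconstruction of step S103, after which the target network model exposes the heat map as well as the confidence:

```python
import torch
import torch.nn as nn

class InitialNetworkModel(nn.Module):
    """Fig. 6: shared-weight extractor -> change detection thermodynamic
    diagram -> flatten -> binary classifier (the output layer) producing
    the first classification confidence. heatmap_pixels = H * W of the
    input images (an assumption: fixed input size)."""

    def __init__(self, extractor: nn.Module, heat_layer: nn.Module,
                 heatmap_pixels: int):
        super().__init__()
        self.extractor, self.heat_layer = extractor, heat_layer
        self.output_layer = nn.Sequential(
            nn.Flatten(), nn.Linear(heatmap_pixels, 1), nn.Sigmoid())

    def heatmap(self, image_a: torch.Tensor, image_b: torch.Tensor):
        return self.heat_layer(self.extractor(image_a), self.extractor(image_b))

    def forward(self, image_a, image_b):
        return self.output_layer(self.heatmap(image_a, image_b)).squeeze(1)

def reconstruct_output_layer(model: InitialNetworkModel):
    """Step S103: rebuild the output so the target network model returns
    both the change detection thermodynamic diagram and the confidence."""
    def target_network_model(image_a, image_b):
        hm = model.heatmap(image_a, image_b)
        return hm, model.output_layer(hm).squeeze(1)
    return target_network_model
```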
In some embodiments of the present application, the target network model described above may be applied in the identification of the device state.
Specifically, fig. 7 shows a schematic implementation flow chart of the method for identifying the device state provided in the embodiment of the present application, where the method may be applied to a terminal and may be applied to a situation where the device state needs to be efficiently identified. Wherein, the terminal can be a computer, a server, a patrol robot and other equipment.
Specifically, the method for identifying the device status may include the following steps S701 to S703.
In step S701, a first image group to be classified is acquired.
In an embodiment of the present application, the first to-be-classified image group includes a reference image and an image to be recognized of a device.
The image to be identified may be an image of the device captured by the inspection robot during inspection; the reference image may be a pre-stored image of the device photographed in its normal state.
In some embodiments of the present application, because the inspection image often has a position offset relative to the reference image, the terminal may perform image registration between the image to be recognized and the reference image after acquiring the image to be recognized, which improves recognition accuracy.
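The patent does not specify a registration method; a common choice, shown here purely as an assumption, is feature-based registration with OpenCV (ORB keypoints plus a RANSAC homography):

```python
import cv2
import numpy as np

def register_to_reference(image: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Warp the image to be identified onto the reference image so that the
    same device regions line up before the pair enters the model."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(image, None)
    kp2, des2 = orb.detectAndCompute(reference, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    homography, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = reference.shape[:2]
    return cv2.warpPerspective(image, homography, (w, h))
```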
Step S702, inputting the first image group to be classified into the device state identification network model, and acquiring a first change detection thermodynamic diagram corresponding to the first image group to be classified output by the device state identification network model and a second classification confidence corresponding to the first image group to be classified.
The device state identification network model is a target network model shown in fig. 1 to 6.
That is, in some embodiments of the present application, the device state recognition network model may be generated based on the network model generation method shown in fig. 1 to 6, then, the first group of images to be classified is input into the trained device state recognition network model, and the first change detection thermodynamic diagram corresponding to the first group of images to be classified output by the device state recognition network model and the second classification confidence corresponding to the first group of images to be classified are obtained.
The first change detection thermodynamic diagram and the second classification confidence level may both represent the degree of similarity or the degree of difference between the reference image and the image to be recognized of the device.
Step S703, determining a target device state of the device according to the first change detection thermodynamic diagram and/or the second classification confidence level, and the reference device state corresponding to the reference image.
Specifically, in some embodiments of the present application, if the second classification confidence is 0, the image to be recognized and the reference image are similar images, and it can be determined that the target device state of the device in the image to be recognized is the same as the reference device state corresponding to the reference image. If the second classification confidence is 1, the image to be recognized and the reference image are dissimilar, and it can be determined that the target device state differs from the reference device state.
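In practice the confidence produced by a sigmoid is continuous rather than exactly 0 or 1, so a decision threshold is needed; the following sketch (threshold value assumed) illustrates the decision:

```python
from typing import Optional, Tuple

def determine_device_state(confidence: float, reference_state: str,
                           threshold: float = 0.5) -> Tuple[bool, Optional[str]]:
    """Second classification confidence near 0: the pair is similar, so the
    target device state equals the reference device state. Near 1: the
    state has changed (the model does not name the new state)."""
    unchanged = confidence < threshold
    return unchanged, reference_state if unchanged else None
```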
Likewise, since the image content of the first change detection thermodynamic diagram represents the difference between the image to be recognized and the reference image, whether the image to be recognized and the reference image are similar can also be determined based on the first change detection thermodynamic diagram, so that the target device state of the device in the image to be recognized is determined according to the reference device state corresponding to the reference image.
The first change detection thermodynamic diagram intuitively shows the differences between the image to be recognized and the reference image, and further image processing on top of it can recover more information; for example, the locations at which the image to be recognized differs from the reference image can be determined from the first change detection thermodynamic diagram, which makes the approach highly practical.
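As one illustration (ours, not the patent's) of such further processing, changed regions can be localized by thresholding the heat map and extracting connected components:

```python
import cv2
import numpy as np

def changed_regions(heatmap: np.ndarray, thresh: float = 0.8,
                    min_area: int = 20) -> list:
    """Return bounding boxes (x, y, width, height, area) of regions where
    the change detection thermodynamic diagram is close to 1, i.e. where
    the image to be recognized differs from the reference image."""
    mask = (heatmap >= thresh).astype(np.uint8)
    num, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; keep components above a minimum area.
    return [tuple(stats[i]) for i in range(1, num) if stats[i][4] >= min_area]
```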
FIG. 8 shows the effect of device status recognition using the device status recognition method of the present application; wherein 801 is a reference image, 802 is an image to be recognized, 803 is the reference image after image registration, 804 is the image to be recognized after image registration, and 805 is a first change detection thermodynamic diagram output by the device state recognition network model.
In the embodiment of the application, the device state recognition network model is generated according to the generation method of the network model shown in fig. 1 to 6, and the first change detection thermodynamic diagram and the second classification confidence coefficient between the reference image and the image to be recognized are output based on the device state recognition network model, so that the target device state of the device is determined according to the first change detection thermodynamic diagram and/or the second classification confidence coefficient and the reference device state corresponding to the reference image, and the practicability, the accuracy and the recognition efficiency of the device state recognition are improved.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts, as some steps may, in accordance with the present application, occur in other orders.
Fig. 9 is a schematic structural diagram of a network model generation apparatus 900 according to an embodiment of the present application, where the network model generation apparatus 900 is configured on a terminal.
Specifically, the network model generating apparatus 900 may include:
a first acquiring unit 901 configured to acquire a plurality of sample image groups, each sample image group including a first sample image and a second sample image;
a network model training unit 902, configured to train a network model to be trained by using the multiple sample image groups, so as to obtain an initial network model that can accurately output a first classification confidence corresponding to each sample image group; the first classification confidence represents the similarity or difference degree between the first sample image and the second sample image in the corresponding sample image group;
a network model generating unit 903, configured to reconstruct an output layer of the initial network model to obtain a target network model; the target network model is used for processing the input first image group to be classified and outputting a first change detection thermodynamic diagram corresponding to the first image group to be classified and a second classification confidence coefficient corresponding to the first image group to be classified.
In some embodiments of the present application, the first obtaining unit 901 may be further specifically configured to: acquiring a first sample image obtained by shooting a target object from a plurality of point positions; performing noise enhancement on the first sample image corresponding to each of the plurality of point locations to obtain a second sample image corresponding to each of the plurality of point locations; selecting two first target sample images from the first sample image and the second sample image as a positive sample image group, and selecting two second target sample images from the first sample image and the second sample image as a negative sample image group to obtain the multiple sample image groups, wherein the point locations corresponding to the two first target sample images are the same, and the point locations corresponding to the two second target sample images are different.
In some embodiments of the present application, the network model training unit 902 may be further specifically configured to: selecting target training samples from the sample image set, wherein each target training sample is associated with sample positivity and negativity, and the sample positivity and negativity are used for identifying that the target training samples belong to a positive sample image group or that the target training samples belong to a negative sample image group; respectively inputting each target training sample into the network model to be trained, obtaining a third classification confidence corresponding to the target training sample output by the network model to be trained, and counting the accuracy of the network model to be trained according to the third classification confidence corresponding to each target training sample and the positive and negative of the sample associated with each target training sample; if the accuracy of the network model to be trained is smaller than the accuracy threshold, adjusting parameters in the network model to be trained, and re-executing the step of selecting the target training sample from the sample image set and the subsequent steps until the accuracy of the network model to be trained is larger than or equal to the accuracy threshold, so as to obtain an initial network model.
In some embodiments of the present application, the initial network model comprises a change detection thermodynamic diagram generation layer and an output layer; the change detection thermodynamic diagram generation layer is used for processing an input second image group to be classified and outputting a second change detection thermodynamic diagram corresponding to the second image group to be classified; and the output layer is used for classifying the second change detection thermodynamic diagram to obtain a first classification confidence corresponding to the second image group to be classified.
In some embodiments of the present application, the above-described change detection thermodynamic diagram generation layer is further specifically configured to: respectively performing feature extraction on two images to be classified contained in the second image group to be classified to obtain a plurality of first feature maps corresponding to the two images to be classified, wherein the sizes of the plurality of first feature maps corresponding to the same image to be classified are different; respectively performing upsampling on a plurality of first feature maps respectively corresponding to two images to be classified to obtain a second feature map corresponding to each first feature map; splicing the second feature maps corresponding to the same image to be classified in the direction of a feature channel to obtain a third feature map; and generating a second change detection thermodynamic diagram corresponding to the second image group to be classified according to the third feature diagrams respectively corresponding to the two images to be classified.
In some embodiments of the present application, the above-described change detection thermodynamic diagram generation layer is further specifically configured to: performing difference on the third feature maps corresponding to the two images to be classified respectively to obtain a difference map between the third feature maps corresponding to the two images to be classified respectively; assigning different weights to each feature channel in the difference map by using an attention mechanism; and according to the weight of each feature channel in the difference map, fusing the difference map in the direction of the feature channel to obtain a difference fusion map, and performing normalization processing on the difference fusion map to obtain a second change detection thermodynamic map corresponding to the second image group to be classified.
It should be noted that, for convenience and simplicity of description, the specific working process of the network model generating apparatus 900 may refer to the corresponding process of the method described in fig. 1 to fig. 6, and is not described herein again.
Fig. 10 is a schematic structural diagram of an apparatus 1000 for identifying a device status according to an embodiment of the present application, where the apparatus 1000 for identifying a device status is configured on a terminal.
Specifically, the device 1000 for identifying the device status may include:
a second obtaining unit 1001, configured to obtain a first image group to be classified, where the first image group to be classified includes a reference image and an image to be identified of a device;
a third obtaining unit 1002, configured to input the first image group to be classified into a device state identification network model, and obtain a first change detection thermodynamic diagram corresponding to the first image group to be classified output by the device state identification network model, and a second classification confidence corresponding to the first image group to be classified; wherein the device state identification network model is the target network model described in fig. 1 to 6;
the device state identification unit 1003 is configured to determine a target device state of the device according to the first change detection thermodynamic diagram and/or the second classification confidence level, and a reference device state corresponding to the reference image.
It should be noted that, for convenience and simplicity of description, the specific working process of the device state identification apparatus 1000 may refer to the corresponding process of the method described in fig. 7 to fig. 8, and is not described herein again.
Fig. 11 is a schematic diagram of a terminal according to an embodiment of the present application. The terminal 11 may include: a processor 110, a memory 111 and a computer program 112, such as a network model generation program, stored in the memory 111 and executable on the processor 110. When executing the computer program 112, the processor 110 implements the steps in the above embodiments of the method for generating a network model, such as steps S101 to S103 shown in fig. 1; alternatively, the processor 110 implements the steps in the above embodiments of the method for identifying the device status, such as steps S701 to S703 shown in fig. 7.
Alternatively, the processor 110, when executing the computer program 112, implements the functions of each module/unit in each apparatus embodiment described above, such as the first obtaining unit 901, the network model training unit 902, and the network model generating unit 903 shown in fig. 9. Alternatively, the processor 110, when executing the computer program 112, implements the functions of the modules/units in the above-described apparatus embodiments, such as the second obtaining unit 1001, the third obtaining unit 1002, and the device status identifying unit 1003 shown in fig. 10.
The computer program may be divided into one or more modules/units, which are stored in the memory 111 and executed by the processor 110 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program in the terminal.
For example, the computer program may be divided into: the device comprises a first acquisition unit, a network model training unit and a network model generation unit. The specific functions of each unit are as follows: a first acquisition unit configured to acquire a plurality of sample image groups, each sample image group including a first sample image and a second sample image; the network model training unit is used for training a network model to be trained by utilizing the plurality of sample image groups to obtain an initial network model capable of accurately outputting a first classification confidence coefficient corresponding to each sample image group; the first classification confidence represents the similarity or difference degree between the first sample image and the second sample image in the corresponding sample image group; the network model generating unit is used for reconstructing an output layer of the initial network model to obtain a target network model; the target network model is used for processing the input first image group to be classified and outputting a first change detection thermodynamic diagram corresponding to the first image group to be classified and a second classification confidence coefficient corresponding to the first image group to be classified.
For another example, the computer program may be divided into: the device comprises a second acquisition unit, a third acquisition unit and a device state identification unit. The specific functions of each unit are as follows: the second acquisition unit is used for acquiring a first image group to be classified, wherein the first image group to be classified comprises a reference image and an image to be identified of equipment; a third obtaining unit, configured to input the first image group to be classified into an equipment state identification network model, and obtain a first change detection thermodynamic diagram corresponding to the first image group to be classified output by the equipment state identification network model and a second classification confidence corresponding to the first image group to be classified; wherein the device state identification network model is the target network model described in fig. 1 to 6; and the device state identification unit is used for determining the target device state of the device according to the first change detection thermodynamic diagram and/or the second classification confidence level and the reference device state corresponding to the reference image.
The terminal may include, but is not limited to, a processor 110, a memory 111. Those skilled in the art will appreciate that fig. 11 is merely an example of a terminal and is not intended to be limiting and may include more or fewer components than those shown, or some of the components may be combined, or different components, e.g., the terminal may also include input-output devices, network access devices, buses, etc.
The Processor 110 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 111 may be an internal storage unit of the terminal, such as a hard disk or a memory of the terminal. The memory 111 may also be an external storage device of the terminal, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, or a Flash Card provided on the terminal. Further, the memory 111 may include both an internal storage unit and an external storage device of the terminal. The memory 111 is used to store the computer program and other programs and data required by the terminal, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal and method may be implemented in other ways. For example, the above-described apparatus/terminal embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the above embodiments may be implemented by a computer program, which may be stored in a computer-readable storage medium and which, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer-readable media do not include electrical carrier signals and telecommunications signals.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for generating a network model, comprising:
acquiring a plurality of sample image groups, wherein each sample image group comprises a first sample image and a second sample image;
training a network model to be trained by utilizing the plurality of sample image groups to obtain an initial network model capable of accurately outputting a first classification confidence corresponding to each sample image group; the first classification confidence represents the degree of similarity or difference between the first sample image and the second sample image in the corresponding sample image group;
reconstructing an output layer of the initial network model to obtain a target network model; the target network model is used for processing the input first image group to be classified and outputting a first change detection thermodynamic diagram corresponding to the first image group to be classified and a second classification confidence coefficient corresponding to the first image group to be classified.
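Purely to make claim 1 concrete, the following PyTorch sketch shows the two stages: a toy network trained to output only the first classification confidence, and a reconstructed version whose output layer also exposes the change detection thermodynamic diagram. The architecture and layer sizes are assumptions, not the claimed network:

```python
import torch
import torch.nn as nn

class ToyChangeNet(nn.Module):
    """Toy stand-in for the network model to be trained: a thermodynamic
    diagram generation stage followed by a classification output layer."""
    def __init__(self):
        super().__init__()
        self.heatmap_stage = nn.Sequential(          # produces a 1-channel map
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
        self.output_layer = nn.Sequential(           # classifies that map
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(1, 1), nn.Sigmoid())

    def forward(self, img_a, img_b):
        # The two 3-channel images of a sample image group are concatenated.
        heatmap = self.heatmap_stage(torch.cat([img_a, img_b], dim=1))
        return self.output_layer(heatmap)            # first classification confidence

class ReconstructedNet(nn.Module):
    """'Reconstructing the output layer': keep the trained stages but make
    the forward pass return the diagram together with the confidence."""
    def __init__(self, trained: ToyChangeNet):
        super().__init__()
        self.heatmap_stage = trained.heatmap_stage
        self.output_layer = trained.output_layer

    def forward(self, img_a, img_b):
        heatmap = self.heatmap_stage(torch.cat([img_a, img_b], dim=1))
        return heatmap, self.output_layer(heatmap)   # target network model outputs
```

In this sketch the reconstructed model reuses the trained parameters unchanged, so exposing the diagram requires no additional training.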
2. The method for generating a network model according to claim 1, wherein said obtaining a plurality of sets of sample images comprises:
acquiring a first sample image obtained by shooting a target object from a plurality of point positions;
performing noise enhancement on the first sample image corresponding to each of the plurality of point locations to obtain a second sample image corresponding to each of the plurality of point locations;
selecting two first target sample images from the first sample image and the second sample image as a positive sample image group, and selecting two second target sample images from the first sample image and the second sample image as a negative sample image group to obtain the multiple sample image groups, wherein the point locations corresponding to the two first target sample images are the same, and the point locations corresponding to the two second target sample images are different.
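A minimal Python sketch of this pair construction follows; the `add_noise` argument and the class-balancing step are assumptions, since claim 2 specifies neither the noise type nor any balancing:

```python
import random

def build_sample_groups(images_by_point, add_noise):
    """Build positive/negative sample image groups from per-point images.

    images_by_point -- dict mapping a point location id to the first sample
                       image shot at that point
    add_noise       -- assumed noise-enhancement function producing the
                       second sample image from the first
    """
    pool = []                                 # (point location, image) pairs
    for point, first in images_by_point.items():
        pool.append((point, first))
        pool.append((point, add_noise(first)))

    positives, negatives = [], []
    for pa, ia in pool:
        for pb, ib in pool:
            if ia is ib:
                continue
            if pa == pb:                      # same point location: positive group
                positives.append((ia, ib))
            else:                             # different point locations: negative
                negatives.append((ia, ib))
    n = min(len(positives), len(negatives))   # balance the two classes
    return random.sample(positives, n) + random.sample(negatives, n)
```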
3. The method for generating a network model according to claim 1, wherein the training a network model to be trained by utilizing the plurality of sample image groups to obtain an initial network model capable of accurately outputting the first classification confidence corresponding to each sample image group comprises:
selecting target training samples from the plurality of sample image groups, wherein each target training sample is associated with a positive/negative sample label used to identify whether the target training sample belongs to a positive sample image group or to a negative sample image group;
inputting each target training sample into the network model to be trained, obtaining a third classification confidence corresponding to that target training sample output by the network model to be trained, and computing the accuracy of the network model to be trained according to the third classification confidence corresponding to each target training sample and the positive/negative sample label associated with each target training sample;
if the accuracy of the network model to be trained is less than an accuracy threshold, adjusting parameters in the network model to be trained, and re-executing the step of selecting target training samples from the plurality of sample image groups and the subsequent steps until the accuracy of the network model to be trained is greater than or equal to the accuracy threshold, so as to obtain the initial network model.
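For illustration, a minimal PyTorch sketch of this claim-3 training loop is given below; the Adam optimizer, learning rate, binary cross-entropy loss, 0.5 decision cut-off and epoch cap are all assumptions, as the claim prescribes only the accuracy-threshold stopping condition:

```python
import torch

def train_until_accurate(model, loader, acc_threshold=0.95,
                         lr=1e-3, max_epochs=100):
    """Classify each target training sample, compute the accuracy against
    the positive/negative sample labels, and adjust parameters until the
    accuracy reaches the threshold.

    loader yields ((img_a, img_b), label) batches, with label 1 for
    positive sample image groups and 0 for negative ones.
    """
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCELoss()
    for _ in range(max_epochs):                      # cap to guarantee termination
        correct = total = 0
        for (img_a, img_b), label in loader:
            conf = model(img_a, img_b).squeeze(1)    # third classification confidence
            loss = loss_fn(conf, label.float())
            opt.zero_grad()
            loss.backward()
            opt.step()                               # adjust parameters
            correct += ((conf > 0.5).long() == label).sum().item()
            total += label.numel()
        if correct / total >= acc_threshold:         # accuracy threshold reached
            break
    return model                                     # the initial network model
```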
4. A method for generating a network model according to any one of claims 1 to 3, wherein the initial network model includes a change detection thermodynamic diagram generation layer and an output layer;
the change detection thermodynamic diagram generation layer is used for processing an input second image group to be classified and outputting a second change detection thermodynamic diagram corresponding to the second image group to be classified;
and the output layer is used for classifying the second change detection thermodynamic diagram to obtain a first classification confidence corresponding to the second image group to be classified.
5. The method for generating a network model according to claim 4, wherein the change detection thermodynamic diagram generation layer is specifically configured to:
respectively performing feature extraction on two images to be classified contained in the second image group to be classified to obtain a plurality of first feature maps corresponding to the two images to be classified, wherein the sizes of the plurality of first feature maps corresponding to the same image to be classified are different;
respectively performing upsampling on a plurality of first feature maps respectively corresponding to two images to be classified to obtain a second feature map corresponding to each first feature map;
splicing the second feature maps corresponding to the same image to be classified in the direction of a feature channel to obtain a third feature map;
and generating a second change detection thermodynamic diagram corresponding to the second image group to be classified according to the third feature diagrams respectively corresponding to the two images to be classified.
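A minimal PyTorch sketch of these feature-preparation steps (multi-size extraction, upsampling, splicing along the feature channel direction) follows; bilinear upsampling and the full input resolution as the common target size are assumptions:

```python
import torch
import torch.nn.functional as F

def third_feature_map(img, backbone_stages):
    """Produce the third feature map for one image to be classified.

    backbone_stages -- list of modules applied in sequence; because each
                       stage changes the spatial size, the first feature
                       maps it yields have different sizes.
    """
    first_maps, x = [], img
    for stage in backbone_stages:
        x = stage(x)
        first_maps.append(x)                  # first feature maps, varying sizes
    target_hw = img.shape[-2:]                # assumed common size: input size
    second_maps = [F.interpolate(m, size=target_hw, mode='bilinear',
                                 align_corners=False)
                   for m in first_maps]       # upsampled second feature maps
    return torch.cat(second_maps, dim=1)      # splice along feature channels
```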
6. The method for generating a network model according to claim 5, wherein the generating a second change detection thermodynamic diagram corresponding to the second image group to be classified according to the third feature maps corresponding to the two images to be classified respectively comprises:
computing a difference between the third feature maps respectively corresponding to the two images to be classified to obtain a difference map between the third feature maps respectively corresponding to the two images to be classified;
assigning a different weight to each feature channel in the difference map by using an attention mechanism;
and according to the weight of each feature channel in the difference map, fusing the difference map in the direction of the feature channel to obtain a difference fusion map, and performing normalization processing on the difference fusion map to obtain a second change detection thermodynamic map corresponding to the second image group to be classified.
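To make the fusion concrete, here is a minimal PyTorch sketch; the absolute difference, the squeeze-and-excitation style of the `channel_attention` module and the min-max normalization are assumptions, as claim 6 names only an attention mechanism and a normalization step:

```python
import torch

def change_heatmap(feat_a, feat_b, channel_attention):
    """Generate the second change detection thermodynamic diagram from the
    third feature maps of the two images to be classified.

    feat_a, feat_b    -- (N, C, H, W) third feature maps
    channel_attention -- module mapping (N, C, H, W) to (N, C, 1, 1)
                         per-channel weights
    """
    diff = torch.abs(feat_a - feat_b)              # difference map
    weights = channel_attention(diff)              # attention weight per channel
    fused = (diff * weights).sum(dim=1, keepdim=True)  # fuse along channels
    flat = fused.flatten(1)                        # per-sample min-max normalization
    lo = flat.min(dim=1).values.view(-1, 1, 1, 1)
    hi = flat.max(dim=1).values.view(-1, 1, 1, 1)
    return (fused - lo) / (hi - lo + 1e-8)         # values in [0, 1]
```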
7. A method for identifying a state of equipment, comprising the following steps:
acquiring a first image group to be classified, wherein the first image group to be classified comprises a reference image and an image to be identified of equipment;
inputting the first image group to be classified into an equipment state recognition network model, and acquiring a first change detection thermodynamic diagram corresponding to the first image group to be classified output by the equipment state recognition network model and a second classification confidence corresponding to the first image group to be classified; wherein the device state recognition network model is the target network model of any one of claims 1 to 6;
and determining the target equipment state of the equipment according to the first change detection thermodynamic diagram and/or the second classification confidence level and the reference equipment state corresponding to the reference image.
8. An apparatus for generating a network model, comprising:
a first acquisition unit configured to acquire a plurality of sample image groups, each sample image group including a first sample image and a second sample image;
the network model training unit is used for training a network model to be trained by utilizing the plurality of sample image groups to obtain an initial network model capable of accurately outputting a first classification confidence coefficient corresponding to each sample image group; the first classification confidence represents the similarity or difference degree between the first sample image and the second sample image in the corresponding sample image group;
the network model generating unit is used for reconstructing an output layer of the initial network model to obtain a target network model; the target network model is used for processing the input first image group to be classified and outputting a first change detection thermodynamic diagram corresponding to the first image group to be classified and a second classification confidence coefficient corresponding to the first image group to be classified.
9. A terminal comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 6 or the steps of the method according to claim 7.
10. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6 or the steps of the method according to claim 7.
CN202110662865.4A 2021-06-15 2021-06-15 Network model generation method, device, terminal and storage medium Pending CN113420801A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110662865.4A CN113420801A (en) 2021-06-15 2021-06-15 Network model generation method, device, terminal and storage medium

Publications (1)

Publication Number Publication Date
CN113420801A true CN113420801A (en) 2021-09-21

Family

ID=77788585

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110662865.4A Pending CN113420801A (en) 2021-06-15 2021-06-15 Network model generation method, device, terminal and storage medium

Country Status (1)

Country Link
CN (1) CN113420801A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190377940A1 (en) * 2018-06-12 2019-12-12 Capillary Technologies International Pte Ltd People detection system with feature space enhancement
CN110659545A (en) * 2018-06-29 2020-01-07 BYD Co., Ltd. Training method of vehicle recognition model, vehicle recognition method and device and vehicle
CN109344742A (en) * 2018-09-14 2019-02-15 Tencent Technology (Shenzhen) Co., Ltd. Characteristic point positioning method, device, storage medium and computer equipment
CN111259968A (en) * 2020-01-17 2020-06-09 Tencent Technology (Shenzhen) Co., Ltd. Illegal image recognition method, device, equipment and computer readable storage medium
CN111310850A (en) * 2020-03-02 2020-06-19 Hangzhou Xiongmai Integrated Circuit Technology Co., Ltd. License plate detection model construction method and system and license plate detection method and system

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TIAN MIN: "Research on Visual Detection Methods for Targets in Complex Industrial Scenes", Master's thesis, Kunming University of Science and Technology, 31 December 2019 (2019-12-31) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114299304A (en) * 2021-12-15 2022-04-08 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and related equipment
CN114299304B (en) * 2021-12-15 2024-04-12 Tencent Technology (Shenzhen) Co., Ltd. Image processing method and related equipment

Similar Documents

Publication Publication Date Title
CN109166156B (en) Camera calibration image generation method, mobile terminal and storage medium
CN109002820B (en) License plate recognition method and device and related equipment
CN109117773B (en) Image feature point detection method, terminal device and storage medium
CN111275107A (en) Multi-label scene image classification method and device based on transfer learning
US11455831B2 (en) Method and apparatus for face classification
CN111126514A (en) Image multi-label classification method, device, equipment and medium
CN110751037A (en) Method for recognizing color of vehicle body and terminal equipment
CN111985458B (en) Method for detecting multiple targets, electronic equipment and storage medium
CN112257808B (en) Integrated collaborative training method and device for zero sample classification and terminal equipment
CN112801146A (en) Target detection method and system
CN110348511A (en) A kind of picture reproduction detection method, system and electronic equipment
CN113095370A (en) Image recognition method and device, electronic equipment and storage medium
CN111932363A (en) Identification and verification method, device, equipment and system for authorization book
CN113420801A (en) Network model generation method, device, terminal and storage medium
CN113255766B (en) Image classification method, device, equipment and storage medium
CN113221762B (en) Cost balance decision method, insurance claim decision method, apparatus and equipment
CN114332993A (en) Face recognition method and device, electronic equipment and computer readable storage medium
CN111353514A (en) Model training method, image recognition method, device and terminal equipment
CN113139617A (en) Power transmission line autonomous positioning method and device and terminal equipment
CN112287923A (en) Card information identification method, device, equipment and storage medium
CN114051625A (en) Point cloud data processing method, device, equipment and storage medium
CN111612021B (en) Error sample identification method, device and terminal
CN111860623A (en) Method and system for counting tree number based on improved SSD neural network
CN117274992A (en) Method, device, equipment and storage medium for constructing plant three-dimensional segmentation model
CN116071804A (en) Face recognition method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination