
CN117115567A - Domain generalization image classification method, system, terminal and medium based on feature adjustment - Google Patents

Domain generalization image classification method, system, terminal and medium based on feature adjustment

Info

Publication number
CN117115567A
CN117115567A (application CN202311371704.5A)
Authority
CN
China
Prior art keywords
domain
image
classification
network
feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311371704.5A
Other languages
Chinese (zh)
Other versions
CN117115567B (en)
Inventor
何志海
陈烁硕
唐雨顺
阚哲涵
欧阳健
吴昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southern University of Science and Technology
Original Assignee
Southern University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southern University of Science and Technology filed Critical Southern University of Science and Technology
Priority to CN202311371704.5A priority Critical patent/CN117115567B/en
Publication of CN117115567A publication Critical patent/CN117115567A/en
Application granted granted Critical
Publication of CN117115567B publication Critical patent/CN117115567B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V 10/765 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V 10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V 10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V 10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00 Road transport of goods or passengers
    • Y02T 10/10 Internal combustion engine [ICE] based vehicles
    • Y02T 10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a domain generalization image classification method, system, terminal and medium based on feature adjustment, relating to the technical fields of machine learning and computer vision. A target set is constructed from the images of the target domain and input into a trained domain generalization image classification model, which classifies the target-domain images to obtain the final classification result. The scheme can adaptively adjust the image feature representation learned by the basic network model according to the domain offset information captured by the sensor network; it consumes few computing resources and little time, classifies efficiently, and has good robustness.

Description

Domain generalization image classification method, system, terminal and medium based on feature adjustment
Technical Field
The invention relates to the technical fields of machine learning and computer vision, and in particular to a domain generalization image classification method, system, terminal and medium based on feature adjustment.
Background
To address the poor generalization capability of models under different data distributions, domain generalization aims to learn a model that generalizes to unseen target domains using only source-domain data. Domain generalization is widely used in practical applications such as image classification, object detection, face recognition, and speech recognition.
Currently, a large number of domain generalization methods aim to solve the domain offset problem, including methods that learn domain-invariant feature representations, data augmentation methods, model learning strategies, and so on. Methods based on model learning strategies focus mainly on how to train the model on source-domain data and ignore the individual information of target-domain samples that could support targeted adaptive inference at test time. As a result, the model performs poorly on some target-domain samples, and both the domain generalization capability and the model accuracy remain low.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present invention aims to provide a feature-adjustment-based domain generalization image classification method, system, terminal, and medium, to solve the problem of the low domain generalization capability of learning models in the prior art.
In order to achieve the above object, a first aspect of the present invention provides a domain-generalized image classification method based on feature adjustment, including the steps of:
Acquiring the images and image class labels of the target domain, and acquiring a trained domain generalization image classification model, wherein the trained domain generalization image classification model comprises a trained basic feature extraction network, a trained domain offset perception network, a trained actuator network and a trained basic classification network;
and constructing a target set by utilizing the image of the target domain, inputting the target set into the trained domain generalization image classification model, and classifying the image of the target domain to obtain a final classification result.
Optionally, the training of the domain generalization image classification model includes the following steps:
acquiring images of the source domain, and constructing a domain generalization image classification model, wherein the domain generalization image classification model comprises a basic feature extraction network, a domain offset perception network, an actuator network and a basic classification network;
constructing a training set based on the image of the source domain, and inputting the training set into the basic feature extraction network to obtain original features;
inputting the original characteristics into the domain offset sensing network based on preset constraint conditions to obtain constraint deviation;
inputting the original features to the actuator network based on the constraint deviation to obtain adjusted features;
Inputting the adjusted characteristics into the basic classification network to obtain classification results;
and calculating a joint loss based on the classification result and the image class labels, and repeating the training steps until the joint loss reaches a preset joint loss threshold, to obtain the trained domain generalization image classification model.
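The training step above can be sketched end to end as follows. This is an illustrative toy, not the patent's implementation: the four sub-networks are stood in for by random linear maps, and the actuator's adjustment rule and the 0.1 step size are assumptions made only to show the data flow from features to joint loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Illustrative stand-ins for the sub-networks (the real ones are deep nets;
# small random linear maps are used here only to show the data flow).
D, C = 8, 3                                   # feature dim, number of classes
W_feat = rng.normal(size=(D, D)) * 0.3        # basic feature extraction network
W_sense = rng.normal(size=(D, D)) * 0.3       # domain offset perception network
W_cls = rng.normal(size=(D, C)) * 0.3         # basic classification network

def train_step(x, y):
    f = x @ W_feat                            # original features
    deviation = f @ W_sense - f               # constraint deviation from the sensor
    f_adj = f - 0.1 * deviation               # actuator: adjust features by the signal
    probs = softmax(f_adj @ W_cls)            # classification result
    ce = -np.log(probs[np.arange(len(y)), y] + 1e-12).mean()  # cross-entropy loss
    return ce

x = rng.normal(size=(4, D))
y = np.array([0, 1, 2, 1])
loss = train_step(x, y)
```

In a real run this scalar loss would drive gradient updates of all four sub-networks until it falls below the preset joint-loss threshold.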
Optionally, the method further comprises updating the original features by using disturbance features, and specifically comprises the following steps:
based on the statistical distribution of the original features, acquiring the mean and variance of the original features;
scaling the Gaussian distribution of the original feature based on the mean and the variance to obtain a scaled mean and a scaled variance;
generating a disturbance sample by using the scaled mean value and the scaled variance to obtain disturbance characteristics;
and updating the original characteristic by using the disturbance characteristic.
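The four perturbation steps above can be sketched as follows; the per-channel statistics and the single scaling factor `alpha` are illustrative assumptions (the patent excerpt does not fix how the mean and variance are scaled).

```python
import numpy as np

def perturb_features(f, alpha=0.5, rng=None):
    """Update the original features with a disturbance drawn from their
    scaled Gaussian statistics; `alpha` is an illustrative hyperparameter."""
    rng = rng or np.random.default_rng(0)
    mu = f.mean(axis=0)                          # mean of the original features
    var = f.var(axis=0)                          # variance of the original features
    mu_s, var_s = alpha * mu, alpha * var        # scaled mean and scaled variance
    delta = rng.normal(mu_s, np.sqrt(var_s))     # disturbance sample
    return f + delta                             # updated (perturbed) features

f = np.random.default_rng(1).normal(size=(4, 8))
f_tilde = perturb_features(f)
```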
Optionally, the inputting the original feature into the domain offset aware network to obtain a constraint deviation includes:
acquiring classification categories of the images and class centers corresponding to each classification category;
obtaining a structural constraint deviation based on the original characteristics and a preset structural constraint condition;
Obtaining distribution constraint deviation based on the original characteristics, the class center and a preset distribution constraint condition;
and obtaining the constraint deviation of the original feature based on the structural constraint deviation and the distribution constraint deviation.
Optionally, the obtaining a structural constraint deviation based on the original feature and a preset structural constraint condition includes:
projecting the original features to a space with the same dimension as the original features to obtain projected features;
based on the classification category of the image, carrying out normalization processing on the projected features to obtain structural constraint features;
and calculating the distance between the projected feature and the structural constraint feature to obtain structural constraint deviation.
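The three steps above can be sketched as follows. The projection matrix and the choice of unit-norm rows as the "normalization based on the classification category" are illustrative assumptions; the patent's structure constraint network is learned, not fixed.

```python
import numpy as np

def structural_constraint_deviation(f, W_proj):
    """Sketch of the structural-constraint branch:
    1) project the original features into a space of the same dimension,
    2) normalize the projection to obtain the structural constraint feature
       (unit-norm rows here, an illustrative stand-in for the class-wise
       normalization in the text),
    3) the distance between projection and constraint feature is the
       structural constraint deviation."""
    p = f @ W_proj                                              # projected features
    s = p / (np.linalg.norm(p, axis=1, keepdims=True) + 1e-12)  # constraint feature
    return np.linalg.norm(p - s, axis=1)                        # per-sample deviation

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 8))        # same-dimension projection
dev = structural_constraint_deviation(f, W)
```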
Optionally, the obtaining a distribution constraint deviation based on the original feature, the class center and a preset distribution constraint condition includes:
based on the preset distribution constraint conditions, obtaining the correlation between the original features and each class center;
based on the correlation, obtaining weights of the original features and the centers of the classes;
and obtaining distribution constraint deviation based on the distance between the original feature and each class center and the weight.
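The correlation-weights-distance recipe above can be sketched as follows; using a dot product for the correlation and a softmax for the weights are illustrative assumptions not fixed by this excerpt.

```python
import numpy as np

def distribution_constraint_deviation(f, centers):
    """Sketch: the correlation of each feature with each class center gives
    per-center weights; the deviation is the weighted average distance from
    the feature to the class centers."""
    corr = f @ centers.T                               # (n, K) correlation scores
    e = np.exp(corr - corr.max(axis=1, keepdims=True))
    w = e / e.sum(axis=1, keepdims=True)               # weights per class center
    # (n, K) distances from each feature to each class center
    dist = np.linalg.norm(f[:, None, :] - centers[None, :, :], axis=2)
    return (w * dist).sum(axis=1)                      # distribution constraint deviation

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 8))
centers = rng.normal(size=(3, 8))                      # K = 3 class centers
dev = distribution_constraint_deviation(f, centers)
```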
Optionally, the calculating a joint loss based on the classification result and the image class label includes:
calculating cross entropy loss based on the classification result, the image class label and the number of the image class labels;
based on the classification category of the image, carrying out normalization processing on the projected features to obtain a structural constraint condition; constructing a structure constraint loss function based on the structure constraint condition and the projected features;
based on the cross entropy loss and the structure constraint loss function, a joint loss is calculated.
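The joint-loss computation above can be sketched as follows; the mean-squared form of the structure constraint loss and the weighting factor `lam` between the two terms are illustrative assumptions (the excerpt does not give the exact combination rule).

```python
import numpy as np

def joint_loss(probs, labels, proj, constraint, lam=0.1):
    """Joint objective sketch: cross entropy over the classification result
    plus a structure constraint loss measuring how far the projected
    features are from their constraint targets."""
    n = len(labels)
    ce = -np.log(probs[np.arange(n), labels] + 1e-12).mean()   # cross-entropy loss
    struct = np.mean((proj - constraint) ** 2)                 # structure constraint loss
    return ce + lam * struct

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
proj = np.zeros((2, 4))
constraint = np.ones((2, 4))
loss = joint_loss(probs, labels, proj, constraint)
```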
A second aspect of the present invention provides a feature adjustment-based domain generalization classification system, the system comprising:
the data acquisition module is used for acquiring the images and image category labels of the target domain, and acquiring a trained domain generalization image classification model, wherein the trained domain generalization image classification model comprises a trained basic feature extraction network, a trained domain offset perception network, a trained actuator network and a trained basic classification network;
the domain generalization classification module is used for constructing a target set by utilizing the image of the target domain, inputting the target set into the domain generalization image classification model, and classifying the image of the target domain to obtain a final classification result.
The third aspect of the present invention provides an intelligent terminal, which includes a memory, a processor, and a feature-adjustment-based domain generalization classification program stored in the memory and executable on the processor; when executed by the processor, the program implements the steps of any one of the feature-adjustment-based domain generalization image classification methods described above.
A fourth aspect of the present invention provides a computer-readable storage medium having stored thereon a feature-adjustment-based domain generalization classification program, which when executed by a processor, implements the steps of any one of the above feature-adjustment-based domain generalization image classification methods.
Compared with the prior art, the invention has the following beneficial effects:
the method comprises the steps of obtaining an image of a target domain and an image category label, and obtaining a trained domain generalization image classification model, wherein the trained domain generalization image classification model comprises a trained basic feature extraction network, a trained domain offset perception network, a trained executor network and a trained basic classification network; and constructing a target set by utilizing the image of the target domain, inputting the target set into a trained domain generalization image classification model, classifying the image of the target domain, and obtaining a final classification result.
Therefore, the trained domain generalization image classification model designed by the invention is a novel sensor-actuator network model. By introducing a domain offset sensor, the model can perceive the constraint deviation caused by the domain offset; it scales the statistical parameters of the source-domain image features according to their statistical distribution to obtain perturbed source-domain image features, and adjusts and optimizes the scaled features according to the constraint deviation to obtain the adjusted features. The sub-modules complement one another, improving classification efficiency as well as the accuracy and generalization capability of the image classification model. They provide good adaptive adjustment without relying on complex data generation and augmentation, so the model maintains good performance in different environments and has good robustness.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a domain generalized image classification strategy of the present invention;
FIG. 2 is a schematic diagram of a domain generalized image classification system according to the present invention;
FIG. 3 is a flow chart of a domain generalization image classification method based on feature adjustment according to the present invention;
FIG. 4 is a flow chart of training a domain generalized image classification model according to the present invention;
FIG. 5 is a projected feature simulation diagram and corresponding structural constraint deviation simulation diagram of the present invention;
FIG. 6 is a graph showing the comparison of the characteristic distribution before and after the adjustment of the actuator network according to the present invention;
FIG. 7 is a flow chart of classification using the constructed domain generalized image classification model according to the present invention;
FIG. 8 is a schematic diagram of a domain generalization image classification system based on feature adjustment according to the present invention;
fig. 9 is a schematic block diagram of an intelligent terminal according to the present invention.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present invention. It will be apparent, however, to one skilled in the art that the present invention may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present invention with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It is also to be understood that the terminology used in the description of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" as used in the present specification and the appended claims refers to any and all possible combinations of one or more of the associated listed items, and includes such combinations.
As used in this specification and the appended claims, the term "if" may be interpreted in context as "when" or "upon" or "in response to a determination" or "in response to detection". Similarly, the phrase "if a condition or event described is determined" or "if a condition or event described is detected" may be interpreted in context as "upon determination" or "in response to determination" or "upon detection of the condition or event described" or "in response to detection of the condition or event described".
The following description of the embodiments of the present invention will be made more fully hereinafter with reference to the accompanying drawings, in which embodiments of the invention are shown, it being evident that the embodiments described are only some, but not all embodiments of the invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, but the present invention may be practiced in other ways other than those described herein, and persons skilled in the art will readily appreciate that the present invention is not limited to the specific embodiments disclosed below.
The existing methods overlook the use of the individual information of target-domain samples for targeted adaptive inference optimization at test time. Inspired by automatic control theory, we design a sensor-actuator network model and propose a domain offset detection mechanism and an explicit feature adjustment mechanism. Our sensor-actuator network model consists of a sensor network that detects the domain offset and an actuator network that adaptively adjusts the features according to the sensor signal. The domain offset sensor comprises two parts: a constraint network and a data conversion network. We introduce two constraints on the original features extracted by the basic feature extraction network and learn a constraint network on the source domain to verify whether the output features satisfy these constraints. In the inference phase, the constraint network therefore analyzes whether the features of a target-domain sample satisfy the constraints; if not, the gap relative to the constraints, called the constraint deviation, is attributed to the domain offset. We then learn a data conversion network that maps these constraint deviations into valid guide signals. Upon receiving a guide signal, the actuator network adjusts the original features into new, more discriminative features, which are input into the basic classification network for the classification decision. In this way, at inference time the method captures the domain offset of a target-domain sample using the sample's own information and generates the corresponding guide signal, thereby guiding the adjustment and optimization of the shifted original features and improving the accuracy and generalization capability of the image classification model.
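The sensor-to-actuator data flow described here can be sketched as follows. The random weight matrices, the concatenation-based fusion of the two deviations, and all dimensions are illustrative assumptions, not the patent's actual architecture.

```python
import numpy as np

# Illustrative sketch of the sensor -> data conversion -> actuator chain.
rng = np.random.default_rng(0)
D = 8
W_conv = rng.normal(size=(2 * D, D)) * 0.1   # data conversion network stand-in
W_act = rng.normal(size=(2 * D, D)) * 0.1    # actuator network stand-in

def adjust(f, struct_dev, dist_dev):
    """Map the two constraint deviations into a guide signal, then let the
    actuator produce adjusted features from the original features plus the
    guide signal (concatenation is an illustrative fusion choice)."""
    guide = np.concatenate([struct_dev, dist_dev], axis=1) @ W_conv  # guide signal
    f_adj = np.tanh(np.concatenate([f, guide], axis=1) @ W_act)      # adjusted features
    return f_adj

f = rng.normal(size=(4, D))
struct_dev = rng.normal(size=(4, D))
dist_dev = rng.normal(size=(4, D))
f_adj = adjust(f, struct_dev, dist_dev)
```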
The sensor-actuator network model introduced by the method has a small number of parameters and low computation and time cost. Moreover, domain offset detection at test time uses the information of the target-domain sample itself together with specific constraints, without requiring complex data generation or augmentation.
Exemplary method
The embodiment of the invention provides a feature-adjustment-based domain generalization image classification method deployed on electronic equipment such as computers and servers. Application fields include medical diagnosis, image classification, object detection, face recognition, speech recognition, and the like; the application scenario is generalizing a model trained on source-domain data to a target domain with a different but related arbitrary data distribution. The main purpose is to build a domain generalization image classification system based on the domain generalization image classification strategy shown in FIG. 1: the basic feature extraction network produces the original features of the source-domain data; the sensor network perceives the domain offset of the original features on the target domain; the actuator network then performs feature adjustment according to the perceived domain offset to obtain the adjusted features; and finally the basic classification network makes the classification decision on the adjusted features.
Specifically, the domain generalization image classification system is shown in FIG. 2 and includes a basic feature extraction network, a domain offset perception network, a data conversion network, an actuator network, and a basic classification network. The basic feature extraction network constructs a training set from the images of the source domain, extracts original features from it, and outputs them to the domain offset perception network and the actuator network. The domain offset perception network, which comprises a structural constraint network and a distribution constraint network, obtains the constraint deviations from the received original features together with a preset structural constraint condition and a preset distribution constraint condition, and outputs the constraint deviations to the actuator network: the structural constraint network feeds the original features into its trained network to obtain the structural constraint deviation, and the distribution constraint network feeds the original features and the class centers of the source-domain data into its trained network to obtain the distribution constraint deviation. The data conversion network maps the structural and distribution constraint deviations into guide signals and outputs them to the actuator network. The actuator network obtains the adjusted features from the received constraint deviations and the original features and outputs them to the basic classification network, which obtains the classification result from the received adjusted features.
A joint loss is calculated based on the classification result and the image class labels, and each sub-network is trained until the joint loss reaches the preset joint loss threshold, yielding the trained domain generalization image classification model. The trained domain generalization image classification model is then used to classify the images of the target domain to obtain the final classification result.
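The stopping rule described above — repeat training until the joint loss reaches a preset threshold — can be sketched with a toy differentiable loss; the gradient step and learning rate are illustrative stand-ins for the real optimizer.

```python
import numpy as np

def train_until_threshold(loss_fn, params, threshold=0.05, max_iters=200, lr=0.5):
    """Repeat training steps until the joint loss reaches the preset
    threshold; `loss_fn` returns (loss, gradient)."""
    for i in range(max_iters):
        loss, grad = loss_fn(params)
        if loss <= threshold:
            return params, loss, i          # threshold reached: stop training
        params = params - lr * grad         # toy stand-in for an optimizer step
    return params, loss, max_iters

# Toy quadratic "joint loss" with its minimum (zero) at the origin.
def toy_loss(p):
    return float(p @ p), 2 * p

params, final_loss, iters = train_until_threshold(toy_loss, np.array([1.0, -1.0]))
```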
Based on the system, the designed domain generalization image classification method flow based on feature adjustment mainly comprises the following steps as shown in fig. 3:
Step S100: acquiring an image and an image class label of a target domain, and acquiring a trained domain generalization image classification model, wherein the trained domain generalization image classification model comprises a trained basic feature extraction network, a trained domain offset perception network, a trained executor network and a trained basic classification network;
step S200: and constructing a target set by utilizing the image of the target domain, inputting the target set into a trained domain generalization image classification model, classifying the image of the target domain, and obtaining a final classification result.
The step of training the trained domain generalized image classification model obtained in step S100, as shown in fig. 4, includes:
step S110: acquiring an image of a source domain, and constructing a domain generalization image classification model, wherein the domain generalization image classification model comprises a basic feature extraction network, a domain offset perception network, an executor network and a basic classification network;
specifically, based on the source domain data sample, image data of the source domain and category labels of the respective images are acquired. It should be noted that different images may have the same category label, but at most one category label per image. The class labels of the images are obtained by calibrating application domain experts to which the images belong according to experience.
A domain generalization image classification model is then constructed, comprising a basic feature extraction network, a domain offset perception network, an actuator network and a basic classification network.
Specifically, the domain generalization image classification model is constructed, each sub-network in the model (the basic feature extraction network, the domain offset perception network, the actuator network, and the basic classification network) is initialized, and each sub-network is trained iteratively until the set conditions are met. The training process of the domain generalization image classification model is described in detail below. The domain offset perception network comprises a structure constraint network and a distribution constraint network.
Step S120: and constructing a training set by using the image of the source domain, and inputting the training set into a basic feature extraction network to obtain original features.
Specifically, all source-domain image data are combined into one training set following the standard empirical risk minimization approach. Using this training set, with cross-entropy loss as the objective function, a basic feature extraction network built on a ResNet-50 convolutional backbone and a basic classification network consisting of a single fully connected layer are trained.
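The merge-then-train setup above can be sketched as follows. The toy arrays and the dictionary layout of the source domains are illustrative; in practice the entries would be image tensors and the downstream trainer a ResNet-50 pipeline.

```python
import numpy as np

def build_training_set(source_domains):
    """Empirical-risk-minimization setup: images from all source domains are
    merged into a single training set and shuffled, discarding domain
    identity. `source_domains` maps domain name -> (images, labels)."""
    xs = np.concatenate([x for x, _ in source_domains.values()])
    ys = np.concatenate([y for _, y in source_domains.values()])
    perm = np.random.default_rng(0).permutation(len(ys))  # shuffle across domains
    return xs[perm], ys[perm]

# Hypothetical toy domains: 4-dim "images" stand in for real image tensors.
domains = {
    "photo":  (np.zeros((3, 4)), np.array([0, 1, 2])),
    "sketch": (np.ones((2, 4)),  np.array([1, 0])),
}
X, y = build_training_set(domains)
```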
Step S130: inputting the original characteristics into a domain offset sensing network based on preset constraint conditions to obtain constraint deviation;
Specifically, because the basic feature extraction network is trained on the source domain, the source-domain sample features cluster well by category in the feature space. Here, the class centers of the source-domain features are defined as a set of K vectors, where K represents the number of class centers. On the target domain, however, the feature distribution of these clusters becomes discrete due to some degree of disruption: part of the features drift far from the correct class center, and the original features typically deviate from each class center to some extent. This embodiment captures these deviations to generate the distribution constraint deviation, so as to analyze the domain offset statistically.
The present embodiment detects the domain offset by perceiving it, that is, by learning constraints in the source domain and verifying, based on the cross-entropy loss, whether the learned constraints are satisfied. Thus, a set of constraints is defined in the source domain, in which each element represents one defined constraint, i indexes the constraint set, and N represents one dimension of the source-domain data. For a target-domain sample, the original features generated by the basic feature extraction network may violate the learned constraints; the resulting gap relative to each constraint is the constraint deviation. In this embodiment, the structural constraint network and the distribution constraint network are introduced to generate the corresponding constraint conditions and perceive the domain offset from different angles, so that a comprehensive deviation caused by the domain offset is obtained and the accuracy of domain offset perception is improved.
In this embodiment, the structure constraint network and the distribution constraint network generate the corresponding constraint conditions and the domain offset is perceived directly from the original features. In other preferred embodiments, a disturbance may first be applied to the original features to obtain disturbance features, the original features may be updated with the disturbance features, and the domain offset may then be perceived through the constraint conditions generated by the structure constraint network and the distribution constraint network.
Because the target domain is inaccessible, the actuator network needs to be trained on the source domain to obtain the ability to adaptively adjust the features. For a source domain data sample, the mean and variance of the Gaussian distribution of the original feature are acquired in the training stage; the Gaussian distribution of the original feature is scaled based on this mean and variance to obtain a scaled mean and a scaled variance; and a disturbance sample is generated with the scaled mean and scaled variance to obtain the disturbance feature, namely:

z̃ = z + ε

wherein z̃ denotes the disturbance feature, z denotes the original feature, and ε denotes the amount of disturbance.
The domain offset is then perceived based on the disturbance features; that is, the structural constraint deviation and the distribution constraint deviation are perceived.
The generation of the structural constraint network and the distributed constraint network and the acquisition process of the corresponding constraint deviation are described in detail below:
Step S131: obtaining a structural constraint deviation based on the original characteristics and a preset structural constraint condition;
Specifically, a structure constraint network is initialized based on the original features, a structure constraint loss function L_s is constructed based on the preset structure constraint condition, and the structure constraint network is trained with this loss function to obtain the trained structure constraint network.
In one embodiment, the original features are updated with the perturbation features, and the structural constraint network is trained with the structural constraint loss function to obtain a trained structural constraint network.
Specifically, the updated original features are projected to a space with the same dimension as the original features to obtain the projected features; a structure constraint loss function is constructed based on the preset structure constraint condition; and the structural constraint deviation is obtained from the projected features and the structure constraint loss function. The structure constraint loss function is built from the structure constraint condition, which is obtained by normalizing the projected features according to the classification category of the image, together with the projected features themselves.
In this embodiment, the structure constraint network S projects the D-dimensional original feature z (where D denotes a positive integer; for example, D = 256) extracted from the source domain image data to a space with the same dimension, obtaining the projected feature h = (h_1, h_2, …, h_D), and structural constraints are imposed on h.
In this embodiment, for each classification category c of the source domain image data, an index subset of length d is randomly selected, I_c = {i_1, i_2, …, i_d}, where each item denotes the index of a source domain image selected in that category; the remaining indices form another subset, denoted Ī_c, where each item denotes the index of one of the remaining source domain images in that category. The structural constraint feature s_c is then constructed from the two generated index subsets.
The influence of singular sample data is eliminated by randomly dividing the source domain images into two groups in this way and performing normalization.
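The exact formula for the structural constraint feature is rendered as an image in the original publication; one plausible reading of the steps above — split each class into a random index subset and its complement, take the two subset means, and normalize their difference — can be sketched as follows (the construction itself is an assumption, not the patent's verbatim formula):

```python
import math
import random

def structural_constraint_feature(projected, d, rng):
    """Hedged reconstruction of step S131's constraint feature: a normalized
    difference of the means of a random index subset (length d) and its
    complement, computed over the projected features of one class."""
    n = len(projected)
    idx = list(range(n))
    rng.shuffle(idx)
    sub, rest = idx[:d], idx[d:]
    dim = len(projected[0])
    mean_sub = [sum(projected[i][k] for i in sub) / len(sub) for k in range(dim)]
    mean_rest = [sum(projected[i][k] for i in rest) / len(rest) for k in range(dim)]
    diff = [a - b for a, b in zip(mean_sub, mean_rest)]
    norm = math.sqrt(sum(v * v for v in diff)) or 1.0
    return [v / norm for v in diff]  # normalization step from the text

rng = random.Random(0)
proj = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(10)]
s = structural_constraint_feature(proj, d=4, rng=rng)
```

The random split makes the constraint insensitive to any single singular sample, which matches the stated purpose of the two-subset construction.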
Then, based on the preset structure constraint condition, the structure constraint loss function L_s is constructed, and the structure constraint network is trained with L_s to obtain the trained structure constraint network.
Based on the structure constraint loss function, the original features are projected through the trained structure constraint network to a space with the same dimension to obtain the projected features. The structural constraint deviation then represents the deviation between the projected feature h and the structural constraint feature s, denoted Δ_s = ‖h − s‖.
A simulation experiment is performed on source domain image data with dimension D = 256 to obtain the projected feature simulation diagram and the corresponding structural constraint deviation simulation diagram shown in fig. 5, where fig. 5 (a) shows the projected features and fig. 5 (b) shows the structural constraint deviation obtained from them; the abscissa denotes the dimension index in D and the ordinate denotes the normalized value of the projected feature.
In one embodiment, the structure constraint network S applies a disturbance to the original feature z extracted from the source domain image data to obtain the disturbance feature z̃, projects z̃ to a space with the same dimension, and obtains the structural constraint deviation from the projected features on the same principle.
Step S132: obtaining distribution constraint deviation based on the original characteristics, the class center and a preset distribution constraint condition;
Specifically, based on the preset distribution constraint condition, the correlation between the original feature and each class center is obtained; based on this correlation, the weight of the original feature with respect to each class center is obtained; and the distribution constraint deviation is obtained from the distances between the original feature and the class centers together with these weights.
Because the basic feature extraction network is trained on the source domain, the source domain sample features cluster well by category in the feature space. On the target domain, however, the feature distribution of these clusters becomes discrete due to the domain offset, and part of the features drift away from the correct class center. According to the class centers {μ_c} of the source domain features defined above and the distances between the original feature z and each class center, the distribution constraint deviation is defined as the weighted sum of the distances between the original feature z and the respective class centers, namely:

Δ_d = Σ_c w_c · d(z, μ_c)
wherein the weights w_c are produced by the distribution constraint network W, which calculates the correlation between the original feature and each class center and converts that correlation into a weight for the deviation between the original feature and the corresponding class center. The distribution constraint network W is generated by training on the disturbance features and the class centers of the source domain features.
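The weighted-distance formula can be sketched in a few lines. Since the real distribution constraint network W is trained, a softmax over negative distances stands in for it here (an assumption, labeled as such in the code):

```python
import math

def distribution_constraint_deviation(z, centers):
    """Delta_d = sum_c w_c * ||z - mu_c||. The weights come from a softmax
    over negative distances, standing in for the learned distribution
    constraint network W (an assumption; the real W is trained)."""
    dists = [math.sqrt(sum((a - b) ** 2 for a, b in zip(z, mu)))
             for mu in centers]
    m = max(-d for d in dists)
    exps = [math.exp(-d - m) for d in dists]
    s = sum(exps)
    weights = [e / s for e in exps]          # correlation -> weights
    return sum(w_ * d for w_, d in zip(weights, dists)), weights

dev, w = distribution_constraint_deviation([0.0, 0.0],
                                           [[1.0, 0.0], [3.0, 4.0]])
```

With this stand-in, the nearest class center dominates the weighting, so the deviation stays close to the distance to the correct cluster, as the text intends.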
In one embodiment, the distribution constraint deviation is defined as the weighted sum of the deviations between the original feature z and the respective class centers.
Step S140: and inputting the original characteristics into an actuator network based on the constraint deviation to obtain the adjusted characteristics.
Specifically, the structural constraint deviation and the distribution constraint deviation are converted into a guide signal by a data conversion network. A data conversion network T is generated based on the learning model of this embodiment, and the structural constraint deviation and the distribution constraint deviation are mapped into a guide signal g, namely:

g = T(Δ_s, Δ_d)
The guide signal g is input to the actuator network to guide the adjustment of the original feature z. The guide signal g provided by this embodiment comprehensively characterizes the source domain data and facilitates the accurate adjustment and optimization of the offset in subsequent learning.
To achieve better domain generalization, this embodiment aims to learn the ability to adaptively adjust the features, and thereby maximize how accurately the features can be classified. Under the guidance of the guide signal, an actuator network A is generated from the original feature z and the guide signal g; through the actuator network A, a feature adjustment quantity δ is acquired to adjust the original feature and obtain the adjusted feature ẑ, namely:

ẑ = z + δ, with δ = A(z, g)
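The adjustment step ẑ = z + A(z, g) can be sketched as follows; the actuator here is a fixed linear map over the concatenated feature and guide signal, purely for illustration (the patent's A is a trained network, and the weight values below are invented):

```python
def actuator_adjust(z, guide, weights):
    """z_hat = z + delta, delta = A(z, g): A is modeled as a linear map
    acting on [z; g]. `weights` has one row per output dimension of delta."""
    inp = list(z) + list(guide)
    delta = [sum(w_row[i] * inp[i] for i in range(len(inp)))
             for w_row in weights]
    return [a + d for a, d in zip(z, delta)], delta

z = [1.0, 2.0]
g = [0.5]                       # guide signal from the data conversion network
W_A = [[0.1, 0.0, 0.2],         # illustrative actuator weights
       [0.0, 0.1, -0.2]]
z_hat, delta = actuator_adjust(z, g, W_A)
# z_hat ≈ [1.2, 2.1]: the feature is shifted by the guide-dependent delta
```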
in one embodiment, because the target domain is inaccessible, the actuator network needs to be trained on the source domain to obtain the ability to adaptively adjust the features. For this reason, the mean and variance of the gaussian distribution of the original features are obtained for the source domain samples during the training phase; scaling the Gaussian distribution of the original feature based on the mean and variance of the Gaussian distribution of the original feature to obtain a scaled mean and scaled variance; generating a disturbance sample by using the scaled mean value and the scaled variance to obtain disturbance characteristics, namely:
and the disturbance feature z̃ is adjusted to obtain the adjusted feature, namely:

ẑ = z̃ + A(z̃, g)
Specifically, the statistical parameters of the original features at the training batch level (such as the mean and variance of their statistical distribution) are acquired; that is, the statistical parameters of one batch of original feature data are acquired, or the mean of the statistical parameters over several batches is acquired, or several batches of original feature data are fused and the statistical parameters of the fused data are acquired. Feature perturbation is then performed by changing these statistical parameters. This embodiment scales the distribution by adjusting the variance parameter of the Gaussian distribution of the original features and uses the mean vector and standard deviation vector of the scaled statistics to produce the perturbed sample, namely:

z̃ = μ' + σ' ⊙ ε, with ε ~ N(0, I)

wherein μ and σ are the batch-level mean and standard deviation of the distribution of the original feature z, and μ' and σ' denote their scaled versions.
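The batch-level perturbation can be sketched as below. The exact scaling rule is rendered as an image in the original, so the single multiplicative `scale` factor here is a hedged reading:

```python
import math
import random

def perturb_batch(features, scale, rng):
    """Generate disturbance features from batch-level Gaussian statistics:
    z_tilde = mu + scale * sigma * eps, eps ~ N(0, 1) per dimension.
    `scale` widens the distribution relative to the source batch."""
    dim = len(features[0])
    n = len(features)
    mu = [sum(x[k] for x in features) / n for k in range(dim)]
    var = [sum((x[k] - mu[k]) ** 2 for x in features) / n for k in range(dim)]
    sigma = [math.sqrt(v) for v in var]
    return [[mu[k] + scale * sigma[k] * rng.gauss(0, 1) for k in range(dim)]
            for _ in range(n)]

rng = random.Random(1)
batch = [[rng.gauss(0, 1), rng.gauss(5, 2)] for _ in range(64)]
perturbed = perturb_batch(batch, scale=1.5, rng=rng)
```

Training the actuator on such widened samples is what gives it the ability to pull drifted target-domain features back, even though the target domain is never seen.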
The purpose of the adjustment is that the adjusted feature and the undisturbed original feature obtain the same classification result in the basic classification network G, namely:

G(ẑ) = G(z)
step S150: and inputting the adjusted characteristics into a basic classification network to obtain a classification result.
Based on a simulation experiment performed with the actuator network of this embodiment, a comparison of the feature distribution before and after adjustment of the disturbance features is obtained, as shown in fig. 6: fig. 6 (a) shows the feature distribution of the disturbance features before adjustment and fig. 6 (b) shows it after adjustment; the abscissa and ordinate in both panels denote adjusted feature values, and each gray area denotes one classification result. Clearly, the adjusted features are more distinguishable, i.e., more easily and accurately classified.
Step S160: and calculating joint loss based on the classification result and the image class label, and repeating the steps S130-S150 until the joint loss reaches a preset joint loss threshold value to obtain a trained domain generalized image classification model.
Specifically, generating a structural constraint loss function of the adjusted feature based on the structural constraint loss function and the adjusted feature; and constructing a joint loss function based on the structure constraint loss function of the adjusted feature and the classification performance of the adjusted feature.
The loss function used for joint training consists of a cross entropy loss and a structural constraint loss, namely:

L = L_ce + λ · L_s(ẑ)

wherein L denotes the joint loss and L_s(ẑ) denotes the structural constraint loss of the adjusted feature ẑ; the lower this term is relative to that of the original feature, the better the adjusted feature satisfies the structural constraint. λ is the weight of the structural constraint loss of the adjusted feature and can be set flexibly according to actual requirements. L_ce is the cross entropy loss, namely:

L_ce = CE(ŷ, y) = −Σ_{n=1}^{N} y_n log ŷ_n

wherein CE denotes the cross entropy function, ŷ denotes the classification result of the adjusted feature, y denotes the class label, and N denotes the number of image class labels. The cross entropy loss L_ce describes the classification performance of the adjusted feature ẑ: the classification result of the adjusted feature is obtained based on the preset basic classification condition and the adjusted feature, the classification labels of the source domain data are acquired, and the similarity between the classification result and the classification labels is judged to evaluate the classification performance of the adjusted feature ẑ.
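A sketch of the joint loss, with the structural constraint term taken as the squared distance between the projected feature h and the structural constraint feature s (a hedged reading of the formula images):

```python
import math

def softmax(logits):
    m = max(logits)
    e = [math.exp(v - m) for v in logits]
    s = sum(e)
    return [x / s for x in e]

def joint_loss(logits, label, h, s_feat, lam=0.1):
    """L = L_ce + lambda * L_s: cross entropy on the adjusted feature's
    classification plus a structural constraint term, here the squared
    distance between projected feature h and constraint feature s."""
    probs = softmax(logits)
    l_ce = -math.log(max(probs[label], 1e-12))
    l_s = sum((a - b) ** 2 for a, b in zip(h, s_feat))
    return l_ce + lam * l_s, l_ce, l_s

total, l_ce, l_s = joint_loss([2.0, 0.5], 0, [1.0, 0.0], [0.8, 0.1])
```

Minimizing `total` simultaneously rewards correct classification of the adjusted feature and adherence to the structural constraint, which is exactly the trade-off λ controls.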
It should be noted that, during the training phase, the basic feature extraction network F and the basic classification network G are frozen and do not participate in model updating. After the training of the structure constraint network S is completed, S is likewise frozen during the subsequent joint training of the distribution constraint network W, the data conversion network T, and the actuator network A, and does not participate in model updating.
The classification process using the trained domain generalized image classification model, as shown in fig. 7, is specifically as follows:
First, the basic feature extraction network F extracts the original feature z and the class centers {μ_c} of the source domain are calculated; a disturbance is applied to the original feature z to obtain the disturbance feature z̃. Second, the structure constraint network S obtains the structural constraint deviation Δ_s of the disturbance feature z̃ through the structural constraint condition, and the distribution constraint network W obtains the distribution constraint deviation Δ_d of z̃ using the class centers of the source domain data. Then, the data conversion network T converts the perceived structural constraint deviation Δ_s and distribution constraint deviation Δ_d into the guide signal g, and under the guidance of g the actuator network A adjusts the disturbance feature z̃ to obtain the adjusted feature ẑ. Finally, the adjusted feature ẑ is passed into the basic classification network G to obtain the classification result.
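The inference path of fig. 7 can be stitched together end to end as below. Every learned network (F, the perception step, A, G) is replaced by a trivial stub, so the snippet shows only the data flow, not the patent's actual models:

```python
import math

def extract(x):            # stand-in for the basic feature extraction network F
    return [v * 0.5 for v in x]

def perceive(z, centers):  # stand-in for domain offset perception + conversion T
    d = [math.sqrt(sum((a - b) ** 2 for a, b in zip(z, mu))) for mu in centers]
    return [min(d)]        # "guide signal" g derived from the constraint deviations

def actuate(z, g):         # stand-in for the actuator network A: z_hat = z + delta
    return [v - 0.1 * g[0] for v in z]

def classify(z, centers):  # stand-in for the basic classification network G
    d = [sum((a - b) ** 2 for a, b in zip(z, mu)) for mu in centers]
    return d.index(min(d))

centers = [[0.0, 0.0], [2.0, 2.0]]      # source-domain class centers
image = [0.6, 0.4]                      # a "target-domain image"
z = extract(image)                      # original feature
g = perceive(z, centers)                # constraint deviations -> guide signal
z_hat = actuate(z, g)                   # adjusted feature
label = classify(z_hat, centers)
```

The point of the skeleton is the ordering: perception produces g before the actuator runs, and only the adjusted feature ẑ reaches the classifier.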
In summary, the method of this embodiment introduces a domain offset perceiver to sense the constraint deviation caused by the domain offset, scales the statistical parameters of the source domain image features according to their statistical distribution to obtain perturbed source domain features, adjusts and optimizes these scaled, perturbed features according to the constraint deviation to obtain adjusted features, and trains each sub-network to meet the required classification accuracy based on an evaluation of the classification performance of the adjusted features. This improves the accuracy and generalization capability of the image classification model. The whole generalization process has few parameters, consumes little computing resource and time, achieves high classification efficiency, and does not rely on complex data generation and augmentation, so the model maintains good performance in different environments with good robustness and stability.
Exemplary System
As shown in fig. 8, corresponding to the above-mentioned feature adjustment-based domain generalization image classification method, an embodiment of the present invention further provides a feature adjustment-based domain generalization classification system, where the feature adjustment-based domain generalization classification system includes:
The data acquisition module 810 is configured to acquire an image of a target domain and an image class label, and acquire a trained domain-generalized image classification model, where the trained domain-generalized image classification model includes a trained basic feature extraction network, a trained domain offset perception network, a trained executor network, and a trained basic classification network;
the domain generalization classification module 820 is configured to construct a target set by using the image of the target domain, input the target set into the domain generalization image classification model, and classify the image of the target domain to obtain a final classification result.
Specifically, in this embodiment, the specific function of the domain generalization classification system based on feature adjustment may refer to the corresponding description in the domain generalization image classification method based on feature adjustment, which is not described herein again.
Based on the above embodiment, the present invention further provides an intelligent terminal, and a functional block diagram thereof may be shown in fig. 9. The intelligent terminal comprises a processor, a memory, a network interface and a display screen which are connected through a system bus. The processor of the intelligent terminal is used for providing computing and control capabilities. The memory of the intelligent terminal comprises a nonvolatile storage medium and an internal memory. The nonvolatile storage medium stores an operating system and a domain generalization classification program based on feature adjustment. The internal memory provides an environment for an operating system in a non-volatile storage medium and for the execution of domain generalization and classification programs based on feature-based adjustment. The network interface of the intelligent terminal is used for communicating with an external terminal through network connection. The feature adjustment-based domain generalization classification program, when executed by the processor, implements any one of the above feature adjustment-based domain generalization image classification methods. The display screen of the intelligent terminal can be a liquid crystal display screen or an electronic ink display screen.
It will be appreciated by those skilled in the art that the schematic block diagram shown in fig. 9 is merely a block diagram of a portion of the structure associated with the present invention and is not limiting of the smart terminal to which the present invention is applied, and that a particular smart terminal may include more or fewer components than shown, or may combine some of the components, or have a different arrangement of components.
In one embodiment, an intelligent terminal is provided, where the intelligent terminal includes a memory, a processor, and a feature-adjustment-based domain generalization classification program stored in the memory and executable on the processor, where the feature-adjustment-based domain generalization classification program implements any one of the steps of the feature-adjustment-based domain generalization image classification method provided in the embodiment of the present invention when executed by the processor.
The embodiment of the invention also provides a computer readable storage medium, wherein the computer readable storage medium stores a domain generalization classification program based on characteristic adjustment, and the domain generalization classification program based on characteristic adjustment realizes the steps of any domain generalization image classification method based on characteristic adjustment provided by the embodiment of the invention when being executed by a processor.
It should be understood that the sequence number of each step in the above embodiment does not mean the sequence of execution, and the execution sequence of each process should be determined by its function and internal logic, and should not be construed as limiting the implementation process of the embodiment of the present invention.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present invention. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and in part, not described or illustrated in any particular embodiment, reference is made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the examples described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other manners. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units described above is merely a logical function division, and may be implemented in other manners, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed.
The above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that; the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions are not intended to depart from the spirit and scope of the various embodiments of the invention, which are also within the spirit and scope of the invention.

Claims (10)

1. The domain generalization image classification method based on characteristic adjustment is characterized by comprising the following steps of:
acquiring an image and an image class label of a target domain, and acquiring a trained domain generalization image classification model, wherein the trained domain generalization image classification model comprises a trained basic feature extraction network, a trained domain offset perception network, a trained executor network and a trained basic classification network;
and constructing a target set by utilizing the image of the target domain, inputting the target set into the trained domain generalization image classification model, and classifying the image of the target domain to obtain a final classification result.
2. The feature adjustment-based domain generalization image classification method of claim 1, wherein the step of training the trained domain generalization image classification model comprises:
acquiring an image of a source domain, and constructing a domain generalization image classification model, wherein the domain generalization image classification model comprises a basic feature extraction network, a domain offset perception network, an executor network and a basic classification network;
constructing a training set based on the image of the source domain, and inputting the training set into the basic feature extraction network to obtain original features;
inputting the original characteristics into the domain offset sensing network based on preset constraint conditions to obtain constraint deviation;
inputting the original features to the actuator network based on the constraint deviation to obtain adjusted features;
inputting the adjusted characteristics into the basic classification network to obtain classification results;
and calculating joint loss based on the classification result and the image class label, and repeatedly executing the step of training the domain generalized image classification model until the joint loss reaches a preset joint loss threshold value to obtain a trained domain generalized image classification model.
3. The feature adjustment-based domain generalization image classification method of claim 2, further comprising updating the original features with perturbation features, in particular comprising:
based on the statistical distribution of the original features, acquiring the mean and variance of the original features;
scaling the Gaussian distribution of the original feature based on the mean and the variance to obtain a scaled mean and a scaled variance;
generating a disturbance sample by using the scaled mean value and the scaled variance to obtain disturbance characteristics;
and updating the original characteristic by using the disturbance characteristic.
4. The feature adjustment-based domain generalization image classification method according to claim 2 or 3, characterized in that said inputting the original features into the domain offset perception network to obtain constraint deviations comprises:
acquiring classification categories of the images and class centers corresponding to each classification category;
obtaining a structural constraint deviation based on the original characteristics and a preset structural constraint condition;
obtaining distribution constraint deviation based on the original characteristics, the class center and a preset distribution constraint condition;
and obtaining the constraint deviation of the original feature based on the structural constraint deviation and the distribution constraint deviation.
5. The feature adjustment-based domain generalization image classification method according to claim 4, wherein the obtaining a structural constraint deviation based on the original feature and a preset structural constraint condition comprises:
projecting the original features to a space with the same dimension as the original features to obtain projected features;
based on the classification category of the image, carrying out normalization processing on the projected features to obtain structural constraint features;
and calculating the distance between the projected feature and the structural constraint feature to obtain structural constraint deviation.
6. The feature adjustment-based domain generalization image classification method according to claim 4, wherein said obtaining a distribution constraint deviation based on the original feature, the class center and a preset distribution constraint condition comprises:
based on the preset distribution constraint conditions, obtaining the correlation between the original features and each class center;
based on the correlation, obtaining weights of the original features and the centers of the classes;
and obtaining distribution constraint deviation based on the distance between the original feature and each class center and the weight.
7. The feature adjustment-based domain generalization image classification method of claim 5, wherein the computing a joint loss based on the classification result and the image class label comprises:
calculating cross entropy loss based on the classification result, the image class label and the number of the image class labels;
based on the classification category of the image, carrying out normalization processing on the projected features to obtain a structural constraint condition; constructing a structure constraint loss function based on the structure constraint condition and the projected features;
based on the cross entropy loss and the structure constraint loss function, a joint loss is calculated.
8. A feature adjustment-based domain generalization classification system, the system comprising:
the data acquisition module is used for acquiring an image of a target domain and an image category label, and acquiring a trained domain generalization image classification model, wherein the trained domain generalization image classification model comprises a trained basic feature extraction network, a trained domain offset perception network, a trained executor network and a trained basic classification network;
the domain generalization classification module is used for constructing a target set by utilizing the image of the target domain, inputting the target set into the domain generalization image classification model, and classifying the image of the target domain to obtain a final classification result.
9. An intelligent terminal, characterized in that it comprises a memory, a processor, and a feature adjustment-based domain generalization classification program stored in the memory and executable on the processor, wherein the feature adjustment-based domain generalization classification program, when executed by the processor, implements the steps of the feature adjustment-based domain generalization image classification method according to any one of claims 1-7.
10. A computer readable storage medium, characterized in that a feature adjustment-based domain generalization classification program is stored on the computer readable storage medium, and the feature adjustment-based domain generalization classification program, when executed by a processor, implements the steps of the feature adjustment-based domain generalization image classification method according to any one of claims 1-7.
CN202311371704.5A 2023-10-23 2023-10-23 Domain generalization image classification method, system, terminal and medium based on feature adjustment Active CN117115567B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311371704.5A CN117115567B (en) 2023-10-23 2023-10-23 Domain generalization image classification method, system, terminal and medium based on feature adjustment

Publications (2)

Publication Number Publication Date
CN117115567A true CN117115567A (en) 2023-11-24
CN117115567B CN117115567B (en) 2024-03-26

Family

ID=88811362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311371704.5A Active CN117115567B (en) 2023-10-23 2023-10-23 Domain generalization image classification method, system, terminal and medium based on feature adjustment

Country Status (1)

Country Link
CN (1) CN117115567B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118096769A (en) * 2024-04-29 2024-05-28 中国科学院宁波材料技术与工程研究所 Retina OCT image analysis method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023030281A1 (en) * 2021-09-03 2023-03-09 北京字节跳动网络技术有限公司 Training method and apparatus, image processing method, electronic device and storage medium
US20230104127A1 (en) * 2021-10-04 2023-04-06 Samsung Electronics Co., Ltd. Systems, methods, and apparatus for image classification with domain invariant regularization
CN116342938A (en) * 2023-03-10 2023-06-27 西安理工大学 Domain generalization image classification method based on mixture of multiple potential domains
CN116452862A (en) * 2023-03-30 2023-07-18 华南理工大学 Image classification method based on domain generalization learning

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
YI ZHANG ET AL.: "Cross-Modal Concept Learning and Inference for Vision-Language Models", 《ARXIV:2307.15460V1》, pages 1 - 13 *
ZHEHAN KAN ET AL.: "Self-Correctable and Adaptable Inference for Generalizable Human Pose Estimation", 《ARXIV:2303.11180V2》, pages 1 - 10 *

Also Published As

Publication number Publication date
CN117115567B (en) 2024-03-26

Similar Documents

Publication Publication Date Title
CN111523621B (en) Image recognition method and device, computer equipment and storage medium
CN112446423B (en) Fast hybrid high-order attention domain confrontation network method based on transfer learning
WO2018121690A1 (en) Object attribute detection method and device, neural network training method and device, and regional detection method and device
CN111444951B (en) Sample recognition model generation method, device, computer equipment and storage medium
KR20220107120A (en) Method and apparatus of training anti-spoofing model, method and apparatus of performing anti-spoofing using anti-spoofing model, electronic device, storage medium, and computer program
CN114332578A (en) Image anomaly detection model training method, image anomaly detection method and device
KR19990082557A (en) Method and apparatus for training neural networks for detecting and classifying objects using uncertain training data
CN110827265B (en) Image anomaly detection method based on deep learning
CN117115567B (en) Domain generalization image classification method, system, terminal and medium based on feature adjustment
CN114503131A (en) Search device, search method, search program, and learning model search system
US20220222578A1 (en) Method of training local model of federated learning framework by implementing classification of training data
CN113902944A (en) Model training and scene recognition method, device, equipment and medium
CN113870254A (en) Target object detection method and device, electronic equipment and storage medium
CN114972871A (en) Image registration-based few-sample image anomaly detection method and system
CN114708645A (en) Object identification device and object identification method
CN116630816B (en) SAR target recognition method, device, equipment and medium based on prototype comparison learning
Zhang et al. BiLSTM-TANet: an adaptive diverse scenes model with context embeddings for few-shot learning
CN111461177A (en) Image identification method and device
CN115795355A (en) Classification model training method, device and equipment
CN111553249B (en) H-B grading-based accurate facial paralysis degree evaluation method and device under CV
CN114970732A (en) Posterior calibration method and device for classification model, computer equipment and medium
CN114818945A (en) Small sample image classification method and device integrating category adaptive metric learning
Trentin et al. Unsupervised nonparametric density estimation: A neural network approach
Soltani et al. Affine Takagi-Sugeno fuzzy model identification based on a novel fuzzy c-regression model clustering and particle swarm optimization
CN112750067A (en) Image processing system and training method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant