
CN112100466A - Method, device and equipment for generating search space and storage medium - Google Patents

Method, device and equipment for generating search space and storage medium

Info

Publication number
CN112100466A
CN112100466A
Authority
CN
China
Prior art keywords
search space, network, layer, options, hyper
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011020815.8A
Other languages
Chinese (zh)
Inventor
希滕
张刚
温圣召
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202011020815.8A priority Critical patent/CN112100466A/en
Publication of CN112100466A publication Critical patent/CN112100466A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/90 Details of database functions independent of the retrieved data types
    • G06F16/907 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/909 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually, using geographical or spatial information, e.g. location
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a method, an apparatus, a device, and a storage medium for generating a search space, belonging to fields of artificial intelligence such as deep learning and computer vision within computer technology. The specific implementation scheme is as follows: an initial search space comprises the search spaces of all layers of a target model, and the search space of each layer comprises the options of all network structure units, so that the search space is greatly expanded. All the options in the search space of each layer are superposed with the same connection weight to form an initial super network, and the initial super network is trained to update its model parameters and the connection weights corresponding to the options in each layer. An optimal search space is then determined according to the connection weights corresponding to the options in each layer of the trained super network, and an optimal target model is obtained by searching based on the optimal search space. This improves the performance of the obtained target model, so that the target model has higher precision and a higher data processing speed when applied to image processing, natural language processing, audio/video processing, and the like.

Description

Method, device and equipment for generating search space and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to deep learning and computer vision, and more particularly, to a method, an apparatus, a device, and a storage medium for generating a search space.
Background
In recent years, deep learning techniques have been successful in many directions. In deep learning, the quality of the artificial neural network structure has a very important influence on the effect of the final model. Manually designing a network structure requires very rich experience and many attempts, and the many parameters produce a combinatorial explosion that makes conventional random search hardly feasible, so neural architecture search (NAS) has become a research hotspot.
In NAS, the search space is very important. The search space in existing NAS is designed manually and gives only a small number of possible model structures, numbers of channels, expansion coefficients, and the like, so it has great limitations: the optimal model structure can only be searched for among the small number of possible model structures in the limited search space, and the model structure finally found has poor performance, with low precision and low efficiency when used for data processing such as image processing, natural language processing, and audio/video processing.
Disclosure of Invention
The application provides a method, a device, equipment and a storage medium for generating a search space.
According to an aspect of the present application, there is provided a method of generating a search space, including:
acquiring an initial search space, wherein the initial search space comprises search spaces of all layers of a target model, and the search space of each layer comprises options of all network structure units;
superposing all the options in the search space of each layer by the same connection weight to form an initial hyper-network;
training the initial hyper-network, and updating model parameters of the initial hyper-network and connection weights corresponding to the options in each layer to obtain a trained hyper-network;
and determining an optimal search space according to the connection weight corresponding to the option in each layer of the trained hyper-network, wherein the optimal search space is used for searching to obtain an optimal target model, and the target model is used for executing a data processing task.
According to another aspect of the present application, there is provided an apparatus for generating a search space, including:
the initial search space module is used for acquiring an initial search space, wherein the initial search space comprises search spaces of all layers of the target model, and the search space of each layer comprises options of all network structure units;
a search module to: superposing all the options in the search space of each layer by the same connection weight to form an initial hyper-network; training the initial hyper-network, and updating model parameters of the initial hyper-network and connection weights corresponding to the options in each layer to obtain a trained hyper-network;
and the optimal search space determining module is used for determining an optimal search space according to the connection weight corresponding to the option in each layer of the trained hyper-network, wherein the optimal search space is used for searching to obtain an optimal target model, and the target model is used for executing a data processing task.
According to another aspect of the present application, there is provided an electronic device including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to another aspect of the present application, there is provided a non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method described above.
According to the technology of the application, an optimal search space can be obtained by searching, and the precision and efficiency of the target model obtained by neural architecture search based on the optimal search space can be improved. When the AI model of this technical scheme is used to perform deep perceptual learning on picture content, coding parameters can be intelligently adjusted according to the picture scene and complexity, so that the bandwidth cost requirement is reduced at the same picture quality and the platform's bandwidth cost is greatly reduced while the picture quality is improved; meanwhile, combined with technologies such as picture quality restoration, picture quality enhancement, ROI, and super-resolution, the subjective visual experience is greatly improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present application, nor do they limit the scope of the present application. Other features of the present application will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not intended to limit the present application. Wherein:
FIG. 1 is a flowchart of a method for generating a search space according to a first embodiment of the present application;
FIG. 2 is a flowchart of a method for generating a search space according to a second embodiment of the present application;
FIG. 3 is a diagram of an apparatus for generating a search space according to a third embodiment of the present application;
FIG. 4 is a diagram of an apparatus for generating a search space according to a fourth embodiment of the present application;
FIG. 5 is a block diagram of an electronic device for implementing a method of generating a search space according to an embodiment of the present application.
Detailed Description
The following description of exemplary embodiments of the present application, taken in conjunction with the accompanying drawings, includes various details of the embodiments to aid understanding, and these details are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present application. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In recent years, deep learning techniques have been successful in many directions. In deep learning, the quality of the artificial neural network structure has a very important influence on the effect of the final model. Manually designing a network structure requires very rich experience and many attempts, and the many parameters produce a combinatorial explosion that makes conventional random search hardly feasible, so in recent years neural architecture search (NAS) has become a research hotspot.
In NAS, the search space is very important. The search space in existing NAS work is designed manually and has great limitations, and the optimal model structure can only be searched for within this limited search space. Taking Google's MnasNet as an example, the search is limited to the number of channels, expansion coefficients, and the like, and the resulting model is still a MobileNet-v2-like structure. The search space determines the upper bound of the model structures that can be searched: even if the optimal model structure within the space is found, its performance will be poor if the search space itself is inappropriate.
The application provides a method, an apparatus, a device, and a storage medium for generating a search space, applied to artificial intelligence fields such as deep learning and computer vision within computer technology, to optimize the search space and obtain an optimal search space. Based on the optimal search space, an optimal model can be obtained through NAS, so that the precision and efficiency of the model in data processing can be improved.
The method provided by the embodiment of the application can be used for automatically generating the optimal search space, then searching based on the optimal search space to obtain the optimal model, and improving the precision and efficiency of the model for processing data such as image processing, natural language processing, audio/video processing and the like.
The following describes in detail the technical solutions of the embodiments of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for generating a search space according to a first embodiment of the present application. The method of this embodiment may be executed by an apparatus for generating a search space. The apparatus may specifically be a client with a certain computing power, such as a desktop computer, a tablet computer, or a notebook computer, or a server or server cluster (hereinafter collectively referred to as the "electronic device"); alternatively, the apparatus may be a chip in the electronic device, which is not limited in this embodiment.
As shown in fig. 1, the method comprises the following specific steps:
step S101, obtaining an initial search space, wherein the initial search space comprises search spaces of all layers of a target model, and the search space of each layer comprises options of all network structure units.
The target model is a neural network model which is searched by the user through the NAS and is suitable for data processing of a corresponding scene. When applied to different application scenarios, the target model may be a different neural network model.
The neural network model is formed by combining a plurality of layers of structures, the network structure units are constituent units forming each layer of structure of the neural network model, and each layer of structure is formed by combining one or more network structure units according to a certain topology. The neural network model may comprise a plurality of structurally identical or different network building blocks.
The network structure unit may be a basic unit for constructing a neural network model, and specifically may be a single network layer, such as a single convolutional layer or a fully connected layer; or may be a structural unit formed by combining a plurality of network layers, such as a block structure (block) formed by combining a convolutional layer, a Batch Normalization layer (Batch Normalization), and a nonlinear layer (e.g., Relu).
For example, the network structure unit may be a residual block in a residual network (ResNet), or the repeating unit conv + BN + ReLU (convolutional layer + normalization layer + activation layer) within a residual block; or it may be a stage in the residual network ResNet; it may also be a structural unit formed from a custom combination of layers.
In this embodiment, the initial search space is a search space, or a collection of search spaces, that includes the search space of each layer of the neural network model. The search space of each layer contains the options of all network structure units, where each option can be one possible structure of a network structure unit and can be any manually designed network structure unit from historical data. The search space of each layer thus contains any combination of all existing manually designed network structure units, and the inside of each network structure unit may itself be a combination of existing manually designed structures. The initial search space contains any combination across all layers.
Therefore, the initial per-layer search space is many times larger than a traditional search space, the number of candidate structures grows exponentially with the number of layers, and the initial search space is greatly expanded.
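To make this concrete, the following is a minimal sketch (not from the patent; the PyTorch dependency and all names are illustrative assumptions) of how a per-layer search space of candidate network structure units might be represented:

```python
# Hypothetical sketch of an initial search space: one list of candidate
# network-structure-unit constructors per layer of the target model.
import torch.nn as nn

def conv_bn_relu(channels, kernel_size):
    # A network structure unit combining several network layers:
    # convolution + batch normalization + nonlinearity.
    return nn.Sequential(
        nn.Conv2d(channels, channels, kernel_size, padding=kernel_size // 2),
        nn.BatchNorm2d(channels),
        nn.ReLU(),
    )

# The search space of one layer: all candidate options for that layer.
def layer_options(channels):
    return [
        conv_bn_relu(channels, 1),
        conv_bn_relu(channels, 3),
        conv_bn_relu(channels, 5),
        nn.Identity(),  # passing the input through unchanged is also an option
    ]

# The initial search space collects the search spaces of all layers.
NUM_LAYERS, CHANNELS = 50, 64
initial_search_space = [layer_options(CHANNELS) for _ in range(NUM_LAYERS)]
```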
Step S102, superposing all the options in the search space of each layer with the same connection weight to form an initial hyper-network.
In this embodiment, before performing NAS, a search over search spaces is performed based on the initial search space to generate an optimal search space; performing NAS based on the optimal search space then yields an optimal model structure.
In this step, each layer of the initial super network includes all the options of the network structure units, and all the options are superposed with the same connection weight to form one layer of the initial super network. Each layer of the initial super network is constructed in the same way; that is, all the options of the network structure units are superposed with equal probability to form a layer, and multiple such layers are stacked to form the initial super network.
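As one possible reading of this step, the sketch below superposes a layer's candidate options under softmax-normalized learnable connection weights, initialized equal so that the superposition starts with equal probability (a DARTS-style construction; the patent does not prescribe this exact form, so treat it as an assumption). It reuses `initial_search_space` from the previous sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperLayer(nn.Module):
    """One layer of the initial super network: all candidate options of the
    layer superposed with the same initial connection weight."""
    def __init__(self, options):
        super().__init__()
        self.ops = nn.ModuleList(options)
        # Zero-initialized logits -> softmax yields equal connection weights.
        self.alpha = nn.Parameter(torch.zeros(len(options)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)  # normalized connection weights
        return sum(w * op(x) for w, op in zip(weights, self.ops))

class SuperNet(nn.Module):
    """The initial super network: one SuperLayer per target-model layer."""
    def __init__(self, per_layer_options):
        super().__init__()
        self.layers = nn.ModuleList(SuperLayer(o) for o in per_layer_options)

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

supernet = SuperNet(initial_search_space)
```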
Step S103, training the initial hyper-network, and updating the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer to obtain the trained hyper-network.
After the initial hyper-network is obtained, it is trained using training data corresponding to the specific application scenario of the target model, and the model parameters and OP (operation) connection parameters of the initial hyper-network are updated during training to obtain the trained hyper-network. The OP connection parameters comprise the connection weights corresponding to the options in each layer.
Step S104, determining an optimal search space according to the connection weights corresponding to the options in each layer of the trained hyper-network, wherein the optimal search space is used for searching to obtain an optimal target model, and the target model is used for executing a data processing task.
After the trained hyper-network is obtained, the options in the search space of each layer are screened according to the connection weight corresponding to each option in each layer of the trained hyper-network, and the options with larger connection weights are retained to obtain the optimal search space.
After the optimal search space is obtained, based on the optimal search space, an optimal target model can be obtained through NAS search, and then a data processing task of a corresponding application scene is executed based on the target model.
In this embodiment, an initial search space is obtained, where the initial search space includes the search spaces of each layer of a target model and the search space of each layer includes the options of all network structure units, so that the search space can be greatly expanded. All the options in the search space of each layer are then superposed with the same connection weight to form an initial hyper-network; the initial hyper-network is trained, and the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer are updated to obtain the trained hyper-network. An optimal search space is determined according to the connection weights corresponding to the options in each layer of the trained hyper-network, and an optimal target model is obtained by searching based on the optimal search space. This improves the performance of the obtained target model, so that the target model has higher precision and a higher data processing speed when applied to image processing, natural language processing, audio/video processing, and the like. In an image/video processing scene, applying the AI model of this technical scheme to perform deep perceptual learning on image content allows coding parameters to be intelligently adjusted according to the image scene and complexity: the bandwidth cost requirement can be reduced at the same image quality, and the platform's bandwidth cost is greatly reduced while the image quality is improved; meanwhile, combined with technologies such as image quality restoration, image quality enhancement, ROI, and super-resolution, the subjective visual experience is greatly improved. Further, since the core competitiveness of a trained target model currently lies in its precision and its data processing speed on hardware, the target model achieves higher precision and efficiency on the same hardware; on the premise of ensuring the same precision and efficiency, cheaper hardware can be adopted, thereby reducing the hardware cost.
Fig. 2 is a flowchart of a method for generating a search space according to a second embodiment of the present application. On the basis of the first embodiment, in this embodiment, training the initial hyper-network, and updating the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer to obtain the trained hyper-network includes: and carrying out repeated iterative training on the initial hyper-network, updating the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer in each iterative process, and obtaining the trained hyper-network until the iteration stop condition is met.
As shown in fig. 2, the method comprises the following steps:
step S201, obtaining an initial search space, where the initial search space includes search spaces of each layer of the target model, and the search space of each layer includes options of all network structure units.
The target model is a neural network model which is searched by the user through the NAS and is suitable for data processing of a corresponding scene. When applied to different application scenarios, the target model may be a different neural network model.
The neural network model is formed by combining a plurality of layers of structures, the network structure units are constituent units forming each layer of structure of the neural network model, and each layer of structure is formed by combining one or more network structure units according to a certain topology. The neural network model may comprise a plurality of structurally identical or different network building blocks.
The network structure unit may be a basic unit for constructing a neural network model, and specifically may be a single network layer, such as a single convolutional layer or a fully connected layer; or may be a structural unit formed by combining a plurality of network layers, such as a block structure (block) formed by combining a convolutional layer, a Batch Normalization layer (Batch Normalization), and a nonlinear layer (e.g., Relu).
For example, the network structure unit may be a residual block in a residual network (ResNet), or the repeating unit conv + BN + ReLU (convolutional layer + normalization layer + activation layer) within a residual block; or it may be a stage in the residual network ResNet; it may also be a structural unit formed from a custom combination of layers.
In this embodiment, the initial search space is a search space, or a collection of search spaces, that includes the search space of each layer of the neural network model. The search space of each layer contains the options of all network structure units, where each option can be one possible structure of a network structure unit and can be any manually designed network structure unit from historical data. The search space of each layer thus contains any combination of all existing manually designed network structure units, and the inside of each network structure unit may itself be a combination of existing manually designed structures. The initial search space contains any combination across all layers.
Therefore, the initial per-layer search space is many times larger than a traditional search space, the number of candidate structures grows exponentially with the number of layers, and the initial search space is greatly expanded.
Alternatively, any combination of layers in the initial search space may be represented in the form of weight superposition.
Step S202, superposing all the options in the search space of each layer with the same connection weight to form an initial hyper-network.
In this embodiment, before performing NAS, a search over search spaces is performed based on the initial search space to generate an optimal search space; performing NAS based on the optimal search space then yields an optimal model structure.
In this step, each layer of the initial super network includes all the options of the network structure units, and all the options are superposed with the same connection weight to form one layer of the initial super network. Each layer of the initial super network is constructed in the same way; that is, all the options of the network structure units are superposed with equal probability to form a layer, and multiple such layers are stacked to form the initial super network.
Step S203, performing iterative training on the initial hyper-network, and updating the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer.
After the initial hyper-network is obtained, iterative training is performed on it multiple times using training data corresponding to the specific application scenario of the target model; the model parameters and OP (operation) connection parameters of the initial hyper-network are updated in each round of iterative training, and the iteration stops when the iteration stop condition is met, yielding the trained hyper-network. The OP connection parameters comprise the connection weights corresponding to the options in each layer.
Further, in each iteration, the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer are updated alternately. Specifically, during training, the model parameters of the super network are fixed and the connection weights of the super network are trained using a first sample set; then the connection weights of the super network are fixed and the model parameters of the super network are trained using a second sample set. In this way, the connection weight corresponding to each network structure unit option in each layer of the super network can be learned.
Optionally, the alternation ratio between updates of the super network's model parameters and updates of the connection weights may be 1:1 or some other ratio; the ratio may be configured according to the actual application scenario or determined by searching in a preset search space, which is not specifically limited in this embodiment.
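A minimal sketch of this alternating update (assuming the `SuperNet` sketch above, a 1:1 alternation ratio, and standard PyTorch optimizers; the learning rates and loss are illustrative, not from the patent):

```python
from itertools import cycle
import torch

def train_supernet(supernet, first_set, second_set, loss_fn, steps):
    # Split the OP connection parameters (alphas) from the model parameters.
    alphas = [p for n, p in supernet.named_parameters() if n.endswith("alpha")]
    params = [p for n, p in supernet.named_parameters() if not n.endswith("alpha")]
    opt_alpha = torch.optim.Adam(alphas, lr=3e-4)
    opt_param = torch.optim.SGD(params, lr=0.01, momentum=0.9)

    batches_a, batches_b = cycle(first_set), cycle(second_set)
    for _ in range(steps):
        # Fix the model parameters; update the connection weights on the
        # first sample set (only opt_alpha steps, so only alphas change).
        x, y = next(batches_a)
        supernet.zero_grad()
        loss_fn(supernet(x), y).backward()
        opt_alpha.step()

        # Fix the connection weights; update the model parameters on the
        # second sample set (only opt_param steps).
        x, y = next(batches_b)
        supernet.zero_grad()
        loss_fn(supernet(x), y).backward()
        opt_param.step()
```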
Step S204, judging whether the iteration stop condition is met.
After each round of iterative training, it is judged whether the iteration stop condition is currently met.
If the iteration stop condition is met, the loop iteration training is stopped to obtain the trained hyper-network, and step S205 is executed.
If the iteration stop condition is not satisfied, step S203 is executed to perform the next iteration training.
Alternatively, the iteration stop condition may be: the loss function of the current super-network satisfies the convergence condition. The convergence condition that the loss function satisfies may be configured according to an actual application scenario, and this embodiment is not specifically limited here.
Optionally, the number of times of performing the iterative training is recorded, and the iteration stop condition may be: the number of times of execution of the iterative training is greater than or equal to a preset threshold. The preset threshold may be configured according to an actual application scenario, and this embodiment is not specifically limited here.
Alternatively, the iteration stop condition may be: the total time length of the iterative training reaches a preset search time threshold value. The preset search time threshold may be configured according to an actual application scenario, and this embodiment is not specifically limited here.
In addition, the iteration stop condition may also be configured and adjusted according to a specific application scenario, and this embodiment is not specifically limited herein.
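The three stop conditions above might be combined as follows (a sketch; the tolerance, step budget, and time budget are configurable assumptions):

```python
import time

def should_stop(loss_history, step, start_time,
                tol=1e-4, max_steps=10_000, max_seconds=3_600):
    # Condition 1: the loss function has converged (recent change below tolerance).
    converged = (len(loss_history) >= 2
                 and abs(loss_history[-1] - loss_history[-2]) < tol)
    # Condition 2: the iteration count reached the preset threshold.
    # Condition 3: the total training time reached the preset search time.
    return (converged
            or step >= max_steps
            or time.time() - start_time >= max_seconds)
```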
In another optional implementation manner of this embodiment, the search of the optimal search space may be performed layer by layer.
Step S205, determining an optimal search space according to the connection weights corresponding to the options in each layer of the trained hyper-network.
After the trained hyper-network is obtained, the options in the search space of each layer are screened according to the connection weight corresponding to each option in each layer of the trained hyper-network, and the options with larger connection weights are retained to obtain the optimal search space.
Illustratively, the optimal search space includes a per-layer optimized search space.
Optionally, for each layer of the trained hyper-network, the options are sorted in descending order of the connection weights corresponding to the options in that layer, and the optimized search space of the layer is determined from the first k options in the sorting; doing this for every layer yields the optimal search space. Here k is a positive integer, and the value of k may be configured according to the actual application scenario, which is not specifically limited in this embodiment.
Specifically, in each layer, the k network structure unit options with the largest connection weights, taken in ranked order, constitute the optimized search space of that layer.
Optionally, for each layer of the trained hyper-network, an option with a corresponding connection weight greater than a preset weight threshold may be screened out according to the preset weight threshold of the layer to form an optimized search space of the layer, so as to obtain the optimized search space of each layer, and obtain an optimal search space. The preset weight thresholds of different layers may be different, and the preset weight thresholds of each layer may be configured according to an actual application scenario, which is not specifically limited in this embodiment.
For example, assume that each layer's search space in the initial search space contains 10000 possible options, and each layer in the initial super network is formed by superposing these 10000 options with equal probability, e.g., a connection weight of 0.0001 per option; the trained super network is then obtained through multiple rounds of iterative training. In each layer of the trained super network, the 50 options with the largest connection weights are determined as the optimized search space of that layer. If the target model comprises 50 layers, the optimal search space thus contains 50^50 possible combinations of network structure unit options.
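A sketch of this screening step (assuming the `SuperNet` above; keeping the top-k options per layer, with the per-layer threshold variant shown as an alternative):

```python
import torch

def screen_search_space(supernet, per_layer_options, k=50, threshold=None):
    """Build the optimal search space from the trained super network by
    keeping, per layer, the options with the largest connection weights."""
    optimal_space = []
    for layer, options in zip(supernet.layers, per_layer_options):
        weights = torch.softmax(layer.alpha.detach(), dim=0)
        if threshold is None:
            kept = torch.topk(weights, k=min(k, len(options))).indices.tolist()
        else:
            # Alternative: keep options whose weight exceeds a preset
            # per-layer threshold instead of a fixed top-k.
            kept = (weights > threshold).nonzero(as_tuple=True)[0].tolist()
        optimal_space.append([options[i] for i in kept])
    return optimal_space
```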
Step S206, performing a neural network model search in the optimal search space to obtain the target model.
After the optimal search space is determined, the NAS is carried out based on the optimal search space, and an optimal model is obtained through searching and is used as a target model.
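The patent does not fix a particular NAS algorithm for this step; as a stand-in, the sketch below runs a simple random search over the optimal search space (the `build_model` and `evaluate` helpers are assumed to exist):

```python
import random

def nas_random_search(optimal_space, build_model, evaluate, trials=100):
    """Sample one option per layer from the optimal search space, build and
    evaluate each candidate, and keep the best-scoring model."""
    best_model, best_score = None, float("-inf")
    for _ in range(trials):
        choice = [random.choice(options) for options in optimal_space]
        candidate = build_model(choice)   # assemble the sampled structure
        score = evaluate(candidate)       # e.g., validation accuracy
        if score > best_score:
            best_model, best_score = candidate, score
    return best_model
```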
Step S207, acquiring data to be processed, and performing corresponding data processing on the data to be processed by using the target model.
In this embodiment, the target model obtained by the search may be applied to scenes such as image processing, natural language processing, audio/video processing, and the like, and execute a corresponding data processing task.
Illustratively, when the target model is applied to an image processing scene, an image to be processed is acquired, the image to be processed is input into the target model, and the image to be processed is processed by using the target model to obtain an image processing result.
For example, the target model may be specifically applied to face recognition, and in the process of generating the search space, the training set for training the face recognition model is used to train the initial model structure and record the performance, so as to search the optimal search space applied to the face recognition scene, and further obtain the optimal target model applied to the face recognition scene. When the method is applied, a face image to be recognized is obtained, the face features of the face image are extracted, the face features are input into a target model, face recognition is carried out through the target model, and a face recognition result is output.
In addition, the target model can also be applied to the field of natural language processing. For example, it can be applied to a speech recognition scenario: in the process of generating the search space, the training set for training the speech recognition model is used to train the initial model structure and record its performance, so as to search for the optimal search space for the speech recognition scenario and further obtain the optimal target model for that scenario. In application, speech data to be processed is acquired, features of the speech data are extracted and input into the target model, speech recognition is performed by the target model, and the speech recognition result is output.
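For either application scenario, inference with the searched target model follows the usual pattern (a hypothetical sketch; feature extraction is assumed to have already produced a tensor):

```python
import torch

def run_recognition(target_model, features):
    """Run the searched target model on preprocessed input features and
    return the predicted class index (face identity, transcript token, ...)."""
    target_model.eval()
    with torch.no_grad():
        logits = target_model(features)
    return logits.argmax(dim=-1)
```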
In this embodiment, an initial search space is obtained, where the initial search space includes the search spaces of each layer of the target model and the search space of each layer includes the options of all network structure units, so that the search space can be greatly expanded. The initial hyper-network is formed by superposing all the options in the search space of each layer with the same connection weight, and then undergoes multiple rounds of iterative training, with the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer updated in each iteration until the iteration stop condition is met, yielding the trained hyper-network. The optimal search space comprises each layer's optimized search space: for each layer of the trained hyper-network, the options are ranked in descending order of their connection weights in that layer, and the optimized search space of the layer is determined from the first k options in the ranking, thereby obtaining the optimal search space. In an image/video processing scene, applying the AI model of this technical scheme to perform deep perceptual learning on image content allows coding parameters to be intelligently adjusted according to the image scene and complexity: the bandwidth cost requirement can be reduced at the same image quality, and the platform's bandwidth cost is greatly reduced while the image quality is improved; meanwhile, combined with technologies such as image quality restoration, image quality enhancement, ROI, and super-resolution, the subjective visual experience is greatly improved. Further, since the core competitiveness of a trained target model currently lies in its precision and its data processing speed on hardware, the target model achieves higher precision and efficiency on the same hardware; on the premise of ensuring the same precision and efficiency, cheaper hardware can be adopted, thereby reducing the hardware cost.
Fig. 3 is a schematic diagram of an apparatus for generating a search space according to a third embodiment of the present application. The device for generating the search space provided by the embodiment of the application can execute the processing flow provided by the method for generating the search space. As shown in fig. 3, the apparatus 30 for generating a search space includes: an initial search space module 301, a search module 302, and an optimal search space determination module 303.
Specifically, the initial search space module 301 is configured to obtain an initial search space, where the initial search space includes search spaces of each layer of the target model, and the search space of each layer includes options of all network structure units.
The search module 302 is configured to: superposing all options in the search space of each layer by the same connection weight to form an initial hyper-network; and training the initial hyper-network, and updating the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer to obtain the trained hyper-network.
The optimal search space determining module 303 is configured to determine an optimal search space according to the connection weight corresponding to the option in each layer of the trained hyper-network, where the optimal search space is used to search to obtain an optimal target model, and the target model is used to execute a data processing task.
The apparatus provided in this embodiment of the present application may be specifically configured to execute the method embodiment provided in the first embodiment, and specific functions are not described herein again.
In this embodiment, an initial search space is obtained, where the initial search space includes the search spaces of each layer of a target model and the search space of each layer includes the options of all network structure units, so that the search space can be greatly expanded. All the options in the search space of each layer are then superposed with the same connection weight to form an initial hyper-network; the initial hyper-network is trained, and the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer are updated to obtain the trained hyper-network. An optimal search space is determined according to the connection weights corresponding to the options in each layer of the trained hyper-network, and an optimal target model is obtained by searching based on the optimal search space. This improves the performance of the obtained target model, so that the target model has higher precision and a higher data processing speed when applied to image processing, natural language processing, audio/video processing, and the like. In an image/video processing scene, applying the AI model of this technical scheme to perform deep perceptual learning on image content allows coding parameters to be intelligently adjusted according to the image scene and complexity: the bandwidth cost requirement can be reduced at the same image quality, and the platform's bandwidth cost is greatly reduced while the image quality is improved; meanwhile, combined with technologies such as image quality restoration, image quality enhancement, ROI, and super-resolution, the subjective visual experience is greatly improved. Further, since the core competitiveness of a trained target model currently lies in its precision and its data processing speed on hardware, the target model achieves higher precision and efficiency on the same hardware; on the premise of ensuring the same precision and efficiency, cheaper hardware can be adopted, thereby reducing the hardware cost.
Fig. 4 is a schematic diagram of an apparatus for generating a search space according to a fourth embodiment of the present application. On the basis of the third embodiment, in this embodiment, the optimal search space determining module is further configured to:
the optimal search space comprises each layer of optimized search space, all options in each layer of the trained hyper-network are sorted according to the sequence of the connection weights corresponding to all options in the layer from large to small, and the optimized search space of the layer is determined according to the first k options in the sorting.
In an optional embodiment, the search module is further configured to:
and carrying out repeated iterative training on the initial hyper-network, updating the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer in each iterative process, and obtaining the trained hyper-network until the iteration stop condition is met.
In an alternative embodiment, the iteration stop condition comprises:
the number of times of iterative training is greater than or equal to a preset threshold value; or, the performance of the current super-network satisfies the convergence condition.
In an alternative embodiment, the optimal search space determining module is further configured to:
and in each iteration process, alternately updating the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer.
In an alternative embodiment, as shown in fig. 4, the apparatus 30 for generating a search space further includes:
a neural network model search module 304 to: and searching the neural network model in the optimal search space to obtain a target model.
In an alternative embodiment, as shown in fig. 4, the apparatus 30 for generating a search space further includes:
a data processing module 305 for: and acquiring data to be processed, and performing corresponding data processing on the data to be processed by using the target model.
In an alternative embodiment, the data processing module 305 is further configured to:
and acquiring an image to be processed, inputting the image to be processed into a target model, and performing image processing on the image to be processed by using the target model to obtain an image processing result.
The apparatus provided in the embodiment of the present application may be specifically configured to execute the method embodiment provided in the second embodiment, and specific functions are not described herein again.
In this embodiment, an initial search space is obtained, where the initial search space includes the search spaces of each layer of the target model and the search space of each layer includes the options of all network structure units, so that the search space can be greatly expanded. The initial hyper-network is formed by superposing all the options in the search space of each layer with the same connection weight, and then undergoes multiple rounds of iterative training, with the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer updated in each iteration until the iteration stop condition is met, yielding the trained hyper-network. The optimal search space comprises each layer's optimized search space: for each layer of the trained hyper-network, the options are ranked in descending order of their connection weights in that layer, and the optimized search space of the layer is determined from the first k options in the ranking, thereby obtaining the optimal search space. In an image/video processing scene, applying the AI model of this technical scheme to perform deep perceptual learning on image content allows coding parameters to be intelligently adjusted according to the image scene and complexity: the bandwidth cost requirement can be reduced at the same image quality, and the platform's bandwidth cost is greatly reduced while the image quality is improved; meanwhile, combined with technologies such as image quality restoration, image quality enhancement, ROI, and super-resolution, the subjective visual experience is greatly improved. Further, since the core competitiveness of a trained target model currently lies in its precision and its data processing speed on hardware, the target model achieves higher precision and efficiency on the same hardware; on the premise of ensuring the same precision and efficiency, cheaper hardware can be adopted, thereby reducing the hardware cost.
According to an embodiment of the present application, an electronic device and a readable storage medium are also provided.
Fig. 5 is a block diagram of an electronic device for the method of generating a search space according to an embodiment of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be examples only and are not meant to limit implementations of the present application described and/or claimed herein.
As shown in fig. 5, the electronic apparatus includes: one or more processors Y01, a memory Y02, and interfaces for connecting the various components, including a high speed interface and a low speed interface. The various components are interconnected using different buses and may be mounted on a common motherboard or in other manners as desired. The processor may process instructions for execution within the electronic device, including instructions stored in or on the memory to display graphical information of a GUI on an external input/output apparatus (such as a display device coupled to the interface). In other embodiments, multiple processors and/or multiple buses may be used, along with multiple memories, as desired. Also, multiple electronic devices may be connected, with each device providing portions of the necessary operations (e.g., as a server array, a group of blade servers, or a multi-processor system). In fig. 5, one processor Y01 is taken as an example.
Memory Y02 is a non-transitory computer readable storage medium as provided herein. The memory stores instructions executable by the at least one processor to cause the at least one processor to perform the method of generating a search space provided herein. The non-transitory computer readable storage medium of the present application stores computer instructions for causing a computer to perform the method of generating a search space provided herein.
The memory Y02 is a non-transitory computer readable storage medium, and can be used for storing non-transitory software programs, non-transitory computer executable programs, and modules, such as program instructions/modules corresponding to the method of generating a search space in the embodiment of the present application (for example, the initial search space module 301, the search module 302, and the optimal search space determination module 303 shown in fig. 3). The processor Y01 executes various functional applications of the server and data processing, i.e., implements the method of generating a search space in the above-described method embodiment, by executing non-transitory software programs, instructions, and modules stored in the memory Y02.
The memory Y02 may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to use of the electronic device generating the search space, and the like. Additionally, the memory Y02 may include high speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, memory Y02 may optionally include memory located remotely from processor Y01, which may be connected to the electronic device generating the search space via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The electronic device of the method of generating a search space may further include: an input device Y03 and an output device Y04. The processor Y01, the memory Y02, the input device Y03 and the output device Y04 may be connected by a bus or other means, and the connection by the bus is exemplified in fig. 5.
The input device Y03 may receive input numeric or character information and generate key signal inputs related to user settings and function controls of the electronic device generating the search space, such as a touch screen, keypad, mouse, track pad, touch pad, pointer stick, one or more mouse buttons, track ball, joystick, or other input device. The output device Y04 may include a display device, an auxiliary lighting device (e.g., LED), a tactile feedback device (e.g., vibration motor), and the like. The display device may include, but is not limited to, a Liquid Crystal Display (LCD), a Light Emitting Diode (LED) display, and a plasma display. In some implementations, the display device can be a touch screen.
Various implementations of the systems and techniques described here can be realized in digital electronic circuitry, integrated circuitry, application-specific integrated circuits (ASICs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software applications, or code) include machine instructions for a programmable processor, and may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. As used herein, the terms "machine-readable medium" and "computer-readable medium" refer to any computer program product, apparatus, and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term "machine-readable signal" refers to any signal used to provide machine instructions and/or data to a programmable processor.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
According to the technical scheme of the embodiments of the application, an initial search space is obtained, where the initial search space includes the search spaces of each layer of the target model and the search space of each layer includes the options of all network structure units, so that the search space can be greatly expanded. All the options in the search space of each layer are superposed with the same connection weight to form an initial hyper-network; the initial hyper-network is trained, and the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer are updated to obtain the trained hyper-network. An optimal search space is determined according to the connection weights corresponding to the options in each layer of the trained hyper-network, and an optimal target model is obtained by searching based on the optimal search space. This improves the performance of the obtained target model, so that the target model has higher precision and a higher data processing speed when applied to image processing, natural language processing, audio/video processing, and the like. In an image/video processing scene, applying the AI model of this technical scheme to perform deep perceptual learning on image content allows coding parameters to be intelligently adjusted according to the image scene and complexity: the bandwidth cost requirement can be reduced at the same image quality, and the platform's bandwidth cost is greatly reduced while the image quality is improved; meanwhile, combined with technologies such as image quality restoration, image quality enhancement, ROI, and super-resolution, the subjective visual experience is greatly improved. Further, since the core competitiveness of a trained target model currently lies in its precision and its data processing speed on hardware, the target model achieves higher precision and efficiency on the same hardware; on the premise of ensuring the same precision and efficiency, cheaper hardware can be adopted, thereby reducing the hardware cost.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present application may be executed in parallel, sequentially, or in different orders, and the present invention is not limited thereto as long as the desired results of the technical solutions disclosed in the present application can be achieved.
The above-described embodiments should not be construed as limiting the scope of the present application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (18)

1. A method of generating a search space, comprising:
acquiring an initial search space, wherein the initial search space comprises search spaces of all layers of a target model, and the search space of each layer comprises options of all network structure units;
superposing all the options in the search space of each layer with the same connection weight to form an initial hyper-network;
training the initial hyper-network, and updating model parameters of the initial hyper-network and connection weights corresponding to the options in each layer to obtain a trained hyper-network;
and determining an optimal search space according to the connection weight corresponding to the option in each layer of the trained hyper-network, wherein the optimal search space is used for searching to obtain an optimal target model, and the target model is used for executing a data processing task.
2. The method of claim 1, wherein determining an optimal search space according to connection weights corresponding to the options in each layer of the trained hyper-network comprises:
sorting all the options in each layer of the trained hyper-network in descending order of the connection weights corresponding to the options in that layer, and determining the optimized search space of that layer according to the first k options in the sorted order, wherein the optimal search space comprises the optimized search space of each layer.
3. The method of claim 1, wherein training the initial hyper-network and updating the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer to obtain a trained hyper-network comprises:
performing repeated iterative training on the initial hyper-network, and updating the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer in each iteration, until an iteration stop condition is met, so as to obtain the trained hyper-network.
4. The method of claim 3, wherein the iteration stop condition comprises:
the number of iterations of the training is greater than or equal to a preset threshold;
or,
the performance of the current hyper-network satisfies a convergence condition.
5. The method of claim 3, wherein the updating of the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer in each iteration comprises:
alternately updating, in each iteration, the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer.
6. The method according to any one of claims 1-5, wherein after the optimal search space is determined according to the connection weights corresponding to the options in each layer of the trained hyper-network, the method further comprises:
performing a neural network model search in the optimal search space to obtain the target model.
7. The method of claim 6, wherein after the neural network model search is performed in the optimal search space to obtain the target model, the method further comprises:
acquiring data to be processed, and performing corresponding data processing on the data to be processed by using the target model.
8. The method of claim 7, further comprising:
acquiring an image to be processed, inputting the image to be processed into the target model, and performing image processing on the image to be processed by using the target model to obtain an image processing result.
9. An apparatus that generates a search space, comprising:
an initial search space module configured to acquire an initial search space, wherein the initial search space comprises search spaces of all layers of a target model, and the search space of each layer comprises options of all network structure units;
a search module configured to: superpose all the options in the search space of each layer with the same connection weight to form an initial hyper-network; and train the initial hyper-network, updating model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer, to obtain a trained hyper-network;
and an optimal search space determination module configured to determine an optimal search space according to the connection weights corresponding to the options in each layer of the trained hyper-network, wherein the optimal search space is used for searching to obtain an optimal target model, and the target model is used for executing a data processing task.
10. The apparatus of claim 9, wherein the optimal search space determination module is further configured to:
sort all the options in each layer of the trained hyper-network in descending order of the connection weights corresponding to the options in that layer, and determine the optimized search space of that layer according to the first k options in the sorted order, wherein the optimal search space comprises the optimized search space of each layer.
11. The apparatus of claim 9, wherein the search module is further configured to:
perform repeated iterative training on the initial hyper-network, and update the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer in each iteration, until an iteration stop condition is met, so as to obtain the trained hyper-network.
12. The apparatus of claim 11, wherein the iteration stop condition comprises:
the number of iterations of the training is greater than or equal to a preset threshold;
or,
the performance of the current hyper-network satisfies a convergence condition.
13. The apparatus of claim 11, wherein the optimal search space determination module is further configured to:
alternately update, in each iteration, the model parameters of the initial hyper-network and the connection weights corresponding to the options in each layer.
14. The apparatus of any of claims 9-13, further comprising:
a neural network model search module configured to perform a neural network model search in the optimal search space to obtain the target model.
15. The apparatus of claim 14, further comprising:
a data processing module configured to: acquire data to be processed, and perform corresponding data processing on the data to be processed by using the target model.
16. The apparatus of claim 15, wherein the data processing module is further configured to:
acquire an image to be processed, input the image to be processed into the target model, and perform image processing on the image to be processed by using the target model to obtain an image processing result.
17. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-8.
18. A non-transitory computer-readable storage medium having stored thereon computer instructions for causing a computer to perform the method of any one of claims 1-8.
CN202011020815.8A 2020-09-25 2020-09-25 Method, device and equipment for generating search space and storage medium Pending CN112100466A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011020815.8A CN112100466A (en) 2020-09-25 2020-09-25 Method, device and equipment for generating search space and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011020815.8A CN112100466A (en) 2020-09-25 2020-09-25 Method, device and equipment for generating search space and storage medium

Publications (1)

Publication Number Publication Date
CN112100466A (en) 2020-12-18

Family

ID=73755325

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011020815.8A Pending CN112100466A (en) 2020-09-25 2020-09-25 Method, device and equipment for generating search space and storage medium

Country Status (1)

Country Link
CN (1) CN112100466A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020068437A1 (en) * 2018-09-28 2020-04-02 Xilinx, Inc. Training of neural networks by including implementation cost as an objective
WO2020082663A1 (en) * 2018-10-26 2020-04-30 北京图森未来科技有限公司 Structural search method and apparatus for deep neural network
GB201818183D0 (en) * 2018-11-08 2018-12-26 Robinson Healthcare Ltd Vaginal speculum
WO2020160252A1 (en) * 2019-01-30 2020-08-06 Google Llc Task-aware neural network architecture search
CN110826696A (en) * 2019-10-30 2020-02-21 北京百度网讯科技有限公司 Search space construction method and device of hyper network and electronic equipment
CN111382868A (en) * 2020-02-21 2020-07-07 华为技术有限公司 Neural network structure search method and neural network structure search device
CN111667057A (en) * 2020-06-05 2020-09-15 北京百度网讯科技有限公司 Method and apparatus for searching model structure

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
BIAN WEIWEI; QIU XUYANG; SHEN YAN: "Target recognition method based on neural network architecture search", Journal of Air Force Engineering University (Natural Science Edition), no. 04, 25 August 2020 (2020-08-25) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112766282A (en) * 2021-01-18 2021-05-07 上海明略人工智能(集团)有限公司 Image recognition method, device, equipment and computer readable medium
CN112766282B (en) * 2021-01-18 2024-04-12 上海明略人工智能(集团)有限公司 Image recognition method, device, equipment and computer readable medium
CN113744729A (en) * 2021-09-17 2021-12-03 北京达佳互联信息技术有限公司 Speech recognition model generation method, device, equipment and storage medium
WO2024127462A1 (en) * 2022-12-12 2024-06-20 Nec Corporation Automatic machine learning development system, automatic machine learning development method and program
CN116051964A (en) * 2023-03-30 2023-05-02 阿里巴巴(中国)有限公司 Deep learning network determining method, image classifying method and device

Similar Documents

Publication Title
CN112100466A (en) Method, device and equipment for generating search space and storage medium
CN111667057B (en) Method and apparatus for searching model structures
EP3869403A2 (en) Image recognition method, apparatus, electronic device, storage medium and program product
CN111708922A (en) Model generation method and device for representing heterogeneous graph nodes
CN112001180A (en) Multi-mode pre-training model acquisition method and device, electronic equipment and storage medium
CN111680517B (en) Method, apparatus, device and storage medium for training model
CN111582375A (en) Data enhancement strategy searching method, device, equipment and storage medium
CN111241234B (en) Text classification method and device
CN111667056A (en) Method and apparatus for searching model structure
CN111967569A (en) Neural network structure generation method and device, storage medium and electronic equipment
CN111639753B (en) Method, apparatus, device and storage medium for training image processing super network
CN111582454A (en) Method and device for generating neural network model
CN111882035A (en) Super network searching method, device, equipment and medium based on convolution kernel
CN111767833A (en) Model generation method and device, electronic equipment and storage medium
CN111783951B (en) Model acquisition method, device, equipment and storage medium based on super network
CN111680600A (en) Face recognition model processing method, device, equipment and storage medium
CN110766089A (en) Model structure sampling method and device of hyper network and electronic equipment
CN113961765A (en) Searching method, device, equipment and medium based on neural network model
CN111783950A (en) Model obtaining method, device, equipment and storage medium based on hyper network
CN111967591A (en) Neural network automatic pruning method and device and electronic equipment
CN111652354A (en) Method, apparatus, device and storage medium for training a hyper-network
CN111680597A (en) Face recognition model processing method, device, equipment and storage medium
CN111160552B (en) News information recommendation processing method, device, equipment and computer storage medium
CN111680599B (en) Face recognition model processing method, device, equipment and storage medium
CN112580723B (en) Multi-model fusion method, device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination