[go: up one dir, main page]
More Web Proxy on the site http://driver.im/

WO2022100607A1 - 一种神经网络结构确定方法及其装置 - Google Patents

一种神经网络结构确定方法及其装置 Download PDF

Info

Publication number
WO2022100607A1
WO2022100607A1 PCT/CN2021/129757 CN2021129757W WO2022100607A1 WO 2022100607 A1 WO2022100607 A1 WO 2022100607A1 CN 2021129757 W CN2021129757 W CN 2021129757W WO 2022100607 A1 WO2022100607 A1 WO 2022100607A1
Authority
WO
WIPO (PCT)
Prior art keywords
neural network
target
block
weights
blocks
Prior art date
Application number
PCT/CN2021/129757
Other languages
English (en)
French (fr)
Inventor
肖一凡
张健
钟钊
Original Assignee
华为技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 filed Critical 华为技术有限公司
Priority to EP21891127.9A priority Critical patent/EP4227858A4/en
Publication of WO2022100607A1 publication Critical patent/WO2022100607A1/zh
Priority to US18/316,369 priority patent/US20230289572A1/en

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/06Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/0985Hyperparameter optimisation; Meta-learning; Learning-to-learn
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/762Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Definitions

  • the present application relates to the field of artificial intelligence, and in particular, to a method and device for determining a neural network structure.
  • Machine learning has achieved considerable success in recent years, and more and more products derived from machine learning are revolutionizing people's lives.
  • the current progress of machine learning relies on ML experts to manually design and debug the model, which not only limits the wide application of machine learning, but also prolongs the product iteration cycle.
  • the present application provides a method for determining a neural network structure, the method comprising:
  • the initial neural network includes M first structural blocks and second blocks, the structural blocks can also be called network structural blocks, and the structural blocks can include a certain number of atomic operations. Operations can include, but are not limited to, operations such as convolution, pooling, and residual connections.
  • the second block is connected to each first block.
  • the so-called connection relationship between the blocks can be understood as the data transmission direction between the blocks.
  • the block can perform the operation corresponding to the block on the input data, and obtain the operation result.
  • the operation result can be input to the next block connected to this block and used as the input data of the next block.
  • a connection relationship between two first blocks can indicate that the output of one block is used as the input of another block, and each of the first blocks corresponds to a target weight.
  • a trainable parameter also called the target weight in this embodiment
  • the output of a block can be It is multiplied by the corresponding target weight (also referred to as a product operation in this embodiment), and then the result of the product operation is input into another block, and the second block of the weight is used to perform all the operations according to the M first outputs.
  • the operation corresponding to the second block wherein, the M first outputs are obtained by multiplying the output of each first block with the corresponding target weight, and the target weight is a trainable weight, and the M is an integer greater than 1; model training is performed on the initial neural network to obtain updated M target weights; in the embodiment of the present application, the training device can perform model training on the initial neural network on the target task, and The M target weights are updated, and the updated M target weights can be obtained when the M target weights are stable.
  • the so-called stable target weights can be understood as the change of target weights within a certain range during the iterative training process. In some implementations, the number of iterative training times can be used to determine whether the M target weights are stable.
  • the training equipment can The initial neural network performs model training for a first preset number of iterations to obtain updated M target weights.
  • the first preset number of iterations can be a preset value, which can be determined according to the total number of times required for iterative training. Sure. For example, when the number of times of iterative training reaches a certain percentage of the total number of times of training, it is considered that the M target weights are stable.
  • the second block in is used to perform the operation corresponding to the second block according to the output of the first block corresponding to the largest N target weights among the updated M target weights, and the N is smaller than the M.
  • the second block in the first neural network can be used to perform the first block according to the sum of the outputs of the first blocks corresponding to the largest N target weights among the updated M target weights. The operation corresponding to the second block.
  • the size of the updated M target weights can indicate whether the connection between the blocks where they are located is important, and the important criterion is that the larger the size of the updated target weights, the stronger the connection between the blocks where they are located. important.
  • the connections where the largest N target weights are located among the updated M target weights may be retained, and the connections where the target weights other than the largest N target weights are located from the updated M target weights are eliminated.
  • a trainable target weight is added to the connection between the blocks, and the updated target weight is used as the connection relationship between the blocks according to the size of the updated target weight.
  • the M first blocks and the second blocks in the initial neural network form a serial connection in sequence, and the second block is the end point of the serial connection, and the initial The M first blocks in the neural network include a target block, the target block is connected to the second block on the serial connection, and the updated target weight corresponding to the target block does not belong to the target block.
  • the second block in the first neural network is further configured to perform an operation corresponding to the second block according to the output of the target block.
  • the connection between the target block and the second block is always preserved, and the target block and the first
  • the connection between the two blocks can be called a backbone connection, and the backbone connection will not be eliminated to ensure that the backbone architecture of the entire neural network is not destroyed.
  • the connection where the N updated target weights are located may be retained. If the updated target weight of the backbone connection is not one of the largest N target weights among the M target weights, the connection where the N+1 updated target weights are located can be reserved.
  • the N is 1.
  • performing model training on the initial neural network to obtain updated M target weights including:
  • the method further includes:
  • Model training is performed on the first neural network until the data processing accuracy of the first neural network satisfies a preset condition or the number of iterations of the model training reaches a second preset number of iterations to obtain a second neural network.
  • the ratio between the first preset number of iterations and the second preset number of iterations may be preset, which is equivalent to obtaining the updated M target weights at a fixed percentage of the overall training rounds. It not only ensures the stability of the target weight, but also ensures that the network after the optimized topology is fully trained. At the same time, the time for a single topology optimization is maintained, which is basically the same as the original training time, which ensures the search efficiency.
  • the number of input and output channels of each of the M first blocks is consistent with the number of input and output channels of the second block.
  • the second block in the initial neural network is configured to perform an operation corresponding to the second block according to the summing result of the M first outputs
  • the second block in the first neural network is used to perform the corresponding output of the second block according to the summation results of the outputs of the first blocks corresponding to the largest N target weights in the updated M target weights. operation.
  • the method further includes:
  • the data to be trained includes at least one of the following: image data, text data, and voice data; correspondingly, performing model training on the initial neural network includes:
  • Model training is performed on the initial neural network according to the data to be trained.
  • the present application provides a method for determining a neural network structure, the method comprising:
  • the initial neural network includes M first structural blocks and a second block, the second block is connected to each first block, and each first block corresponds to a target weight, each first block is used to perform the operation corresponding to the first block according to the corresponding first output; wherein, the first output corresponding to each first block is the target weight corresponding to the first block
  • the output of the second block is multiplied to obtain, the target weight is a trainable weight, and the M is an integer greater than 1; the difference from the embodiment described in the first aspect is that in the first aspect
  • the output of the M first blocks is used as the input of the second block.
  • the subsequent selection process of the connection relationship it is also based on the size of the M updated target weights corresponding to the M first blocks.
  • the output of the second block is used as the input of M first blocks.
  • Model training is performed on the initial neural network to obtain updated M target weights
  • the connection relationship between the second block and the M first blocks in the initial neural network is updated to obtain the first neural network; wherein the updated M
  • the first blocks corresponding to the largest N target weights among the target weights are used to perform an operation corresponding to the first block according to the output of the second block, and the N is smaller than the M.
  • the second block in the initial neural network forms a serial connection with the M first blocks in sequence, and the second block is the starting point of the serial connection, so
  • the M first blocks in the initial neural network include a target block, and the target block is connected with the second block on the serial path, and the updated target weight corresponding to the target block does not belong to
  • the target block in the first neural network is also used to perform the target block corresponding to the target block according to the output of the second block. operation.
  • a trainable target weight is added to the connection between the blocks, and the updated target weight is used as the connection relationship between the blocks according to the size of the updated target weight.
  • the N is 1.
  • performing model training on the initial neural network to obtain updated M target weights including:
  • the method further includes:
  • Model training is performed on the first neural network until the data processing accuracy of the first neural network satisfies a preset condition or the number of iterations of the model training reaches a second preset number of iterations to obtain a second neural network.
  • the number of input and output channels of each of the M first blocks is consistent with the number of input and output channels of the second block.
  • the method further includes:
  • the data to be trained includes at least one of the following: image data, text data, and voice data; correspondingly, performing model training on the initial neural network includes:
  • Model training is performed on the initial neural network according to the data to be trained.
  • the present application provides a method for determining a neural network structure, the method comprising:
  • each target code is used to indicate a candidate neural network
  • the multiple target codes include a first target code and a plurality of first codes
  • the first target codes are used to indicate the first neural network
  • the structural features of the neural network can be written in the form of codes, and each code is used to indicate at least one of the following structural features of a candidate neural network: the candidate neural network includes: The type of the operation unit, the number of operation units included in the candidate neural network, and the number of input features and output feature channels of the operation units included in the candidate neural network.
  • the operation unit may refer to each atomic operation in the block, and in another way, each code is used to indicate the type of atomic operation included in a candidate neural network, the number of atomic operations included in the candidate neural network, and the number of atomic operations included in the candidate neural network.
  • the number of input feature and output feature channels for the atomic operation Since in the same stage, the number of input features and output feature channels of atomic operations in each block is the same, it is equivalent to that each code is used to indicate the number of input features and output feature channels of blocks included in a candidate neural network.
  • Model training is performed on the first neural network to obtain the data processing accuracy of the first neural network; in the embodiment of the present application, the data processing accuracy of the neural network may be the value of the loss function of the training network and the test accuracy of the neural network etc., this is not limited in the embodiments of the present application.
  • model training is not performed on multiple candidate neural networks indicated by multiple target codes, and based on the data processing accuracy of multiple candidate neural networks, a candidate neural network with higher accuracy is selected as the model search.
  • a candidate neural network with higher accuracy is selected as the model search.
  • the model training is performed on the first neural network.
  • the data processing accuracy of the candidate neural network indicated by the remaining target codes (multiple first codes).
  • the target code may include a plurality of bits, and each bit indicates a structural feature of the candidate neural network.
  • each target code may be standardized. Exemplarily, for each bit of the target code, the mean and standard deviation of multiple target codes are calculated respectively, and then the mean value is subtracted from each bit of the target code, and then divided by the standard deviation. After that, the dimension of the target code has no effect on subsequent algorithms.
  • a Gaussian process may be used to determine the candidate neural network indicated by each first code according to the degree of difference between the first target code and the plurality of first codes and the data processing accuracy of the first neural network
  • the data processing accuracy of the network specifically, can estimate the value of other sample points according to the distance between two sample points and the value of a part of the sample points.
  • the sample point is each target code
  • the value of the sample point is the data processing accuracy of the candidate neural network indicated by the target code.
  • Model training is performed on the first candidate neural network to obtain a first target neural network.
  • the method further includes:
  • the multiple target codes include a second target code, and the second target code is used to indicate the first target neural network;
  • the plurality of targets are determined according to the degree of difference between the second target code and codes other than the second target code among the plurality of target codes and the data processing accuracy of the first target neural network The data processing accuracy of the candidate neural network indicated by each target code except the second target code in the code;
  • a second candidate neural network with the highest data processing accuracy is determined, and model training is performed on the second candidate neural network to obtain a second target neural network.
  • the training device can repeat the above process, and through a preset number of iterations (for example, 4 rounds), a very ideal model can be obtained as the result of the neural model structure search.
  • each target code is used to indicate at least one of the following structural features of a candidate neural network:
  • the type of operation units included in the candidate neural network the number of operation units included in the candidate neural network, and the number of input features and output feature channels of the operation units included in the candidate neural network.
  • the method further includes:
  • a plurality of codes are clustered to obtain a plurality of code sets, each code set corresponds to a clustering category, the plurality of code sets include a target code set, and the target code set includes the plurality of target codes.
  • the first target code is a cluster center of the target code set.
  • multiple codes may be clustered to obtain multiple code sets, each code set corresponds to a clustering category, the multiple code sets include a target code set, and the target code set includes all multiple target codes.
  • the above-mentioned multiple codes may be obtained after screening multiple candidate codes.
  • the first target code may be one code in the target code set, and in one implementation, the first target code may be a cluster center of the target code set.
  • the first target code is used to indicate the first neural network. It should be understood that the above-mentioned clustering may be K-Means algorithm, DBSCAN algorithm, BIRCH algorithm, MeanShift algorithm, etc.
  • the candidate neural network indicated by each target code satisfies at least one of the following conditions:
  • the amount of computation required when running the candidate neural network indicated by each target code is less than the first preset value
  • the number of weights included in the candidate neural network indicated by each target code is less than the second preset value
  • the running speed when running the candidate neural network indicated by each target code is higher than the third preset value.
  • the training device may generate multiple candidate codes, and screen the multiple candidate codes based on a preset rule, where the preset rule may be at least one of the following: from the multiple candidate codes
  • the amount of computation required to run the indicated candidate neural network is less than the first preset value, the amount of weights included is less than the second preset value, and the running speed when running the indicated candidate neural network is higher than the third preset value. set value.
  • the calculation amount can be the number of floating-point multiplications that need to be performed in the entire neural network, and the operation of floating-point multiplication is the most time-consuming, so it can be used to represent the calculation amount of the neural network.
  • the above-mentioned first preset value, second preset value and third preset value may be preset.
  • the first candidate neural network includes M first structural blocks and second blocks, the second blocks are connected to each first block, and each first block corresponds to a target weight, the first candidate neural network is used to multiply the output of each first block with the corresponding target weight to obtain M first outputs, and the second block is used to obtain M first outputs according to the M first Output, carry out the operation corresponding to the second block; wherein, the target weight is a trainable weight, and the M is an integer greater than 1;
  • the performing model training on the first candidate neural network to obtain the first target neural network including:
  • Model training is performed on the first candidate neural network to obtain updated M target weights
  • the connection relationship between the second block and the M first blocks in the first candidate neural network is updated to obtain a second neural network; wherein, the second The second block in the neural network is used to perform the operation corresponding to the second block according to the output of the first block corresponding to the largest N target weights among the updated M target weights, and the N is less than the M;
  • Model training is performed on the second neural network to obtain the first target neural network.
  • the first candidate neural network includes M first structural blocks and second blocks, the second blocks are connected to each first block, and each first block corresponds to a target weight, the first candidate neural network is used to multiply the output of the second block and each target weight to obtain M first outputs, and each first block is used to obtain M first outputs according to the corresponding first output, Carry out the operation corresponding to the first block; wherein, the target weight is a trainable weight, and the M is an integer greater than 1;
  • the performing model training on the first candidate neural network to obtain the first target neural network including:
  • Model training is performed on the first candidate neural network to obtain updated M target weights
  • the connection relationship between the second block and the M first blocks in the first candidate neural network is updated to obtain a second neural network; wherein, the updated The first block corresponding to the largest N target weights among the M target weights is used to perform the operation corresponding to the first block according to the output of the second block, and the N is less than the M;
  • Model training is performed on the second neural network to obtain the first target neural network.
  • the present application provides an apparatus for determining a neural network structure, the apparatus comprising:
  • the acquisition module is used to acquire the initial neural network to be trained, the initial neural network includes M first structural blocks and second blocks, the second blocks are connected to each first block, and each of the first blocks One block corresponds to one target weight, and the second block is used to perform an operation corresponding to the second block according to the M first outputs; wherein, the M first outputs are the outputs of each first block. are obtained by performing a product operation with the corresponding target weights, where the target weights are trainable weights, and the M is an integer greater than 1;
  • a model training module for performing model training on the initial neural network to obtain updated M target weights
  • a model updating module configured to update the connection relationship between the second block and the M first blocks in the initial neural network according to the updated M target weights to obtain the first neural network; wherein, The second block in the first neural network is used to perform the operation corresponding to the second block according to the output of the first block corresponding to the largest N target weights among the updated M target weights, and the N is smaller than the M.
  • the M first blocks and the second blocks in the initial neural network form a serial connection in sequence, and the second block is the end point of the serial connection, and the initial The M first blocks in the neural network include a target block, the target block is connected to the second block on the serial connection, and the updated target weight corresponding to the target block does not belong to the target block.
  • the second block in the first neural network is further configured to perform an operation corresponding to the second block according to the output of the target block.
  • the N is 1.
  • the model training module is configured to perform model training on the initial neural network for a first preset number of iterations to obtain updated M target weights.
  • the model training module is configured to perform model training on the first neural network until the data processing accuracy of the first neural network satisfies a preset condition or the number of iterations of the model training reaches the third Two preset number of iterations to obtain the second neural network.
  • the number of input and output channels of each of the M first blocks is consistent with the number of input and output channels of the second block.
  • the second block in the initial neural network is configured to perform an operation corresponding to the second block according to the summing result of the M first outputs
  • the second block in the first neural network is used to perform the corresponding output of the second block according to the summation results of the outputs of the first blocks corresponding to the largest N target weights in the updated M target weights. operation.
  • the acquisition module is configured to acquire data to be trained, and the data to be trained includes at least one of the following: image data, text data, and voice data;
  • the above initial neural network is used for model training, including:
  • the model training module is configured to perform model training on the initial neural network according to the data to be trained.
  • the present application provides an apparatus for determining a neural network structure, the apparatus comprising:
  • the acquisition module is used to acquire the initial neural network to be trained, the initial neural network includes M first structural blocks and second blocks, the second blocks are connected to each first block, and each of the first blocks One block corresponds to one target weight, and each first block is used to perform an operation corresponding to the first block according to the corresponding first output; wherein, the first output corresponding to each first block is determined by the first
  • the target weight corresponding to the block is obtained by multiplying the output of the second block, the target weight is a trainable weight, and the M is an integer greater than 1;
  • a model training module for performing model training on the initial neural network to obtain updated M target weights
  • a model updating module configured to update the connection relationship between the second block and the M first blocks in the initial neural network according to the updated M target weights to obtain the first neural network; wherein, The first block corresponding to the largest N target weights among the updated M target weights is used to perform an operation corresponding to the first block according to the output of the second block, and the N is smaller than the M.
  • the second block in the initial neural network forms a serial connection with the M first blocks in sequence, and the second block is the starting point of the serial connection, so
  • the M first blocks in the initial neural network include a target block, and the target block is connected with the second block on the serial path, and the updated target weight corresponding to the target block does not belong to
  • the target block in the first neural network is also used to perform the target block corresponding to the target block according to the output of the second block. operation.
  • the N is 1.
  • the model training module is configured to perform model training on the initial neural network for a first preset number of iterations to obtain updated M target weights.
  • the model training module is configured to perform model training on the first neural network until the data processing accuracy of the first neural network satisfies a preset condition or the number of iterations of the model training reaches the third Two preset number of iterations to obtain the second neural network.
  • the number of input and output channels of each of the M first blocks is consistent with the number of input and output channels of the second block.
  • the acquisition module is configured to acquire data to be trained, and the data to be trained includes at least one of the following: image data, text data, and voice data;
  • the above initial neural network is used for model training, including:
  • the model training module is configured to perform model training on the initial neural network according to the data to be trained.
  • the present application provides an apparatus for determining a neural network structure, the apparatus comprising:
  • an acquisition module configured to acquire a plurality of target codes, each target code is used to indicate a candidate neural network, the plurality of target codes include a first target code and a plurality of first codes, and the first target codes are used to indicate the first neural network;
  • a model training module for performing model training on the first neural network to obtain the data processing accuracy of the first neural network
  • An accuracy determination module configured to determine the accuracy of the candidate neural network indicated by each first code according to the degree of difference between the first target code and the plurality of first codes and the data processing accuracy of the first neural network data processing accuracy
  • the obtaining module is configured to obtain the first candidate neural network with the highest data processing accuracy among the candidate neural networks indicated by the multiple target codes;
  • the model training module is used to perform model training on the first candidate neural network to obtain a first target neural network.
  • the obtaining module is configured to obtain the data processing accuracy of the first target neural network, the multiple target codes include a second target code, and the second target code is used to indicate the Describe the first target neural network;
  • the plurality of targets are determined according to the degree of difference between the second target code and codes other than the second target code among the plurality of target codes and the data processing accuracy of the first target neural network The data processing accuracy of the candidate neural network indicated by each target code except the second target code in the code;
  • a second candidate neural network with the highest data processing accuracy is determined, and model training is performed on the second candidate neural network to obtain a second target neural network.
  • each target code is used to indicate at least one of the following structural features of a candidate neural network:
  • the type of operation units included in the candidate neural network the number of operation units included in the candidate neural network, and the number of input features and output feature channels of the operation units included in the candidate neural network.
  • the apparatus further includes:
  • a clustering module configured to cluster multiple codes to obtain multiple code sets, each code set corresponds to a clustering category, the multiple code sets include a target code set, and the target code set includes the Multiple target encodings.
  • the first target code is a cluster center of the target code set.
  • the candidate neural network indicated by each target code satisfies at least one of the following conditions:
  • the amount of computation required when running the candidate neural network indicated by each target code is less than the first preset value
  • the candidate neural network indicated by each target code includes a weight less than the second preset value
  • the running speed when running the candidate neural network indicated by each target code is higher than the third preset value.
  • the first candidate neural network includes M first structural blocks and second blocks, the second blocks are connected to each of the first blocks, and each of the first blocks corresponds to A target weight, the second block is used to perform an operation corresponding to the second block according to the M first outputs; wherein, the M first outputs are respectively corresponding to the outputs of the first blocks
  • the target weight is obtained by multiplying the target weight, the target weight is a trainable weight, and the M is an integer greater than 1;
  • the model training module is used to perform model training on the first candidate neural network to obtain updated M target weights
  • the connection relationship between the second block and the M first blocks in the first candidate neural network is updated to obtain a second neural network; wherein, the second The second block in the neural network is used to perform the operation corresponding to the second block according to the output of the first block corresponding to the largest N target weights among the updated M target weights, and the N is less than the M;
  • Model training is performed on the second neural network to obtain the first target neural network.
  • the first candidate neural network includes M first structural blocks and second blocks, the second blocks are connected to each of the first blocks, and each of the first blocks corresponds to A target weight, and each first block is used to perform an operation corresponding to the first block according to the corresponding first output; wherein, the first output corresponding to each first block is determined by the corresponding first block.
  • the target weight and the output of the second block are multiplied to obtain, the target weight is a trainable weight, and the M is an integer greater than 1;
  • the model training module is used to perform model training on the first candidate neural network to obtain the updated M target weights
  • the connection relationship between the second block and the M first blocks in the first candidate neural network is updated to obtain a second neural network; wherein, the updated The first block corresponding to the largest N target weights among the M target weights is used to perform the operation corresponding to the first block according to the output of the second block, and the N is less than the M;
  • Model training is performed on the second neural network to obtain the first target neural network.
  • an embodiment of the present application provides an apparatus for determining a neural network structure, which may include a memory, a processor, and a bus system, wherein the memory is used to store a program, and the processor is used to execute the program in the memory to execute the above-mentioned first step.
  • An aspect and any optional method of the first aspect may include a memory, a processor, and a bus system, wherein the memory is used to store a program, and the processor is used to execute the program in the memory to execute the above-mentioned first step.
  • an embodiment of the present application provides a neural network training apparatus, which may include a memory, a processor, and a bus system, wherein the memory is used to store a program, and the processor is used to execute the program in the memory to execute the second method described above. Aspect and any optional method of the first aspect.
  • an embodiment of the present application provides a neural network training apparatus, which may include a memory, a processor, and a bus system, wherein the memory is used to store a program, and the processor is used to execute the program in the memory, so as to execute the above-mentioned third Aspect and any optional method of the first aspect.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when it runs on a computer, causes the computer to execute the first aspect and any one of the above-mentioned first aspect. optional method.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when it runs on a computer, causes the computer to execute the second aspect and any of the above-mentioned aspects. an optional method.
  • an embodiment of the present application provides a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium, and when the computer program is run on a computer, causes the computer to execute the third aspect and any of the above-mentioned aspects. an optional method.
  • an embodiment of the present application provides a computer program, which, when run on a computer, causes the computer to execute the above-mentioned first aspect and any optional method thereof.
  • an embodiment of the present application provides a computer program, which, when run on a computer, causes the computer to execute the second aspect and any optional method thereof.
  • an embodiment of the present application provides a computer program, which, when run on a computer, causes the computer to execute the above-mentioned third aspect and any optional method thereof.
  • an embodiment of the present application provides a computer program product, including code, when the code is executed, for executing the first aspect and any optional method thereof.
  • an embodiment of the present application provides a computer program product, including code, when the code is executed, for executing the second aspect and any optional method thereof.
  • an embodiment of the present application provides a computer program product, including code, when the code is executed, for executing the third aspect and any optional method thereof.
  • the present application provides a chip system
  • the chip system includes a processor for supporting an execution device or a training device to implement the functions involved in the above aspects, for example, sending or processing data involved in the above methods ; or, information.
  • the chip system further includes a memory for storing program instructions and data necessary for executing the device or training the device.
  • the chip system may be composed of chips, or may include chips and other discrete devices.
  • An embodiment of the present application provides a method for determining a neural network structure, the method includes: acquiring an initial neural network to be trained, the initial neural network includes M first structural blocks and a second block, the second block is connected to each first block, and each first block corresponds to a target weight, and the second block is used to perform an operation corresponding to the second block according to the M first outputs; wherein, the M A first output is obtained by multiplying the output of each first block with the corresponding target weight, where the target weight is a trainable weight, and the M is an integer greater than 1; for the initial neural network Carry out model training to obtain the updated M target weights; according to the updated M target weights, update the connection relationship between the second block and the M first blocks in the initial neural network, to Obtain the first neural network; wherein, the second block in the first neural network is used to perform all the steps according to the outputs of the N first blocks corresponding to the largest N target weights in the updated M target weights.
  • the N is smaller than the M.
  • the trainable target weight is added to the connection between the blocks, and the updated target weight is used as the connection relationship between the blocks.
  • the importance judgment basis of , and based on the updated target weight size the connection relationship between blocks is selected and eliminated, so as to realize the search of the topology structure of the neural network.
  • Fig. 1 is a kind of structural schematic diagram of artificial intelligence main frame
  • FIG. 2 is an application scenario of an embodiment of the present application
  • FIG. 3 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • FIG. 4 is a schematic flowchart of a method for determining a neural network structure provided by an embodiment of the present application
  • 5a is a schematic diagram of a method for determining a neural network structure provided by an embodiment of the present application.
  • 5b is a schematic diagram of a method for determining a neural network structure provided by an embodiment of the application.
  • 5c is a schematic diagram of a method for determining a neural network structure provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a method for determining a neural network structure provided by an embodiment of the present application.
  • FIG. 7 is a schematic diagram of a method for determining a neural network structure provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a method for determining a neural network structure provided by an embodiment of the present application.
  • FIG. 9 is a schematic diagram of a method for determining a neural network structure provided by an embodiment of the present application.
  • FIG. 10 is a schematic flowchart of a method for determining a neural network structure provided by an embodiment of the present application.
  • FIG. 11 is a schematic diagram of a method for determining a neural network structure provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of a method for determining a neural network structure provided by an embodiment of the present application.
  • FIG. 13 is a schematic diagram of a method for determining a neural network structure provided by an embodiment of the present application.
  • FIG. 14 is a schematic flowchart of a method for determining a neural network structure provided by an embodiment of the present application.
  • 15 is a schematic diagram of an apparatus for determining a neural network structure provided by an embodiment of the present application.
  • 16 is a schematic diagram of an apparatus for determining a neural network structure provided by an embodiment of the present application.
  • 17 is a schematic diagram of an apparatus for determining a neural network structure provided by an embodiment of the present application.
  • FIG. 18 is a schematic diagram of an apparatus for determining a neural network structure provided by an embodiment of the present application.
  • FIG. 19 is a schematic structural diagram of an execution device provided by an embodiment of the application.
  • FIG. 20 is a schematic structural diagram of a training device provided by an embodiment of the application.
  • FIG. 21 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • Figure 1 shows a schematic structural diagram of the main frame of artificial intelligence.
  • the above-mentioned artificial intelligence theme framework is explained in two dimensions (vertical axis).
  • the "intelligent information chain” reflects a series of processes from data acquisition to processing. For example, it can be the general process of intelligent information perception, intelligent information representation and formation, intelligent reasoning, intelligent decision-making, intelligent execution and output. In this process, data has gone through the process of "data-information-knowledge-wisdom".
  • the "IT value chain” reflects the value brought by artificial intelligence to the information technology industry from the underlying infrastructure of human intelligence, information (providing and processing technology implementation) to the industrial ecological process of the system.
  • the infrastructure provides computing power support for artificial intelligence systems, realizes communication with the outside world, and supports through the basic platform. Communication with the outside world through sensors; computing power is provided by smart chips (hardware acceleration chips such as CPU, NPU, GPU, ASIC, FPGA); the basic platform includes distributed computing framework and network-related platform guarantee and support, which can include cloud storage and computing, interconnection networks, etc. For example, sensors communicate with external parties to obtain data, and these data are provided to the intelligent chips in the distributed computing system provided by the basic platform for calculation.
  • smart chips hardware acceleration chips such as CPU, NPU, GPU, ASIC, FPGA
  • the basic platform includes distributed computing framework and network-related platform guarantee and support, which can include cloud storage and computing, interconnection networks, etc. For example, sensors communicate with external parties to obtain data, and these data are provided to the intelligent chips in the distributed computing system provided by the basic platform for calculation.
  • the data on the upper layer of the infrastructure is used to represent the data sources in the field of artificial intelligence.
  • the data involves graphics, images, voice, and text, as well as IoT data from traditional devices, including business data from existing systems and sensory data such as force, displacement, liquid level, temperature, and humidity.
  • Data processing usually includes data training, machine learning, deep learning, search, reasoning, decision-making, etc.
  • machine learning and deep learning can perform symbolic and formalized intelligent information modeling, extraction, preprocessing, training, etc. on data.
  • Reasoning refers to the process of simulating human's intelligent reasoning method in a computer or intelligent system, using formalized information to carry out machine thinking and solving problems according to the reasoning control strategy, and the typical function is search and matching.
  • Decision-making refers to the process of making decisions after intelligent information is reasoned, usually providing functions such as classification, sorting, and prediction.
  • some general capabilities can be formed based on the results of data processing, such as algorithms or a general system, such as translation, text analysis, computer vision processing, speech recognition, image identification, etc.
  • Intelligent products and industry applications refer to the products and applications of artificial intelligence systems in various fields. They are the encapsulation of the overall artificial intelligence solution, and the productization of intelligent information decision-making to achieve landing applications. Its application areas mainly include: intelligent terminals, intelligent transportation, Smart healthcare, autonomous driving, smart city, etc.
  • the embodiments of the present application may be applied to scenarios such as image classification, object detection, semantic segmentation, room layout, image completion, or automatic coding.
  • Application Scenario 1 ADAS/ADS Visual Perception System
  • ADAS and ADS multiple types of 2D object detection need to be performed in real time, including: dynamic obstacles (Pedestrian, Cyclist, Tricycle, Car, Truck). (Truck, Bus), Static Obstacles (TrafficCone, TrafficStick, FireHydrant, Motorcycle, Bicycle), Traffic Signs ( (TrafficSign), guide sign (GuideSign), billboard (Billboard), red traffic light (TrafficLight_Red)/yellow traffic light (TrafficLight_Yellow)/green traffic light (TrafficLight_Green)/black traffic light (TrafficLight_Black), road sign (RoadSign)).
  • dynamic obstacles Pedestrian, Cyclist, Tricycle, Car, Truck. (Truck, Bus), Static Obstacles (TrafficCone, TrafficStick, FireHydrant, Motorcycle, Bicycle), Traffic Signs ( (TrafficSign), guide sign (GuideSign), billboard (Billboard), red traffic light (
  • the mask and key points of the human body are detected by the neural network provided by the embodiments of the present application, and the corresponding parts of the human body can be enlarged and reduced, such as waist and hip beautification operations, so as to output a beautifying image.
  • the category of the object in the image to be classified can be acquired based on the neural network, and then the image to be classified can be classified according to the category of the object in the image to be classified.
  • photos can be quickly classified according to the content in the photos, which can be divided into photos containing animals, photos containing people, and photos containing plants.
  • the category of the commodity in the image of the commodity can be acquired through the processing of the neural network, and then the commodity is classified according to the category of the commodity.
  • the object recognition method of the present application can quickly complete the classification of commodities, reducing time overhead and labor costs.
  • the embodiments of the present application can perform structure search of the neural network, and train the neural network obtained by the search, and the obtained trained neural network can perform task processing in the above several scenarios.
  • object detection using image processing, machine learning, computer graphics and other related methods, object detection can determine the category of the image object and determine the detection frame used to locate the object.
  • Convolutional Neural Network (Convosutionas Neuras Network, CNN) is a deep neural network with a convolutional structure.
  • a convolutional neural network consists of a feature extractor consisting of convolutional and subsampling layers. This feature extractor can be thought of as a filter.
  • the perception network in this embodiment may include a convolutional neural network, which is configured to perform convolution processing on an image or perform convolution processing on a feature map to generate a feature map.
  • the convolutional neural network can use the error back propagation (BP) algorithm to correct the size of the parameters in the initial super-resolution model during the training process, so that the reconstruction error loss of the super-resolution model becomes smaller and smaller. Specifically, forwarding the input signal until the output will generate an error loss, and updating the parameters in the initial super-resolution model by back-propagating the error loss information, so that the error loss converges.
  • the back-propagation algorithm is a back-propagation motion dominated by the error loss, aiming to obtain the parameters of the optimal super-resolution model, such as the weight matrix.
  • the perception network may be updated based on the back-propagation algorithm.
  • Feature map Feature Map.
  • the input data, output data, intermediate result data, etc. of the neural network can all be called feature maps.
  • data exists in three-dimensional form (length, width, and number of channels), which can be viewed as multiple two-dimensional pictures stacked together.
  • Network structure block that is, block.
  • the first step is to design Blocks, which are units composed of atomic units (such as convolution operations, pooling operations, etc.); the second step is to combine Blocks into a complete network structure.
  • Channel that is, Channel, the third dimension in the feature map in addition to length and width. It can be understood as the thickness of the feature map.
  • atomic operations such as convolutional layers, there is also the dimension of the number of channels.
  • Block width For a block, the topological relationship of its internal atomic units is fixed, but the number of channels of the input and output features of the atomic unit is not fixed. This is a variable property of the Block called the width of the Block.
  • the width of the network the set of widths of all blocks in the neural network is called the width of the network. Usually a set of integers.
  • the depth of the network when the blocks are stacked into a neural network, the stacking number of blocks. It is positively related to the convolution stacking depth of the network.
  • the stage of the network the Stage.
  • the input feature map is gradually reduced by multiple downsampling. Between two downsamplings, a stage of the network is formed.
  • the blocks within a stage of the network have the same width.
  • Network structure coding In the present invention, the depth and width of the network constitute the network structure coding. After the topology structure is determined, the network structure code can uniquely determine the structure of a network. The number of bits encoded by the network structure is generally the same as the number of stages in the network.
  • Network structure coding candidate set In the present invention, the set of network structure coding that may meet the requirements is called a candidate set.
  • the amount of computation of the network FLOPs.
  • the number of floating-point multiplications performed in the entire network This part is the most time-consuming and is therefore used to represent the computational load of the network.
  • Weighted summation When the outputs of different atomic operations are aggregated, the feature maps of the same shape can be summed or stacked. In the present invention, summation is always used. But in the summation, each input is multiplied by a weight that can be learned. This is weighted summation.
  • the data processing performance of the network the indicators of the quality of the neural network, such as the accuracy of the network on the test set, the loss function value on the training set, etc. It needs to be manually specified according to business needs.
  • Target task The final task to be solved, exists relative to the agent task. For example, image classification on the ImageNet dataset, face recognition on business datasets, etc.
  • Proxy task When AutoML optimizes the network structure, it needs to evaluate the performance of a large number of networks. If training and testing are directly performed on the target task, the resource consumption will become unacceptable. Therefore, a smaller task will be manually designed, which can quickly complete the training and testing of the network, which is the agent task.
  • FIG. 3 is a schematic diagram of a system architecture provided by an embodiment of the present application.
  • the execution device 110 is configured with an input/output (I/O) interface 112, which is used for data interaction with external devices. Data may be input to I/O interface 112 through client device 140 .
  • I/O input/output
  • the execution device 120 may call the data storage system 150
  • the data, codes, etc. in the corresponding processing can also be stored in the data storage system 150 .
  • the I/O interface 112 returns the processing results to the client device 140 for provision to the user.
  • the client device 140 can be, for example, a control unit in an automatic driving system or a functional algorithm module in a mobile phone terminal, for example, the functional algorithm module can be used to implement related tasks.
  • the training device 120 can generate corresponding target models/rules based on different training data for different goals or tasks, and the corresponding target models/rules can be used to achieve the above-mentioned goals or complete the above-mentioned tasks. , which provides the user with the desired result.
  • the user can manually specify input data, which can be operated through the interface provided by the I/O interface 112 .
  • the client device 140 can automatically send the input data to the I/O interface 112 . If the user's authorization is required to request the client device 140 to automatically send the input data, the user can set the corresponding permission in the client device 140 .
  • the user can view the result output by the execution device 110 on the client device 140, and the specific present form can be a specific manner such as display, sound, and action.
  • the client device 140 can also be used as a data collection terminal to collect the input data of the input I/O interface 112 and the output result of the output I/O interface 112 as new sample data as shown in the figure, and store them in the database 130.
  • the I/O interface 112 directly uses the input data input into the I/O interface 112 and the output result of the output I/O interface 112 as shown in the figure as a new sample The data is stored in database 130 .
  • FIG. 3 is only a schematic diagram of a system architecture provided by an embodiment of the present application, and the positional relationship among the devices, devices, modules, etc. shown in the figure does not constitute any limitation.
  • the data The storage system 150 is an external memory relative to the execution device 110 , and in other cases, the data storage system 150 may also be placed in the execution device 110 .
  • FIG. 4 is a schematic flowchart of a method for determining a neural network structure provided by an embodiment of the present application. As shown in FIG. 4, the method for determining a neural network structure provided by an embodiment of the present application includes:
  • the initial neural network includes M first structural blocks and a second block, the second block is connected to each first block, and each first block corresponds to A target weight, the second block is used to perform an operation corresponding to the second block according to the M first outputs; wherein, the M first outputs are respectively corresponding to the outputs of the first blocks
  • the target weights are obtained by multiplying the target weights, the target weights are trainable weights, and the M is an integer greater than 1.
  • the topology structure of the neural network can be searched.
  • the topology structure of the neural network in this embodiment may specifically refer to the connection relationship between the structural blocks in the neural network.
  • the structural block block can also be called a network structural block.
  • the structural block block can include a certain number of atomic operations, and the atomic operations can include but are not limited to operations such as convolution, pooling, residual connection, etc., for example, can include the following types of operations : 1x3 and 3x1 convolution, 1x7 and 7x1 convolution, 3x3 dilated convolution, 3x3 average pooling, 3x3 max pooling, 5x5 max pooling, 7x7 max pooling, 1x1 convolution, 3x3 convolution, 3x3 separable conv, 5x5 separable conv, 7x7 separable conv, skip Connection operation, zeroing operation (Zero, all neurons in the corresponding position are zeroed) and so on.
  • 3x3 average pooling means pooling kernel size is 3 ⁇ 3 average pooling
  • 3x3 max pooling means pooling kernel size is 3 ⁇ 3 max pooling
  • 3x3 dilated convolution means convolution kernel size is A 3 ⁇ 3 atrous convolution with a dilation rate of 2
  • 3x3 separable conv represents a separate convolution with a kernel size of 3 ⁇ 3
  • 5x5 seperable conv represents a separate convolution with a kernel size of 5 ⁇ 5.
  • the first step is to design the structural block block
  • the second step is to connect the blocks to form a complete network structure.
  • connection relationship between blocks can be understood as the direction of data transmission between blocks.
  • a block can perform operations corresponding to the blocks on the input data, and obtain an operation result.
  • the operation result can be input to and This block is connected to the next block and is used as the input data of the next block. That is to say, the connection between the two first blocks can indicate that the output of one block is used as the input of another block.
  • a large number of blocks in the neural network to be searched are firstly connected, and during the training process of the model, it is determined which connections can be retained and which connections can be discarded .
  • the specific type of the block in the neural network is expressed either as the type of atomic operations included in the block, the width of the network, or as the number of input feature channels and output feature channels for each atomic operation in the block. After the equal weights are determined, all or part of the blocks in the neural network can be connected in pairs.
  • pairwise connections can be made between all or some blocks of the same stage in the neural network.
  • the input feature map is gradually reduced by multiple downsampling. Between two downsamplings, a stage of the neural network is formed.
  • the blocks in a stage of the neural network their widths (the number of channels of the input features of each atomic operation in the block and the channels of the output features) number) are the same.
  • pairwise connections can be made between all blocks in the same stage of the neural network.
  • block1, block2 and block3 are blocks in the same stage of the neural network, block1 is connected to block2, block2 is connected to block3, and block1 is connected to block3.
  • block1, block2, block3 and block4 are blocks in the same stage of the neural network, block1 is connected to block2, block2 is connected to block3, block1 is connected to block3, block2 is connected to block4, and block1 is connected to block4 is connected, block3 is connected with block4.
  • connections can be made between partial blocks of the same stage in the neural network.
  • block1, block2, block3 and block4 are blocks in the same stage of the neural network, block1 is connected to block2, block2 is connected to block3, block1 is connected to block3, block2 is connected to block4, and block3 is connected to block4 is connected, while block1 and block4 are not connected. It should be noted that although it can be considered that there is no connection between block1 and block4, other data paths can exist between block1 and block4.
  • the output of block1 can be used as the input of block2, and the output of block2 can be used as the input of block4, even if block1 A data path of block1-block1-block1 exists between block1 and block4, and this embodiment still considers that there is no connection relationship between block1 and block4.
  • a trainable weight parameter (also referred to as a trainable weight parameter in this embodiment) may be set on the connection between two blocks. is the target weight), the output of one block can be multiplied by the corresponding target weight (also called a product operation in this embodiment), and then the result of the product operation is input into another block.
  • target weight 1 between block1 and block2 as an example. If target weight 1 is not set, the output of block1 is directly used as the input of block2. If target weight 1 is set, the output of block1 will be First perform a product operation with the target weight 1, and then use the result of the product operation as the input of block2.
  • each target weight will be updated, and the size of the updated target weight can indicate whether the connection is important.
  • the training device can obtain the initial neural network to be trained, and the initial neural network to be trained can be determined by the specific type of block in the neural network, the width and depth of the network and other weights, and then the neural network Obtained after pairwise connection between all or part of the blocks.
  • the initial neural network may include M first structural blocks and second blocks, the second blocks are connected to each first block, and each first block corresponds to a target weight, the initial neural network It is used to multiply the output of each first block with the corresponding target weight to obtain M first outputs, and the second block is used to perform the corresponding second block according to the M first outputs. operation.
  • the second block in the initial neural network may be used to perform an operation corresponding to the second block according to the summing result of the M first outputs.
  • the number of input and output channels of each of the M first blocks is consistent with the number of input and output channels of the second block, that is, the M first blocks and the second blocks is the block in the same stage in the initial neural network.
  • the initial neural network may include 3 first blocks (including the first block1, the first block2 and the first block3) and the second block, the second block and the first block1, the first block block2 and the first block3 are connected, the first block1 corresponds to the target weight 1, the first block2 corresponds to the target weight 2, the first block3 corresponds to the target weight 3, the initial neural network is used to multiply the output of the first block1 with the target weight 1 operation to obtain the first output 1, the output of the first block2 is multiplied with the target weight 2 to obtain the first output 2, and the output of the first block3 is multiplied with the target weight 3 to obtain the first output 3 , the second block is used to perform the operation corresponding to the second block according to the first output 1, the first output 2 and the first output 3. The addition result of output 2 and the first output 3 is performed, and the operation corresponding to the second block is performed.
  • the training device may perform model training on the initial neural network to obtain the updated M target weights.
  • the training device may perform model training on the target task for the initial neural network, update the M target weights, and obtain the updated M target weights when the M target weights are stable.
  • the so-called stable target weights can be understood as the change of target weights within a certain range during the iterative training process.
  • the number of iterative training times can be used to determine whether the M target weights are stable.
  • the training equipment can The initial neural network performs model training for a first preset number of iterations to obtain updated M target weights.
  • the first preset number of iterations can be a preset value, which can be determined according to the total number of times required for iterative training. Sure. For example, when the number of times of iterative training reaches a certain percentage of the total number of times of training, it is considered that the M target weights are stable.
  • the updated M target weights are obtained at a fixed percentage of the overall training rounds, which not only ensures the stability of the target weights, but also ensures that the network after the optimized topology is fully trained.
  • the time for a single topology optimization is maintained, which is basically the same as the original training time, which ensures the search efficiency.
  • the network general weights that is, the weights to be trained in the atomic operations included in the block
  • the M target weights can be updated at the same time, or the network general weights can be updated.
  • the weights and the M target weights are updated alternately, which is not limited in this application.
  • the updated M target weights update the connection relationship between the second block and the M first blocks in the initial neural network to obtain a first neural network; wherein, the first The second block in the neural network is used to perform the operation corresponding to the second block according to the output of the first block corresponding to the largest N target weights among the updated M target weights, and the N is less than the M.
  • the first neural network in the initial neural network may be updated according to the updated M target weights.
  • the connection relationship between the second block and the M first blocks is to obtain the first neural network; wherein, the second block in the first neural network is used to obtain the largest N according to the updated M target weights.
  • the operation corresponding to the second block is performed.
  • the second block in the first neural network can be used to perform the first block according to the sum of the outputs of the first blocks corresponding to the largest N target weights among the updated M target weights. The operation corresponding to the second block.
  • the size of the updated M target weights can indicate whether the connection between the blocks where they are located is important, and the important criterion is that the larger the size of the updated target weights, the stronger the connection between the blocks where they are located. important.
  • the connections where the largest N target weights are located among the updated M target weights may be retained, and the connections where the target weights other than the largest N target weights are located from the updated M target weights are eliminated.
  • the M first blocks and the second blocks in the initial neural network form a serial connection in sequence, and the second block is the end point of the serial connection, and the initial neural network
  • the M first blocks in include target block, and the target block is connected with the second block on the serial connection, and the updated target weight corresponding to the target block does not belong to the updated target weight.
  • the second block in the first neural network is further configured to perform an operation corresponding to the second block according to the output of the target block.
  • connection between the target block and the second block is always preserved, and the target block and the first
  • the connection between the two blocks can be called a backbone connection, and the backbone connection will not be eliminated to ensure that the backbone architecture of the entire neural network is not destroyed.
  • the connection where the N updated target weights are located may be retained. If the updated target weight of the backbone connection is not one of the largest N target weights among the M target weights, the connection where the N+1 updated target weights are located can be reserved.
  • the second block in the first neural network is used according to the first block1 and the second block.
  • the output of a block3 is used to perform the operation corresponding to the second block.
  • the second block in the first neural network is used to perform the second block according to the summation result of the outputs of the first block1 and the first block3. corresponding operation.
  • the updated target weight 3 is greater than the updated target weight 1 and the updated target weight 2, then only the first block 3 can be reserved.
  • the connection between the second block and the second block, as shown in FIG. 8 the second block in the first neural network is used to perform the operation corresponding to the second block according to the output of the first block3.
  • each block in the initial neural network can be used as the second block in the above-mentioned embodiment, and the block whose output is used as the input of the second block can be used as the first block, and the above-mentioned connections can be eliminated and selected. to get the first neural network.
  • model training may be performed on the first neural network until the data processing accuracy of the first neural network satisfies a preset condition or the number of iterations of the model training reaches the third Two preset number of iterations to obtain the second neural network.
  • the training device may acquire data to be trained, and the data to be trained includes at least one of the following: image data, text data, and voice data; correspondingly, the training device may, according to the data to be trained,
  • the initial neural network is used for model training.
  • An embodiment of the present application provides a method for determining a neural network structure, the method includes: acquiring an initial neural network to be trained, the initial neural network includes M first structural blocks and a second block, the second block is connected to each first block, and each first block corresponds to a target weight, and the second block is used to perform an operation corresponding to the second block according to the M first outputs; wherein, the M A first output is obtained by multiplying the output of each first block with the corresponding target weight, where the target weight is a trainable weight, and the M is an integer greater than 1; for the initial neural network Carry out model training to obtain the updated M target weights; according to the updated M target weights, update the connection relationship between the second block and the M first blocks in the initial neural network, to Obtain the first neural network; wherein, the second block in the first neural network is used to perform all the steps according to the outputs of the N first blocks corresponding to the largest N target weights in the updated M target weights.
  • the N is smaller than the M.
  • the trainable target weight is added to the connection between the blocks, and the updated target weight is used as the connection relationship between the blocks.
  • the importance judgment basis of , and based on the updated target weight size the connection relationship between blocks is selected and eliminated, so as to realize the search of the topology structure of the neural network.
  • FIG. 10 is a schematic flowchart of a method for determining a neural network structure provided by an embodiment of the present application. As shown in FIG. 10, the method for determining a neural network structure provided by an embodiment of the present application includes:
  • the initial neural network includes M first structural blocks and a second block, the second block is connected to each first block, and each first block corresponds to A target weight, and each first block is used to perform an operation corresponding to the first block according to the corresponding first output; wherein, the first output corresponding to each first block is determined by the corresponding first block.
  • the target weight is obtained by multiplying the output of the second block, the target weight is a trainable weight, and the M is an integer greater than 1.
  • the outputs of the M first blocks are used as the input of the second block, and in the subsequent selection process of the connection relationship, it is also based on the corresponding M first blocks.
  • the size of M updated target weights is carried out.
  • the output of the second block is used as the input of M first blocks.
  • the M first blocks include the first block1 and the first block2
  • the output of the first block3 and the second block can be used as the input of the first block1, the input of the first block2 and the input of the first block3.
  • the output of the second block can be multiplied by the target weight 1, the multiplication result can be used as the input of the first block1, the output of the second block can be multiplied by the target weight 2, and the multiplication result can be used as the input of the first block2.
  • the output of the second block can be multiplied by the target weight 3, and the multiplication result is used as the input of the first block3.
  • the number of input and output channels of each of the M first blocks is consistent with the number of input and output channels of the second block.
  • step 1001 reference may be made to similar descriptions in the embodiments corresponding to step 401, and details are not repeated here.
  • the training device may perform model training on the initial neural network for a first preset number of iterations to obtain the updated M target weights.
  • step 100 For the specific description of step 1002, reference may be made to the description in the embodiment corresponding to step 402, and details are not repeated here.
  • the updated M target weights update the connection relationship between the second block and the M first blocks in the initial neural network to obtain the first neural network; wherein, after the updated The first block corresponding to the largest N target weights among the M target weights is used to perform the operation corresponding to the first block according to the output of the second block, and the N is smaller than the M.
  • the first block corresponding to the largest N target weights among the updated M target weights is used to perform the first block according to the output of the second block.
  • the operation corresponding to the block is used to perform the first block according to the output of the second block.
  • the second block in the initial neural network forms a serial connection with the M first blocks in sequence, and the second block is the serial connection.
  • the starting point of the connection the M first blocks in the initial neural network include a target block, and the target block is connected with the second block on the serial path, and after the update corresponding to the target block.
  • the target block in the first neural network is also used to perform a The operation corresponding to the target block.
  • the N is 1.
  • the updated target weight 1 is greater than the updated target weight 2 and the updated target weight 3, then the first block 1 and The connection between the second block and the connection (backbone connection) between the first block3 and the second block, as shown in Figure 12, the output of the second block in the first neural network is used as the first The input of block1 and the first block3, the first block1 is used to perform the operation corresponding to the first block1 according to the output of the second block, and the first block3 is used to perform the corresponding operation of the first block3 according to the output of the second block Specifically, the first block3 in the first neural network is configured to perform the operation corresponding to the first block3 according to the summation result of the output of the second block and the output of the first block2.
  • the updated target weight 2 is greater than the updated target weight 1 and the updated target weight 3, then only the first block1 can be reserved.
  • the connection between the second block and the second block, as shown in FIG. 13 a block1 in the first neural network is used to perform the operation corresponding to the first block1 according to the output of the second block.
  • each block in the initial neural network can be used as the second block in the above-mentioned embodiment, and the output of the second block can be used as the input block as the first block, and the above-mentioned connections can be eliminated and selected. to get the first neural network.
  • the training device may perform model training on the first neural network until the data processing accuracy of the first neural network satisfies a preset condition or the number of iterations of model training reaches a second preset number of iterations , to get the second neural network.
  • the data to be trained is acquired, and the data to be trained includes at least one of the following: image data, text data and voice data; correspondingly, the training device may, according to the data to be trained, Model training is performed on the initial neural network.
  • the present application provides a method for determining the structure of a neural network.
  • the method includes: acquiring an initial neural network to be trained, the initial neural network including M first structural blocks and second blocks, the second block and each A first block is connected, and each first block corresponds to a target weight, and each first block is used to perform an operation corresponding to the first block according to the corresponding first output;
  • the first output corresponding to the first block is obtained by multiplying the target weight corresponding to the first block and the output of the second block, where the target weight is a trainable weight, and the M is an integer greater than 1; wherein , the target weight is a trainable weight, and the M is an integer greater than 1; model training is performed on the initial neural network to obtain the updated M target weights; according to the updated M target weights , update the connection relationship between the second block and the M first blocks in the initial neural network to obtain the first neural network; wherein, the largest N target weights among the updated M target weights
  • the corresponding first block is used to perform an operation
  • the trainable target weight is added to the connection between the blocks, and the updated target weight is used as the connection relationship between the blocks.
  • the importance judgment basis of , and based on the updated target weight size the connection relationship between blocks is selected and eliminated, so as to realize the search of the topology structure of the neural network.
  • FIG. 14 is a schematic flowchart of a method for determining a neural network structure provided by an embodiment of the present application. As shown in FIG. 14, the method for determining a neural network structure provided by an embodiment of the present application includes:
  • each target code is used to indicate a candidate neural network
  • the plurality of target codes include a first target code and a plurality of first codes
  • the first target codes are used to indicate the first neural network.
  • the structural features of the neural network can be written in the form of codes, and each code is used to indicate at least one of the following structural features of a candidate neural network: the candidate neural network includes: The type of the operation unit, the number of operation units included in the candidate neural network, and the number of input features and output feature channels of the operation units included in the candidate neural network.
  • the operation unit may refer to each atomic operation in the block, and in another way, each code is used to indicate the type of atomic operation included in a candidate neural network, the number of atomic operations included in the candidate neural network, and the number of atomic operations included in the candidate neural network.
  • the number of input feature and output feature channels for the atomic operation Since in the same stage, the atomic operations in each block have the same number of input feature and output feature channels, it is equivalent to each code used to indicate the number of input feature and output feature channels of a block included in a candidate neural network.
  • the training device may generate multiple candidate codes, and screen the multiple candidate codes based on a preset rule, where the preset rule may be at least one of the following: from the multiple candidate codes
  • the amount of computation required to run the indicated candidate neural network is less than the first preset value, the amount of weights included is less than the second preset value, and the running speed when running the indicated candidate neural network is higher than the third preset value. set value.
  • the amount of calculation can be the number of floating-point multiplications that need to be performed in the entire neural network, and the operation of floating-point multiplication is the most time-consuming, so it can be used to represent the amount of calculation of the neural network.
  • the above-mentioned first preset value, second preset value and third preset value may be preset.
  • multiple codes may be clustered to obtain multiple code sets, each code set corresponds to a clustering category, the multiple code sets include a target code set, and the target code set includes all multiple target codes.
  • the above-mentioned multiple codes may be obtained after screening multiple candidate codes.
  • the first target code may be one code in the target code set, and in one implementation, the first target code may be a cluster center of the target code set.
  • the first target code is used to indicate the first neural network.
  • clustering may be K-Means algorithm, DBSCAN algorithm, BIRCH algorithm, MeanShift algorithm, etc.
  • the training device may select a first neural network indicated by one of the multiple target codes (the first target code) to perform model training, so as to obtain the data processing accuracy of the first neural network.
  • the network topology can be optimized for the first neural network. Perform optimization.
  • model training may be performed on the optimized first neural network to obtain the data processing accuracy of the first neural network.
  • the data processing accuracy of the neural network may be the value of the loss function of the training network, the test accuracy of the neural network, etc., which is not limited in the embodiment of the present application.
  • model training is not performed on multiple candidate neural networks indicated by multiple target codes, and based on the data processing accuracy of multiple candidate neural networks, a candidate neural network with higher accuracy is selected as the model search.
  • a candidate neural network with higher accuracy is selected as the model search.
  • the model training is performed on the first neural network.
  • the data processing accuracy of the candidate neural network indicated by the remaining target codes (multiple first codes).
  • the relationship between the first target code and the plurality of first codes can be determined according to the and the data processing accuracy of the first neural network to determine the data processing accuracy of each candidate neural network indicated by the first code.
  • the target code may include multiple bits, and each bit indicates a structural feature of the candidate neural network.
  • each target code may be standardized .
  • the mean and standard deviation of multiple target codes are calculated respectively, and then the mean value is subtracted from each bit of the target code, and then divided by the standard deviation. After that, the dimension of the target code has no effect on subsequent algorithms.
  • Gaussian Process is a very classic and mature machine learning algorithm, which can estimate the value of other sample points according to the distance between two sample points and the value of some sample points.
  • the sample point is each target code, and the value of the sample point is the data processing accuracy of the candidate neural network indicated by the target code.
  • a specific Gaussian process is uniquely determined by its mean function and covariance function. Modeling with a Gaussian process is actually learning the mean function and covariance function. In this embodiment, the Gaussian process can be learned in the following way:
  • the covariance function can be learned, wherein the covariance function can be the following formula 1:
  • x 1 and x 2 are the target codes
  • is the standard deviation that needs to be learned.
  • the calculation method of the standard deviation is: encode two pairs of pairs of all completed targets, and calculate their encoding distances ⁇ Distance 1 ,Distance 2 ,...,Distance L ⁇ , these distances conform to Distance 1 ⁇ Distance 2 ⁇ ... ⁇
  • the order of Distance L then take as an estimate of ⁇ . This completes the learning of the covariance function.
  • Equation 1 The kernel in the formula is calculated by Equation 1, and x (i) is the ith target code that has completed training.
  • the data processing accuracy of the candidate neural network indicated by all target codes can be calculated.
  • its mean function and covariance function have been obtained.
  • the candidate neural network indicated by the target code can be predicted according to formula 6. The data processing accuracy of the network:
  • the target code x is the target code with the highest data processing accuracy compared to the currently indicated candidate neural network, how much the data processing accuracy is expected to improve, that is, Expected Improvement (EI).
  • EI Expected Improvement
  • f(x) in this formula is a Gaussian process function:
  • each target code can predict the improvement of the data processing accuracy of the candidate neural network indicated by itself through the above process, thereby obtaining the data processing accuracy of the candidate neural network indicated by each first code.
  • the first candidate with the highest data processing accuracy among the candidate neural networks indicated by the multiple target codes may be selected Neural Networks.
  • the first candidate neural network with the highest data processing accuracy can be The candidate neural network performs model training to obtain the first target neural network.
  • the network topology can be optimized for the first candidate neural network.
  • a candidate neural network is optimized.
  • the training device may perform model training on the optimized first candidate neural network to obtain the first target neural network.
  • the first candidate neural network includes M first structural blocks and second blocks, the second blocks are connected to each first block, and each first block corresponds to one target weight, the first candidate neural network is used to multiply the output of each first block and the corresponding target weight to obtain M first outputs, and the second block is used to One output is to perform the operation corresponding to the second block; wherein, the target weight is a trainable weight, and the M is an integer greater than 1.
  • the training device can perform model training on the first candidate neural network to obtain updated M target weights, and update the second candidate neural network in the first candidate neural network according to the updated M target weights.
  • the connection relationship between the block and the M first blocks is to obtain a second neural network; wherein, the second block in the second neural network is used for the largest N according to the updated M target weights For the output of the first block corresponding to the target weight, the operation corresponding to the second block is performed, and the N is smaller than the M, and model training is performed on the second neural network to obtain the first target neural network.
  • the first candidate neural network includes M first structural blocks and second blocks, the second blocks are connected to each first block, and each first block corresponds to a target weight, The first candidate neural network is used for multiplying the output of the second block and each target weight to obtain M first outputs, and each first block is used for performing the desired output according to the corresponding first output.
  • the training device can perform model training on the first candidate neural network to obtain updated M target weights; update the second block in the first candidate neural network according to the updated M target weights The connection relationship with the M first blocks to obtain the second neural network; wherein, the first blocks corresponding to the largest N target weights in the updated M target weights are used according to the second block. , perform the operation corresponding to the first block, and the N is less than the M; perform model training on the second neural network to obtain the first target neural network.
  • the training device may further acquire the data processing accuracy of the first target neural network, the multiple target codes include a second target code, and the second target code is used to indicate the first target neural network network, and determine the data processing accuracy of the candidate neural network indicated by each target code in the multiple target codes except the second target code, and then the training device may, according to the data processing accuracy of the candidate neural network indicated by the multiple target codes, A second candidate neural network with the highest data processing accuracy is determined, and model training is performed on the second candidate neural network to obtain a second target neural network.
  • the training device can repeat the above process, and through a preset number of iterations (for example, 4 rounds), a very ideal model can be obtained as the result of the neural model structure search.
  • the candidate neural network model with the highest data processing accuracy in the coding set corresponding to the 10 clustering categories is obtained each time to determine the remaining target coding indications in the coding set.
  • the model training of 40 candidate neural networks can be performed, and then the one with the highest data processing accuracy among the 40 candidate neural networks can be selected as the neural model structure search. result.
  • model training is not performed on multiple candidate neural networks indicated by multiple target codes, and based on the data processing accuracy of multiple candidate neural networks, a candidate neural network with higher accuracy is selected as the model search.
  • a candidate neural network with higher accuracy is selected as the model search.
  • the model training is performed on the first neural network.
  • the data processing accuracy of the candidate neural network indicated by the remaining target codes greatly reduces the number of model training times and greatly improves the search efficiency compared with current various topology search algorithms.
  • a coding candidate set can be generated.
  • the MobileNetV2 network itself can be divided into 7 stages.
  • Structural features such as the number of output feature channels are encoded, for example: [1, 2, 3, 4, 3, 3, 1, 16, 24, 32, 48, 64, 192, 376] means that these seven stages are repeated 1, 2, 3, 4, 3, 3, and 1 basic network structure (that is, the above-mentioned arithmetic unit), and the number of output channels are 16, 24, 32, 48, 64, 192, and 376, respectively.
  • the length of each code is 14 bits.
  • the code can uniquely determine a candidate neural network. After the encoding method is determined, the upper and lower limits of the search can also be set for each bit of the code.
  • the upper and lower limits of a 14-bit search are respectively limited to : 3 and 1, 4 and 1, 5 and 2, 6 and 2, 5 and 1, 5 and 1, 3 and 1, 48 and 16, 48 and 16, 64 and 24, 96 and 32, 96 and 32, 256 and 112, 512 and 256.
  • the training device can generate a large number of codes uniformly based on the number of bits encoded and the upper and lower search limits for each bit.
  • the computational effort of the neural network indicated by that encoding is calculated.
  • the codes that meet the requirements are reserved to form a code candidate set. For example, under the 300M limit, about 20000 candidate codes can be obtained.
  • the training device can model the encoding candidate set, and in particular, can normalize each encoding. Specifically, for each bit of the encoding, the mean and standard deviation are calculated separately on the entire candidate set, and then the mean is subtracted from each bit of the encoding, and then divided by the standard deviation. After that, the dimension of the encoding has no effect on subsequent algorithms.
  • the training equipment can perform K-Means clustering on multiple codes obtained after standardization, and the obtained cluster center (the first target code) can be considered as the most representative structure in the current coding space, and its data processing accuracy can represent The performance of the entire class, performance evaluation of them can more efficiently model the performance of all network structures in the entire search space.
  • the training device can train the fully connected network. Specifically, the training device can convert each code to be trained into a specific neural network through the network parser D. Before topology optimization, a fully connected network will be generated.
  • the specific fully connected rule is to connect all network blocks in each stage to form a very dense network structure.
  • the network general weight and the target weight are optimized at the same time, when the number of training rounds reaches 40% of the total number of rounds, it is considered that the updated target weight is stable, and Based on the updated target weights, the topology of the neural network is optimized, the optimized neural network is trained to converge, and the coding candidate set model is updated. Specifically, the model training is completed in all 10 network codes, and data processing is obtained. After performance, the modeling of all candidate set codes can be updated, and 10 new codes can be generated according to the updated cluster centers, and the above process can be repeated. (for example, four rounds of repetition), a total of 40 models are trained. As the search progresses, it can be found that the network trained later will perform better than the one trained earlier. Among the 40 models, the best one is selected and output as a result of the neural model structure search.
  • the embodiments of the present application can be provided to users as a part of the AutoML system.
  • the user provides the platform with data sets, network size requirements (weight weight requirements/speed requirements, etc.) and a basic network to be adjusted.
  • the optimized network structure can be obtained by the neural network structure determination method described in the embodiments corresponding to FIG. 4 to FIG. 14 .
  • This embodiment can be provided to users through cloud services as a part of the AutoML system.
  • this embodiment can also be provided to the user as a separate algorithm package, and the user can obtain the optimized network structure according to the methods for determining the neural network structure described in the embodiments corresponding to FIG. 4 to FIG. 14 .
  • the apparatus for determining a neural network structure may include a network coding generator A, a network size judger B, a network coding modeler C, a network parser D, a trainer E, and an edge selector F, The relationship between them can be seen in Figure 15.
  • the network code generator A can generate multiple codes as uniformly as possible according to the possible code space. These codes define the structural characteristics of the neural network. During the generation process, it will continuously decide whether to add this code to the code candidate set according to the result of the network size judger B.
  • the network size determiner B can evaluate the calculation amount, weight amount, running speed, etc. of the candidate neural network indicated by each code to determine whether the user's restriction is satisfied.
  • the network coding modeler C can model the coding, evaluate the possible data processing accuracy of the candidate neural network indicated by each coding, and send the coding to be trained to the network parser D, while receiving these indications returned by the trainer E The data processing accuracy of the candidate neural network.
  • the modeling results are updated to make their evaluation more and more accurate. Until the end of the search, the target code with the best performance is given.
  • the network parser D can convert the encoding into a concrete neural network.
  • the trainer E can train a specific neural network according to the training data provided by the user, and output the data processing accuracy (eg test accuracy, training loss function value, etc.) and the trained neural network.
  • data processing accuracy eg test accuracy, training loss function value, etc.
  • the edge selector F can optimize the topology of the converted neural network.
  • FIG. 16 is a schematic structural diagram of a neural network structure determination apparatus 1600 provided by an embodiment of the present application.
  • the neural network structure determination apparatus 1600 provided by an embodiment of the present application may include:
  • the acquisition module 1601 is used to acquire an initial neural network to be trained, the initial neural network includes M first structural blocks and a second block, the second block is connected to each first block, and each of the The first block corresponds to a target weight, and the second block is used to perform an operation corresponding to the second block according to the M first outputs;
  • the outputs are obtained by multiplying the corresponding target weights respectively, the target weights are trainable weights, and the M is an integer greater than 1.
  • a model training module 1602 configured to perform model training on the initial neural network to obtain updated M target weights
  • model training module 160 For the specific description of the model training module 1602, reference may be made to the description in step 402 and the corresponding embodiments, and details are not repeated here.
  • Model updating module 1603 configured to update the connection relationship between the second block and the M first blocks in the initial neural network according to the updated M target weights to obtain the first neural network; wherein , the second block in the first neural network is used to perform the operation corresponding to the second block according to the output of the first block corresponding to the largest N target weights among the updated M target weights, so The N is smaller than the M.
  • model updating module 1603 For the specific description of the model updating module 1603, reference may be made to the description in step 403 and the corresponding embodiments, which will not be repeated here.
  • the M first blocks and the second blocks in the initial neural network form a serial connection in sequence, and the second block is the end point of the serial connection, and the initial The M first blocks in the neural network include a target block, the target block is connected to the second block on the serial connection, and the updated target weight corresponding to the target block does not belong to the target block.
  • the second block in the first neural network is further configured to perform an operation corresponding to the second block according to the output of the target block.
  • the N is 1.
  • the model training module is configured to perform model training on the initial neural network for a first preset number of iterations to obtain updated M target weights.
  • the model training module is configured to perform model training on the first neural network until the data processing accuracy of the first neural network satisfies a preset condition or the number of iterations of the model training reaches the third Two preset number of iterations to obtain the second neural network.
  • the number of input and output channels of each of the M first blocks is consistent with the number of input and output channels of the second block.
  • the second block in the initial neural network is configured to perform an operation corresponding to the second block according to the summing result of the M first outputs
  • the second block in the first neural network is used to perform the corresponding output of the second block according to the summation results of the outputs of the first blocks corresponding to the largest N target weights in the updated M target weights. operation.
  • the acquisition module is configured to acquire data to be trained, and the data to be trained includes at least one of the following: image data, text data, and voice data;
  • the above initial neural network is used for model training, including:
  • the model training module is configured to perform model training on the initial neural network according to the data to be trained.
  • FIG. 17 is a schematic structural diagram of a neural network structure determination apparatus 1700 provided by an embodiment of the present application.
  • the neural network structure determination apparatus 1700 provided by an embodiment of the present application may include:
  • the acquisition module 1701 is used to acquire an initial neural network to be trained, the initial neural network includes M first structural blocks and a second block, the second block is connected to each first block, and each The first block corresponds to a target weight, and each first block is used to perform an operation corresponding to the first block according to the corresponding first output; wherein, the first output corresponding to each first block is determined by the first block.
  • the target weight corresponding to one block is obtained by multiplying the output of the second block, the target weight is a trainable weight, and the M is an integer greater than 1;
  • step 1001 For the specific description of the obtaining module 1701, reference may be made to the description in step 1001 and the corresponding embodiment, which will not be repeated here.
  • a model training module 1702 configured to perform model training on the initial neural network to obtain updated M target weights
  • model training module 1702 For the specific description of the model training module 1702, reference may be made to the description in step 1002 and the corresponding embodiments, which will not be repeated here.
  • a model updating module 1703 configured to update the connection relationship between the second block and the M first blocks in the initial neural network according to the updated M target weights to obtain the first neural network; wherein , the first block corresponding to the largest N target weights among the updated M target weights is used to perform the operation corresponding to the first block according to the output of the second block, and the N is less than the M .
  • model updating module 1703 For the specific description of the model updating module 1703, reference may be made to the description in step 1003 and the corresponding embodiments, which will not be repeated here.
  • the second block in the initial neural network forms a serial connection with the M first blocks in sequence, and the second block is the starting point of the serial connection, so
  • the M first blocks in the initial neural network include a target block, and the target block is connected with the second block on the serial path, and the updated target weight corresponding to the target block does not belong to
  • the target block in the first neural network is also used to perform the target block corresponding to the target block according to the output of the second block. operation.
  • the N is 1.
  • the model training module is configured to perform model training on the initial neural network for a first preset number of iterations to obtain updated M target weights.
  • the model training module is configured to perform model training on the first neural network until the data processing accuracy of the first neural network satisfies a preset condition or the number of iterations of the model training reaches the third Two preset number of iterations to obtain the second neural network.
  • the number of channels of input and output of each of the M first blocks is consistent with the number of channels of input and output of the second block.
  • the acquisition module is configured to acquire data to be trained, and the data to be trained includes at least one of the following: image data, text data, and voice data;
  • the above initial neural network is used for model training, including:
  • the model training module is configured to perform model training on the initial neural network according to the data to be trained.
  • FIG. 18 is a schematic structural diagram of a neural network structure determination apparatus 1800 provided by an embodiment of the present application.
  • the neural network structure determination apparatus 1800 provided by an embodiment of the present application may include:
  • the obtaining module 1801 is configured to obtain a plurality of target codes, each target code is used to indicate a candidate neural network, the plurality of target codes include a first target code and a plurality of first codes, and the first target codes are used for indicates the first neural network;
  • a model training module 1802 configured to perform model training on the first neural network to obtain the data processing accuracy of the first neural network
  • model training module 180 For the specific description of the model training module 1802, reference may be made to the description in step 1402 and the corresponding embodiments, and details are not repeated here.
  • An accuracy determination module 1803, configured to determine the candidate neural network indicated by each first code according to the degree of difference between the first target code and the plurality of first codes and the data processing accuracy of the first neural network data processing accuracy;
  • precision determination module 1803 For the specific description of the precision determination module 1803, reference may be made to the description in step 1403 and the corresponding embodiments, and details are not repeated here.
  • the obtaining module 1801 is configured to obtain the first candidate neural network with the highest data processing accuracy among the candidate neural networks indicated by the multiple target codes;
  • the model training module 1802 is configured to perform model training on the first candidate neural network to obtain a first target neural network.
  • step 1405 For the specific description of the model training module 1802, reference may be made to step 1405 and the description in the corresponding embodiment, and details are not repeated here.
  • the obtaining module is configured to obtain the data processing accuracy of the first target neural network, the multiple target codes include a second target code, and the second target code is used to indicate the Describe the first target neural network;
  • the plurality of targets are determined according to the degree of difference between the second target code and codes other than the second target code among the plurality of target codes and the data processing accuracy of the first target neural network The data processing accuracy of the candidate neural network indicated by each target code except the second target code in the code;
  • the candidate neural networks indicated by the multiple target codes determine the second candidate neural network with the highest data processing accuracy, and perform model training on the second candidate neural network to obtain the second target neural network.
  • each target code is used to indicate at least one of the following structural features of a candidate neural network:
  • the type of operation units included in the candidate neural network the number of operation units included in the candidate neural network, and the number of input features and output feature channels of the operation units included in the candidate neural network.
  • the apparatus further includes:
  • a clustering module configured to cluster multiple codes to obtain multiple code sets, each code set corresponds to a clustering category, the multiple code sets include a target code set, and the target code set includes the Multiple target encodings.
  • the first target code is a cluster center of the target code set.
  • the candidate neural network indicated by each target code satisfies at least one of the following conditions:
  • the amount of computation required when running the candidate neural network indicated by each target code is less than the first preset value
  • the candidate neural network indicated by each target code includes a weight less than the second preset value
  • the running speed when running the candidate neural network indicated by each target code is higher than the third preset value.
  • the first candidate neural network includes M first structural blocks and second blocks, the second blocks are connected to each of the first blocks, and each of the first blocks corresponds to A target weight, the second block is used to perform an operation corresponding to the second block according to the M first outputs; wherein, the M first outputs are respectively corresponding to the outputs of the first blocks
  • the target weight is obtained by multiplying the target weight, the target weight is a trainable weight, and the M is an integer greater than 1;
  • the model training module is used to perform model training on the first candidate neural network to obtain updated M target weights
  • the connection relationship between the second block and the M first blocks in the first candidate neural network is updated to obtain a second neural network; wherein, the second The second block in the neural network is used to perform the operation corresponding to the second block according to the output of the first block corresponding to the largest N target weights among the updated M target weights, and the N is less than the M;
  • Model training is performed on the second neural network to obtain the first target neural network.
  • the first candidate neural network includes M first structural blocks and second blocks, the second blocks are connected to each of the first blocks, and each of the first blocks corresponds to A target weight, and each first block is used to perform an operation corresponding to the first block according to the corresponding first output; wherein, the first output corresponding to each first block is determined by the corresponding first block.
  • the target weight and the output of the second block are multiplied to obtain, the target weight is a trainable weight, and the M is an integer greater than 1;
  • the model training module is used to perform model training on the first candidate neural network to obtain updated M target weights
  • the connection relationship between the second block and the M first blocks in the first candidate neural network is updated to obtain a second neural network; wherein, the updated The first block corresponding to the largest N target weights among the M target weights is used to perform the operation corresponding to the first block according to the output of the second block, and the N is less than the M;
  • Model training is performed on the second neural network to obtain the first target neural network.
  • FIG. 19 is a schematic structural diagram of the execution device provided by the embodiment of the present application.
  • the execution device 1900 may specifically be represented as a virtual reality VR device, a mobile phone, Tablets, notebook computers, smart wearable devices, monitoring data processing devices, etc., are not limited here.
  • the execution device 1900 may be used to implement the methods for determining the neural network structure in the embodiments corresponding to FIG. 4 to FIG. 14 .
  • the execution device 1900 includes: a receiver 1901, a transmitter 1902, a processor 1903, and a memory 1904 (wherein the number of processors 1903 in the execution device 1900 may be one or more, and one processor is taken as an example in FIG.
  • the processor 1903 may include an application processor 19031 and a communication processor 19032.
  • the receiver 1901, the transmitter 1902, the processor 1903, and the memory 1904 may be connected by a bus or otherwise.
  • Memory 1904 may include read-only memory and random access memory, and provides instructions and data to processor 1903 .
  • a portion of memory 1904 may also include non-volatile random access memory (NVRAM).
  • NVRAM non-volatile random access memory
  • the memory 1904 stores processors and operating instructions, executable modules or data structures, or a subset thereof, or an extended set thereof, wherein the operating instructions may include various operating instructions for implementing various operations.
  • the processor 1903 controls the operation of the execution device.
  • various components of the execution device are coupled together through a bus system, where the bus system may include a power bus, a control bus, a status signal bus, and the like in addition to a data bus.
  • the various buses are referred to as bus systems in the figures.
  • the methods disclosed in the above embodiments of the present application may be applied to the processor 1903 or implemented by the processor 1903 .
  • the processor 1903 may be an integrated circuit chip with signal processing capability. In the implementation process, each step of the above-mentioned method can be completed by an integrated logic circuit of hardware in the processor 1903 or an instruction in the form of software.
  • the above-mentioned processor 1903 can be a general-purpose processor, a digital signal processor (digital signal processing, DSP), a microprocessor or a microcontroller, and may further include an application specific integrated circuit (ASIC), a field programmable Field-programmable gate array (FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components.
  • DSP digital signal processing
  • ASIC application specific integrated circuit
  • FPGA field programmable Field-programmable gate array
  • the processor 1903 may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory 1904, and the processor 1903 reads the information in the memory 1904, and completes the steps of the above method in combination with its hardware.
  • the receiver 1901 can be used to receive input numerical or character information, and generate signal input related to the relevant settings and function control of the execution device.
  • the transmitter 1902 can be used to output digital or character information through the first interface; the transmitter 1902 can also be used to send instructions to the disk group through the first interface to modify the data in the disk group; the transmitter 1902 can also include display devices such as a display screen .
  • FIG. 20 is a schematic structural diagram of the training device provided by the embodiment of the present application.
  • the training device 2000 may be deployed with all of the training devices in the corresponding embodiments of FIGS. 15 to 17 .
  • the described apparatus for determining the structure of a neural network is used to implement the functions of the apparatus for determining the structure of a neural network described in the corresponding embodiments of FIGS. 15 to 17 .
  • the training device 2000 is implemented by one or more servers, and the training device 2000 may Large differences in configuration or performance may include one or more central processing units (CPU) 2020 (eg, one or more processors) and memory 2032, and one or more storage applications 2042 or storage medium 2030 for data 2044 (eg, one or more mass storage devices).
  • the memory 2032 and the storage medium 2030 may be short-term storage or persistent storage.
  • the program stored in the storage medium 2030 may include one or more modules (not shown in the figure), and each module may include a series of instruction operations on the training device.
  • the central processing unit 2020 may be configured to communicate with the storage medium 2030 to execute a series of instruction operations in the storage medium 2030 on the training device 2000 .
  • the training device 2000 may also include one or more power supplies 2026, one or more wired or wireless network interfaces 2050, one or more input and output interfaces 2058; or, one or more operating systems 2041, such as Windows ServerTM, Mac OS XTM , UnixTM, LinuxTM, FreeBSDTM and so on.
  • operating systems 2041 such as Windows ServerTM, Mac OS XTM , UnixTM, LinuxTM, FreeBSDTM and so on.
  • the central processing unit 2020 is configured to perform steps related to the method for determining a neural network structure described in the foregoing embodiments.
  • the embodiment of the present application also provides a computer program product, including code, when the code runs on the computer, the computer is made to execute the steps performed by the foregoing execution device, or the computer is made to execute as executed by the foregoing training device. A step of.
  • Embodiments of the present application further provide a computer-readable storage medium, where a program for performing signal processing is stored in the computer-readable storage medium, and when it runs on a computer, the computer executes the steps performed by the aforementioned execution device. , or, causing the computer to perform the steps as performed by the aforementioned training device.
  • the execution device, training device, or terminal device provided in this embodiment of the present application may specifically be a chip, and the chip includes: a processing unit and a communication unit, the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, pins or circuits, etc.
  • the processing unit can execute the computer executable instructions stored in the storage unit, so that the chip in the execution device executes the data processing method described in the above embodiments, or the chip in the training device executes the data processing method described in the above embodiment.
  • the storage unit is a storage unit in the chip, such as a register, a cache, etc.
  • the storage unit may also be a storage unit located outside the chip in the wireless access device, such as only Read-only memory (ROM) or other types of static storage devices that can store static information and instructions, random access memory (RAM), etc.
  • ROM Read-only memory
  • RAM random access memory
  • FIG. 21 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • the chip may be represented as a neural network processor NPU 2100, and the NPU 2100 is mounted on the main CPU (Host CPU) as a co-processor. CPU), tasks are allocated by the Host CPU.
  • the core part of the NPU is the arithmetic circuit 2103, which is controlled by the controller 2104 to extract the matrix data in the memory and perform multiplication operations.
  • the arithmetic circuit 2103 includes multiple processing units (Process Engine, PE). In some implementations, the arithmetic circuit 2103 is a two-dimensional systolic array. The arithmetic circuit 2103 may also be a one-dimensional systolic array or other electronic circuitry capable of performing mathematical operations such as multiplication and addition. In some implementations, the arithmetic circuit 2103 is a general-purpose matrix processor.
  • the arithmetic circuit fetches the data corresponding to the matrix B from the weight memory 2102 and buffers it on each PE in the arithmetic circuit.
  • the arithmetic circuit fetches the data of matrix A and matrix B from the input memory 2101 to perform matrix operation, and stores the partial result or final result of the matrix in the accumulator 2108 .
  • Unified memory 2106 is used to store input data and output data.
  • the weight data is directly passed through the storage unit access controller (Direct Memory Access Controller, DMAC) 2105, and the DMAC is transferred to the weight memory 2102.
  • Input data is also moved to unified memory 2106 via the DMAC.
  • DMAC Direct Memory Access Controller
  • the BIU is the Bus Interface Unit, that is, the bus interface unit 2110, which is used for the interaction between the AXI bus and the DMAC and the instruction fetch buffer (Instruction Fetch Buffer, IFB) 2109.
  • IFB Instruction Fetch Buffer
  • the bus interface unit 2110 (Bus Interface Unit, BIU for short) is used for the instruction fetch memory 2109 to obtain instructions from the external memory, and also for the storage unit access controller 2105 to obtain the original data of the input matrix A or the weight matrix B from the external memory.
  • the DMAC is mainly used to transfer the input data in the external memory DDR to the unified memory 2106 , the weight data to the weight memory 2102 , or the input data to the input memory 2101 .
  • the vector calculation unit 2107 includes a plurality of operation processing units, and further processes the output of the operation circuit 2103, such as vector multiplication, vector addition, exponential operation, logarithmic operation, size comparison, etc., if necessary. It is mainly used for non-convolutional/fully connected layer network computation in neural networks, such as Batch Normalization, pixel-level summation, and upsampling of feature planes.
  • the vector computation unit 2107 can store the processed output vectors to the unified memory 2106 .
  • the vector calculation unit 2107 can apply a linear function; or a nonlinear function to the output of the operation circuit 2103, such as performing linear interpolation on the feature plane extracted by the convolution layer, such as a vector of accumulated values, to generate activation values.
  • the vector computation unit 2107 generates normalized values, pixel-level summed values, or both.
  • the vector of processed outputs can be used as activation input to the arithmetic circuit 2103, eg, for use in subsequent layers in a neural network.
  • the instruction fetch memory (instruction fetch buffer) 2109 connected to the controller 2104 is used to store the instructions used by the controller 2104;
  • the unified memory 2106, the input memory 2101, the weight memory 2102 and the instruction fetch memory 2109 are all On-Chip memories. External memory is private to the NPU hardware architecture.
  • the processor mentioned in any one of the above may be a general-purpose central processing unit, a microprocessor, an ASIC, or one or more integrated circuits for controlling the execution of the above program.
  • the device embodiments described above are only schematic, wherein the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be A physical unit, which can be located in one place or distributed over multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • the connection relationship between the modules indicates that there is a communication connection between them, which can be specifically implemented as one or more communication buses or signal lines.
  • the computer program product includes one or more computer instructions.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, the computer instructions may be retrieved from a website, computer, training device, or data Transmission from the center to another website site, computer, training facility or data center via wired (eg coaxial cable, fiber optic, digital subscriber line (DSL)) or wireless (eg infrared, wireless, microwave, etc.) means.
  • wired eg coaxial cable, fiber optic, digital subscriber line (DSL)
  • wireless eg infrared, wireless, microwave, etc.
  • the computer-readable storage medium may be any available medium that can be stored by a computer, or a data storage device such as a training device, a data center, or the like that includes an integration of one or more available media.
  • the usable media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, DVD), or semiconductor media (eg, Solid State Disk (SSD)), and the like.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Software Systems (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Neurology (AREA)
  • Evolutionary Biology (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Image Analysis (AREA)
  • Feedback Control In General (AREA)

Abstract

一种神经网络结构确定方法,包括:获取待训练的初始神经网络,初始神经网络包括M个第一结构块block和第二block,第二block与每个第一block连接,且每个第一block对应一个可训练的目标权重;对初始神经网络进行模型训练,以获取更新后的M个目标权重;根据更新后的M个目标权重,更新初始神经网络中第二block与M个第一block的连接关系,以获取第一神经网络。在对初始神经网络进行block之间的连接关系的搜索过程中,通过在block之间的连接上加入可训练的目标权重,根据更新后的目标权重大小,作为block之间连接关系的重要性判断依据,并基于更新后的目标权重大小进block之间的连接关系的选择和剔除,从而实现了对神经网络的拓扑结构的搜索。

Description

一种神经网络结构确定方法及其装置
本申请要求于2020年11月13日提交中国专利局、申请号为202011268949.1、发明名称为“一种神经网络结构确定方法及其装置”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能领域,尤其涉及一种神经网络结构确定方法及其装置。
背景技术
机器学习(machine learning,ML)近年来取得了相当大的成功,越来越多机器学习衍生的产品正在使人们的生活发生翻天覆地的变化。但是当前机器学习的进展依赖于ML专家手动对模型进行繁冗的设计和调试,这不仅限制了机器学习的广泛应用,也延长了产品的迭代周期。
随着人工智能技术的快速发展,一个性能优良的神经网络往往拥有精妙的网络结构,而这需要具有高超技能和丰富经验的人类专家花费大量精力进行构建。神经网络结构存在着极多的组合可能,人工设计时一般通过繁琐的多次尝试,寻找高精度网络结构的一些规律,进而设计出好的结构。经典的AlexNet、ResNet、DenseNet等网络结构,都是由人工完成设计的,它们的出现都大幅提升了各类任务的精度。
随着各项技术的进步和计算资源的增加,自动化机器学习(AutoML)技术逐渐开始替代人工,设计新的网络结构。通过对网络结构进行编码,然后对大量的编码评估性能,再通过强化学习、遗传等算法进行学习,最终生成最优编码。然而现有技术中,常常只能对神经网络的宽度(神经网络中运算单元的输入特征的通道数以及输出特征的通道数)和深度(神经网络包括的运算单元数量)进行搜索,而不能对神经网络的拓扑结构进行搜索。
发明内容
第一方面,本申请提供了一种神经网络结构确定方法,所述方法包括:
获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,结构块block也可以称之为网络结构块,结构块block可以包括一定数量的原子操作,原子操作可以包含但不限于卷积、池化、残差连接等操作。所述第二block与每个第一block连接,所谓block之间的连接关系,可以理解为block之间的数据传输方向,具体的,block可以对输入数据进行block对应的运算,并得到运算结果,运算结果可以输入到与该block相连的下一个block,并作为下一个block的输入数据。也就是说,两个第一block之间具有连接关系可以表示一个block的输出是作为另一个block的输入,且所述每个第一block对应一个目标权重,本申请实施例中,为了可以在模型的训练过程中,确定出block之间的哪些连接被保留,可以在两个block之间的连接上设置一个可以训练的参数(本实施例也称之为目标权重),一个block的输出可以与对应的目标权重相乘(本实施例中也称之为乘积运算),之后将乘积运算的结果输入到另一个block中,权重所述第二block用于根据M个第一输出,进行所述第 二block对应的运算;其中,所述M个第一输出由所述每个第一block的输出分别与对应的目标权重进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;对所述初始神经网络进行模型训练,以获取更新后的M个目标权重;本申请实施例中,训练设备可以对初始神经网络在目标任务上进行模型训练,并对M个目标权重进行更新,在M个目标权重稳定时可以获取到更新后的M个目标权重。所谓目标权重稳定,可以理解为在迭代训练过程中目标权重的变化在一定范围内了,在一些实现中,可以通过迭代训练的次数来确定M个目标权重是否稳定,例如,训练设备可以对所述初始神经网络进行第一预设迭代次数的模型训练,以获取更新后的M个目标权重,第一预设迭代次数可以为预先设定的值,其可以根据迭代训练所需的总次数来确定。例如当迭代训练的次数达到总共需要训练的次数的一定百分比时,认为M个目标权重稳定。
根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出,进行所述第二block对应的运算,所述N小于所述M。具体的,所述第一神经网络中的第二block可以用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出的加和结果,进行所述第二block对应的运算。其中,更新后的M个目标权重的大小可以指示所在的block之间的连接是否重要,其中,重要的判断标准是,更新后的目标权重的大小越大,则所在的block之间的连接越重要。具体的,可以保留更新后的M个目标权重中最大的N个目标权重所在的连接,而将更新后的M个目标权重中除了最大的N个目标权重之外的目标权重所在的连接剔除。
本实施例在对初始神经网络进行block之间的连接关系的搜索过程中,通过在block之间的连接上加入可训练的目标权重,根据更新后的目标权重大小,作为block之间连接关系的重要性判断依据,并基于更新后的目标权重大小进block之间的连接关系的选择和剔除,从而实现了对神经网络的拓扑结构的搜索。
在一种可能的实现中,所述初始神经网络中的所述M个第一block以及第二block依次形成串行连接,且所述第二block为所述串行连接的终点,所述初始神经网络中的所述M个第一block包括目标block,所述目标block在所述串行连接上与所述第二block连接,在所述目标block对应的更新后的目标权重不属于所述更新后的M个目标权重中最大的N个目标权重的情况下,所述第一神经网络中的第二block还用于根据所述目标block的输出,进行所述第二block对应的运算。
也就是说,无论目标block对应的更新后的目标权重是否是更新后的M个目标权重中最大的N个目标权重,目标block与第二block之间的连接总是被保留,目标block与第二block之间的连接可以称之为骨干连接,骨干连接不会被剔除可以保证整个神经网络的骨干架构不被破坏。具体的,在一种实现中,若属于骨干连接的更新后的目标权重为M个目标权重中最大的N个目标权重之一,则可以保留N个更新后的目标权重所在的连接,若属于骨干连接的更新后的目标权重不为M个目标权重中最大的N个目标权重之一,则可以保留N+1个更新后的目标权重所在的连接。
在一种可能的实现中,所述N为1。
在一种可能的实现中,所述对所述初始神经网络进行模型训练,以获取更新后的M个目标权重,包括:
对所述初始神经网络进行第一预设迭代次数的模型训练,以获取更新后的M个目标权重。
在一种可能的实现中,所述方法还包括:
对所述第一神经网络进行模型训练,直至所述第一神经网络的数据处理精度满足预设条件或模型训练的迭代次数达到第二预设迭代次数,以得到第二神经网络。
本申请实施例中,第一预设迭代次数和第二预设迭代次数之间的比例可以预先设置,相当于更新后的M个目标权重的获取为在整体训练轮数的固定百分比位置进行,既保证了目标权重的稳定,又保证了优化拓扑后的网络得到充分训练。同时保持了单个拓扑优化的时间,和原始训练时间基本相同,保证搜索效率。
在一种可能的实现中,所述M个第一block的每个第一block的输入和输出的通道数与所述第二block的输入和输出的通道数一致。
在一种可能的实现中,所述初始神经网络中的所述第二block用于根据所述M个第一输出的加和结果,进行所述第二block对应的运算;
所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出的加和结果,进行所述第二block对应的运算。
在一种可能的实现中,所述方法还包括:
获取待训练的数据,所述待训练的数据包括如下的至少一种:图像数据、文字数据以及语音数据;相应的,所述对所述初始神经网络进行模型训练,包括:
根据所述待训练的数据,对所述初始神经网络进行模型训练。
第二方面,本申请提供了一种神经网络结构确定方法,所述方法包括:
获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述每个第一block用于根据对应的第一输出,进行所述第一block对应的运算;其中,所述每个第一block对应的第一输出由第一block对应的目标权重与所述第二block的输出进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;和上述第一方面描述的实施例中不同的是,第一方面中M个第一block的输出是作为第二block的输入,在后续进行连接关系的选择过程中,也是基于M个第一block对应的M个更新后的目标权重的大小进行的,本实施例中,第二block的输出是作为M个第一block的输入。
对所述初始神经网络进行模型训练,以获取更新后的M个目标权重;
根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述更新后的M个目标权重中最大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一block对应的运算,所述N小于所述M。
在一种可能的实现中,所述初始神经网络中的所述第二block与所述M个第一block依次形成串行连接,且所述第二block为所述串行连接的起点,所述初始神经网络中的所述M个第一block包括目标block,所述目标block在所述串行通路上与所述第二block连接,在所述目标block对应的更新后的目标权重不属于所述更新后的M个目标权重中最大的N个目标权重的情况下,所述第一神经网络中的所述目标block还用于根据所述第二block的输出,进行所述目标block对应的运算。
本实施例在对初始神经网络进行block之间的连接关系的搜索过程中,通过在block之间的连接上加入可训练的目标权重,根据更新后的目标权重大小,作为block之间连接关系的重要性判断依据,并基于更新后的目标权重大小进block之间的连接关系的选择和剔除,从而实现了对神经网络的拓扑结构的搜索。
在一种可能的实现中,所述N为1。
在一种可能的实现中,所述对所述初始神经网络进行模型训练,以获取更新后的M个目标权重,包括:
对所述初始神经网络进行第一预设迭代次数的模型训练,以获取更新后的M个目标权重。
在一种可能的实现中,所述方法还包括:
对所述第一神经网络进行模型训练,直至所述第一神经网络的数据处理精度满足预设条件或模型训练的迭代次数达到第二预设迭代次数,以得到第二神经网络。
在一种可能的实现中,所述M个第一block的每个第一block的输入和输出的通道数与所述第二block的输入和输出的通道数一致。
在一种可能的实现中,所述方法还包括:
获取待训练的数据,所述待训练的数据包括如下的至少一种:图像数据、文字数据以及语音数据;相应的,所述对所述初始神经网络进行模型训练,包括:
根据所述待训练的数据,对所述初始神经网络进行模型训练。
第三方面,本申请提供了一种神经网络结构确定方法,所述方法包括:
获取多个目标编码,每个目标编码用于指示一个候选神经网络,所述多个目标编码包括第一目标编码和多个第一编码,所述第一目标编码用于指示第一神经网络;本申请实施例中,在进行神经网络的结构搜索时,可以将神经网络的结构特征写成编码的形式,每个编码用于指示一个候选神经网络的如下结构特征的至少一种:候选神经网络包括的运算单元的类型、候选神经网络包括的运算单元的数量以及候选神经网络包括的运算单元的输入特征和输出特征通道数量。其中,运算单元可以指block中各个原子操作,则换一种表述,每个编码用于指示一个候选神经网络包括的原子操作的类型,候选神经网络包括的原子操作的数量以及候选神经网络包括的原子操作的输入特征和输出特征通道数量。由于在同一阶段内,各个block中的原子操作的输入特征和输出特征通道数量相同,则相当于每个编码 用于指示一个候选神经网络包括的block的输入特征和输出特征通道数量。
对所述第一神经网络进行模型训练,以获取所述第一神经网络的数据处理精度;本申请实施例中,神经网络数据处理精度可以为训练网络的损失函数的值、神经网络的测试精度等等,本申请实施例对此并不限定。
本申请实施例中,并不对多个目标编码指示的多个候选神将网络都进行模型训练,并基于多个候选神经网络的数据处理精度,从中选择精度较高的候选神经网络作为模型的搜索结果,而是只选择其中的一个目标编码(第一目标编码)指示的第一神经网络,并且对第一神经网络进行模型训练,之后基于目标编码之间的差异度,确定多个目标编码中其余目标编码(多个第一编码)指示的候选神经网络的数据处理精度。
根据所述第一目标编码与所述多个第一编码之间的差异度以及所述第一神经网络的数据处理精度,确定每个第一编码指示的候选神经网络的数据处理精度;本申请实施例中,目标编码可以包括多个位,每个位指示候选神经网络的一个结构特征,为了消除位与位之间的量纲差异对后续的影响,可以对每个目标编码进行标准化。示例性的,可以是对目标编码的每一位,分别计算多个目标编码的均值和标准差,然后对目标编码的每一位减去均值,再除以标准差。此后,目标编码的量纲对后续算法不会再产生影响。本实施例可以可以利用高斯过程根据所述第一目标编码与所述多个第一编码之间的差异度以及所述第一神经网络的数据处理精度,确定每个第一编码指示的候选神经网络的数据处理精度,具体的,可以根据两两样本点之间的距离、一部分样本点的值,对其他样本点的值进行估计。在本实施例中,样本点就是各个目标编码,样本点的值就是目标编码指示的候选神经网络的数据处理精度。
获取所述多个目标编码指示的候选神经网络中数据处理精度最高的第一候选神经网络;
对所述第一候选神经网络进行模型训练,以得到第一目标神经网络。
在一种可能的实现中,所述方法还包括:
获取所述第一目标神经网络的数据处理精度,所述多个目标编码包括第二目标编码,所述第二目标编码用于指示所述第一目标神经网络;
根据所述第二目标编码与所述多个目标编码中除所述第二目标编码之外的编码之间的差异度以及所述第一目标神经网络的数据处理精度,确定所述多个目标编码中除所述第二目标编码之外的每个目标编码指示的候选神经网络的数据处理精度;
根据所述多个目标编码指示的候选神经网络的数据处理精度,确定数据处理精度最高的第二候选神经网络,并对所述第二候选神经网络进行模型训练,以得到第二目标神经网络。之后训练设备可以重复上述过程,通过预设次数的迭代(例如4轮),就可以得到非常理想的模型作为神经模型结构搜索的结果。
在一种可能的实现中,每个目标编码用于指示一个候选神经网络的如下结构特征的至少一种:
候选神经网络包括的运算单元的类型、候选神经网络包括的运算单元的数量以及候选神经网络包括的运算单元的输入特征和输出特征通道数量。
在一种可能的实现中,所述方法还包括:
对多个编码进行聚类,以得到多个编码集合,每个编码集合对应一个聚类类别,所述多个编码集合包括目标编码集合,所述目标编码集合包括所述多个目标编码。
在一种可能的实现中,所述第一目标编码为所述目标编码集合的聚类中心。
本申请实施例中,可以对多个编码进行聚类,以得到多个编码集合,每个编码集合对应一个聚类类别,所述多个编码集合包括目标编码集合,所述目标编码集合包括所述多个目标编码。其中,上述多个编码可以是对多个候选编码进行筛选后得到的。所述第一目标编码可以为所述目标编码集合中的一个编码,在一种实现中,第一目标编码可以为所述目标编码集合的聚类中心。所述第一目标编码用于指示第一神经网络。应理解,上述聚类可以为K-Means算法、DBSCAN算法、BIRCH算法和MeanShift算法等。
在一种可能的实现中,每个目标编码指示的候选神经网络满足如下条件的至少一种:
在运行每个目标编码指示的候选神经网络时所需的计算量小于第一预设值;
每个目标编码指示的候选神经网络包括的权重的数量小于第二预设值;以及,
在运行每个目标编码指示的候选神经网络时的运行速度高于第三预设值。
在一种实现中,训练设备可以生成多个候选的编码,并基于预设的规则对多个候选的编码进行筛选,其中预设的规则可以是如下的至少一种:从多个候选的编码中选择在运行指示的候选神经网络时所需的计算量小于第一预设值、包括的权重量小于第二预设值,以及在运行指示的候选神经网络时的运行速度高于第三预设值。其中,计算量可以为整个神经网络中需要进行的浮点乘法的数量,浮点乘法的运算是最耗时的,因而可以用来表示神经网络的计算量。上述第一预设值、第二预设值以及第三预设值可以为预先设定的。
在一种可能的实现中,所述第一候选神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且每个第一block对应一个目标权重,所述第一候选神经网络用于将每个第一block的输出与对应的目标权重进行乘积运算,以得到M个第一输出,所述第二block用于根据所述M个第一输出,进行所述第二block对应的运算;其中,所述目标权重为可训练的权重,所述M为大于1的整数;
所述对所述第一候选神经网络进行模型训练,以得到第一目标神经网络,包括:
对所述第一候选神经网络进行模型训练,以获取更新后的M个目标权重;
根据所述更新后的M个目标权重,更新所述第一候选神经网络中所述第二block与所述M个第一block的连接关系,以获取第二神经网络;其中,所述第二神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出,进行所述第二block对应的运算,所述N小于所述M;
对所述第二神经网络进行模型训练,以得到所述第一目标神经网络。
在一种可能的实现中,所述第一候选神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且每个第一block对应一个目标权重,所述第一候选神经网络用于将所述第二block的输出与每个目标权重进行乘积运算,以得到M个第一输出,每个第一block用于根据对应的第一输出,进行所述第一block对应的运算;其中,所述目标 权重为可训练的权重,所述M为大于1的整数;
所述对所述第一候选神经网络进行模型训练,以得到第一目标神经网络,包括:
对所述第一候选神经网络进行模型训练,以获取更新后的M个目标权重;
根据所述更新后的M个目标权重,更新所述第一候选神经网络中所述第二block与所述M个第一block的连接关系,以获取第二神经网络;其中,所述更新后的M个目标权重中最大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一block对应的运算,所述N小于所述M;
对所述第二神经网络进行模型训练,以得到所述第一目标神经网络。
第四方面,本申请提供了一种神经网络结构确定装置,所述装置包括:
获取模块,用于获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述第二block用于根据M个第一输出,进行所述第二block对应的运算;其中,所述M个第一输出由所述每个第一block的输出分别与对应的目标权重进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;
模型训练模块,用于对所述初始神经网络进行模型训练,以获取更新后的M个目标权重;
模型更新模块,用于根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出,进行所述第二block对应的运算,所述N小于所述M。
在一种可能的实现中,所述初始神经网络中的所述M个第一block以及第二block依次形成串行连接,且所述第二block为所述串行连接的终点,所述初始神经网络中的所述M个第一block包括目标block,所述目标block在所述串行连接上与所述第二block连接,在所述目标block对应的更新后的目标权重不属于所述更新后的M个目标权重中最大的N个目标权重的情况下,所述第一神经网络中的第二block还用于根据所述目标block的输出,进行所述第二block对应的运算。
在一种可能的实现中,所述N为1。
在一种可能的实现中,所述模型训练模块,用于对所述初始神经网络进行第一预设迭代次数的模型训练,以获取更新后的M个目标权重。
在一种可能的实现中,所述模型训练模块,用于对所述第一神经网络进行模型训练,直至所述第一神经网络的数据处理精度满足预设条件或模型训练的迭代次数达到第二预设迭代次数,以得到第二神经网络。
在一种可能的实现中,所述M个第一block的每个第一block的输入和输出的通道数与所述第二block的输入和输出的通道数一致。
在一种可能的实现中,所述初始神经网络中的所述第二block用于根据所述M个第一输出的加和结果,进行所述第二block对应的运算;
所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出的加和结果,进行所述第二block对应的运算。
在一种可能的实现中,所述获取模块,用于获取待训练的数据,所述待训练的数据包括如下的至少一种:图像数据、文字数据以及语音数据;相应的,所述对所述初始神经网络进行模型训练,包括:
所述模型训练模块,用于根据所述待训练的数据,对所述初始神经网络进行模型训练。
第五方面,本申请提供了一种神经网络结构确定装置,所述装置包括:
获取模块,用于获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述每个第一block用于根据对应的第一输出,进行所述第一block对应的运算;其中,所述每个第一block对应的第一输出由第一block对应的目标权重与所述第二block的输出进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;
模型训练模块,用于对所述初始神经网络进行模型训练,以获取更新后的M个目标权重;
模型更新模块,用于根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述更新后的M个目标权重中最大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一block对应的运算,所述N小于所述M。
在一种可能的实现中,所述初始神经网络中的所述第二block与所述M个第一block依次形成串行连接,且所述第二block为所述串行连接的起点,所述初始神经网络中的所述M个第一block包括目标block,所述目标block在所述串行通路上与所述第二block连接,在所述目标block对应的更新后的目标权重不属于所述更新后的M个目标权重中最大的N个目标权重的情况下,所述第一神经网络中的所述目标block还用于根据所述第二block的输出,进行所述目标block对应的运算。
在一种可能的实现中,所述N为1。
在一种可能的实现中,所述模型训练模块,用于对所述初始神经网络进行第一预设迭代次数的模型训练,以获取更新后的M个目标权重。
在一种可能的实现中,所述模型训练模块,用于对所述第一神经网络进行模型训练,直至所述第一神经网络的数据处理精度满足预设条件或模型训练的迭代次数达到第二预设迭代次数,以得到第二神经网络。
在一种可能的实现中,所述M个第一block的每个第一block的输入和输出的通道数与所述第二block的输入和输出的通道数一致。
在一种可能的实现中,所述获取模块,用于获取待训练的数据,所述待训练的数据包括如下的至少一种:图像数据、文字数据以及语音数据;相应的,所述对所述初始神经网络进行模型训练,包括:
所述模型训练模块,用于根据所述待训练的数据,对所述初始神经网络进行模型训练。
第六方面,本申请提供了一种神经网络结构确定装置,所述装置包括:
获取模块,用于获取多个目标编码,每个目标编码用于指示一个候选神经网络,所述多个目标编码包括第一目标编码和多个第一编码,所述第一目标编码用于指示第一神经网络;
模型训练模块,用于对所述第一神经网络进行模型训练,以获取所述第一神经网络的数据处理精度;
精度确定模块,用于根据所述第一目标编码与所述多个第一编码之间的差异度以及所述第一神经网络的数据处理精度,确定每个第一编码指示的候选神经网络的数据处理精度;
所述获取模块,用于获取所述多个目标编码指示的候选神经网络中数据处理精度最高的第一候选神经网络;
所述模型训练模块,用于对所述第一候选神经网络进行模型训练,以得到第一目标神经网络。
在一种可能的实现中,所述获取模块,用于获取所述第一目标神经网络的数据处理精度,所述多个目标编码包括第二目标编码,所述第二目标编码用于指示所述第一目标神经网络;
根据所述第二目标编码与所述多个目标编码中除所述第二目标编码之外的编码之间的差异度以及所述第一目标神经网络的数据处理精度,确定所述多个目标编码中除所述第二目标编码之外的每个目标编码指示的候选神经网络的数据处理精度;
根据所述多个目标编码指示的候选神经网络的数据处理精度,确定数据处理精度最高的第二候选神经网络,并对所述第二候选神经网络进行模型训练,以得到第二目标神经网络。
在一种可能的实现中,每个目标编码用于指示一个候选神经网络的如下结构特征的至 少一种:
候选神经网络包括的运算单元的类型、候选神经网络包括的运算单元的数量以及候选神经网络包括的运算单元的输入特征和输出特征通道数量。
在一种可能的实现中,所述装置还包括:
聚类模块,用于对多个编码进行聚类,以得到多个编码集合,每个编码集合对应一个聚类类别,所述多个编码集合包括目标编码集合,所述目标编码集合包括所述多个目标编码。
在一种可能的实现中,所述第一目标编码为所述目标编码集合的聚类中心。
在一种可能的实现中,每个目标编码指示的候选神经网络满足如下条件的至少一种:
在运行每个目标编码指示的候选神经网络时所需的计算量小于第一预设值;
每个目标编码指示的候选神经网络包括的权重量小于第二预设值;以及,
在运行每个目标编码指示的候选神经网络时的运行速度高于第三预设值。
在一种可能的实现中,所述第一候选神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述第二block用于根据M个第一输出,进行所述第二block对应的运算;其中,所述M个第一输出由所述每个第一block的输出分别与对应的目标权重进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;
所述模型训练模块,用于对所述第一候选神经网络进行模型训练,以获取更新后的M个目标权重;
根据所述更新后的M个目标权重,更新所述第一候选神经网络中所述第二block与所述M个第一block的连接关系,以获取第二神经网络;其中,所述第二神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出,进行所述第二block对应的运算,所述N小于所述M;
对所述第二神经网络进行模型训练,以得到所述第一目标神经网络。
在一种可能的实现中,所述第一候选神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述每个第一block用于根据对应的第一输出,进行所述第一block对应的运算;其中,所述每个第一block对应的第一输出由第一block对应的目标权重与所述第二block的输出进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;
所述模型训练模块,用于对所述第一候选神经网络进行模型训练,以获取更新后的M个目标权重;
根据所述更新后的M个目标权重,更新所述第一候选神经网络中所述第二block与所述M个第一block的连接关系,以获取第二神经网络;其中,所述更新后的M个目标权重中最 大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一block对应的运算,所述N小于所述M;
对所述第二神经网络进行模型训练,以得到所述第一目标神经网络。
第七方面,本申请实施例提供了一种神经网络结构确定装置,可以包括存储器、处理器以及总线系统,其中,存储器用于存储程序,处理器用于执行存储器中的程序,以执行如上述第一方面及第一方面任一可选的方法。
第八方面,本申请实施例提供了一种神经网络训练装置,可以包括存储器、处理器以及总线系统,其中,存储器用于存储程序,处理器用于执行存储器中的程序,以执行如上述第二方面及第一方面任一可选的方法。
第九方面,本申请实施例提供了一种神经网络训练装置,可以包括存储器、处理器以及总线系统,其中,存储器用于存储程序,处理器用于执行存储器中的程序,以执行如上述第三方面及第一方面任一可选的方法。
第十方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,当其在计算机上运行时,使得计算机执行上述第一方面及其任一可选的方法。
第十一方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,当其在计算机上运行时,使得计算机执行上述第二方面及其任一可选的方法。
第十二方面,本申请实施例提供了一种计算机可读存储介质,所述计算机可读存储介质中存储有计算机程序,当其在计算机上运行时,使得计算机执行上述第三方面及其任一可选的方法。
第十三方面,本申请实施例提供了一种计算机程序,当其在计算机上运行时,使得计算机执行上述第一方面及其任一可选的方法。
第十四方面,本申请实施例提供了一种计算机程序,当其在计算机上运行时,使得计算机执行上述第二方面及其任一可选的方法。
第十五方面,本申请实施例提供了一种计算机程序,当其在计算机上运行时,使得计算机执行上述第三方面及其任一可选的方法。
第十六方面,本申请实施例提供了一种计算机程序产品,包括代码,当所述代码被执行时,用于执行上述第一方面及其任一可选的方法。
第十七方面,本申请实施例提供了一种计算机程序产品,包括代码,当所述代码被执行时,用于执行上述第二方面及其任一可选的方法。
第十八方面,本申请实施例提供了一种计算机程序产品,包括代码,当所述代码被执行时,用于执行上述第三方面及其任一可选的方法。
第十九方面,本申请提供了一种芯片系统,该芯片系统包括处理器,用于支持执行设备或训练设备实现上述方面中所涉及的功能,例如,发送或处理上述方法中所涉及的数据;或,信息。在一种可能的设计中,所述芯片系统还包括存储器,所述存储器,用于保存执 行设备或训练设备必要的程序指令和数据。该芯片系统,可以由芯片构成,也可以包括芯片和其他分立器件。
本申请实施例提供了一种神经网络结构确定方法,所述方法包括:获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述第二block用于根据M个第一输出,进行所述第二block对应的运算;其中,所述M个第一输出由所述每个第一block的输出分别与对应的目标权重进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;对所述初始神经网络进行模型训练,以获取更新后的M个目标权重;根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的N个第一block的输出,进行所述第二block对应的运算,所述N小于所述M。通过上述方式,在对初始神经网络进行block之间的连接关系的搜索过程中,通过在block之间的连接上加入可训练的目标权重,根据更新后的目标权重大小,作为block之间连接关系的重要性判断依据,并基于更新后的目标权重大小进block之间的连接关系的选择和剔除,从而实现了对神经网络的拓扑结构的搜索。
附图说明
图1为人工智能主体框架的一种结构示意图;
图2为本申请实施例的应用场景;
图3为本申请实施例提供的一种系统架构的示意图;
图4为本申请实施例提供的一种神经网络结构确定方法的流程示意;
图5a为本申请实施例提供的一种神经网络结构确定方法的示意;
图5b为本申请实施例提供的一种神经网络结构确定方法的示意;
图5c为本申请实施例提供的一种神经网络结构确定方法的示意;
图6为本申请实施例提供的一种神经网络结构确定方法的示意;
图7为本申请实施例提供的一种神经网络结构确定方法的示意;
图8为本申请实施例提供的一种神经网络结构确定方法的示意;
图9为本申请实施例提供的一种神经网络结构确定方法的示意;
图10为本申请实施例提供的一种神经网络结构确定方法的流程示意;
图11为本申请实施例提供的一种神经网络结构确定方法的示意;
图12为本申请实施例提供的一种神经网络结构确定方法的示意;
图13为本申请实施例提供的一种神经网络结构确定方法的示意;
图14为本申请实施例提供的一种神经网络结构确定方法的流程示意;
图15为本申请实施例提供的一种神经网络结构确定装置的示意;
图16为本申请实施例提供的一种神经网络结构确定装置的示意;
图17为本申请实施例提供的一种神经网络结构确定装置的示意;
图18为本申请实施例提供的一种神经网络结构确定装置的示意;
图19为本申请实施例提供的一种执行设备的结构示意;
图20为本申请实施例提供的一种训练设备的结构示意;
图21为本申请实施例提供的一种芯片的示结构意。
具体实施方式
下面结合本发明实施例中的附图对本发明实施例进行描述。本发明的实施方式部分使用的术语仅用于对本发明的具体实施例进行解释,而非旨在限定本发明。
下面结合附图,对本申请的实施例进行描述。本领域普通技术人员可知,随着技术的发展和新场景的出现,本申请实施例提供的技术方案对于类似的技术问题,同样适用。
本申请的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的术语在适当情况下可以互换,这仅仅是描述本申请的实施例中对相同属性的对象在描述时所采用的区分方式。此外,术语“包括”和“具有”以及他们的任何变形,意图在于覆盖不排他的包含,以便包含一系列单元的过程、方法、系统、产品或设备不必限于那些单元,而是可包括没有清楚地列出的或对于这些过程、方法、产品或设备固有的其它单元。
首先对人工智能系统总体工作流程进行描述,请参见图1,图1示出的为人工智能主体框架的一种结构示意图,下面从“智能信息链”(水平轴)和“IT价值链”(垂直轴)两个维度对上述人工智能主题框架进行阐述。其中,“智能信息链”反映从数据的获取到处理的一列过程。举例来说,可以是智能信息感知、智能信息表示与形成、智能推理、智能决策、智能执行与输出的一般过程。在这个过程中,数据经历了“数据—信息—知识—智慧”的凝练过程。“IT价值链”从人智能的底层基础设施、信息(提供和处理技术实现)到系统的产业生态过程,反映人工智能为信息技术产业带来的价值。
(1)基础设施
基础设施为人工智能系统提供计算能力支持,实现与外部世界的沟通,并通过基础平台实现支撑。通过传感器与外部沟通;计算能力由智能芯片(CPU、NPU、GPU、ASIC、FPGA等硬件加速芯片)提供;基础平台包括分布式计算框架及网络等相关的平台保障和支持,可以包括云存储和计算、互联互通网络等。举例来说,传感器和外部沟通获取数据,这些数据提供给基础平台提供的分布式计算系统中的智能芯片进行计算。
(2)数据
基础设施的上一层的数据用于表示人工智能领域的数据来源。数据涉及到图形、图像、语音、文本,还涉及到传统设备的物联网数据,包括已有系统的业务数据以及力、位移、液位、温度、湿度等感知数据。
(3)数据处理
数据处理通常包括数据训练,机器学习,深度学习,搜索,推理,决策等方式。
其中,机器学习和深度学习可以对数据进行符号化和形式化的智能信息建模、抽取、预处理、训练等。
推理是指在计算机或智能系统中,模拟人类的智能推理方式,依据推理控制策略,利用形式化的信息进行机器思维和求解问题的过程,典型的功能是搜索与匹配。
决策是指智能信息经过推理后进行决策的过程,通常提供分类、排序、预测等功能。
(4)通用能力
对数据经过上面提到的数据处理后,进一步基于数据处理的结果可以形成一些通用的能力,比如可以是算法或者一个通用系统,例如,翻译,文本的分析,计算机视觉的处理,语音识别,图像的识别等等。
(5)智能产品及行业应用
智能产品及行业应用指人工智能系统在各领域的产品和应用,是对人工智能整体解决方案的封装,将智能信息决策产品化、实现落地应用,其应用领域主要包括:智能终端、智能交通、智能医疗、自动驾驶、智慧城市等。
本申请实施例可以应用在图片分类、物体检测、语义分割、室内布局(room layout)、图片补全或自动编码等等场景中。
下面以ADAS/ADS视觉感知系统和手机美颜两种应用场景为例对本申请的应用场景做简单的介绍。
应用场景1:ADAS/ADS视觉感知系统
如图2所示,在ADAS和ADS中,需要实时进行多类型的2D目标检测,包括:动态障碍物(行人(Pedestrian)、骑行者(Cyclist)、三轮车(Tricycle)、轿车(Car)、卡车(Truck)、公交车(Bus)),静态障碍物(交通锥标(TrafficCone)、交通棍标(TrafficStick)、消防栓(FireHydrant)、摩托车(Motocycle)、自行车(Bicycle)),交通标志((TrafficSign)、导向标志(GuideSign)、广告牌(Billboard)、红色交通灯(TrafficLight_Red)/黄色交通灯(TrafficLight_Yellow)/绿色交通灯(TrafficLight_Green)/黑色交通灯(TrafficLight_Black)、路标(RoadSign))。另外,为了准确获取动态障碍物的在3维空间所占的区域,还需要对动态障碍物进行3D估计,输出3D框。为了与激光雷达的数据进行融合,需要获取动态障碍物的Mask,从而把打到动态障碍物上的激光点云筛选出来;为了进行精确的泊车位,需要同时检测出泊车位的4个关键点;为了进行构图定位,需要检测出静态目标的关键点。使用本申请实施例提供的技术方案,可以在神经网络中完成上述的全部或一部分功能。
应用场景2:手机美颜功能
在手机中,通过本申请实施例提供的神经网络检测出人体的Mask和关键点,可以对人体相应的部位进行放大缩小,比如进行收腰和美臀操作,从而输出美颜的图像。
应用场景3:图像分类场景:
在获取待分类图像后,可以基于神经网络获取待分类图像中的物体的类别,然后可根据待分类图像中物体的类别对待分类图像进行分类。对于摄影师来说,每天会拍很多照片,有动物的,有人物,有植物的。采用本申请的方法可以快速地将照片按照照片中的内容进行分类,可分成包含动物的照片、包含人物的照片和包含植物的照片。
对于图像数量比较庞大的情况,人工分类的方式效率比较低下,并且人在长时间处理同一件事情时很容易产生疲劳感,此时分类的结果会有很大的误差;而采用本申请的方法可以快速地将图像进行分类,并且不会有误差。
应用场景4:商品分类:
在获取包括商品的图像后,可以神经网路的处理获取商品的图像中商品的类别,然后根据商品的类别对商品进行分类。对于大型商场或超市中种类繁多的商品,采用本申请的物体识别方法可以快速完成商品的分类,降低了时间开销和人工成本。
本申请实施例可以进行神经网络的结构搜索,并对搜索得到的神经网络进行训练,得到的训练后的神经网络可以进行如上几种场景的任务处理。
由于本申请实施例涉及大量神经网络的应用,为了便于理解,下面先对本申请实施例涉及的相关术语及神经网络等相关概念进行介绍。
(1)物体检测,利用图像处理和机器学习、计算机图形学等相关方法,物体检测可以确定图像物体的类别,并确定用于定位物体的检测框。
(2)卷积神经网络(Convosutionas Neuras Network,CNN)是一种带有卷积结构的深度神经网络。卷积神经网络包含了一个由卷积层和子采样层构成的特征抽取器。该特征抽取器可以看作是滤波器。本实施例中的感知网络可以包括卷积神经网络,用于对图像进行卷积处理或者对特征图进行卷积处理来生成特征图。
(3)反向传播算法
卷积神经网络可以采用误差反向传播(back propagation,BP)算法在训练过程中修正初始的超分辨率模型中参数的大小,使得超分辨率模型的重建误差损失越来越小。具体地,前向传递输入信号直至输出会产生误差损失,通过反向传播误差损失信息来更新初始的超分辨率模型中参数,从而使误差损失收敛。反向传播算法是以误差损失为主导的反向传播运动,旨在得到最优的超分辨率模型的参数,例如权重矩阵。本实施例中,在进行感知网络的训练时,可以基于反向传播算法来更新感知网络。
3、特征图:即Feature Map。神经网络的输入数据、输出数据、中间结果数据等,都可以称为特征图。在神经网络中,数据都是以三维形式(长、宽、通道数量)存在的,可以把它看成多个二维图片堆叠在一起。
4、网络结构块:即block。在神经网络的设计中,往往分为两步。第一步是设计Block,Block是由原子单元(例如卷积操作、池化操作等)组成的单元;第二步是把Block组合成一个完整的网络结构。
5、通道:即Channel,特征图中除了长度、宽度之外的第三个维度。可以理解成特征图的厚度。另外对于卷积层等原子操作,也有通道数量这个维度。
6、Block的宽度:对一个Block而言,其内部原子单元的拓扑关系已固定,但原子单元的输入输出特征的通道数不固定。这是Block的一个可变属性,叫做Block的宽度。
7、网络的宽度:神经网络中所有Block的宽度的集合,称为这个网络的宽度。一般是一组整数。
8、网络的深度:由Block堆叠成神经网络时,Block的堆叠数量。它和网络的卷积堆叠深度是正相关的。
9、网络的阶段:即Stage。神经网络中,会通过多次降采样把输入的特征图逐渐变小。两次降采样之间,构成了网络的一个阶段。一般而言,网络的一个阶段内的Block,它们的 宽度是相同的。
10、网络结构编码:在本发明中,网络的深度、宽度组成网络结构编码。当拓扑结构确定之后,网络结构编码可以唯一确定一个网络的结构。网络结构编码的位数,一般和网络的阶段数量相同。
11、网络结构编码候选集:在本发明中,可能符合要求的网络结构编码的集合,称为候选集。
12、网络的计算量:即FLOPs。整个网络中进行的浮点乘法的数量,这部分是最耗时的,因而用来表示网络的计算量。
13、加权求和:当不同原子操作的输出进行聚合时,对于相同形状的特征图,可以进行求和或者堆叠。本发明中,一律使用求和。但是在求和中,给每个输入都乘以一个可以学习的权重。这就是加权求和。
14、网络的数据处理性能:神经网络好坏的指标,例如网络在测试集的精度、在训练集上的损失函数值等。需要根据业务需求,人工指定。
15、目标任务:想要解决的最终任务,相对于代理任务存在。例如对ImageNet数据集进行图像分类、在业务数据集上进行人脸识别等等。
16、代理任务:AutoML进行网络结构优化时,需要对大量的网络评估性能,如果直接在目标任务上进行训练、测试,资源消耗会变得无法接受。所以会人工设计一个更小的任务,能够快速完成网络的训练和测试,这就是代理任务。
图3是本申请实施例提供的一种系统架构的示意图,在图3中,执行设备110配置输入/输出(input/output,I/O)接口112,用于与外部设备进行数据交互,用户可以通过客户设备140向I/O接口112输入数据。
在执行设备120对输入数据进行预处理,或者在执行设备120的计算模块111执行计算等相关的处理(比如进行本申请中神经网络的功能实现)过程中,执行设备120可以调用数据存储系统150中的数据、代码等以用于相应的处理,也可以将相应处理得到的数据、指令等存入数据存储系统150中。
最后,I/O接口112将处理结果返回给客户设备140,从而提供给用户。
可选地,客户设备140,例如可以是自动驾驶系统中的控制单元、手机终端中的功能算法模块,例如该功能算法模块可以用于实现相关的任务。
值得说明的是,训练设备120可以针对不同的目标或称不同的任务,基于不同的训练数据生成相应的目标模型/规则,该相应的目标模型/规则即可以用于实现上述目标或完成上述任务,从而为用户提供所需的结果。
在图3中所示情况下,用户可以手动给定输入数据,该手动给定可以通过I/O接口112提供的界面进行操作。另一种情况下,客户设备140可以自动地向I/O接口112发送输入数据,如果要求客户设备140自动发送输入数据需要获得用户的授权,则用户可以在客户设备140中设置相应权限。用户可以在客户设备140查看执行设备110输出的结果,具体的现形式可以是显示、声音、动作等具体方式。客户设备140也可以作为数据采集端,采集如图所示输 入I/O接口112的输入数据及输出I/O接口112的输出结果作为新的样本数据,并存入数据库130。当然,也可以不经过客户设备140进行采集,而是由I/O接口112直接将如图所示输入I/O接口112的输入数据及输出I/O接口112的输出结果,作为新的样本数据存入数据库130。
值得注意的是,图3仅是本申请实施例提供的一种系统架构的示意图,图中所示设备、器件、模块等之间的位置关系不构成任何限制,例如,在图3中,数据存储系统150相对执行设备110是外部存储器,在其它情况下,也可以将数据存储系统150置于执行设备110中。
参见图4,图4为本申请实施例提供的一种神经网络结构确定方法的流程示意,如图4所示,本申请实施例提供的神经网络结构确定方法包括:
401、获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述第二block用于根据M个第一输出,进行所述第二block对应的运算;其中,所述M个第一输出由所述每个第一block的输出分别与对应的目标权重权重进行乘积运算得到,所述目标权重权重为可训练的权重权重,所述M为大于1的整数。
本申请实施例可以对神经网络的拓扑结构进行搜索。其中,本实施例中的神经网络的拓扑结构,可以具体指神经网络中的结构块block之间的连接关系。
其中,结构块block也可以称之为网络结构块,结构块block可以包括一定数量的原子操作,原子操作可以包含但不限于卷积、池化、残差连接等操作,例如可以包括以下操作类型:1x3和3x1 convolution、1x7和7x1 convolution、3x3 dilated convolution、3x3 average pooling、3x3 max pooling、5x5 max pooling、7x7 max pooling、1x1 convolution、3x3 convolution、3x3 separable conv、5x5 seperable conv、7x7 separable conv、跳连接操作、置零操作(Zero,相应位置所有神经元置零)等等。其中,示例性的,3x3 average pooling表示池化核大小为3×3的均值池化;3x3 max pooling表示池化核大小为3×3的最大值池化;3x3 dilated convolution表示卷积核大小为3×3且空洞率为2的空洞卷积;3x3 separable conv表示卷积核大小为3×3的分离卷积;5x5 seperable conv表示卷积核大小为5×5的分离卷积。
在神经网络的设计中,往往分为两步,第一步是设计结构块block,第二步是把block进行连接以组合成一个完整的网络结构。
本申请实施例中,所谓block之间的连接关系,可以理解为block之间的数据传输方向,具体的,block可以对输入数据进行block对应的运算,并得到运算结果,运算结果可以输入到与该block相连的下一个block,并作为下一个block的输入数据。也就是说,两个第一block之间具有连接关系可以表示一个block的输出是作为另一个block的输入。
本申请实施例中,为了对神经网络的拓扑结构进行搜索,首先将待搜索的神经网络中的block进行大量的连接,并在模型的训练过程中确定哪些连接可以被保留,哪些连接可以被舍弃。
接下来首先描述如何将待搜索的神经网络中的block进行连接。
在一些场景中,在神经网络中的block的具体类型,或者表述为block中包括的原子操作的类型,网络的宽度,或者表述为block中各个原子操作的输入特征的通道数以及输出特征 的通道数,以及深度,或者表述为神经网络包括的block的数量,等权重确定之后,可以将神经网络中的全部或部分block之间进行两两连接。
在一种实现中,可以将神经网络中同一阶段的全部或部分block之间进行两两连接。在神经网络中,会通过多次降采样把输入的特征图逐渐变小。在两次降采样之间,构成了神经网络的一个阶段,一般而言,在神经网络的一个阶段内的block,它们的宽度(block中各个原子操作的输入特征的通道数以及输出特征的通道数)是相同的。
在一种实现中,可以将神经网络中同一阶段的全部block之间进行两两连接。示例性的,如图5a所示,block1、block2以及block3为神经网络中同一个阶段内的block,block1与block2连接,block2与block3连接,且block1与block3连接。示例性的,如图5b所示,block1、block2、block3以及block4为神经网络中同一个阶段内的block,block1与block2连接,block2与block3连接,block1与block3连接,block2与block4连接,block1与block4连接,block3与block4连接。
在一种实现中,可以将神经网络中同一阶段的部分block之间进行连接。示例性的,如图5c所示,block1、block2、block3以及block4为神经网络中同一个阶段内的block,block1与block2连接,block2与block3连接,block1与block3连接,block2与block4连接,block3与block4连接,而block1与block4之间未连接。需要说明是,虽然可以认为block1与block4之间没有连接关系,但是block1与block4之间还可以存在其他数据通路,例如block1的输出可以作为block2的输入,block2的输出可以作为block4的输入,即使block1与block4之间存在block1-block1-block1的数据通路,本实施例仍然认为block1与block4之间没有连接关系。
本申请实施例中,为了可以在模型的训练过程中,确定出block之间的哪些连接被保留,可以在两个block之间的连接上设置一个可以训练的权重参数(本实施例也称之为目标权重),一个block的输出可以与对应的目标权重相乘(本实施例中也称之为乘积运算),之后将乘积运算的结果输入到另一个block中。更具体的,以在block1和block2之间设置目标权重1为例,在未设置目标权重1的情况下,block1的输出直接作为block2的输入,在设置目标权重1的情况下,block1的输出会先与目标权重1进行乘积运算,之后将乘积运算的结果作为block2的输入。在模型的训练过程中,随着训练迭代的进行,各个目标权重会被更新,更新后的目标权重的大小可以表示所在的连接是否重要。
应理解,在多个block与同一个block存在连接关系时,例如多个block的输出同时作为一个block的输入,此时多个block的输出与对应的目标权重进行乘积运算之后,可以将乘积运算的结果进行相加,并将加和结果作为与上述多个block连接的block的输入。示例性的,可以参照图6,第一block1和第一block2都与第二block连接,第一block1的输出与目标权重1相乘,第一block2的输出与目标权重2相乘,两个相乘的结果可以进行加和运算,两个加和运算的结果可以作为第二block的输入。
本申请实施例中,训练设备可以获取到待训练的初始神经网络,待训练的初始神经网络可以是在神经网络中的block的具体类型、网络的宽度以及深度等权重确定之后,将神经网络中的全部或部分block之间进行两两连接之后得到的。
其中,所述初始神经网络可以包括M个第一结构块block和第二block,所述第二block 与每个第一block连接,且每个第一block对应一个目标权重,所述初始神经网络用于将每个第一block的输出与对应的目标权重进行乘积运算,以得到M个第一输出,所述第二block用于根据所述M个第一输出,进行所述第二block对应的运算。具体的,所述初始神经网络中的所述第二block可以用于根据所述M个第一输出的加和结果,进行所述第二block对应的运算。
其中,所述M个第一block的每个第一block的输入和输出的通道数与所述第二block的输入和输出的通道数一致,也就是说M个第一block和第二block为初始神经网络中同一个阶段内的block。
以M为3为例,参照图7,初始神经网络可以包括3个第一block(包括第一block1,第一block2以及第一block3)以及第二block,第二block与第一block1,第一block2以及第一block3连接,第一block1对应目标权重1,第一block2对应目标权重2,第一block3对应目标权重3,所述初始神经网络用于将第一block1的输出与目标权重1进行乘积运算,以得到第一输出1,将第一block2的输出与目标权重2进行乘积运算,以得到第一输出2,将第一block3的输出与目标权重3进行乘积运算,以得到第一输出3,所述第二block用于根据第一输出1、第一输出2以及第一输出3,进行所述第二block对应的运算,具体的,第二block用于根据第一输出1、第一输出2以及第一输出3的加和结果,进行所述第二block对应的运算。
应理解,在初始神经网络中具有连接关系的block之间,还可以具有除了目标权重的其他运算单元,例如用于调整特征图大小的运算单元等等,本申请并不限定。
402、对所述初始神经网络进行模型训练,以获取更新后的M个目标权重。
本申请实施例中,训练设备在获取到待训练的初始神经网络之后,可以对所述初始神经网络进行模型训练,以获取更新后的M个目标权重。
本申请实施例中,训练设备可以对初始神经网络在目标任务上进行模型训练,并对M个目标权重进行更新,在M个目标权重稳定时可以获取到更新后的M个目标权重。所谓目标权重稳定,可以理解为在迭代训练过程中目标权重的变化在一定范围内了,在一些实现中,可以通过迭代训练的次数来确定M个目标权重是否稳定,例如,训练设备可以对所述初始神经网络进行第一预设迭代次数的模型训练,以获取更新后的M个目标权重,第一预设迭代次数可以为预先设定的值,其可以根据迭代训练所需的总次数来确定。例如当迭代训练的次数达到总共需要训练的次数的一定百分比时,认为M个目标权重稳定。
本申请实施例中,更新后的M个目标权重的获取为在整体训练轮数的固定百分比位置进行,既保证了目标权重的稳定,又保证了优化拓扑后的网络得到充分训练。同时保持了单个拓扑优化的时间,和原始训练时间基本相同,保证搜索效率。
应理解,在对初始神经网络进行模型训练的过程中,可以对网络普通权重(也就是block中包括的原子操作中待训练的权重)和M个目标权重同时进行更新,也可以是对网络普通权重和M个目标权重交替进行更新,本申请并不限定。
403、根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述 M个第一block的连接关系,以获取第一神经网络;其中,所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出,进行所述第二block对应的运算,所述N小于所述M。
本申请实施例中,在对所述初始神经网络进行模型训练,以获取更新后的M个目标权重之后,可以根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出,进行所述第二block对应的运算。具体的,所述第一神经网络中的第二block可以用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出的加和结果,进行所述第二block对应的运算。
其中,更新后的M个目标权重的大小可以指示所在的block之间的连接是否重要,其中,重要的判断标准是,更新后的目标权重的大小越大,则所在的block之间的连接越重要。具体的,可以保留更新后的M个目标权重中最大的N个目标权重所在的连接,而将更新后的M个目标权重中除了最大的N个目标权重之外的目标权重所在的连接剔除。
在一种实现中,所述初始神经网络中的所述M个第一block以及第二block依次形成串行连接,且所述第二block为所述串行连接的终点,所述初始神经网络中的所述M个第一block包括目标block,所述目标block在所述串行连接上与所述第二block连接,在所述目标block对应的更新后的目标权重不属于所述更新后的M个目标权重中最大的N个目标权重的情况下,所述第一神经网络中的第二block还用于根据所述目标block的输出,进行所述第二block对应的运算。
也就是说,无论目标block对应的更新后的目标权重是否是更新后的M个目标权重中最大的N个目标权重,目标block与第二block之间的连接总是被保留,目标block与第二block之间的连接可以称之为骨干连接,骨干连接不会被剔除可以保证整个神经网络的骨干架构不被破坏。
具体的,在一种实现中,若属于骨干连接的更新后的目标权重为M个目标权重中最大的N个目标权重之一,则可以保留N个更新后的目标权重所在的连接,若属于骨干连接的更新后的目标权重不为M个目标权重中最大的N个目标权重之一,则可以保留N+1个更新后的目标权重所在的连接。
示例性的,可以参照图8,以M的数量为3,N为1为例,更新后的目标权重1大于更新后的目标权重2以及更新后的目标权重3,则可以保留第一block1和第二block之间的连接,以及第一block3和第二block之间的连接(骨干连接),如图8所示,所述第一神经网络中的第二block用于根据第一block1以及第一block3的输出,进行所述第二block对应的运算,具体的,第一神经网络中的第二block用于根据第一block1以及第一block3的输出的加和结果,进行所述第二block对应的运算。
示例性的,可以参照图9,以M的数量为3,N为1为例,更新后的目标权重3大于更新后的目标权重1以及更新后的目标权重2,则可以仅保留第一block3和第二block之间的连接,如图8所示,所述第一神经网络中的第二block用于根据第一block3的输出,进行所述第二 block对应的运算。
本申请实施例中,可以将初始神经网络中的每个block作为上述实施例中的第二block,将输出作为第二block的输入的block作为第一block,并进行上述连接的剔除和选择,以得到第一神经网络。
本申请实施例中,在获取到第一神经网络之后,可以对所述第一神经网络进行模型训练,直至所述第一神经网络的数据处理精度满足预设条件或模型训练的迭代次数达到第二预设迭代次数,以得到第二神经网络。
具体的,训练设备可以获取待训练的数据,所述待训练的数据包括如下的至少一种:图像数据、文字数据以及语音数据;相应的,训练设备可以根据所述待训练的数据,对所述初始神经网络进行模型训练。
本申请实施例提供了一种神经网络结构确定方法,所述方法包括:获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述第二block用于根据M个第一输出,进行所述第二block对应的运算;其中,所述M个第一输出由所述每个第一block的输出分别与对应的目标权重进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;对所述初始神经网络进行模型训练,以获取更新后的M个目标权重;根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的N个第一block的输出,进行所述第二block对应的运算,所述N小于所述M。通过上述方式,在对初始神经网络进行block之间的连接关系的搜索过程中,通过在block之间的连接上加入可训练的目标权重,根据更新后的目标权重大小,作为block之间连接关系的重要性判断依据,并基于更新后的目标权重大小进block之间的连接关系的选择和剔除,从而实现了对神经网络的拓扑结构的搜索。
参见图10,图10为本申请实施例提供的一种神经网络结构确定方法的流程示意,如图10所示,本申请实施例提供的神经网络结构确定方法包括:
1001、获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述每个第一block用于根据对应的第一输出,进行所述第一block对应的运算;其中,所述每个第一block对应的第一输出由第一block对应的目标权重与所述第二block的输出进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数。
和图4对应的实施例中不同的是,图4中,M个第一block的输出是作为第二block的输入,在后续进行连接关系的选择过程中,也是基于M个第一block对应的M个更新后的目标权重的大小进行的。本实施例中,第二block的输出是作为M个第一block的输入,示例性的,如图11所示,以M为3为例,M个第一block包括第一block1,第一block2以及第一block3,第二block的输出可以作为第一block1的输入,第一block2的输入以及第一block3的输入。具体的,第二block的输出可以与目标权重1相乘,相乘结果作为第一block1的输入,第二block的输出可以与目标权重2相乘,相乘结果作为第一block2的输入,第二block的输出可以与目标权重 3相乘,相乘结果作为第一block3的输入。
在一种可能的实现中,所述M个第一block的每个第一block的输入和输出的通道数与所述第二block的输入和输出的通道数一致。
应理解,步骤1001的其他具体描述可以参照步骤401对应的实施例中相似的描述,此处不再赘述。
1002、对所述初始神经网络进行模型训练,以获取更新后的M个目标权重。
在一种可能的实现中,训练设备可以对所述初始神经网络进行第一预设迭代次数的模型训练,以获取更新后的M个目标权重。
步骤1002的具体描述可以参照步骤402对应的实施例中的描述,这里不再赘述。
1003、根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述更新后的M个目标权重中最大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一block对应的运算,所述N小于所述M。
和步骤403对应的实施例中相似,本实施例中,更新后的M个目标权重中最大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一block对应的运算。
为了保留骨干连接,在一种可能的实现中,所述初始神经网络中的所述第二block与所述M个第一block依次形成串行连接,且所述第二block为所述串行连接的起点,所述初始神经网络中的所述M个第一block包括目标block,所述目标block在所述串行通路上与所述第二block连接,在所述目标block对应的更新后的目标权重不属于所述更新后的M个目标权重中最大的N个目标权重的情况下,所述第一神经网络中的所述目标block还用于根据所述第二block的输出,进行所述目标block对应的运算。
在一种可能的实现中,所述N为1。
示例性的,可以参照图12,以M的数量为3,N为1为例,更新后的目标权重1大于更新后的目标权重2以及更新后的目标权重3,则可以保留第一block1和第二block之间的连接,以及第一block3和第二block之间的连接(骨干连接),如图12所示,所述第一神经网络中的第二block的输出用于分别作为第一block1、以及第一block3的输入,第一block1用于根据第二block的输出,进行所述第一block1对应的运算,第一block3用于根据第二block的输出,进行所述第一block3对应的运算,具体的,第一神经网络中的第一block3用于根据第二block的输出以及第一block2的输出的加和结果,进行所述第一block3对应的运算。
示例性的,可以参照图13,以M的数量为3,N为1为例,更新后的目标权重2大于更新后的目标权重1以及更新后的目标权重3,则可以仅保留第一block1和第二block之间的连接,如图13所示,所述第一神经网络中的一block1用于根据第二block的输出,进行所述第一block1对应的运算。
本申请实施例中,可以将初始神经网络中的每个block作为上述实施例中的第二block,将第二block的输出作为输入的block作为第一block,并进行上述连接的剔除和选择,以得到第一神经网络。
在一种可能的实现中,训练设备可以对所述第一神经网络进行模型训练,直至所述第 一神经网络的数据处理精度满足预设条件或模型训练的迭代次数达到第二预设迭代次数,以得到第二神经网络。
在一种可能的实现中,获取待训练的数据,所述待训练的数据包括如下的至少一种:图像数据、文字数据以及语音数据;相应的,训练设备可以根据所述待训练的数据,对所述初始神经网络进行模型训练。
本申请提供了一种神经网络结构确定方法,所述方法包括:获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述每个第一block用于根据对应的第一输出,进行所述第一block对应的运算;其中,所述每个第一block对应的第一输出由第一block对应的目标权重与所述第二block的输出进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;其中,所述目标权重为可训练的权重,所述M为大于1的整数;对所述初始神经网络进行模型训练,以获取更新后的M个目标权重;根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述更新后的M个目标权重中最大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一block对应的运算,所述N小于所述M。通过上述方式,在对初始神经网络进行block之间的连接关系的搜索过程中,通过在block之间的连接上加入可训练的目标权重,根据更新后的目标权重大小,作为block之间连接关系的重要性判断依据,并基于更新后的目标权重大小进block之间的连接关系的选择和剔除,从而实现了对神经网络的拓扑结构的搜索。
参照图14,图14为本申请实施例提供的一种神经网络结构确定方法的流程示意,如图14所示,本申请实施例提供的神经网络结构确定方法包括:
1401、获取多个目标编码,每个目标编码用于指示一个候选神经网络,所述多个目标编码包括第一目标编码和多个第一编码,所述第一目标编码用于指示第一神经网络。
本申请实施例中,在进行神经网络的结构搜索时,可以将神经网络的结构特征写成编码的形式,每个编码用于指示一个候选神经网络的如下结构特征的至少一种:候选神经网络包括的运算单元的类型、候选神经网络包括的运算单元的数量以及候选神经网络包括的运算单元的输入特征和输出特征通道数量。其中,运算单元可以指block中各个原子操作,则换一种表述,每个编码用于指示一个候选神经网络包括的原子操作的类型,候选神经网络包括的原子操作的数量以及候选神经网络包括的原子操作的输入特征和输出特征通道数量。由于在同一阶段内,各个block中的原子操作的输入特征和输出特征通道数量相同,则相当于每个编码用于指示一个候选神经网络包括的block的输入特征和输出特征通道数量。
在一种实现中,训练设备可以生成多个候选的编码,并基于预设的规则对多个候选的编码进行筛选,其中预设的规则可以是如下的至少一种:从多个候选的编码中选择在运行指示的候选神经网络时所需的计算量小于第一预设值、包括的权重量小于第二预设值,以及在运行指示的候选神经网络时的运行速度高于第三预设值。其中,计算量可以为整个神 经网络中需要进行的浮点乘法的数量,浮点乘法的运算是最耗时的,因而可以用来表示神经网络的计算量。上述第一预设值、第二预设值以及第三预设值可以为预先设定的。
本申请实施例中,可以对多个编码进行聚类,以得到多个编码集合,每个编码集合对应一个聚类类别,所述多个编码集合包括目标编码集合,所述目标编码集合包括所述多个目标编码。其中,上述多个编码可以是对多个候选编码进行筛选后得到的。所述第一目标编码可以为所述目标编码集合中的一个编码,在一种实现中,第一目标编码可以为所述目标编码集合的聚类中心。所述第一目标编码用于指示第一神经网络。
应理解,上述聚类可以为K-Means算法、DBSCAN算法、BIRCH算法和MeanShift算法等。
1402、对所述第一神经网络进行模型训练,以获取所述第一神经网络的数据处理精度。
本申请实施例中,训练设备可以选择多个目标编码中的一个编码(第一目标编码)指示的第一神经网络进行模型训练,以获取所述第一神经网络的数据处理精度。
应理解,在获取第一神经网络之后,可以对第一神经网络进行网络拓扑的优化,例如可以通过图4至图13对应的实施例中所描述的神经网络结构确定方法来对第一神经网络进行优化,在这种情况下,可以对优化后的第一神经网络进行模型训练,以获取所述第一神经网络的数据处理精度。
本申请实施例中,神经网络数据处理精度可以为训练网络的损失函数的值、神经网络的测试精度等等,本申请实施例对此并不限定。
本申请实施例中,并不对多个目标编码指示的多个候选神将网络都进行模型训练,并基于多个候选神经网络的数据处理精度,从中选择精度较高的候选神经网络作为模型的搜索结果,而是只选择其中的一个目标编码(第一目标编码)指示的第一神经网络,并且对第一神经网络进行模型训练,之后基于目标编码之间的差异度,确定多个目标编码中其余目标编码(多个第一编码)指示的候选神经网络的数据处理精度。
1403、根据所述第一目标编码与所述多个第一编码之间的差异度以及所述第一神经网络的数据处理精度,确定每个第一编码指示的候选神经网络的数据处理精度。
本申请实施例中,在对所述第一神经网络进行模型训练,以获取所述第一神经网络的数据处理精度之后,可以根据所述第一目标编码与所述多个第一编码之间的差异度以及所述第一神经网络的数据处理精度,确定每个第一编码指示的候选神经网络的数据处理精度。
本申请实施例中,目标编码可以包括多个位,每个位指示候选神经网络的一个结构特征,为了消除位与位之间的量纲差异对后续的影响,可以对每个目标编码进行标准化。示例性的,可以是对目标编码的每一位,分别计算多个目标编码的均值和标准差,然后对目标编码的每一位减去均值,再除以标准差。此后,目标编码的量纲对后续算法不会再产生影响。
接下来描述,如何根据所述第一目标编码与所述多个第一编码之间的差异度以及所述第一神经网络的数据处理精度,确定每个第一编码指示的候选神经网络的数据处理精度。
示例性的,可以利用高斯过程。高斯过程(Gaussian Process)是一种非常经典、成熟的机器学习算法,它可以根据两两样本点之间的距离、一部分样本点的值,对其他样本点 的值进行估计。在本实施例中,样本点就是各个目标编码,样本点的值就是目标编码指示的候选神经网络的数据处理精度。一个具体的高斯过程,由它的均值函数、协方差函数唯一确定。使用高斯过程进行建模,其实就是学习均值函数和协方差函数。本实施例中可以使用如下的方式学习高斯过程:
首先可以学习协方差函数,其中,协方差函数可以为如下的公式1:
Figure PCTCN2021129757-appb-000001
式中x 1和x 2就是目标编码,σ是需要学习的标准差。标准差的计算方法是:对已经完成的所有目标编码两两组对,计算它们的编码距离{Distance 1,Distance 2,…,Distance L},这些距离符合Distance 1<Distance 2<...<Distance L的顺序,然后取
Figure PCTCN2021129757-appb-000002
作为σ的估计值。这样就完成了对协方差函数的学习。
之后可以计算所有未训练编码的性能均值和标准差,假设当前累计已有n个目标编码完成了训练,它们指示的候选神经网络的数据处理精度记做Acc i,可以定义以下变量:
Figure PCTCN2021129757-appb-000003
式中的kernel是通过公式1进行计算的,x (i)是第i个已完成训练的目标编码。
得到上面两个矩阵之后,对于任何未训练的目标编码x,根据公式3计算它和所有已经完成训练的编码的协方差函数的值:
Figure PCTCN2021129757-appb-000004
根据公式1、2、3、4,可以计算它的均值:
Figure PCTCN2021129757-appb-000005
其中η=0.1,I是单位矩阵。
根据公式1、2、3、5,可以计算它的协方差函数:
Figure PCTCN2021129757-appb-000006
之后可以计算所有目标编码指示的候选神经网络的数据处理精度,对于一个未训练的目标编码,已经获得了它的均值函数和协方差函数,此时可以根据公式6预测该目标编码指示的候选神经网络的数据处理精度:
Figure PCTCN2021129757-appb-000007
其中,
Figure PCTCN2021129757-appb-000008
的含义是目标编码x相对于当前指示的候选神经网络的数据处理精度最高的目标编码,期望有多少的数据处理精度提升,也就是Expected Improvement(EI)。该数值越大,在下一轮中应当优先被训练。该式中的f(x)是一个高斯过程函数:
Figure PCTCN2021129757-appb-000009
式中的
Figure PCTCN2021129757-appb-000010
通过公式4已经求得,
Figure PCTCN2021129757-appb-000011
通过公式5已经求得。
通过上述方式,每个目标编码,都可以通过上面的过程预测自己指示的候选神经网络的数据处理精度提升,进而得到每个第一编码指示的候选神经网络的数据处理精度。
1404、获取所述多个目标编码指示的候选神经网络中数据处理精度最高的第一候选神经网络。
本申请实施例中,在获取到多个目标编码指示的候选神经网络中每个候选神经网络的数据处理精度之后,可以选择多个目标编码指示的候选神经网络中数据处理精度最高的第一候选神经网络。
1405、对所述第一候选神经网络进行模型训练,以得到第一目标神经网络。
由于此时除了第一目标神经网络之外的候选神经网络的数据处理精度是基于目标编码之间的差异度确定的,并不是很精度,因此可以对候选神经网络中数据处理精度最高的第一候选神经网络进行模型训练,以得到第一目标神经网络。
应理解,在获取到第一候选神经网络之后,可以对第一候选神经网络进行网络拓扑的优化,例如可以通过图4至图13对应的实施例中所描述的神经网络结构确定方法来对第一候选神经网络进行优化,在这种情况下,训练设备可以对优化后的第一候选神经网络进行模型训练,以得到第一目标神经网络。
具体的,在一种实现中,所述第一候选神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且每个第一block对应一个目标权重,所述第一候选神经网络用于将每个第一block的输出与对应的目标权重进行乘积运算,以得到M个第一输出,所述第二block用于根据所述M个第一输出,进行所述第二block对应的运算;其中,所述目标权重为可训练的权重,所述M为大于1的整数。
训练设备可以对所述第一候选神经网络进行模型训练,以获取更新后的M个目标权重,并根据所述更新后的M个目标权重,更新所述第一候选神经网络中所述第二block与所述M个第一block的连接关系,以获取第二神经网络;其中,所述第二神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出,进行所述第二block对应的运算,所述N小于所述M,对所述第二神经网络进行模型训练,以得到所述第一目标神经网络。
在一种实现中,所述第一候选神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且每个第一block对应一个目标权重,所述第一候选神经网络用于将所述第二block的输出与每个目标权重进行乘积运算,以得到M个第一输出,每个第一block用于根据对应的第一输出,进行所述第一block对应的运算;其中,所述目标权重为可训练的权重,所述M为大于1的整数。
训练设备可以对所述第一候选神经网络进行模型训练,以获取更新后的M个目标权重;根据所述更新后的M个目标权重,更新所述第一候选神经网络中所述第二block与所述M个第一block的连接关系,以获取第二神经网络;其中,所述更新后的M个目标权重中最大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一block对应的运算,所述N小于所述M;对所述第二神经网络进行模型训练,以得到所述第一目标神经网络。
本申请实施例中,训练设备还可以获取所述第一目标神经网络的数据处理精度,所述多个目标编码包括第二目标编码,所述第二目标编码用于指示所述第一目标神经网络,并根据所述第二目标编码与所述多个目标编码中除所述第二目标编码之外的编码之间的差异度以及所述第一目标神经网络的数据处理精度,确定所述多个目标编码中除所述第二目标编码之外的每个目标编码指示的候选神经网络的数据处理精度,之后训练设备可以根据所述多个目标编码指示的候选神经网络的数据处理精度,确定数据处理精度最高的第二候选神经网络,并对所述第二候选神经网络进行模型训练,以得到第二目标神经网络。
之后训练设备可以重复上述过程,通过预设次数的迭代(例如4轮),就可以得到非常理想的模型作为神经模型结构搜索的结果。
示例性的,若聚类结果得到了10个聚类类别,每次都获取到10个聚类类别对应的编码集合中具有最高数据处理精度的候选神经网络模型来确定编码集合中其余目标编码指示的候选神经网络的数据处理精度,经过4轮的迭代处理,可以对40个候选神经网络进行模型训练,之后可以选择这40个候选神经网络中数据处理精度最高的一个,作为神经模型结构搜索的结果。
本申请实施例中,并不对多个目标编码指示的多个候选神将网络都进行模型训练,并基于多个候选神经网络的数据处理精度,从中选择精度较高的候选神经网络作为模型的搜索结果,而是只选择其中的一个目标编码(第一目标编码)指示的第一神经网络,并且对第一神经网络进行模型训练,之后基于目标编码之间的差异度,确定多个目标编码中其余目标编码(多个第一编码)指示的候选神经网络的数据处理精度,相比于当前的各类拓扑搜索算法,大大减少了模型训练的次数,大幅提升了搜索效率。
接下来以候选神经网络为MobileNetV2网络,在ImageNet图像分类任务上进行神经网络的拓扑结构优化为例进行说明。
首先可以生成编码候选集,MobileNetV2网络本身可以分成7个阶段串行而成,可以对每个阶段的神经网络的深度(候选神经网络包括的运算单元的数量)、候选神经网络包括的运算单元的输出特征通道数量等结构特征进行编码,例如:[1,2,3,4,3,3,1,16,24,32,48,64,192,376]代表着这7个阶段分别要重复1、2、3、4、3、3、1次基础网络结构(即上述的运算单元),同时输出通道数分别是16、24、32、48、64、192、376。每个编码长度都是14位,编码可以唯一确定一个候选神经网络,在编码方式确定之后,还可以对编码的每一位设置搜索的上下限,例如一把14位的搜索上下限分别限制成:3和1、4和1、5和2、6和2、5和1、5和1、3和1、48和16、48和16、64和24、96和32、96和32、256和112、512和256。
之后,训练设备可以根据编码位数以及每一位的搜索上下限,均匀地生成大量编码。每生成一个编码,都会计算该编码指示的神经网络的计算量。根据指定的限制,保留符合要求的编码,组成编码候选集。例如,在300M限制下,可以得到约20000个候选编码。
之后,训练设备可以对编码候选集建模,具体的,可以对每个编码进行标准化。具体是对编码的每一位,在整个候选集上分别计算均值和标准差,然后对编码的每一位减去均值,再除以标准差。此后,编码的量纲对后续算法不会再产生影响。训练设备可以对标准化之后得到的多个编码进行K-Means聚类,得到的聚类中心(第一目标编码)可以被认为 是当前编码空间中最具有代表性的结构,其数据处理精度可以代表整个类的性能,对它们进行性能评估可以更高效地对整个搜索空间中所有网络结构性能进行建模。通过这种方式,第一轮一共生成10个网络编码,分别开始训练,进行拓扑优化并得到数据处理精度。之后,训练设备可以训练全连接网络,具体的,训练设备可以对于每个要训练的编码,都要通过网络解析器D转换成一个具体的神经网络。进行拓扑优化之前,会生成全连接网络,具体的全连接规则是把每个阶段内的所有网络块都连接起来,形成非常密集的网络结构。对网络普通权重和目标权重(具体可以参照图4至图13对应的实施例中的描述)同时进行优化,当训练轮数达到总轮数的40%时,认为更新后的目标权重稳定,并基于更新后的目标权重进行神将网络的拓扑结构优化,并对优化的神经网络进行训练至收敛,并更新编码候选集模型,具体的,在10个网络编码都完成模型训练,并得到数据处理性能之后可以开始更新对所有候选集编码的建模,并根据更新后的聚类中心,再生成10个新的编码,并重复上述过程。(例如进行四轮的重复),则一共训练了40个模型。随着搜索的进行,可以发现后面训练的网络的性能会比前面训练的更好。在这40个模型中,选择最好的一个,作为神经模型结构搜索的结果而被输出。
在具体的产品形态上,本申请实施例可以作为AutoML系统的一个环节提供给用户,用户向平台提供数据集、网络大小要求(权重量要求/速度要求等)并给定一个待调整的基础网络结构,通过图4至图14对应的实施例所描述的神经网络结构确定方法就可以得到优化后的网络结构。本实施例可以作为AutoML系统的一个环节通过云服务向用户提供。
同时,本实施例也可以作为单独的算法包提供给用户,用户按照图4至图14对应的实施例所描述的神经网络结构确定方法,可以得到优化后的网络结构。
参照图15,本申请实施例提供的神经网络结构确定装置可以包含网络编码生成器A、网络大小判断器B、网络编码建模器C、网络解析器D、训练器E以及选边器F,它们之间的相互关系可见图15。
具体的,网络编码生成器A可以根据可能的编码空间,尽量均匀地生成多个编码。这些编码会定义神经网络的结构特征。在生成过程中,会不断根据网络大小判断器B的结果,决定是否将这个编码加入到编码候选集中。
网络大小判断器B可以对各个编码指示的候选神经网络的计算量、权重量、运行速度等进行评估,判断是否满足用户的限制。
网络编码建模器C可以对编码进行建模,评估每个编码指示的候选神经网络可能的数据处理精度,并对网络解析器D发送要训练的编码,同时收到训练器E返回的这些指示的候选神经网络的数据处理精度。在搜索过程,根据收到的数据处理精度,更新建模结果,使自身的评估越来越准。直到搜索结束,给出性能最优的目标编码。
网络解析器D可以把编码转换成一个具体的神经网络。
训练器E可以根据用户提供的训练数据,对一个具体的神经网络进行训练,并输出数据处理精度(例如测试精度、训练损失函数值等)以及训练后的神经网络。
选边器F可以对转换得到的神经网络进行拓扑结构的优化。
参照图16,图16为本申请实施例提供的神经网络结构确定装置1600的结构示意,如图16所示,本申请实施例提供的神经网络结构确定装置1600可以包括:
获取模块1601,用于获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述第二block用于根据M个第一输出,进行所述第二block对应的运算;其中,所述M个第一输出由所述每个第一block的输出分别与对应的目标权重进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数。
获取模块1601的具体描述可以参照步骤401以及对应的实施例中的描述,这里不再赘述。
模型训练模块1602,用于对所述初始神经网络进行模型训练,以获取更新后的M个目标权重;
模型训练模块1602的具体描述可以参照步骤402以及对应的实施例中的描述,这里不再赘述。
模型更新模块1603,用于根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出,进行所述第二block对应的运算,所述N小于所述M。
模型更新模块1603的具体描述可以参照步骤403以及对应的实施例中的描述,这里不再赘述。
在一种可能的实现中,所述初始神经网络中的所述M个第一block以及第二block依次形成串行连接,且所述第二block为所述串行连接的终点,所述初始神经网络中的所述M个第一block包括目标block,所述目标block在所述串行连接上与所述第二block连接,在所述目标block对应的更新后的目标权重不属于所述更新后的M个目标权重中最大的N个目标权重的情况下,所述第一神经网络中的第二block还用于根据所述目标block的输出,进行所述第二block对应的运算。
在一种可能的实现中,所述N为1。
在一种可能的实现中,所述模型训练模块,用于对所述初始神经网络进行第一预设迭代次数的模型训练,以获取更新后的M个目标权重。
在一种可能的实现中,所述模型训练模块,用于对所述第一神经网络进行模型训练,直至所述第一神经网络的数据处理精度满足预设条件或模型训练的迭代次数达到第二预设迭代次数,以得到第二神经网络。
在一种可能的实现中,所述M个第一block的每个第一block的输入和输出的通道数与所述第二block的输入和输出的通道数一致。
在一种可能的实现中,所述初始神经网络中的所述第二block用于根据所述M个第一输出的加和结果,进行所述第二block对应的运算;
所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出的加和结果,进行所述第二block对应的运算。
在一种可能的实现中,所述获取模块,用于获取待训练的数据,所述待训练的数据包括如下的至少一种:图像数据、文字数据以及语音数据;相应的,所述对所述初始神经网络进行模型训练,包括:
所述模型训练模块,用于根据所述待训练的数据,对所述初始神经网络进行模型训练。
参照图17,图17为本申请实施例提供的神经网络结构确定装置1700的结构示意,如图17所示,本申请实施例提供的神经网络结构确定装置1700可以包括:
获取模块1701,用于获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述每个第一block用于根据对应的第一输出,进行所述第一block对应的运算;其中,所述每个第一block对应的第一输出由第一block对应的目标权重与所述第二block的输出进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;
获取模块1701的具体描述可以参照步骤1001以及对应的实施例中的描述,这里不再赘述。
模型训练模块1702,用于对所述初始神经网络进行模型训练,以获取更新后的M个目标权重;
模型训练模块1702的具体描述可以参照步骤1002以及对应的实施例中的描述,这里不再赘述。
模型更新模块1703,用于根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述更新后的M个目标权重中最大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一block对应的运算,所述N小于所述M。
模型更新模块1703的具体描述可以参照步骤1003以及对应的实施例中的描述,这里不再赘述。
在一种可能的实现中,所述初始神经网络中的所述第二block与所述M个第一block依次形成串行连接,且所述第二block为所述串行连接的起点,所述初始神经网络中的所述M个第一block包括目标block,所述目标block在所述串行通路上与所述第二block连接,在所述目标block对应的更新后的目标权重不属于所述更新后的M个目标权重中最大的N个目标权重的情况下,所述第一神经网络中的所述目标block还用于根据所述第二block的输出,进行所述目标block对应的运算。
在一种可能的实现中,所述N为1。
在一种可能的实现中,所述模型训练模块,用于对所述初始神经网络进行第一预设迭代次数的模型训练,以获取更新后的M个目标权重。
在一种可能的实现中,所述模型训练模块,用于对所述第一神经网络进行模型训练,直至所述第一神经网络的数据处理精度满足预设条件或模型训练的迭代次数达到第二预设迭代次数,以得到第二神经网络。
在一种可能的实现中,所述M个第一block的每个第一block的输入和输出的通道数 与所述第二block的输入和输出的通道数一致。
在一种可能的实现中,所述获取模块,用于获取待训练的数据,所述待训练的数据包括如下的至少一种:图像数据、文字数据以及语音数据;相应的,所述对所述初始神经网络进行模型训练,包括:
所述模型训练模块,用于根据所述待训练的数据,对所述初始神经网络进行模型训练。
参照图18,图18为本申请实施例提供的神经网络结构确定装置1800的结构示意,如图18所示,本申请实施例提供的神经网络结构确定装置1800可以包括:
获取模块1801,用于获取多个目标编码,每个目标编码用于指示一个候选神经网络,所述多个目标编码包括第一目标编码和多个第一编码,所述第一目标编码用于指示第一神经网络;
获取模块1801的具体描述可以参照步骤1401以及对应的实施例中的描述,这里不再赘述。
模型训练模块1802,用于对所述第一神经网络进行模型训练,以获取所述第一神经网络的数据处理精度;
模型训练模块1802的具体描述可以参照步骤1402以及对应的实施例中的描述,这里不再赘述。
精度确定模块1803,用于根据所述第一目标编码与所述多个第一编码之间的差异度以及所述第一神经网络的数据处理精度,确定每个第一编码指示的候选神经网络的数据处理精度;
精度确定模块1803的具体描述可以参照步骤1403以及对应的实施例中的描述,这里不再赘述。
所述获取模块1801,用于获取所述多个目标编码指示的候选神经网络中数据处理精度最高的第一候选神经网络;
所述获取模块1801的具体描述可以参照步骤1401以及对应的实施例中的描述,这里不再赘述。
所述模型训练模块1802,用于对所述第一候选神经网络进行模型训练,以得到第一目标神经网络。
模型训练模块1802的具体描述可以参照步骤1405以及对应的实施例中的描述,这里不再赘述。
在一种可能的实现中,所述获取模块,用于获取所述第一目标神经网络的数据处理精度,所述多个目标编码包括第二目标编码,所述第二目标编码用于指示所述第一目标神经网络;
根据所述第二目标编码与所述多个目标编码中除所述第二目标编码之外的编码之间的差异度以及所述第一目标神经网络的数据处理精度,确定所述多个目标编码中除所述第二目标编码之外的每个目标编码指示的候选神经网络的数据处理精度;
根据所述多个目标编码指示的候选神经网络的数据处理精度,确定数据处理精度最高 的第二候选神经网络,并对所述第二候选神经网络进行模型训练,以得到第二目标神经网络。
在一种可能的实现中,每个目标编码用于指示一个候选神经网络的如下结构特征的至少一种:
候选神经网络包括的运算单元的类型、候选神经网络包括的运算单元的数量以及候选神经网络包括的运算单元的输入特征和输出特征通道数量。
在一种可能的实现中,所述装置还包括:
聚类模块,用于对多个编码进行聚类,以得到多个编码集合,每个编码集合对应一个聚类类别,所述多个编码集合包括目标编码集合,所述目标编码集合包括所述多个目标编码。
在一种可能的实现中,所述第一目标编码为所述目标编码集合的聚类中心。
在一种可能的实现中,每个目标编码指示的候选神经网络满足如下条件的至少一种:
在运行每个目标编码指示的候选神经网络时所需的计算量小于第一预设值;
每个目标编码指示的候选神经网络包括的权重量小于第二预设值;以及,
在运行每个目标编码指示的候选神经网络时的运行速度高于第三预设值。
在一种可能的实现中,所述第一候选神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述第二block用于根据M个第一输出,进行所述第二block对应的运算;其中,所述M个第一输出由所述每个第一block的输出分别与对应的目标权重进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;
所述模型训练模块,用于对所述第一候选神经网络进行模型训练,以获取更新后的M个目标权重;
根据所述更新后的M个目标权重,更新所述第一候选神经网络中所述第二block与所述M个第一block的连接关系,以获取第二神经网络;其中,所述第二神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出,进行所述第二block对应的运算,所述N小于所述M;
对所述第二神经网络进行模型训练,以得到所述第一目标神经网络。
在一种可能的实现中,所述第一候选神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述每个第一block用于根据对应的第一输出,进行所述第一block对应的运算;其中,所述每个第一block对应的第一输出由第一block对应的目标权重与所述第二block的输出进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;
所述模型训练模块,用于对所述第一候选神经网络进行模型训练,以获取更新后的M个目标权重;
根据所述更新后的M个目标权重,更新所述第一候选神经网络中所述第二block与所述M个第一block的连接关系,以获取第二神经网络;其中,所述更新后的M个目标权重中最大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一 block对应的运算,所述N小于所述M;
对所述第二神经网络进行模型训练,以得到所述第一目标神经网络。
接下来介绍本申请实施例提供的一种执行设备,请参阅图19,图19为本申请实施例提供的执行设备的一种结构示意图,执行设备1900具体可以表现为虚拟现实VR设备、手机、平板、笔记本电脑、智能穿戴设备、监控数据处理设备等,此处不做限定。其中,执行设备1900可以用于实现图4至图14对应实施例中神经网络结构确定方法。具体的,执行设备1900包括:接收器1901、发射器1902、处理器1903和存储器1904(其中执行设备1900中的处理器1903的数量可以一个或多个,图19中以一个处理器为例),其中,处理器1903可以包括应用处理器19031和通信处理器19032。在本申请的一些实施例中,接收器1901、发射器1902、处理器1903和存储器1904可通过总线或其它方式连接。
存储器1904可以包括只读存储器和随机存取存储器,并向处理器1903提供指令和数据。存储器1904的一部分还可以包括非易失性随机存取存储器(non-volatile random access memory,NVRAM)。存储器1904存储有处理器和操作指令、可执行模块或者数据结构,或者它们的子集,或者它们的扩展集,其中,操作指令可包括各种操作指令,用于实现各种操作。
处理器1903控制执行设备的操作。具体的应用中,执行设备的各个组件通过总线系统耦合在一起,其中总线系统除包括数据总线之外,还可以包括电源总线、控制总线和状态信号总线等。但是为了清楚说明起见,在图中将各种总线都称为总线系统。
上述本申请实施例揭示的方法可以应用于处理器1903中,或者由处理器1903实现。处理器1903可以是一种集成电路芯片,具有信号的处理能力。在实现过程中,上述方法的各步骤可以通过处理器1903中的硬件的集成逻辑电路或者软件形式的指令完成。上述的处理器1903可以是通用处理器、数字信号处理器(digital signal processing,DSP)、微处理器或微控制器,还可进一步包括专用集成电路(application specific integrated circuit,ASIC)、现场可编程门阵列(field-programmable gate array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件。该处理器1903可以实现或者执行本申请实施例中的公开的各方法、步骤及逻辑框图。通用处理器可以是微处理器或者该处理器也可以是任何常规的处理器等。结合本申请实施例所公开的方法的步骤可以直接体现为硬件译码处理器执行完成,或者用译码处理器中的硬件及软件模块组合执行完成。软件模块可以位于随机存储器,闪存、只读存储器,可编程只读存储器或者电可擦写可编程存储器、寄存器等本领域成熟的存储介质中。该存储介质位于存储器1904,处理器1903读取存储器1904中的信息,结合其硬件完成上述方法的步骤。
接收器1901可用于接收输入的数字或字符信息,以及产生与执行设备的相关设置以及功能控制有关的信号输入。发射器1902可用于通过第一接口输出数字或字符信息;发射器1902还可用于通过第一接口向磁盘组发送指令,以修改磁盘组中的数据;发射器1902还可以包括显示屏等显示设备。
本申请实施例还提供了一种训练设备,请参阅图20,图20是本申请实施例提供的训练设备一种结构示意图,训练设备2000上可以部署有图15至图17对应实施例中所描述的神经网络结构确定装置,用于实现图15至图17对应实施例中所描述的神经网络结构确定装置的功能,具体的,训练设备2000由一个或多个服务器实现,训练设备2000可因配置或性能不同而产生比较大的差异,可以包括一个或一个以上中央处理器(central processing units,CPU)2020(例如,一个或一个以上处理器)和存储器2032,一个或一个以上存储应用程序2042或数据2044的存储介质2030(例如一个或一个以上海量存储设备)。其中,存储器2032和存储介质2030可以是短暂存储或持久存储。存储在存储介质2030的程序可以包括一个或一个以上模块(图示没标出),每个模块可以包括对训练设备中的一系列指令操作。更进一步地,中央处理器2020可以设置为与存储介质2030通信,在训练设备2000上执行存储介质2030中的一系列指令操作。
训练设备2000还可以包括一个或一个以上电源2026,一个或一个以上有线或无线网络接口2050,一个或一个以上输入输出接口2058;或,一个或一个以上操作系统2041,例如Windows ServerTM,Mac OS XTM,UnixTM,LinuxTM,FreeBSDTM等等。
本申请实施例中,中央处理器2020,用于执行上述实施例中所描述的神经网络结构确定方法相关的步骤。
本申请实施例中还提供一种包括计算机程序产品,包括代码,当所述代码在计算机上运行时,使得计算机执行如前述执行设备所执行的步骤,或者,使得计算机执行如前述训练设备所执行的步骤。
本申请实施例中还提供一种计算机可读存储介质,该计算机可读存储介质中存储有用于进行信号处理的程序,当其在计算机上运行时,使得计算机执行如前述执行设备所执行的步骤,或者,使得计算机执行如前述训练设备所执行的步骤。
本申请实施例提供的执行设备、训练设备或终端设备具体可以为芯片,芯片包括:处理单元和通信单元,所述处理单元例如可以是处理器,所述通信单元例如可以是输入/输出接口、管脚或电路等。该处理单元可执行存储单元存储的计算机执行指令,以使执行设备内的芯片执行上述实施例描述的数据处理方法,或者,以使训练设备内的芯片执行上述实施例描述的数据处理方法。可选地,所述存储单元为所述芯片内的存储单元,如寄存器、缓存等,所述存储单元还可以是所述无线接入设备端内的位于所述芯片外部的存储单元,如只读存储器(read-only memory,ROM)或可存储静态信息和指令的其他类型的静态存储设备,随机存取存储器(random access memory,RAM)等。
具体的,请参阅图21,图21为本申请实施例提供的芯片的一种结构示意图,所述芯片可以表现为神经网络处理器NPU 2100,NPU 2100作为协处理器挂载到主CPU(Host CPU)上,由Host CPU分配任务。NPU的核心部分为运算电路2103,通过控制器2104控制运算电路2103提取存储器中的矩阵数据并进行乘法运算。
在一些实现中,运算电路2103内部包括多个处理单元(Process Engine,PE)。在一些实现中,运算电路2103是二维脉动阵列。运算电路2103还可以是一维脉动阵列或者能够 执行例如乘法和加法这样的数学运算的其它电子线路。在一些实现中,运算电路2103是通用的矩阵处理器。
举例来说,假设有输入矩阵A,权重矩阵B,输出矩阵C。运算电路从权重存储器2102中取矩阵B相应的数据,并缓存在运算电路中每一个PE上。运算电路从输入存储器2101中取矩阵A数据与矩阵B进行矩阵运算,得到的矩阵的部分结果或最终结果,保存在累加器(accumulator)2108中。
统一存储器2106用于存放输入数据以及输出数据。权重数据直接通过存储单元访问控制器(Direct Memory Access Controller,DMAC)2105,DMAC被搬运到权重存储器2102中。输入数据也通过DMAC被搬运到统一存储器2106中。
BIU为Bus Interface Unit即,总线接口单元2110,用于AXI总线与DMAC和取指存储器(Instruction Fetch Buffer,IFB)2109的交互。
总线接口单元2110(Bus Interface Unit,简称BIU),用于取指存储器2109从外部存储器获取指令,还用于存储单元访问控制器2105从外部存储器获取输入矩阵A或者权重矩阵B的原数据。
DMAC主要用于将外部存储器DDR中的输入数据搬运到统一存储器2106或将权重数据搬运到权重存储器2102中或将输入数据数据搬运到输入存储器2101中。
向量计算单元2107包括多个运算处理单元,在需要的情况下,对运算电路2103的输出做进一步处理,如向量乘,向量加,指数运算,对数运算,大小比较等等。主要用于神经网络中非卷积/全连接层网络计算,如Batch Normalization(批归一化),像素级求和,对特征平面进行上采样等。
在一些实现中,向量计算单元2107能将经处理的输出的向量存储到统一存储器2106。例如,向量计算单元2107可以将线性函数;或,非线性函数应用到运算电路2103的输出,例如对卷积层提取的特征平面进行线性插值,再例如累加值的向量,用以生成激活值。在一些实现中,向量计算单元2107生成归一化的值、像素级求和的值,或二者均有。在一些实现中,处理过的输出的向量能够用作到运算电路2103的激活输入,例如用于在神经网络中的后续层中的使用。
控制器2104连接的取指存储器(instruction fetch buffer)2109,用于存储控制器2104使用的指令;
统一存储器2106,输入存储器2101,权重存储器2102以及取指存储器2109均为On-Chip存储器。外部存储器私有于该NPU硬件架构。
其中,上述任一处提到的处理器,可以是一个通用中央处理器,微处理器,ASIC,或一个或多个用于控制上述程序执行的集成电路。
另外需说明的是,以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。另外,本申请提供的装 置实施例附图中,模块之间的连接关系表示它们之间具有通信连接,具体可以实现为一条或多条通信总线或信号线。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到本申请可借助软件加必需的通用硬件的方式来实现,当然也可以通过专用硬件包括专用集成电路、专用CPU、专用存储器、专用元器件等来实现。一般情况下,凡由计算机程序完成的功能都可以很容易地用相应的硬件来实现,而且,用来实现同一功能的具体硬件结构也可以是多种多样的,例如模拟电路、数字电路或专用电路等。但是,对本申请而言更多情况下软件程序实现是更佳的实施方式。基于这样的理解,本申请的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在可读取的存储介质中,如计算机的软盘、U盘、移动硬盘、ROM、RAM、磁碟或者光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,训练设备,或者网络设备等)执行本申请各个实施例所述的方法。
在上述实施例中,可以全部或部分地通过软件、硬件、固件或者其任意组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。
所述计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行所述计算机程序指令时,全部或部分地产生按照本申请实施例所述的流程或功能。所述计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。所述计算机指令可以存储在计算机可读存储介质中,或者从一个计算机可读存储介质向另一计算机可读存储介质传输,例如,所述计算机指令可以从一个网站站点、计算机、训练设备或数据中心通过有线(例如同轴电缆、光纤、数字用户线(DSL))或无线(例如红外、无线、微波等)方式向另一个网站站点、计算机、训练设备或数据中心进行传输。所述计算机可读存储介质可以是计算机能够存储的任何可用介质或者是包含一个或多个可用介质集成的训练设备、数据中心等数据存储设备。所述可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如固态硬盘(Solid State Disk,SSD))等。

Claims (26)

  1. 一种神经网络结构确定方法,其特征在于,所述方法包括:
    获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述第二block用于根据M个第一输出,进行所述第二block对应的运算;其中,所述M个第一输出由所述每个第一block的输出分别与对应的目标权重进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;
    对所述初始神经网络进行模型训练,以获取更新后的M个目标权重;
    根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的N个第一block的输出,进行所述第二block对应的运算,所述N小于所述M。
  2. 根据权利要求1所述的方法,其特征在于,所述初始神经网络中的所述M个第一block以及第二block依次形成串行连接,且所述第二block为所述串行连接的终点,所述初始神经网络中的所述M个第一block包括目标block,所述目标block在所述串行连接上与所述第二block连接,在所述目标block对应的更新后的目标权重不属于所述更新后的M个目标权重中最大的N个目标权重的情况下,所述第一神经网络中的第二block还用于根据所述目标block的输出,进行所述第二block对应的运算。
  3. 根据权利要求2所述的方法,其特征在于,所述N为1。
  4. 根据权利要求1至3任一所述的方法,其特征在于,所述对所述初始神经网络进行模型训练,以获取更新后的M个目标权重,包括:
    对所述初始神经网络进行第一预设迭代次数的模型训练,以获取更新后的M个目标权重。
  5. 根据权利要求1至4任一所述的方法,其特征在于,所述方法还包括:
    对所述第一神经网络进行模型训练,直至所述第一神经网络的数据处理精度满足预设条件或模型训练的迭代次数达到第二预设迭代次数,以得到第二神经网络。
  6. 根据权利要求1至5任一所述的方法,其特征在于,所述M个第一block的每个第一block的输入和输出的通道数与所述第二block的输入和输出的通道数相同。
  7. 根据权利要求1至6任一所述的方法,其特征在于,所述初始神经网络中的所述第二block用于根据所述M个第一输出的加和结果,进行所述第二block对应的运算;
    所述第一神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个 目标权重对应的第一block的输出的加和结果,进行所述第二block对应的运算。
  8. 根据权利要求1至7任一所述的方法,其特征在于,所述方法还包括:
    获取待训练的数据,所述待训练的数据包括如下的至少一种:图像数据、文字数据以及语音数据;相应的,所述对所述初始神经网络进行模型训练,包括:
    根据所述待训练的数据,对所述初始神经网络进行模型训练。
  9. 一种神经网络结构确定方法,其特征在于,所述方法包括:
    获取待训练的初始神经网络,所述初始神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述每个第一block用于根据对应的第一输出,进行所述第一block对应的运算;其中,所述每个第一block对应的第一输出由第一block对应的目标权重与所述第二block的输出进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;
    对所述初始神经网络进行模型训练,以获取更新后的M个目标权重;
    根据所述更新后的M个目标权重,更新所述初始神经网络中所述第二block与所述M个第一block的连接关系,以获取第一神经网络;其中,所述更新后的M个目标权重中最大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一block对应的运算,所述N小于所述M。
  10. 根据权利要求9所述的方法,其特征在于,所述初始神经网络中的所述第二block与所述M个第一block依次形成串行连接,且所述第二block为所述串行连接的起点,所述初始神经网络中的所述M个第一block包括目标block,所述目标block在所述串行通路上与所述第二block连接,在所述目标block对应的更新后的目标权重不属于所述更新后的M个目标权重中最大的N个目标权重的情况下,所述第一神经网络中的所述目标block还用于根据所述第二block的输出,进行所述目标block对应的运算。
  11. 根据权利要求10所述的方法,其特征在于,所述N为1。
  12. 根据权利要求9至11任一所述的方法,其特征在于,所述对所述初始神经网络进行模型训练,以获取更新后的M个目标权重,包括:
    对所述初始神经网络进行第一预设迭代次数的模型训练,以获取更新后的M个目标权重。
  13. 根据权利要求9至12任一所述的方法,其特征在于,所述方法还包括:
    对所述第一神经网络进行模型训练,直至所述第一神经网络的数据处理精度满足预设条件或模型训练的迭代次数达到第二预设迭代次数,以得到第二神经网络。
  14. 根据权利要求9至13任一所述的方法,其特征在于,所述M个第一block的每个 第一block的输入和输出的通道数与所述第二block的输入和输出的通道数一致。
  15. 根据权利要求9至14任一所述的方法,其特征在于,所述方法还包括:
    获取待训练的数据,所述待训练的数据包括如下的至少一种:图像数据、文字数据以及语音数据;相应的,所述对所述初始神经网络进行模型训练,包括:
    根据所述待训练的数据,对所述初始神经网络进行模型训练。
  16. 一种神经网络结构确定方法,其特征在于,所述方法包括:
    获取多个目标编码,每个目标编码用于指示一个候选神经网络,所述多个目标编码包括第一目标编码和多个第一编码,所述第一目标编码用于指示第一神经网络;
    对所述第一神经网络进行模型训练,以获取所述第一神经网络的数据处理精度;
    根据所述第一目标编码与所述多个第一编码之间的差异度以及所述第一神经网络的数据处理精度,确定每个第一编码指示的候选神经网络的数据处理精度;
    获取所述多个目标编码指示的候选神经网络中数据处理精度最高的第一候选神经网络;
    对所述第一候选神经网络进行模型训练,以得到第一目标神经网络。
  17. 根据权利要求16所述的方法,其特征在于,所述方法还包括:
    获取所述第一目标神经网络的数据处理精度,所述多个目标编码包括第二目标编码,所述第二目标编码用于指示所述第一目标神经网络;
    根据所述第二目标编码与所述多个目标编码中除所述第二目标编码之外的编码之间的差异度以及所述第一目标神经网络的数据处理精度,确定所述多个目标编码中除所述第二目标编码之外的每个目标编码指示的候选神经网络的数据处理精度;
    根据所述多个目标编码指示的候选神经网络的数据处理精度,确定数据处理精度最高的第二候选神经网络,并对所述第二候选神经网络进行模型训练,以得到第二目标神经网络。
  18. 根据权利要求16或17所述的方法,其特征在于,每个目标编码用于指示一个候选神经网络的如下结构特征的至少一种:
    候选神经网络包括的运算单元的类型、候选神经网络包括的运算单元的数量以及候选神经网络包括的运算单元的输入特征和输出特征通道数量。
  19. 根据权利要求16至18任一所述的方法,其特征在于,所述方法还包括:
    对多个编码进行聚类,以得到多个编码集合,每个编码集合对应一个聚类类别,所述多个编码集合包括目标编码集合,所述目标编码集合包括所述多个目标编码。
  20. 根据权利要求19所述的方法,其特征在于,所述第一目标编码为所述目标编码集合的聚类中心。
  21. 根据权利要求16至20任一所述的方法,其特征在于,每个目标编码指示的候选神经网络满足如下条件的至少一种:
    在运行每个目标编码指示的候选神经网络时所需的计算量小于第一预设值;
    每个目标编码指示的候选神经网络包括的权重的数量小于第二预设值;以及,
    在运行每个目标编码指示的候选神经网络时的运行速度高于第三预设值。
  22. 根据权利要求16至21任一所述的方法,其特征在于,所述第一候选神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述第二block用于根据M个第一输出,进行所述第二block对应的运算;其中,所述M个第一输出由所述每个第一block的输出分别与对应的目标权重进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;
    所述对所述第一候选神经网络进行模型训练,以得到第一目标神经网络,包括:
    对所述第一候选神经网络进行模型训练,以获取更新后的M个目标权重;
    根据所述更新后的M个目标权重,更新所述第一候选神经网络中所述第二block与所述M个第一block的连接关系,以获取第二神经网络;其中,所述第二神经网络中的第二block用于根据所述更新后的M个目标权重中最大的N个目标权重对应的第一block的输出,进行所述第二block对应的运算,所述N小于所述M;
    对所述第二神经网络进行模型训练,以得到所述第一目标神经网络。
  23. 根据权利要求16至21任一所述的方法,其特征在于,所述第一候选神经网络包括M个第一结构块block和第二block,所述第二block与每个第一block连接,且所述每个第一block对应一个目标权重,所述每个第一block用于根据对应的第一输出,进行所述第一block对应的运算;其中,所述每个第一block对应的第一输出由第一block对应的目标权重与所述第二block的输出进行乘积运算得到,所述目标权重为可训练的权重,所述M为大于1的整数;
    所述对所述第一候选神经网络进行模型训练,以得到第一目标神经网络,包括:
    对所述第一候选神经网络进行模型训练,以获取更新后的M个目标权重;
    根据所述更新后的M个目标权重,更新所述第一候选神经网络中所述第二block与所述M个第一block的连接关系,以获取第二神经网络;其中,所述更新后的M个目标权重中最大的N个目标权重对应的第一block用于根据所述第二block的输出,进行所述第一block对应的运算,所述N小于所述M;
    对所述第二神经网络进行模型训练,以得到所述第一目标神经网络。
  24. 一种神经网络结构确定装置,其特征在于,包括存储介质、处理电路以及总线系统;其中,所述存储介质用于存储指令,所述处理电路用于执行存储器中的指令,以执行所述权利要求1至23中任一项所述的方法的步骤。
  25. 一种计算机可读存储介质,其上存储有计算机程序,其特征在于,该程序被处理器执行时实现权利要求1至23中任一项所述的方法的步骤。
  26. 一种计算机程序产品,其特征在于,所述计算机程序产品包括代码,当所述代码被执行时,用于实现权利要求1至23任一项所述的方法的步骤。
PCT/CN2021/129757 2020-11-13 2021-11-10 一种神经网络结构确定方法及其装置 WO2022100607A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21891127.9A EP4227858A4 (en) 2020-11-13 2021-11-10 METHOD FOR DETERMINING THE STRUCTURE OF A NEURAL NETWORK AND DEVICE THEREFOR
US18/316,369 US20230289572A1 (en) 2020-11-13 2023-05-12 Neural network structure determining method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011268949.1 2020-11-13
CN202011268949.1A CN114565092A (zh) 2020-11-13 2020-11-13 一种神经网络结构确定方法及其装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/316,369 Continuation US20230289572A1 (en) 2020-11-13 2023-05-12 Neural network structure determining method and apparatus

Publications (1)

Publication Number Publication Date
WO2022100607A1 true WO2022100607A1 (zh) 2022-05-19

Family

ID=81600764

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/129757 WO2022100607A1 (zh) 2020-11-13 2021-11-10 一种神经网络结构确定方法及其装置

Country Status (4)

Country Link
US (1) US20230289572A1 (zh)
EP (1) EP4227858A4 (zh)
CN (1) CN114565092A (zh)
WO (1) WO2022100607A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116051507A (zh) * 2023-01-28 2023-05-02 瑞纳智能设备股份有限公司 热力管道故障监测方法及存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20220207742A1 (en) * 2020-12-30 2022-06-30 United Imaging Research Institute of Innovative Medical Equipment Image segmentation method, device, equipment and storage medium
CN115293227A (zh) * 2022-06-21 2022-11-04 华为技术有限公司 一种模型训练方法及相关设备

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709565A (zh) * 2016-11-16 2017-05-24 广州视源电子科技股份有限公司 一种神经网络的优化方法及装置
CN106960243A (zh) * 2017-03-06 2017-07-18 中南大学 一种改进卷积神经网络结构的方法
CN107341512A (zh) * 2017-07-06 2017-11-10 广东工业大学 一种迁移学习分类的方法及装置
WO2019089553A1 (en) * 2017-10-31 2019-05-09 Wave Computing, Inc. Tensor radix point calculation in a neural network
CN111160488A (zh) * 2020-01-02 2020-05-15 中国民航大学 融合注意力选择机制的CondenseNet算法

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106709565A (zh) * 2016-11-16 2017-05-24 广州视源电子科技股份有限公司 一种神经网络的优化方法及装置
CN106960243A (zh) * 2017-03-06 2017-07-18 中南大学 一种改进卷积神经网络结构的方法
CN107341512A (zh) * 2017-07-06 2017-11-10 广东工业大学 一种迁移学习分类的方法及装置
WO2019089553A1 (en) * 2017-10-31 2019-05-09 Wave Computing, Inc. Tensor radix point calculation in a neural network
CN111160488A (zh) * 2020-01-02 2020-05-15 中国民航大学 融合注意力选择机制的CondenseNet算法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4227858A4

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116051507A (zh) * 2023-01-28 2023-05-02 瑞纳智能设备股份有限公司 热力管道故障监测方法及存储介质

Also Published As

Publication number Publication date
CN114565092A (zh) 2022-05-31
US20230289572A1 (en) 2023-09-14
EP4227858A1 (en) 2023-08-16
EP4227858A4 (en) 2024-05-01

Similar Documents

Publication Publication Date Title
CN111797893B (zh) 一种神经网络的训练方法、图像分类系统及相关设备
CN111401516B (zh) 一种神经网络通道参数的搜索方法及相关设备
US20230229919A1 (en) Learning to generate synthetic datasets for training neural networks
WO2022083536A1 (zh) 一种神经网络构建方法以及装置
CN109522942B (zh) 一种图像分类方法、装置、终端设备和存储介质
WO2022068623A1 (zh) 一种模型训练方法及相关设备
CN112434721A (zh) 一种基于小样本学习的图像分类方法、系统、存储介质及终端
WO2022100607A1 (zh) 一种神经网络结构确定方法及其装置
CN111507378A (zh) 训练图像处理模型的方法和装置
WO2022179587A1 (zh) 一种特征提取的方法以及装置
WO2022111617A1 (zh) 一种模型训练方法及装置
CN113570029A (zh) 获取神经网络模型的方法、图像处理方法及装置
CN111382868A (zh) 神经网络结构搜索方法和神经网络结构搜索装置
CN113807399A (zh) 一种神经网络训练方法、检测方法以及装置
CN111368656A (zh) 一种视频内容描述方法和视频内容描述装置
US20240232575A1 (en) Neural network obtaining method, data processing method, and related device
EP4273754A1 (en) Neural network training method and related device
CN113592060A (zh) 一种神经网络优化方法以及装置
CN111950702A (zh) 一种神经网络结构确定方法及其装置
WO2023231753A1 (zh) 一种神经网络的训练方法、数据的处理方法以及设备
CN113065634B (zh) 一种图像处理方法、神经网络的训练方法以及相关设备
WO2021169366A1 (zh) 数据增强方法和装置
CN112420125A (zh) 分子属性预测方法、装置、智能设备和终端
Chen et al. Research on object detection algorithm based on multilayer information fusion
CN113627421B (zh) 一种图像处理方法、模型的训练方法以及相关设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21891127

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021891127

Country of ref document: EP

Effective date: 20230511

NENP Non-entry into the national phase

Ref country code: DE