CN111382833A - Method and device for training and applying multilayer neural network model and storage medium
- Publication number
- CN111382833A (application CN201811633954.0A)
- Authority
- CN
- China
- Prior art keywords
- filter
- channels
- channel
- network model
- expansion
- Prior art date
- Legal status: Pending
Classifications
- G—PHYSICS; G06—COMPUTING; G06N—Computing arrangements based on specific computational models; G06N3/00—Computing arrangements based on biological models; G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The disclosure provides a method, an apparatus, and a storage medium for training and applying a multilayer neural network model. The filter channels of at least one convolutional layer in the multilayer neural network model are expanded, and the channel-expanded filters are used to perform the convolution operations, so that the network model is simplified without its performance being degraded.
Description
Technical Field
The present disclosure relates to the field of modeling for multilayer neural networks, and more particularly, to a method that simplifies the structure of a multilayer neural network model while achieving performance comparable to conventional techniques.
Background
In recent years, multilayer neural network models have been widely used in computer tasks such as image classification, object detection, and image segmentation. To improve accuracy, network models are designed deeper (more layers) and wider (more output feature maps per layer), as in VGGNet, ResNet, and Xception. Because these network models require heavy computation and run slowly, they are difficult to deploy on resource-limited devices such as smartphones and robotic devices. Several approaches currently exist for simplifying the structure of a network model while preserving its performance.
1. Network pruning. This approach sparsifies filter weights by setting some parameters in a filter to 0, or reduces the number of filters by removing some filters outright, thereby simplifying the network model. While network pruning effectively simplifies the model, it is difficult to set the hyper-parameters that determine which filters in a convolutional layer are removable, which limits network pruning in practical application.
2. Parameter quantization. This approach reduces the storage space of the network model and increases operation speed by lowering the representation precision of the parameters in the filters; for example, full 32-bit precision is quantized to a binary 1-bit representation. The method reduces the storage the network model occupies, but the lowered parameter precision degrades the model's performance.
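For illustration only, a minimal sketch of the 1-bit quantization idea in Python; choosing the per-tensor scale `alpha` as the mean absolute weight is an assumption of this sketch (the text above only says that 32-bit full precision becomes a 1-bit representation):

```python
import numpy as np

def binarize_weights(w: np.ndarray) -> np.ndarray:
    # Quantize full-precision weights to 1-bit values {-alpha, +alpha}.
    # Taking alpha as the mean absolute value is one common convention
    # and an assumption here, not something this disclosure specifies.
    alpha = np.abs(w).mean()
    return alpha * np.sign(w)

w = np.random.randn(3, 3).astype(np.float32)  # a full-precision 3x3 filter
print(binarize_weights(w))  # each element collapses to +alpha or -alpha
```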
3. Low-rank approximation. This approach decomposes the filter parameters, expressed as a large matrix, into several small matrices, making the network model smaller. However, its compression ratio is limited, and it does not significantly reduce the amount of computation.
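As an illustrative sketch of the low-rank idea, a truncated SVD splits one large weight matrix into two small factors; the text does not prescribe a particular factorization, so the SVD here is an assumption, and the names are illustrative:

```python
import numpy as np

def low_rank_factors(W: np.ndarray, rank: int):
    # Decompose a large m x n weight matrix into U (m x rank) and
    # V (rank x n), so W is approximated by U @ V with far fewer
    # parameters when rank is small.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]  # absorb singular values into U
    V_r = Vt[:rank, :]
    return U_r, V_r

W = np.random.randn(256, 256)
U, V = low_rank_factors(W, rank=16)
print(W.size, U.size + V.size)  # 65536 parameters vs. 8192
```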
4. Taking the LBCNN model as an example, a conventional single convolution is decomposed into two convolutions: the first uses a sparse, fixed binary convolution filter, and the second uses a learnable filter of small parameter scale (e.g., a 1 × 1 matrix), so that the storage space occupied by the network model is reduced overall.
5. Residual expansion. This technique introduces more filters by applying different thresholds α and α_r on the convolutional and fully connected layers, effectively improving the accuracy of the network model.
Each of the methods above has its own drawbacks and cannot strike a good balance between simplifying the network model and maintaining its performance.
Disclosure of Invention
The present disclosure aims to provide a solution for optimizing a conventional multilayer neural network model, one expected to strike a good balance between simplifying the neural network model and maintaining its performance.
According to an aspect of the present disclosure, there is provided an application method of a multilayer neural network model, the application method including: expanding, for at least one convolutional layer in the multilayer neural network model, the number of filter channels of that layer; during forward propagation, performing the data operations in the convolutional layer using the channel-expanded filters, based on data corresponding to the application requirement; and outputting the application result after forward propagation is performed.
According to another aspect of the present disclosure, there is provided a training method of a multilayer neural network model, the training method including: expanding, for at least one convolutional layer in a multilayer neural network model to be trained, the number of filter channels of that layer; during forward propagation, performing the data operations in the convolutional layer using the channel-expanded filters, based on the training data; and during backward propagation, updating the gradient values of the weights on the pre-expansion channels according to the gradient values of the weights on identical channels among the expanded channels, thereby training the network model; here, identical channels are channels expanded from the same pre-expansion channel, and it is the pre-expansion weights whose gradient values are to be updated.
According to an aspect of the present disclosure, there is provided an application method of a multilayer neural network model, the application method including: during forward propagation, accumulating a plurality of input feature maps of at least one convolutional layer, and performing the convolution operations in the convolutional layer using the accumulated input feature maps and a filter of the convolutional layer; and outputting the application result after forward propagation is performed.
According to another aspect of the present disclosure, there is provided an application apparatus of a multilayer neural network model, the application apparatus including: an expansion unit configured to expand in advance the number of filter channels in at least one convolutional layer in the multilayer neural network model; a forward propagation unit configured to perform the data operations in the convolutional layer using the channel-expanded filters, based on data corresponding to an application requirement; and an output unit configured to output the application result after forward propagation is performed.
According to another aspect of the present disclosure, there is provided a training apparatus of a multilayer neural network model, the training apparatus including: an expansion unit configured to expand in advance the number of filter channels in at least one convolutional layer in a multilayer neural network model to be trained; a forward propagation unit configured to perform the data operations in the convolutional layer with the channel-expanded filters, based on data used for training; and a backward propagation unit configured to update the gradient values of the weights on the pre-expansion channels according to the gradient values of the weights on identical channels among the expanded channels, thereby training the network model, where identical channels are channels expanded from the same pre-expansion channel.
According to another aspect of the present disclosure, there is provided an application apparatus of a multilayer neural network model, the application apparatus including: an accumulation unit configured to accumulate, during forward propagation, a plurality of input feature maps of at least one convolutional layer; an operation unit configured to perform the convolution operations in the convolutional layer using the accumulated input feature maps and a filter of the convolutional layer; and an output unit configured to output the application result after forward propagation is performed.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing instructions that, when executed by a computer, cause the computer to perform a method of applying the above-described multilayer neural network model.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing instructions that, when executed by a computer, cause the computer to perform the above-described method of training a multi-layer neural network model.
Other features of the present disclosure will become apparent from the following description of exemplary embodiments with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and together with the description of the embodiments, serve to explain the principles of the disclosure.
Fig. 1 shows the pruning and fine-tuning process of the pruning network model.
Fig. 2 shows the forward propagation process of the conventional convolutional neural network model and the LBCNN model.
Fig. 3(a) and 3(b) show the forward propagation process and the filter matrix parameters before and after quantization, respectively, of the ternary weight optimization method.
Fig. 4(a) shows the filter structure for forward and backward propagation in the ternary weight method, and fig. 4(b) and 4(c) show the filter structure for forward and backward propagation in the method of the present disclosure.
FIG. 5 illustrates a hardware environment of the present disclosure.
Fig. 6 shows a schematic diagram of the internal structure of the network model in the present disclosure.
Fig. 7 shows a flowchart of a training method according to a first exemplary embodiment of the present disclosure.
Fig. 8 and 9 respectively show schematic diagrams of filter channel number expansion of the first exemplary embodiment of the present disclosure.
Fig. 10 shows the process of convolution with a channel-expanded filter.
Fig. 11 is a flowchart illustrating an application method according to a second exemplary embodiment of the present disclosure.
Fig. 12 shows a feature distribution diagram of an input feature map.
Fig. 13 is a flowchart illustrating an application method according to a third exemplary embodiment of the present disclosure.
FIG. 14 illustrates the process of input signature graph accumulation.
Fig. 15 is a schematic structural diagram of a training device according to a fourth exemplary embodiment of the present disclosure.
Fig. 16 is a schematic structural diagram of an application device according to a fifth exemplary embodiment of the present disclosure.
Fig. 17 is a schematic structural diagram of an application device according to a sixth exemplary embodiment of the present disclosure.
Detailed Description
Most conventional multilayer neural network models have complex structures, and network pruning is one available way to simplify the model structure. Fig. 1 shows, during forward propagation, the processing of an original model (a model without simplification), a pruned model, and a fine-tuned model obtained by fine-tuning the pruned model.
In the original model, suppose the i-th layer has three filters. The input feature map of the i-th layer is convolved with each of the three filters; the convolution results then serve as the input feature maps of the (i+1)-th layer (that is, the output feature maps of the i-th layer) and are convolved with the filters of the (i+1)-th layer, and forward propagation proceeds in this way.
In the pruned model, the filters of the i-th layer that contribute little to the overall performance of the network model (shown dashed among the i-th-layer filters) are removed first, and the corresponding input feature maps and filter channels of the (i+1)-th layer (shown dashed in the (i+1)-th-layer input feature maps and filters) are removed as well. During convolution, the input feature maps of the i-th layer are convolved with the two remaining i-th-layer filters, the results serve as the input feature maps of the (i+1)-th layer, and the convolution of the (i+1)-th layer continues, carrying out forward propagation. The fine-tuned model is a fine-tuning of the pruned model such that its performance is approximately comparable to the original model's.
With the network pruning method above, the network model can be simplified effectively by removing unimportant filters, but determining which filters in the network model are removable is the difficulty of the method. For example, one may compute an information-entropy score for each feature map in a layer according to its contribution to the network model and treat the filters whose feature maps score below a threshold T as removable; or sort the computed feature-map entropy scores in descending order and, at a fixed compression rate, keep only the filters corresponding to the top K feature maps, treating all the others as removable. In practice, however, the thresholds T and K are difficult to determine, which limits the practical application of the network pruning method.
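A sketch of the two selection rules just described; the exact entropy scoring of a feature map is not specified above, so a simple histogram entropy is assumed, and all function names are illustrative:

```python
import numpy as np

def entropy_score(feature_map: np.ndarray, bins: int = 32) -> float:
    # Information-entropy score of one output feature map
    # (histogram entropy is an assumption of this sketch).
    hist, _ = np.histogram(feature_map, bins=bins)
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def removable_by_threshold(scores, T):
    # Rule 1: filters whose feature-map score falls below threshold T.
    return [i for i, s in enumerate(scores) if s < T]

def removable_by_top_k(scores, K):
    # Rule 2: keep only the K highest-scoring filters (fixed compression rate).
    keep = set(np.argsort(scores)[::-1][:K])
    return [i for i in range(len(scores)) if i not in keep]

scores = [entropy_score(np.random.randn(8, 8)) for _ in range(6)]
print(removable_by_threshold(scores, T=3.0))
print(removable_by_top_k(scores, K=4))
```

As the sketch makes plain, both rules hinge on a hand-picked T or K, which is exactly the difficulty noted above.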
Another common optimization model is the LBCNN model, and fig. 2 illustrates the forward propagation process of the conventional Convolutional Neural Network (CNN) model and the LBCNN model.
The upper part of fig. 2 shows the forward propagation of a conventional convolutional neural network (CNN) model. Suppose the i-th layer has three filters of scale 3 × 3. The input feature map of the i-th layer (X_i, upper left of fig. 2) is convolved with these three filters to generate a response map, and an activation operation on the elements of the response map generates the output feature map (X_{i+1}, upper right of fig. 2), which is output to the (i+1)-th layer.
The lower part of fig. 2 shows the forward propagation of the LBCNN model. The model contains a set of binary convolution filters whose matrix parameters are sparser than those of the CNN model above and which are not updated during training (their parameters are fixed); it also contains a nonlinear activation function and a set of learnable filters of small matrix-parameter scale, for example 1 × 1. In forward propagation through the LBCNN model, the input feature map of the i-th layer (X_i, lower left of fig. 2) is first convolved with the binary convolution filters; the result is activated by the nonlinear activation function; the activated bit map is convolved with the learnable filters; and the output feature map of the i-th layer (X_{i+1}, lower right of fig. 2) is finally generated and output to the (i+1)-th layer.
Compared with a conventional CNN model, the binary convolution filters of the LBCNN model can be shared across layers, and the important parameters are kept in learnable filters of small matrix-parameter scale, so the size of the LBCNN model can be reduced effectively. However, the LBCNN model involves two convolutions, one with the sparse, fixed binary convolution filters and one with the learnable filters, so the depth of the LBCNN model increases; and increasing the depth of a network model makes it harder to train.
Besides the LBCNN optimized model above, fig. 3(a) shows a network model optimized with the ternary weight method. The top of fig. 3(a) is a conventional multilayer convolutional neural network model, and the bottom is a network model that incorporates more filters. Taking the added convolutional layer 1_r alongside convolutional layer 1 as an example: a full-precision 3 × 3 filter is first quantized into two filters of 2-bit precision, using the different thresholds α and α_r respectively, which yields the filter matrix parameters shown in fig. 3(b).
As figs. 3(a) and 3(b) show, introducing new filters can improve the accuracy of the neural network model, but the new filters also multiply the size of the network model, so no simplification is achieved.
In current optimization of multilayer neural network models, simplifying the model often brings problems such as reduced network performance, or the simplification is hard to achieve at all. The present disclosure therefore provides an optimization of the multilayer neural network model: when training and applying the network model, the number of channels of the filters in at least one convolutional layer is expanded, convolution is performed with the channel-expanded filters, and the connections of the channel-expanded filters preserve the accuracy of the network model, ensuring that network performance is not reduced while the network model is simplified. Taking one convolutional layer of the multilayer neural network model as an example, fig. 4(a) depicts the filter structures in forward and backward propagation under the ternary weight method, and figs. 4(b) and 4(c) depict the filter structures in forward and backward propagation under the present disclosure. In the forward propagation of fig. 4(a), all filters of the convolutional layer (W_1 to W_9) are stored in a storage area for the network model and then quantized with the residual expansion method described earlier, yielding 9 filters based on α (W_1α to W_9α) and 9 filters based on α_r (W_1α_r to W_9α_r); convolution is performed with these 18 filters. In the forward propagation of fig. 4(b), the original c_t channels of each template filter are expanded 3-fold, yielding 9 channel-expanded filters (W′_1 to W′_9) with 3c_t channels each. A template filter whose channel number has been expanded is called a target filter, and the convolution of the layer is performed with the target filters. Fig. 4(b) uses 9 filters as an example; to simplify the description, fig. 4(c) shows the channel expansion of a single filter. In the forward propagation of fig. 4(c), filter W_1 is to be expanded 4-fold; its number of channels before expansion is therefore c′/4.
Comparing fig. 4(a) with fig. 4(b): in the network model of the present disclosure, expanding the number of filter channels enriches the weight connections during convolution, so performance is not reduced relative to the network model of the ternary weight method; and because filters with fewer channels can be designed into the network model, its architecture can be simplified.
It should be noted that the template filter and the target filter described herein are both filters carrying the weight parameters of the multilayer neural network model; their operation and role in convolution are the same as a conventional filter's. The two terms merely distinguish the filter before channel expansion from the filter after channel expansion, and do not limit the filters' function or structure.
Various exemplary embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings. It is to be understood that the present disclosure is not limited to the various exemplary embodiments described below. In addition, as a solution to the problem of the present disclosure, it is not necessary to include all combinations of the features described in all the exemplary embodiments.
FIG. 5 illustrates a hardware environment for processing a multi-layer neural network model, including: a processor unit 11, an internal memory unit 12, a network interface unit 13, an input unit 14, an external memory 15, and a bus unit 16.
The processor unit 11 may be a CPU or a GPU. The memory unit 12 includes a Random Access Memory (RAM), a Read Only Memory (ROM). The RAM may be used as a main memory, a work area, and the like of the processor unit 11. The ROM may be used to store a control program for the processor unit 11, and may also be used to store files or other data to be used when running the control program. The network interface unit 13 can be connected to a network and performs network communication. The input unit 14 controls input from a keyboard, a mouse, or the like. The external memory 15 stores a boot program, various applications, and the like. The bus unit 16 is used to connect the units in the optimization apparatus of the multilayer neural network model.
Fig. 6 is a schematic diagram illustrating an internal structure of a network model in the present disclosure, and the network model can be operated based on the internal structure shown in fig. 6 during the training and application processes of the network model. The structure includes: a network model storage unit 20, a feature map storage unit 21, a convolution unit 22, a pooling unit 23, an activation unit 24, a quantization unit 25, and a control unit 26. Each unit is described below.
The network model storage unit 20 stores information related to the multi-layer neural network model, including but not limited to network structure information, filter information required for convolution operation, and information required for operation in each layer, and may include information related to filter channel number expansion, for example: which convolutional layers have the number of filter channels to be extended, the extension coefficients of the number of filter channels, the extension scheme, etc. The feature map storage unit 21 stores feature map information required for network model calculation.
The convolution unit 22 is configured to perform convolution processing based on the filter information input from the network model storage unit 20 and the feature map information input from the feature map storage unit 21. Of course, if the filter channel number needs to be expanded, the convolution unit 22 may also perform expansion according to the information related to the expansion of the filter channel number stored in the network model storage unit 20.
Here, the pooling unit 23, the activation unit 24, and the quantization unit 25 perform the corresponding pooling, activation, and quantization processes; their functions are not described again. Note that fig. 6 illustrates an example in which the multilayer neural network model includes a pooling layer and a quantization layer, but the present disclosure is not limited to this case; for example, if the model contains only convolutional and quantization layers, the output of the convolution unit 22 may be passed directly to the quantization unit 25. In addition, the structure of fig. 6 shows the pooling unit 23, activation unit 24, and quantization unit 25 as examples; other units, such as a unit for regularization processing and a unit for scaling processing, are omitted and not described in detail.
The control unit 26 controls the operations of the network model storage unit 20 to the quantization unit 25 by outputting control signals to the other units in fig. 6.
Hereinafter, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
< first exemplary embodiment >
Fig. 7 depicts a flowchart of steps of a training method for a multi-layer neural network model according to a first exemplary embodiment of the present disclosure. In the present exemplary embodiment, the training flow of the multilayer neural network model shown in fig. 6 is implemented by causing the GPU/CPU 11 to execute a program (such as a neural network forward/backward propagation algorithm) stored in the ROM and/or the external memory 15 with the RAM as a work memory.
Step S101: a multi-layer neural network model to be trained is determined, the number of filter channels in at least one convolutional layer in the network model to be extended.
In the present embodiment, information of at least one network model may be stored in the network model storage unit 20 shown in fig. 6, and feature map information used when the network model is executed may be stored in the feature map storage unit 21. When training is triggered (e.g., a training request is received or a training trigger time arrives), the present step S101 is started.
Step S102: and expanding the number of filter channels of the layer aiming at least one convolution layer in the multilayer neural network model to be trained to obtain the filter with the expanded number of channels.
In step S102, the filter before the channel number expansion may be referred to as a template filter, and the filter after the channel number expansion may be referred to as a target filter.
In this step S102, the expansion of the filter channel number may be performed based on the information on filter-channel expansion stored in the network model storage unit 20 shown in fig. 6. Here, expanding the number of filter channels means replicating filter channels; in other words, when one channel is expanded into several channels, the expanded channels are identical to the channel before expansion, that is, the weights on the channel before expansion are the same as the weights on each expanded channel. For example, suppose filter W_1 has channels C1-1, C2-1, and C3-1 before channel expansion. During channel expansion, each of the three channels is duplicated twice, yielding new channels C1-2 and C1-3, C2-2 and C2-3, and C3-2 and C3-3. After expansion, channels C1-1 to C1-3 are identical, their weights coming from pre-expansion channel C1-1; likewise, channels C2-1 to C2-3 are identical, their weights coming from pre-expansion channel C2-1; and channels C3-1 to C3-3 are identical, their weights coming from pre-expansion channel C3-1. The specific channel-expansion process is described later and is not detailed here.
Steps S101 to S102 above are the preprocessing performed for training the network model in this embodiment. Through this preprocessing, the number of filter channels of at least one layer in the network model is expanded, so that convolution can be performed with the channel-expanded filters in the subsequent forward and backward propagation.
Step S103: data for training is input into a multi-layer neural network model to be trained, and forward propagation is performed in the network model.
In step S103, the training data undergo the corresponding operations or processing layer by layer in the multilayer neural network. In a convolutional layer whose filter channel number has been expanded, the input feature maps of that layer are convolved with the channel-expanded filters.
Here, the data used for training may be, for example, a set of pictures with corresponding description data or a set of voice recordings with corresponding description data; the type of data is not limited in this embodiment. Any data the multilayer neural network model uses for image processing or voice processing can be applied to the training method of the first embodiment.
Step S104: in the backward propagation, for the convolutional layer in which the number of filter channels is extended, the gradient value of the weight on each channel after the number of channels is extended is determined.
Step S105: and updating the gradient value of the weight on the channel before the channel number expansion according to the gradient value of the weight on the same channel determined in the step S104, thereby realizing the training of the network model.
Here, identical channels are channels expanded from the same pre-expansion channel. Continuing the example of step S102, in which the three channels C1-1, C2-1, and C3-1 of filter W_1 were expanded into the nine channels C1-1 to C3-3: in backward propagation, the gradient values of the weight elements on the nine channels C1-1 to C3-3 are first computed respectively; then the gradient values of the weights on each pre-expansion channel are updated with the gradient values of the weights on its identical channels.
The reason for this processing of the weight gradient values is as follows: in forward propagation, the input feature maps are convolved with the channel-expanded filters, so in backward propagation, if the gradient values of the weights on the expanded channels were computed directly in the known way, weights derived from the same pre-expansion weight would receive different gradient values, which would destroy the channel-expansion structure. The update of the pre-expansion weight gradient values in step S105 of this embodiment avoids this destruction of the channel structure.
Of course, if the filter channel number of a convolutional layer has not been expanded, the gradient values of the weight elements on each filter channel are computed in the known way, the processing of step S105 is not executed, and the filter weights are then updated (that is, the network model is trained).
Detailed implementations of each part of training the network model in the first embodiment of the present disclosure are described below.
< Expanding the filter channel number >
Here, the description takes as an example expanding the channel number by duplicating the channels of the template filter. Suppose a convolutional layer has 9 template filters, each of height (rows) h = 3, width (columns) w = 3, and channel number c = 16, and that the channels are to be replicated 3-fold, i.e., the channel number C of the replicated filter is 48; the height/width of the filter after channel replication are the same as before replication. Of course, the height/width of the template filter may also differ before and after replication: for example, when replicating a channel, duplicating the rows of the channel makes the replicated filter taller than before, and duplicating the columns makes it wider. The replication factor may be preset according to actual needs or experimental results, which the disclosure does not limit.
To expand the channel number of a filter from 16 to 48, the channels of each filter can be duplicated as a whole. Referring to fig. 8, keeping the rows/columns of the template filter unchanged, the 16 channels of the template filter are wholly copied twice (i.e., a 3-fold expansion) to construct a target filter containing 48 channels. As fig. 8 shows, because the channels are copied as a whole, the first 16 channels, the middle 16 channels, and the last 16 channels of any resulting target filter are identical.
Fig. 8 shows the case where the expanded channel number is an integer multiple of the pre-expansion channel number. If it is not, the channel duplication can be decomposed into whole-set duplication plus duplication of individual channels so that the resulting channel count meets the requirement. Taking the case of fig. 9 as an example, suppose the convolutional layer has 9 template filters, each of height h = 3, width w = 3, and channel number c = 16, and the expanded channel number C is 42; the expansion then amounts to one whole copy of the pre-expansion channels plus a remainder of 10. First the 16 channels of the template filter are copied as a whole, and then the first 10 of those 16 channels are copied again, constructing a target filter with 42 channels. The target filter could equally be constructed by copying the last 10 channels of the template filter, or 10 channels at other positions.
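A minimal sketch of this replication (array layout and names are illustrative; copying the first `rest` channels for the remainder matches one of the options just described):

```python
import numpy as np

def expand_filter_channels(template: np.ndarray, target_channels: int) -> np.ndarray:
    # template has shape (c, h, w). Tile whole copies of the channel
    # set, then copy the first `rest` channels if target_channels is
    # not an integer multiple of c (the fig. 9 case).
    c = template.shape[0]
    full, rest = divmod(target_channels, c)
    parts = [template] * full
    if rest:
        parts.append(template[:rest])
    return np.concatenate(parts, axis=0)  # shape (target_channels, h, w)

tpl = np.random.randn(16, 3, 3)
print(expand_filter_channels(tpl, 48).shape)  # (48, 3, 3), the fig. 8 case
print(expand_filter_channels(tpl, 42).shape)  # (42, 3, 3), the fig. 9 case
```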
< Forward propagation >
After the filter channel number has been expanded as above, the channel-expanded filters are used for the convolution operations during forward propagation. Taking the case of fig. 10 as an example, suppose convolutional layer i has two template filters W_1 and W_2, each with 16 channels. In the preprocessing that expands the filter channel number, the channels of W_1 and W_2 are each copied twice (a 3-fold expansion) in the manner of fig. 8, generating target filters W′_1 and W′_2 containing 48 channels each. The channel groups of W′_1 are denoted C1-1 (the original 16 channels before replication), C1-2 (the 16 channels obtained from the first copy), and C1-3 (the 16 channels obtained from the second copy); similarly, the channel groups of W′_2 are denoted W2-1 to W2-3 (not shown in fig. 10). The 48 input feature maps of the i-th layer are convolved with the target filters W′_1 and W′_2 to generate two output feature maps. The convolution itself is the same as conventional convolution and is not described again here.
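A sketch of this forward pass with PyTorch's conv2d, using the shapes of fig. 10 (two 16-channel template filters, 3-fold expansion, 48 input feature maps); the code is illustrative, not the patent's implementation:

```python
import torch
import torch.nn.functional as F

templates = torch.randn(2, 16, 3, 3)   # W_1 and W_2, 16 channels each
target = templates.repeat(1, 3, 1, 1)  # W'_1 and W'_2: shape (2, 48, 3, 3)

x = torch.randn(1, 48, 32, 32)         # the 48 input feature maps of layer i
y = F.conv2d(x, target, padding=1)     # ordinary convolution with the targets
print(y.shape)                         # torch.Size([1, 2, 32, 32]): two output maps
```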
< Backward propagation >
In the first embodiment, the filter channel number of at least one convolutional layer in the network model is expanded; therefore, to preserve the inherent structure of the channel expansion during backward propagation, the gradient value of each weight on the pre-expansion channels must be computed. One possible implementation determines the weight gradient values of the layer's channel-expanded filters from the gradient values of the output feature maps passed back from the next layer, and then takes, weight by weight, the arithmetic mean of the gradient values on identical channels as the gradient value of the corresponding weight on the pre-expansion channel. In the foregoing step S105, this update of the pre-expansion weight gradient values was described using the case of fig. 10. More concretely, taking channels C1-1 to C1-3 as an example: the gradient values of the weights at position (0,0) of channels C1-1 to C1-3 are averaged, and the average is taken as the gradient value at position (0,0) of pre-expansion channel C1-1; and so on, until the weight gradient values at all positions of channel C1-1 have been updated, which completes the update of the pre-expansion channel's weight gradient values.
Specifically, the above averaging of gradient values to compute the weight gradient values on the pre-expansion channel can be expressed by formula (1):

AvgGrad(n, c, h, w) = (1/r) · Σ_{c′ ∈ S(c)} Grad(n, c′, h, w)    (1)

where n indexes the filters; c denotes a channel of the template filter; h and w denote the spatial position of a weight element within the filter; c′ denotes a channel of the channel-expanded filter; r is the number of identical channels after expansion; and S(c) denotes the set of the r identical expanded channels derived from channel c. Grad(n, c′, h, w) is the gradient value of the weight at position (h, w) on one of those identical channels, and AvgGrad(n, c, h, w) is the updated gradient value of the weight at position (h, w) on the pre-expansion channel.
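A sketch of formula (1) for the whole-set replication layout of fig. 8; the `view`-based grouping assumes the expanded channels are laid out as r consecutive copies of the c template channels, and the names are illustrative:

```python
import torch

def average_template_grad(target_grad: torch.Tensor, c: int, r: int) -> torch.Tensor:
    # target_grad: (n, c * r, h, w), gradients of the channel-expanded
    # filters. Formula (1): the gradient of a template-filter weight is
    # the arithmetic mean over its r identical copies.
    n, c_prime, h, w = target_grad.shape
    assert c_prime == c * r
    return target_grad.view(n, r, c, h, w).mean(dim=1)  # (n, c, h, w)

g = torch.randn(2, 48, 3, 3)                      # gradients for the fig. 10 layer
print(average_template_grad(g, c=16, r=3).shape)  # torch.Size([2, 16, 3, 3])
```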
With the training method of the first exemplary embodiment of the present disclosure, even when filters with a small channel number are designed, expanding the channel number enriches the weight connections, so the network model is simplified without its performance being degraded.
It should be noted that in the training scheme of the first exemplary embodiment, the weight gradient values on the pre-expansion channels of the filter are updated, that is, the weights on the pre-expansion channels are trained. After that training completes, the channel information of the newly expanded channels held in the temporary storage area can be released to save storage space, so the network model always remains in a relatively compact state. This embodiment does not exclude other variants of the training process either. For example, in backward propagation, after the average gradient value of the weights on identical expanded channels is computed, the gradient values of the weights on the pre-expansion channels may be left uncomputed, i.e., the weights on the expanded channels are trained; in later applications of the network model, the trained weights on the expanded channels can then be used directly to carry out the corresponding application service.
< second exemplary embodiment >
After the training of the network model is implemented based on the first exemplary embodiment, the second embodiment describes a method for applying the trained network model. Fig. 11 depicts a flow chart of steps of an application method of the second embodiment. In the second exemplary embodiment, the processing flow of the multilayer neural network model shown in fig. 11 is implemented by causing the GPU/CPU 11 to execute a program (such as an application algorithm) stored in the ROM and/or the external memory 15 with the RAM as a work memory.
Step S201: a multi-layer neural network model is determined for the application service, the number of filter channels in at least one convolutional layer of the network model to be extended.
Step S202: and expanding the number of channels of the filter of the convolution layer to obtain the filter with the expanded number of channels.
The above step S201 and step S202 are preprocessing steps similar to the first exemplary embodiment. Here, the extension of the number of filter channels is the same as the way of extending the number of channels in the first exemplary embodiment, and is not described here again.
Step S203: and inputting data corresponding to the application requirements into the multilayer neural network model.
In step S203, taking a face detection service as an example, a face image is input into the multi-layer neural network model as data of a face detection application, so as to execute the face detection service in the network model.
Step S204: and executing operation from top to bottom in the multilayer neural network model until an application result is output.
The application method of steps S201 to S204 corresponds to the case in the first exemplary embodiment where the pre-expansion channels are trained. If the expanded channels are trained in the first exemplary embodiment, then in the application method of the second embodiment forward propagation can be performed directly, without the channel-expansion preprocessing of steps S201 and S202.
Taking a baseline network model as an example, the performance and the size of the network model are compared below across three cases: a conventional baseline network model (without channel expansion), 4-fold channel expansion based on the first exemplary embodiment of the present disclosure, and 8-fold channel expansion based on the same embodiment.
Table 1 is an example of a baseline network model designed to perform an object detection task; the model shown in Table 1 is a conventional baseline that does not use the disclosed method. For ease of understanding, only convolutional layers 1 through 8 are shown; other convolutional layers, and layers such as pooling, quantization, and normalization layers that the network model may contain, are not shown, which does not affect understanding of the baseline model.
| Network layer | Number of filters | Number of filter channels | Filter size |
|---|---|---|---|
| Convolutional layer 1 | 16 | 3 | 3*3 |
| Convolutional layer 2 | 32 | 16 | 3*3 |
| Convolutional layer 3 | 64 | 32 | 3*3 |
| Convolutional layer 4 | 128 | 64 | 3*3 |
| Convolutional layer 5 | 256 | 128 | 3*3 |
| Convolutional layer 6 | 512 | 256 | 3*3 |
| Convolutional layer 7 | 1024 | 512 | 3*3 |
| Convolutional layer 8 | 1024 | 1024 | 3*3 |

TABLE 1
In the 4-fold channel-expansion scheme of the first exemplary embodiment of the present disclosure, the network model used is similar in structure to Table 1 (see Table 2), except that the channel numbers of the filters of convolutional layers 5 to 8 (i.e., of the template filters in the first exemplary embodiment of the present disclosure) are 32, 64, 128, and 256, respectively. Similarly, in the 8-fold channel-expansion scheme (see Table 3), the channel numbers of the filters of convolutional layers 5 to 8 are 16, 32, 64, and 128, respectively.
| Network layer | Number of filters | Number of filter channels | Filter size |
|---|---|---|---|
| Convolutional layer 1 | 16 | 3 | 3*3 |
| Convolutional layer 2 | 32 | 16 | 3*3 |
| Convolutional layer 3 | 64 | 32 | 3*3 |
| Convolutional layer 4 | 128 | 64 | 3*3 |
| Convolutional layer 5 | 256 | 32 | 3*3 |
| Convolutional layer 6 | 512 | 64 | 3*3 |
| Convolutional layer 7 | 1024 | 128 | 3*3 |
| Convolutional layer 8 | 1024 | 256 | 3*3 |
TABLE 2
| Network layer | Number of filters | Number of filter channels | Filter size |
|---|---|---|---|
| Convolutional layer 1 | 16 | 3 | 3*3 |
| Convolutional layer 2 | 32 | 16 | 3*3 |
| Convolutional layer 3 | 64 | 32 | 3*3 |
| Convolutional layer 4 | 128 | 64 | 3*3 |
| Convolutional layer 5 | 256 | 16 | 3*3 |
| Convolutional layer 6 | 512 | 32 | 3*3 |
| Convolutional layer 7 | 1024 | 64 | 3*3 |
| Convolutional layer 8 | 1024 | 128 | 3*3 |

TABLE 3
When the three network models of Tables 1 to 3 are used for the forward propagation of face detection, 4-fold channel expansion must be applied to convolutional layers 5 to 8 of the model in Table 2, and 8-fold channel expansion to the corresponding layers of the model in Table 3. Table 4 describes the structures of the three resulting networks: the baseline network model of Table 1, the model of Table 2 after 4-fold channel expansion, and the model of Table 3 after 8-fold channel expansion.
TABLE 4
Based on the three network models shown in table 4, after face detection is performed, the comparison result of the face detection rate shown in table 5 and the comparison result of the storage size shown in table 6 are obtained.
|
4 times channel expansion | 8 times channel expansion |
0.8699 | 0.8598 | 0.8559 |
TABLE 5
| Network layer | Baseline | 4-fold channel expansion | 8-fold channel expansion |
|---|---|---|---|
| Convolutional layer 5 | 256*128*3*3 | 256*32*3*3 | 256*16*3*3 |
| Convolutional layer 6 | 512*256*3*3 | 512*64*3*3 | 512*32*3*3 |
| Convolutional layer 7 | 1024*512*3*3 | 1024*128*3*3 | 1024*64*3*3 |
| Convolutional layer 8 | 1024*1024*3*3 | 1024*256*3*3 | 1024*128*3*3 |
| Overall channel size (bytes) | 1,953,792 = 1.95M | 488,448 = 488K | 244,224 = 244K |
TABLE 6
On the one hand, as Table 6 shows, compared with the channel sizes of the conventional baseline model, the channel numbers of convolutional layers 5 to 8 of the channel-expanded network model of the first exemplary embodiment are significantly reduced, and the higher the expansion factor, the smaller the channel number. On the other hand, as Table 5 shows, when face detection is performed with the channel-expanded network model of the first exemplary embodiment, the detection performance is approximately comparable to that of the conventional baseline model.
Fig. 12 shows the feature distributions of the input feature maps in a conventional network model (without channel expansion) and in the channel-expanded network model of the first exemplary embodiment of the present disclosure. As fig. 12 shows, after convolution and quantization, the distributions of the input feature maps in the two network models are close, indicating that the network model of the first exemplary embodiment and the conventional network model perform comparably in service processing.
< third exemplary embodiment >
The third exemplary embodiment of the present disclosure describes an application method of a multilayer neural network model implemented by accumulating the input feature maps of convolutional layers. The application method of the third exemplary embodiment may be applied to a network model trained with the training method of the first exemplary embodiment, though network models obtained by other methods are not excluded. Fig. 13 is a flowchart illustrating the steps of the application method of the third embodiment. In the third exemplary embodiment, the processing flow of the multilayer neural network model shown in fig. 13 is implemented by causing the GPU/CPU 11 to execute a program (such as an application algorithm) stored in the ROM and/or the external memory 15, with the RAM as a working memory.
Step S301: in forward propagation, for at least one convolutional layer, a plurality of input signatures of the convolutional layer are accumulated.
Step S302: and performing convolution operation in the convolutional layer by using the accumulated input characteristic diagram and a filter in the convolutional layer.
Step S303: and outputting the application result after the forward propagation is executed.
In the third embodiment, if the number of input feature maps of a convolutional layer is greater than the channel number of the filter, there are three feasible approaches. The first is to expand the filter channel number as in the second embodiment, so that the input feature maps are convolved with the channel-expanded filter. The second is that of this third embodiment: accumulate the larger number of input feature maps into a smaller number whose count matches the filter's channel number, and convolve the accumulated input feature maps with the unexpanded filter. The third is to expand the filter channel number and, if the expansion factor is small and the expanded channel count is still smaller than the number of input feature maps, also accumulate the input feature maps and convolve the accumulated maps with the channel-expanded filter.
In the third embodiment, an optional manner of accumulating the input feature maps is as follows:
the first step is as follows: a plurality of input profiles of the convolutional layer are grouped.
When grouping is carried out, if the number of the input characteristic diagrams is integral multiple of the number of filter channels, the number of the input characteristic diagrams of each group after grouping is equal to the number of the filter channels. If the number of the input characteristic diagrams is not integral multiple of the number of filter channels, dividing the plurality of input characteristic diagrams into two parts, wherein the number of the input characteristic diagrams of the first part is integral multiple of the number of the filter channels, and grouping the input characteristic diagrams of the first part, wherein the number of the input characteristic diagrams of each group is equal to the number of the filter channels. The second part is that the number of characteristic graphs is less than the number of filter channels, and the input characteristic graphs of the second part are used as a group. Taking the example that the number (e.g. 48) of the input feature maps is an integral multiple of the number (e.g. 16) of the filter channels, the input feature maps are divided into three groups in the order of position, and each group has 16 input feature maps. Taking the example that the number (e.g. 42) of the input feature maps is not an integral multiple of the number (e.g. 16) of the filter channels, the input feature maps are divided into three groups according to the position sequence, 16 input feature maps are arranged in the first group and the second group, and 10 input feature maps are arranged in the third group.
The second step is that: and accumulating the input characteristic diagrams in each group to obtain the accumulated input characteristic diagrams with the number equal to the number of channels of the filter.
Still taking the case where the number of input feature maps (e.g., 48) is an integer multiple of the filter channel number (e.g., 16): the input feature maps are divided into three groups of 16. One input feature map is read from each group, and the three maps read (one per group) are accumulated element by element into one input feature map; this repeats until all 16 maps of each group have been consumed, yielding 16 accumulated input feature maps, as shown in fig. 14. Element-by-element accumulation means adding the elements at the same position of the three input feature maps: for example, the element at position (h_1, w_2) of input feature map 1 in the first group, the element at (h_1, w_2) of input feature map 17 in the second group, and the element at (h_1, w_2) of input feature map 33 in the third group are summed to give the element at (h_1, w_2) of an accumulated input feature map. Next, take the case where the number of input feature maps (e.g., 42) is not an integer multiple of the filter channel number (e.g., 16): the input feature maps are divided into three groups, with 16 in the first and second groups and 10 in the third. One map is read from each group and the three are accumulated element by element into one map. Once 10 accumulated maps have been produced (i.e., the accumulation has run 10 times), the third group is exhausted; thereafter no map is read from the third group, and the not-yet-accumulated maps of the first and second groups are read and accumulated, until 16 accumulated input feature maps are obtained.
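A sketch of this position-order grouping and element-wise accumulation (illustrative names; it covers both the integer-multiple case and the remainder case described above):

```python
import numpy as np

def accumulate_feature_maps(x: np.ndarray, c: int) -> np.ndarray:
    # x: (num_maps, h, w) input feature maps in position order; c is
    # the filter's channel count. Each group of up to c consecutive
    # maps is summed element-wise into the output; a short final group
    # (the 42-map case) only contributes to the first entries, exactly
    # as described above.
    out = np.zeros((c,) + x.shape[1:], dtype=x.dtype)
    for start in range(0, x.shape[0], c):
        group = x[start:start + c]
        out[:group.shape[0]] += group
    return out

x48 = np.random.randn(48, 32, 32).astype(np.float32)
print(accumulate_feature_maps(x48, 16).shape)  # (16, 32, 32), fig. 14 case
x42 = np.random.randn(42, 32, 32).astype(np.float32)
print(accumulate_feature_maps(x42, 16).shape)  # (16, 32, 32), remainder case
```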
If the application method of this embodiment is implemented on the basis of the training method of the first embodiment, which trains the multilayer neural network model by expanding the number of filter channels, the accumulation process of the third embodiment may satisfy the following condition: the positions of the input feature maps that are accumulated together are the same as the positions of the input feature maps that, in the training method, are operated on by the same channels after expansion. "Same channels" here has the same meaning as in the first embodiment, namely channels expanded from the same channel before expansion.
For example, assume that in the training method of the first embodiment, a certain convolutional layer has 42 input feature maps and the filter has 16 channels, and that the filter channels are copied according to the method shown in fig. 9 to obtain a copied filter containing 42 channels, channel C1-1 yielding three identical channels C1-1 to C1-3 after replication. The 42 input feature maps are convolved with the channel-expanded filter, and it is assumed here that the input feature maps at position 1, position 17, and position 33 correspond to the same channels C1-1 to C1-3. In the application method of the third embodiment, 42 input feature maps that are the same in number and shape as those in the training method, but different in element values, are divided in position order into three groups, the first and second groups containing 16 input feature maps each and the third group containing 10, and the filter has 16 channels. When the input feature maps are accumulated, the positions of the three input feature maps operated on by the same channels C1-1 to C1-3 in the training method are position 1, position 17, and position 33, respectively; therefore, the three input feature maps selected from the three groups at position 1, position 17, and position 33 are accumulated to obtain one accumulated input feature map, which is then operated on with channel C1-1 of the filter.
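A minimal sketch of this position-aligned accumulation, assuming the cyclic copy scheme of the running example (original channel c, 0-indexed, paired with input positions c, c + 16, c + 32 where they exist); accumulate_for_channel is a hypothetical helper name:

```python
import numpy as np

def accumulate_for_channel(feature_maps, channel_idx, num_channels=16):
    """Sum the input maps that, during training, were paired with the
    duplicates of original channel `channel_idx` (positions channel_idx,
    channel_idx + 16, channel_idx + 32, ... while they exist)."""
    positions = range(channel_idx, len(feature_maps), num_channels)
    return sum(feature_maps[p].astype(np.int32) for p in positions)

maps = [np.full((4, 4), p, dtype=np.int8) for p in range(42)]
acc = accumulate_for_channel(maps, 0)  # 0-based positions 0, 16, 32
print(acc[0, 0])  # 48 = 0 + 16 + 32; this map is operated on with channel C1-1
```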
With the application method of the third embodiment, on the one hand, because the input feature maps are accumulated, the elements of the accumulated input feature maps have a larger bit width than those of the input feature maps before accumulation, so the accumulated input feature maps better retain the useful information of the original input feature maps and the accuracy degradation caused by heavy information loss is avoided; on the other hand, compared with the convolution process of the second embodiment shown in fig. 10, the convolution process of the third embodiment effectively reduces the amount of computation and increases the computation speed.
< fourth exemplary embodiment >
The fourth exemplary embodiment of the present disclosure describes a training apparatus for a multilayer neural network model, an apparatus based on the same inventive concept as the training method of the first exemplary embodiment. As shown in fig. 15, the training apparatus includes an expansion unit 31, a forward propagation unit 32, and a back propagation unit 33. Specifically, the expansion unit 31 is configured to expand in advance the number of filter channels in at least one convolutional layer of the multilayer neural network model to be trained; the forward propagation unit 32 is configured to perform the data operations in the convolutional layer using the channel-expanded filter, based on the data used for training; the back propagation unit 33 is configured to update the gradient values of the weights on the channels before channel-number expansion according to the gradient values of the weights on the same channels among the channels after expansion, thereby training the network model, where the same channels are channels expanded from the same channel before expansion.
Preferably, the expansion unit 31 expands the number of filter channels by duplicating the channels of the filter.
Preferably, the back propagation unit 33 determines the gradient values of the weights on the same channels, averages the gradient values of the weights located at the same position on those channels, and takes the gradient average as the gradient value to be updated for the weight at that position on the channel before channel-number expansion.
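A minimal sketch of this gradient folding, assuming the same cyclic duplication mapping as in the running example; fold_gradients and duplicate_idx are illustrative names, not part of the disclosure:

```python
import numpy as np

def fold_gradients(grads_expanded, duplicate_idx):
    """For each original channel, average the gradients of the expanded
    channels copied from it, element by element; the average becomes the
    gradient used to update the pre-expansion weight."""
    return np.stack([grads_expanded[idx].mean(axis=0) for idx in duplicate_idx])

grads_expanded = np.random.randn(42, 3, 3)  # gradients for the 42 expanded channels
# original channel c was duplicated to expanded channels c, c+16, c+32 (where present)
duplicate_idx = [list(range(c, 42, 16)) for c in range(16)]
grads_original = fold_gradients(grads_expanded, duplicate_idx)
print(grads_original.shape)  # (16, 3, 3): one averaged gradient per original channel
```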
< fifth exemplary embodiment >
The fifth exemplary embodiment of the present disclosure describes an application apparatus for a multilayer neural network model, an apparatus based on the same inventive concept as the application method of the second exemplary embodiment. As shown in fig. 16, the application apparatus includes an expansion unit 41, a forward propagation unit 42, and an output unit 43. Specifically, the expansion unit 41 expands in advance the number of filter channels in at least one convolutional layer of the multilayer neural network model; the forward propagation unit 42 performs the data operations in the convolutional layer using the channel-expanded filter, based on the data corresponding to the task requirement; the output unit 43 outputs the application result after forward propagation has been performed.
Preferably, the expansion unit 41 expands the number of filter channels by duplicating the channels of the filter.
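As an illustration, duplicating filter channels might look like the following sketch, assuming a (channels, height, width) weight layout and the cyclic copy order of the running example; the function name and layout are assumptions for illustration:

```python
import numpy as np

def expand_filter_channels(weights, target_channels):
    """Duplicate filter channels cyclically until the channel count
    matches the number of input feature maps."""
    num_channels = weights.shape[0]
    index = [i % num_channels for i in range(target_channels)]
    return weights[index]

w = np.random.randn(16, 3, 3)
w42 = expand_filter_channels(w, 42)
print(w42.shape)                        # (42, 3, 3)
print(np.array_equal(w42[0], w42[16]))  # True: both copied from channel 0
```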
< sixth exemplary embodiment >
The sixth exemplary embodiment of the present disclosure describes an application apparatus for a multilayer neural network model, an apparatus based on the same inventive concept as the application method of the third exemplary embodiment. As shown in fig. 17, the application apparatus includes an accumulation unit 51, an operation unit 52, and an output unit 53. Specifically, the accumulation unit 51 is configured to accumulate, for at least one convolutional layer, the plurality of input feature maps of that convolutional layer during forward propagation; the operation unit 52 performs the convolution operations in the convolutional layer using the accumulated input feature maps and the filter of the convolutional layer; the output unit 53 outputs the application result after forward propagation has been performed.
Preferably, the accumulation unit 51 is configured to group the plurality of input feature maps such that the number of input feature maps in each group equals the number of filter channels in the convolutional layer, with at most one group containing fewer input feature maps than the number of filter channels, and to accumulate the input feature maps within each group to obtain accumulated input feature maps whose number equals the number of filter channels.
Preferably, in the case where the training method for the multilayer neural network model preceding the application method expands the number of filter channels and operates the channel-expanded filter with the input feature maps, the accumulation unit 51 is configured to accumulate, within each group, the input feature maps satisfying the following condition: the positions, among the plurality of input feature maps, of the input feature maps to be accumulated are the same as the positions of the input feature maps that, in the training method, are operated on by the same channels after expansion, the same channels being expanded from the same channel before expansion.

Other embodiments
Embodiments of the invention may also be implemented by a computer of a system or apparatus that reads and executes computer-executable instructions (e.g., one or more programs) recorded on a storage medium (also referred to more fully as a "non-transitory computer-readable storage medium") to perform the functions of one or more of the above-described embodiments and/or that includes one or more circuits (e.g., an application-specific integrated circuit (ASIC)) for performing the functions of one or more of the above-described embodiments, and by a method performed by a computer of a system or apparatus by, for example, reading and executing the computer-executable instructions from the storage medium to perform the functions of one or more of the above-described embodiments and/or controlling one or more circuits to perform the functions of one or more of the above-described embodiments. The computer may include one or more processors (e.g., a Central Processing Unit (CPU), Micro Processing Unit (MPU)) and may include a separate computer or a network of separate processors to read out and execute computer-executable instructions. The computer-executable instructions may be provided to the computer from, for example, a network or a storage medium. The storage medium may include, for example, one or more of a hard disk, a Random Access Memory (RAM), a Read Only Memory (ROM), storage of a distributed computing system, an optical disk such as a Compact Disk (CD), a Digital Versatile Disk (DVD), or a blu-ray disk (BD) (registered trademark), a flash memory device, a memory card, and the like.
The embodiments of the present invention can also be realized by a method in which software (programs) performing the functions of the above-described embodiments is supplied to a system or an apparatus through a network or various storage media, and a computer, or a central processing unit (CPU) or micro processing unit (MPU), of the system or apparatus reads out and executes the programs.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
Claims (21)
1. A method for applying a multilayer neural network model is characterized by comprising the following steps:
for at least one convolutional layer in the multilayer neural network model, expanding the number of filter channels of the convolutional layer;
during forward propagation, performing data operations in the convolutional layer using the channel-expanded filter, based on data corresponding to application requirements;
and outputting the application result after the forward propagation is executed.
2. The application method of claim 1, wherein the number of channels of the filter is expanded by duplicating the channels of the filter.
3. The application method of claim 1, wherein:
the height of the filter after channel-number expansion is not less than the height of the filter before channel-number expansion; and
the width of the filter after channel-number expansion is not less than the width of the filter before channel-number expansion.
4. A method for training a multilayer neural network model is characterized by comprising the following steps:
for at least one convolutional layer in a multilayer neural network model to be trained, expanding the number of filter channels of the convolutional layer;
during forward propagation, performing data operations in the convolutional layer using the channel-expanded filter, based on data used for training;
during back propagation, updating the gradient values of the weights on the channels before channel-number expansion according to the gradient values of the weights on the same channels among the channels after channel-number expansion, so as to train the network model;
wherein the same channels are channels expanded from the same channel before expansion.
5. The training method of claim 4, wherein the number of channels of the filter is expanded by duplicating the channels of the filter.
6. The training method of claim 4, wherein:
the height of the filter after channel-number expansion is not less than the height of the filter before channel-number expansion; and
the width of the filter after channel-number expansion is not less than the width of the filter before channel-number expansion.
7. The training method of claim 4, wherein updating the gradient values of the weights on the channels before channel-number expansion specifically comprises:
determining the gradient values of the weights on the same channels;
averaging the gradient values of the weights located at the same position on the same channels, and taking the gradient average as the gradient value to be updated for the weight at that position on the channel before channel-number expansion.
8. A method for applying a multilayer neural network model is characterized by comprising the following steps:
during forward propagation, accumulating a plurality of input feature maps of at least one convolutional layer, and performing convolution operations in the convolutional layer using the accumulated input feature maps and the filter in the convolutional layer;
and outputting the application result after the forward propagation is executed.
9. The application method of claim 8, wherein accumulating the plurality of input feature maps specifically comprises:
grouping the plurality of input feature maps such that the number of input feature maps in each group is equal to the number of channels of the filter in the convolutional layer, with at most one group in which the number of input feature maps is smaller than the number of filter channels;
and accumulating the input feature maps in each group to obtain accumulated input feature maps whose number is equal to the number of channels of the filter.
10. The application method of claim 9, wherein, in the case where a training method for the multilayer neural network model preceding the application method expands the number of filter channels and operates the channel-expanded filter with the input feature maps, accumulating the input feature maps in each group specifically comprises:
accumulating, in each group, the input feature maps satisfying the following condition: the positions, among the plurality of input feature maps, of the accumulated input feature maps are the same as the positions of the input feature maps that, in the training method, are operated on by the same channels after expansion, the same channels being expanded from the same channel before expansion.
11. An apparatus for applying a multi-layer neural network model, comprising:
an expansion unit configured to expand in advance the number of filter channels in at least one convolution layer in the multilayer neural network model;
a forward propagation unit configured to perform data operations in the convolutional layer using the channel-expanded filter, based on data corresponding to an application requirement;
an output unit configured to output the application result after performing the forward propagation.
12. The application device of claim 11,
the expanding unit expands the number of channels of the filter by duplicating the channels of the filter.
13. A training device for a multilayer neural network model, comprising:
an expansion unit configured to expand in advance the number of filter channels in at least one convolution layer in a multilayer neural network model to be trained;
a forward propagation unit configured to perform data operations in the convolutional layer using the channel-expanded filter, based on data used for training;
and a back propagation unit configured to update the gradient values of the weights on the channels before channel-number expansion according to the gradient values of the weights on the same channels among the channels after channel-number expansion, so as to train the network model, wherein the same channels are channels expanded from the same channel before expansion.
14. The training device of claim 13,
the expanding unit expands the number of channels of the filter by duplicating the channels of the filter.
15. The training device of claim 13,
the back propagation unit determines the gradient values of the weights on the same channels, averages the gradient values of the weights located at the same position on the same channels, and takes the gradient average as the gradient value to be updated for the weight at that position on the channel before channel-number expansion.
16. An apparatus for applying a multi-layer neural network model, comprising:
an accumulation unit configured to accumulate, during forward propagation, a plurality of input feature maps of at least one convolutional layer;
an operation unit configured to perform convolution operations in the convolutional layer using the accumulated input feature maps and the filter in the convolutional layer;
an output unit configured to output the application result after performing the forward propagation.
17. The application device of claim 16,
the accumulation unit is configured to group the plurality of input feature maps such that the number of input feature maps in each group is equal to the number of channels of the filter in the convolutional layer, with at most one group in which the number of input feature maps is smaller than the number of filter channels, and to accumulate the input feature maps in each group to obtain accumulated input feature maps whose number is equal to the number of channels of the filter.
18. The application device of claim 17,
in the case where the training method for the multilayer neural network model preceding the application method expands the number of filter channels and operates the channel-expanded filter with the input feature maps, the accumulation unit is configured to accumulate, in each group, the input feature maps satisfying the following condition: the positions, among the plurality of input feature maps, of the accumulated input feature maps are the same as the positions of the input feature maps that, in the training method, are operated on by the same channels after expansion, the same channels being expanded from the same channel before expansion.
19. A non-transitory computer-readable storage medium storing instructions that, when executed by a computer, cause the computer to perform a method of applying the multi-layer neural network model of claim 1.
20. A non-transitory computer-readable storage medium storing instructions that, when executed by a computer, cause the computer to perform the method of training the multi-layer neural network model of claim 4.
21. A non-transitory computer-readable storage medium storing instructions that, when executed by a computer, cause the computer to perform a method of applying the multi-layer neural network model of claim 8.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811633954.0A CN111382833A (en) | 2018-12-29 | 2018-12-29 | Method and device for training and applying multilayer neural network model and storage medium |
US16/721,606 US11847569B2 (en) | 2018-12-29 | 2019-12-19 | Training and application method of a multi-layer neural network model, apparatus and storage medium |
JP2019229345A JP6890653B2 (en) | 2018-12-29 | 2019-12-19 | Multi-layer neural network model learning and application methods, devices, and storage media |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111382833A true CN111382833A (en) | 2020-07-07 |
Family ID: 71123080
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811633954.0A Pending CN111382833A (en) | 2018-12-29 | 2018-12-29 | Method and device for training and applying multilayer neural network model and storage medium |
Country Status (3)
Country | Link |
---|---|
US (1) | US11847569B2 (en) |
JP (1) | JP6890653B2 (en) |
CN (1) | CN111382833A (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20210076687A (en) * | 2019-12-16 | 2021-06-24 | 삼성전자주식회사 | Neural processing apparatus and method for processing neural network thereof |
CN111241985B (en) * | 2020-01-08 | 2022-09-09 | 腾讯科技(深圳)有限公司 | Video content identification method and device, storage medium and electronic equipment |
US11295430B2 (en) * | 2020-05-20 | 2022-04-05 | Bank Of America Corporation | Image analysis architecture employing logical operations |
JP7533933B2 (en) | 2020-07-20 | 2024-08-14 | 国立大学法人 和歌山大学 | Neural network processing device, neural network processing method, and computer program |
CN112468203B (en) * | 2020-11-19 | 2022-07-26 | 杭州勒贝格智能系统股份有限公司 | Low-rank CSI feedback method, storage medium and equipment for deep iterative neural network |
CN112785663B (en) * | 2021-03-17 | 2024-05-10 | 西北工业大学 | Image classification network compression method based on convolution kernel of arbitrary shape |
CN113792770A (en) * | 2021-08-31 | 2021-12-14 | 南京信息工程大学 | Zero-sample rolling bearing fault diagnosis method and system based on attribute description |
WO2023075372A1 (en) * | 2021-10-26 | 2023-05-04 | 삼성전자 주식회사 | Method and electronic device for performing deep neural network operation |
CN117217318B (en) * | 2023-11-07 | 2024-01-26 | 瀚博半导体(上海)有限公司 | Text generation method and device based on Transformer network model |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160162782A1 (en) * | 2014-12-09 | 2016-06-09 | Samsung Electronics Co., Ltd. | Convolution neural network training apparatus and method thereof |
US20170032222A1 (en) * | 2015-07-30 | 2017-02-02 | Xerox Corporation | Cross-trained convolutional neural networks using multimodal images |
CN107909008A * | 2017-10-29 | 2018-04-13 | 北京工业大学 | Video target tracking method based on multi-channel convolutional neural network and particle filter |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB201709672D0 (en) * | 2017-06-16 | 2017-08-02 | Ucl Business Plc | A system and computer-implemented method for segmenting an image |
CN109754402B (en) * | 2018-03-15 | 2021-11-19 | 京东方科技集团股份有限公司 | Image processing method, image processing apparatus, and storage medium |
- 2018-12-29: CN application CN201811633954.0A filed; publication CN111382833A (en), status Pending
- 2019-12-19: US application US16/721,606 filed; publication US11847569B2 (en), status Active
- 2019-12-19: JP application JP2019229345A filed; publication JP6890653B2 (en), status Active
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112559870A (en) * | 2020-12-18 | 2021-03-26 | 北京百度网讯科技有限公司 | Multi-model fusion method and device, electronic equipment and storage medium |
CN112559870B (en) * | 2020-12-18 | 2023-10-31 | 北京百度网讯科技有限公司 | Multi-model fusion method, device, electronic equipment and storage medium |
CN112867010A (en) * | 2021-01-14 | 2021-05-28 | 中国科学院国家空间科学中心 | Radio frequency fingerprint embedded real-time identification method and system based on convolutional neural network |
CN112867010B (en) * | 2021-01-14 | 2023-04-18 | 中国科学院国家空间科学中心 | Radio frequency fingerprint embedded real-time identification method and system based on convolutional neural network |
CN114995782A (en) * | 2022-08-03 | 2022-09-02 | 上海登临科技有限公司 | Data processing method, device, equipment and readable storage medium |
CN114995782B (en) * | 2022-08-03 | 2022-10-25 | 上海登临科技有限公司 | Data processing method, device, equipment and readable storage medium |
WO2024027039A1 (en) * | 2022-08-03 | 2024-02-08 | 北京登临科技有限公司 | Data processing method and apparatus, and device and readable storage medium |
CN116781484A (en) * | 2023-08-25 | 2023-09-19 | 腾讯科技(深圳)有限公司 | Data processing method, device, computer equipment and storage medium |
CN116781484B (en) * | 2023-08-25 | 2023-11-07 | 腾讯科技(深圳)有限公司 | Data processing method, device, computer equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
JP6890653B2 (en) | 2021-06-18 |
JP2020109647A (en) | 2020-07-16 |
US11847569B2 (en) | 2023-12-19 |
US20200210843A1 (en) | 2020-07-02 |
Legal Events

Code | Title | Description
---|---|---
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20200707