
CN108885713B - Image classification neural network - Google Patents

Image classification neural network

Info

Publication number
CN108885713B
Authority
CN
China
Prior art keywords
sub, output, network, neural network, stack
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201680084514.7A
Other languages
Chinese (zh)
Other versions
CN108885713A (en)
Inventor
V. O. Vanhoucke
C. Szegedy
S. Ioffe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Google LLC
Original Assignee
Google LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Google LLC filed Critical Google LLC
Priority to CN202111545570.5A (published as CN114386567A)
Publication of CN108885713A
Application granted
Publication of CN108885713B
Legal status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/048Activation functions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/94Hardware or software architectures specially adapted for image or video understanding
    • G06V10/95Hardware or software architectures specially adapted for image or video understanding structured as a network, e.g. client-server architectures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Biodiversity & Conservation Biology (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Image Analysis (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

A neural network system includes a first sub-network comprising a plurality of first modules, each first module comprising: a pass-through convolutional layer that processes a sub-network input to generate a pass-through output; an average pooling stack of neural network layers that processes the sub-network input of the first sub-network to generate an average pooling output; a first stack of convolutional neural network layers that processes the sub-network input to generate a first stack output; a second stack of convolutional neural network layers that processes the sub-network input to generate a second stack output; and a concatenation layer that concatenates the generated outputs to generate a first module output. By including module sub-networks in a deep neural network, the deep neural network can perform image processing tasks better and can be trained faster and more efficiently while maintaining improved performance on those tasks.

Description

Image classification neural network
Cross Reference to Related Applications
This application claims priority to U.S. provisional application serial No. 62/297,101, filed on February 18, 2016. The disclosure of the prior application is considered part of, and is incorporated by reference into, the disclosure of this application.
Background
This description relates to processing images using deep neural networks (e.g., convolutional neural networks).
Convolutional neural networks typically include at least two kinds of neural network layers: convolutional neural network layers and fully connected neural network layers. Convolutional neural network layers have sparse connectivity: each node in a convolutional layer receives input from only a subset of the nodes in the next lowest neural network layer. Some convolutional neural network layers also have nodes that share weights with other nodes in the layer. Nodes in fully connected layers, by contrast, receive input from every node in the next lowest neural network layer.
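As a rough illustration of this difference, the snippet below (a minimal PyTorch sketch; the 64-channel and 32x32 sizes are illustrative assumptions, not values from the patent) compares the parameter counts of a convolutional layer, whose filter weights are shared across spatial positions, with a fully connected layer over the same input:

```python
import torch.nn as nn

# Convolutional layer: each node sees only a 3x3 patch, and the 3x3 filters
# are shared across all spatial positions.
conv = nn.Conv2d(64, 64, kernel_size=3)
# Fully connected layer: each output node receives input from every node
# of a flattened 64x32x32 input.
fc = nn.Linear(64 * 32 * 32, 64)

n_conv = sum(p.numel() for p in conv.parameters())  # 64*64*3*3 + 64 = 36,928
n_fc = sum(p.numel() for p in fc.parameters())      # 65,536*64 + 64 = 4,194,368
print(n_conv, n_fc)
```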
Disclosure of Invention
In general, one innovative aspect of the subject matter described in this specification can be embodied in a first neural network system that is configured to receive an image and generate a classification output for the input image. The first neural network system may be implemented as computer programs on one or more computers in one or more locations. The first neural network system may include: a plurality of sub-networks arranged on top of each other in a stack, wherein each sub-network is configured to process a sub-network input to generate a sub-network output and to provide the sub-network output as an input to another sub-network above it in the stack, and wherein the plurality of sub-networks comprises: a first sub-network comprising a plurality of first modules, each first module comprising: a pass-through convolutional layer configured to process the sub-network input of the first sub-network to generate a pass-through output; an average pooling stack of neural network layers, wherein the layers in the average pooling stack are configured to collectively process the sub-network input of the first sub-network to generate an average pooling output; a first stack of convolutional neural network layers, wherein the layers in the first stack are configured to collectively process the sub-network input of the first sub-network to generate a first stack output; a second stack of convolutional neural network layers, wherein the layers in the second stack are configured to collectively process the sub-network input of the first sub-network to generate a second stack output; and a concatenation layer configured to concatenate the pass-through output, the average pooling output, the first stack output, and the second stack output to generate a first module output of the first module.
The foregoing and other embodiments may each optionally include one or more of the following features, alone or in combination. The first sub-network comprises four first modules. The pass-through convolutional layer is a 1x1 convolutional layer. The average pooling stack includes an average pooling layer followed by a 1x1 convolutional layer. The first stack includes a 1x1 convolutional layer followed by a 3x3 convolutional layer. The second stack includes a 1x1 convolutional layer, followed by a 3x3 convolutional layer, followed by another 3x3 convolutional layer. The first sub-network is configured to combine the first module outputs generated by the plurality of first modules to generate a first sub-network output of the first sub-network. The first sub-network receives a 35x35x384 input, and each first module generates a 35x35x384 output.
Another innovative aspect of the subject matter described in this specification can be embodied in a second neural network system that is implemented by one or more computers and configured to receive an image and generate a classification output for the input image. The second neural network system may include: a plurality of sub-networks arranged on top of each other in a stack, wherein each sub-network is configured to process a sub-network input to generate a sub-network output and to provide the sub-network output as an input to another sub-network above it in the stack, and wherein the plurality of sub-networks comprises: a second sub-network comprising a plurality of second modules, each second module comprising: a pass-through convolutional layer configured to process the sub-network input of the second sub-network to generate a pass-through output; an average pooling stack of neural network layers, wherein the layers in the average pooling stack are configured to collectively process the sub-network input of the second sub-network to generate an average pooling output; a third stack of convolutional neural network layers, wherein the layers in the third stack are configured to collectively process the sub-network input of the second sub-network to generate a third stack output; a fourth stack of convolutional neural network layers, wherein the layers in the fourth stack are configured to collectively process the sub-network input of the second sub-network to generate a fourth stack output; and a concatenation layer configured to concatenate the pass-through output, the average pooling output, the third stack output, and the fourth stack output to generate a second module output of the second module.
The foregoing and other embodiments may each optionally include one or more of the following features, alone or in combination. The second sub-network may comprise seven second modules. The pass-through convolutional layer may be a 1x1 convolutional layer. The average pooling stack may include an average pooling layer followed by a 1x1 convolutional layer. The third stack may include a 1x1 convolutional layer, followed by a 1x7 convolutional layer, followed by another 1x7 convolutional layer. The fourth stack may include a 1x1 convolutional layer, followed by a 1x7 convolutional layer, followed by a 7x1 convolutional layer, followed by a 1x7 convolutional layer, followed by a 7x1 convolutional layer. The second sub-network may be configured to combine the second module outputs generated by the plurality of second modules to generate a second sub-network output of the second sub-network. The second sub-network may receive a 17x17x1024 input, and each second module generates a 17x17x1024 output.
Another innovative aspect of the subject matter described in this specification can be embodied in a third neural network system that is implemented by one or more computers and configured to receive an image and generate a classification output for the input image. The third neural network system may include: a plurality of sub-networks arranged on top of each other in a stack, wherein each sub-network is configured to process a sub-network input to generate a sub-network output and to provide the sub-network output as an input to another sub-network above it in the stack, and wherein the plurality of sub-networks comprises: a third sub-network comprising a plurality of third modules, each third module comprising: a pass-through convolutional layer configured to process the sub-network input of the third sub-network to generate a pass-through output; an average pooling stack of neural network layers, wherein the layers in the average pooling stack are configured to collectively process the sub-network input of the third sub-network to generate an average pooling output; a first group of convolutional neural network layers, wherein the layers in the first group are configured to collectively process the sub-network input of the third sub-network to generate a first group output; a second group of convolutional neural network layers, wherein the layers in the second group are configured to collectively process the sub-network input of the third sub-network to generate a second group output; and a concatenation layer configured to concatenate the pass-through output, the average pooling output, the first group output, and the second group output to generate a third module output of the third module.
The foregoing and other embodiments may each optionally include one or more of the following features, alone or in combination. The third sub-network may comprise three third modules. The pass-through convolutional layer may be a 1x1 convolutional layer. The average pooling stack may include an average pooling layer followed by a 1x1 convolutional layer. The first group may include: a 1x1 convolutional layer configured to process the sub-network input of the third sub-network to generate a first intermediate output; a 1x3 convolutional layer configured to process the first intermediate output to generate a second intermediate output; a 3x1 convolutional layer configured to process the first intermediate output to generate a third intermediate output; and a first group concatenation layer configured to concatenate the second intermediate output and the third intermediate output to generate the first group output. The second group may include: a fifth stack of convolutional layers configured to process the sub-network input of the third sub-network to generate a fifth stack output; a 1x3 convolutional layer configured to process the fifth stack output to generate a fourth intermediate output; a 3x1 convolutional layer configured to process the fifth stack output to generate a fifth intermediate output; and a second group concatenation layer configured to concatenate the fourth intermediate output and the fifth intermediate output to generate the second group output. The fifth stack may include a 1x1 convolutional layer, followed by a 1x3 convolutional layer, followed by a 3x1 convolutional layer. The third sub-network may be configured to combine the third module outputs generated by the plurality of third modules to generate a third sub-network output of the third sub-network. The third sub-network may receive an 8x8x1536 input, and each third module generates an 8x8x1536 output.
Another innovative aspect of the subject matter described in this specification can be embodied in a fourth neural network system that is implemented by one or more computers and configured to receive an image and generate a classification output for the input image. The fourth neural network system may include: a plurality of sub-networks arranged on top of each other in a stack, wherein each sub-network is configured to process a sub-network input to generate a sub-network output and to provide the sub-network output as an input to another sub-network above it in the stack, and wherein the plurality of sub-networks comprises: the first sub-network, the second sub-network, and the third sub-network.
The foregoing and other embodiments may each optionally include one or more of the following features, alone or in combination. The fourth neural network system may further include: a trunk sub-network below the first sub-network, the second sub-network, and the third sub-network in the stack, wherein the trunk sub-network is configured to: receive the image; and process the image to generate a trunk sub-network output. The fourth neural network system may further include: a first reduction sub-network between the first sub-network and the second sub-network in the stack. The fourth neural network system may further include: a second reduction sub-network between the second sub-network and the third sub-network in the stack.
Another innovative aspect of the subject matter described in this specification can be embodied in one or more non-transitory storage media encoded with instructions that, when executed by one or more computers, cause the one or more computers to implement one of the first, second, third, or fourth neural network systems.
Another innovative aspect of the subject matter described in this specification can be embodied in a fifth neural network system that is implemented by one or more computers and configured to receive an image and generate a classification output for the input image. The neural network system may include: a plurality of sub-networks arranged on top of each other in a stack, wherein each sub-network is configured to process a sub-network input to generate a sub-network output and to provide the sub-network output as an input to another sub-network above it in the stack, and wherein the plurality of sub-networks comprises: a first residual sub-network comprising a plurality of first residual modules, each first residual module comprising: a first sub-module, the first sub-module comprising: a pass-through convolutional layer configured to process the sub-network input of the first residual sub-network to generate a pass-through output; one or more groups of neural network layers, each of the one or more groups configured to process the sub-network input of the first residual sub-network to generate a respective group output; and a filter expansion layer configured to generate an expanded output by scaling up the dimensionality of the pass-through output and each group output; a summing layer configured to generate a summed output from the sub-network input of the first residual sub-network and the expanded output; and an activation function layer configured to apply an activation function to the summed output to generate a first residual module output of the first residual module.
The foregoing and other embodiments may each optionally include one or more of the following features, alone or in combination. The pass-through convolutional layer may be a 1x1 convolutional layer. The filter expansion layer may be configured to receive the pass-through output and the group outputs and apply a 1x1 convolution to them to generate the expanded output. The summing layer may be configured to: sum the sub-network input of the first residual sub-network and the expanded output to generate the summed output. Alternatively, the summing layer may be configured to: scale the expanded output to generate a scaled expanded output; and sum the sub-network input of the first residual sub-network and the scaled expanded output to generate the summed output. The activation function may be a rectified linear unit (ReLU) activation function. The one or more groups of neural network layers may include a first group that is a stack of convolutional neural network layers. The one or more groups of neural network layers may also include a second group that is a different stack of convolutional neural network layers. The first residual sub-network may be configured to: combine the first residual module outputs generated by the plurality of first residual modules to generate a first residual sub-network output of the first residual sub-network.
Another innovative aspect of the subject matter described in this specification can be embodied in one or more non-transitory storage media encoded with instructions that, when executed by one or more computers, cause the one or more computers to implement the fifth neural network system.
Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages. By including sub-networks, and in particular module sub-networks, in a deep neural network, the deep neural network can better perform image processing tasks such as object recognition or image classification. Furthermore, a deep neural network that includes module sub-networks can be trained faster and more efficiently than a deep neural network that does not, while maintaining improved performance on the image processing task.
The details of one or more embodiments of the subject matter of this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the present subject matter will become apparent from the description, the drawings, and the claims.
Drawings
Fig. 1 shows an example of a neural network system.
Fig. 2 shows an example of a first sub-network.
Fig. 3 shows an example of a second sub-network.
Fig. 4 shows an example of a third sub-network.
Fig. 5 shows an example of a residual sub-network.
FIG. 6 is a flow diagram of an exemplary process for generating output from an input image.
FIG. 7 is a flow diagram of an exemplary process for processing an input using a deep neural network.
Like reference numbers and designations in the various drawings indicate like elements.
Detailed Description
Fig. 1 illustrates an exemplary neural network system 100. The neural network system 100 is an example of a system implemented as a computer program on one or more computers in one or more locations, in which the systems, components, and techniques described below may be implemented.
The neural network system 100 receives data characterizing an input image, such as pixel information of the input image or other information characterizing the input image. For example, the neural network system 100 may receive input image data 102. The neural network system 100 processes the received data using the deep neural network 150 and the output layer 112 to generate an output for the input image, such as output 114 from input image data 102.
The neural network system 100 may be configured to receive input image data and generate any kind of score or classification output based on the input image, i.e., may be configured to perform any type of image processing task. The score or classification output generated by the system depends on the task that the neural network system has been configured to perform. For example, for an image classification or recognition task, the output generated by the neural network system for a given image may be a score for each of a set of object classes, where each score represents the likelihood that the image contains an image of an object belonging to that class. As another example, for an object detection task, the output generated by the neural network system may identify the location, the size, or both of an object of interest in the input image.
Generally, the deep neural network 150 includes a plurality of sub-networks arranged on top of each other in a stack, where each sub-network is configured to process a sub-network input to generate a sub-network output. Each sub-network then provides its sub-network output as input to the sub-network above it in the stack, or as the output of the deep neural network 150 if no sub-network exists above it in the stack. The output layer 112 then processes the output of the deep neural network 150 to generate the output 114 of the neural network system 100. As described above, the type of output generated by the output layer 112 depends on the image processing task that the neural network system 100 has been configured to perform. Similarly, the type of output layer 112 used to generate the output 114 also depends on the task. In particular, the output layer 112 is an output layer suitable for the task, i.e., one that generates the kind of output required by the image processing task. For example, for an image classification task, the output layer may be a softmax output layer that generates a respective score for each of the set of object categories.
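As a minimal sketch of such a softmax output layer (in PyTorch, with an assumed 1536-dimensional representation from the deep neural network and 1000 object categories; both sizes are illustrative, not fixed by the patent):

```python
import torch
import torch.nn as nn

output_layer = nn.Linear(1536, 1000)        # one logit per object category
features = torch.randn(8, 1536)             # batch of deep-network outputs
scores = torch.softmax(output_layer(features), dim=1)
print(scores.sum(dim=1))                     # each row sums to 1: per-category scores
```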
The sub-networks in the deep neural network 150 include a plurality of module sub-networks and one or more other sub-networks. Each of the other sub-networks consists of one or more conventional neural network layers, such as max pooling layers, average pooling layers, convolutional layers, fully connected layers, regularization layers, and output layers (e.g., a softmax output layer or a linear regression output layer).
For example, in some implementations, the deep neural network 150 includes sub-network A 202, sub-network B 302, or sub-network C 402, or a combination thereof. Examples of sub-network A 202, sub-network B 302, and sub-network C 402 are described in detail below with reference to Figs. 2-4.
In various implementations, the sub-networks may also include a trunk sub-network that is the lowest sub-network in the stack and is configured to receive the image and process it to generate a trunk output, which is the input for the next higher sub-network in the stack. For example, as shown in FIG. 1, trunk sub-network 104 is configured to receive input image data 102 and process it to generate a trunk output, which is the input to sub-network A 202.
In various implementations, the sub-networks may also include one or more reduction sub-networks that receive a sub-network output and process it to reduce its dimensionality. For example, FIG. 1 shows reduction sub-network X 106 between sub-network A 202 and sub-network B 302, and reduction sub-network Y 108 between sub-network B 302 and sub-network C 402. Reduction sub-network X 106 is configured to receive the output of sub-network A 202 and process it to reduce its dimensionality; reduction sub-network Y 108 does the same for the output of sub-network B 302.
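One plausible form of such a reduction sub-network, sketched here in PyTorch, uses stride-2 branches to shrink the spatial dimensions (here 35x35 down to 17x17) while concatenation grows the depth to 1024. The branch layout and channel widths are assumptions for illustration; the text above only requires that dimensionality be reduced between sub-networks:

```python
import torch
import torch.nn as nn

class ReductionX(nn.Module):
    """Hypothetical reduction sub-network between sub-network A and sub-network B."""
    def __init__(self, in_channels: int = 384):
        super().__init__()
        self.pool = nn.MaxPool2d(kernel_size=3, stride=2)         # 35x35 -> 17x17
        self.conv = nn.Conv2d(in_channels, 384, kernel_size=3, stride=2)
        self.stack = nn.Sequential(
            nn.Conv2d(in_channels, 192, kernel_size=1),
            nn.Conv2d(192, 224, kernel_size=3, padding=1),
            nn.Conv2d(224, 256, kernel_size=3, stride=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate along depth: 384 + 384 + 256 = 1024 output channels.
        return torch.cat([self.pool(x), self.conv(x), self.stack(x)], dim=1)

x = torch.randn(1, 384, 35, 35)
assert ReductionX()(x).shape == (1, 1024, 17, 17)
```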
In some implementations, the deep neural network can include an average pooling sub-network (e.g., average pooling sub-network 110) that is the highest sub-network in the stack and is configured to average pool the output of the preceding sub-network to generate the output of the deep neural network 150.
In some implementations, the sub-networks include one or more residual sub-networks. A residual sub-network comprises a plurality of residual modules, and each residual module includes one or more residual sub-modules. An example of a residual sub-module is described in detail below with reference to Fig. 5.
FIG. 2 shows an example of sub-network A 202. Sub-network A 202 is depicted as a module sub-network comprising a first module. Although only a single module is shown in the example of Fig. 2, a module sub-network will typically include a plurality of first modules; for example, module sub-network A 202 can include four first modules. As shown in Fig. 2, the first module includes a pass-through convolutional layer, such as pass-through convolutional layer 210; an average pooling stack of neural network layers, such as average pooling stack 224; one or more stacks of neural network layers, such as stack 226 and stack 228; and a concatenation layer, such as concatenation layer 222. Module sub-network A 202 receives an input from a previous sub-network (e.g., previous sub-network 204) and generates an output representation from the received input.
The pass-through convolutional layer is configured to process the sub-network input of sub-network A 202, received from previous sub-network 204, to generate a pass-through output. In some implementations, the pass-through convolutional layer is a 1x1 convolutional layer. Generally, a k x k convolutional layer is a convolutional layer that uses a k x k filter; that is, k x k is the size of the patch in the previous layer to which the convolutional layer is connected. In these implementations, the 1x1 pass-through convolutional layer typically serves as a dimensionality reduction module: it reduces the depth of the previous output representation and removes computational bottlenecks that might otherwise limit the size of the deep neural network.
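Concretely, a 1x1 convolution shrinks the depth (channel) dimension while leaving the spatial size untouched. A quick PyTorch illustration (the 384-to-96 channel counts are assumptions chosen for the example):

```python
import torch
import torch.nn as nn

reduce = nn.Conv2d(384, 96, kernel_size=1)   # 1x1 filter: no spatial mixing
x = torch.randn(1, 384, 35, 35)
print(reduce(x).shape)                        # torch.Size([1, 96, 35, 35]): depth cut 4x
```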
The average pooling stack comprises a stack of neural network layers configured to collectively process the sub-network input of sub-network A 202 to generate an average pooling output. For example, in Fig. 2, average pooling stack 224 includes average pooling layer 206, which averages over the sub-network input, followed by 1x1 convolutional layer 208.
Each of the one or more stacks of neural network layers in the module sub-network includes two or more neural network layers, with an initial neural network layer followed by one or more other neural network layers. For example, sub-network A 202 includes a first stack 226 comprising 1x1 convolutional layer 212 followed by 3x3 convolutional layer 214, and a second stack 228 comprising 1x1 convolutional layer 216, followed by 3x3 convolutional layer 218, followed by 3x3 convolutional layer 220. Other combinations of convolutional layer dimensions are possible, however. The layers in the first stack 226 are configured to collectively process the sub-network input of sub-network A 202 to generate a first stack output, and the layers in the second stack 228 are configured to collectively process the sub-network input of sub-network A 202 to generate a second stack output.
The concatenation layer 222 is configured to concatenate the pass-through output, the average pooling output, the first stack output, and the second stack output to generate a first module output for the first module. For example, the concatenation layer 222 concatenates the tensors generated by the pass-through convolutional layer, the average pooling stack, and the stacks of convolutional neural network layers along the depth dimension to generate a single tensor, i.e., the output representation. The output representation of the first module may serve as the input to the next module in sub-network A 202, which may process it in the manner described in more detail below with reference to Fig. 7.
In some implementations, sub-network A 202 may receive a 35x35x384 input, and each first module may generate a 35x35x384 output. Other input and output sizes are possible, however.
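Putting the pieces together, here is a minimal PyTorch sketch of one first module, assuming the 35x35x384 input/output sizes mentioned above. PyTorch itself and the per-branch channel widths (96 and 64, chosen so the concatenated depth is again 384) are illustrative assumptions, not values fixed by the patent text:

```python
import torch
import torch.nn as nn

class ModuleA(nn.Module):
    """Sketch of a first module (Fig. 2): four branches joined by concatenation."""
    def __init__(self, in_channels: int = 384):
        super().__init__()
        # Pass-through branch: a single 1x1 convolution.
        self.passthrough = nn.Conv2d(in_channels, 96, kernel_size=1)
        # Average pooling stack: average pooling followed by a 1x1 convolution.
        self.avg_pool_stack = nn.Sequential(
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_channels, 96, kernel_size=1),
        )
        # First stack: 1x1 then 3x3 (padding preserves the 35x35 spatial size).
        self.stack1 = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=1),
            nn.Conv2d(64, 96, kernel_size=3, padding=1),
        )
        # Second stack: 1x1 then two 3x3 convolutions.
        self.stack2 = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=1),
            nn.Conv2d(64, 96, kernel_size=3, padding=1),
            nn.Conv2d(96, 96, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenation layer: join the four branch outputs along the depth
        # (channel) dimension, giving 4 * 96 = 384 output channels.
        return torch.cat([self.passthrough(x), self.avg_pool_stack(x),
                          self.stack1(x), self.stack2(x)], dim=1)

x = torch.randn(1, 384, 35, 35)        # NCHW: 384 channels, 35x35 spatial
assert ModuleA()(x).shape == (1, 384, 35, 35)
```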
Fig. 3 shows an example of sub-network B 302. Sub-network B 302 is depicted as a module sub-network comprising a second module. Although Fig. 3 shows only a single module, sub-network B 302 may include a plurality of second modules; for example, sub-network B 302 can include seven second modules. Like the first module, the second module includes a pass-through convolutional layer, e.g., 1x1 pass-through convolutional layer 310; an average pooling stack of neural network layers, such as average pooling stack 330; one or more stacks of neural network layers, such as stack 332 and stack 334; and a concatenation layer, such as concatenation layer 328. Module sub-network B 302 receives an input from a previous sub-network (e.g., previous sub-network 304) and generates an output representation from the received input.
Pass-through convolutional layer 310 is configured to process the sub-network input of sub-network B 302, received from previous sub-network 304, to generate a pass-through output. The average pooling stack comprises a stack of neural network layers configured to collectively process the sub-network input of sub-network B 302 to generate an average pooling output. For example, in Fig. 3, average pooling stack 330 includes average pooling layer 306 followed by 1x1 convolutional layer 308.
Each of the one or more stacks of neural network layers in module sub-network B 302 includes two or more neural network layers, with an initial neural network layer followed by one or more other neural network layers. For example, sub-network B 302 includes a third stack 332 comprising 1x1 convolutional layer 312, followed by 1x7 convolutional layer 314, followed by 1x7 convolutional layer 316, and a fourth stack 334 comprising 1x1 convolutional layer 318, followed by 1x7 convolutional layer 320, followed by 7x1 convolutional layer 322, followed by 1x7 convolutional layer 324, followed by 7x1 convolutional layer 326. Other combinations of convolutional layer dimensions are possible, however. The layers in the third stack 332 are configured to collectively process the sub-network input of sub-network B 302 to generate a third stack output, and the layers in the fourth stack 334 are configured to collectively process the sub-network input of sub-network B 302 to generate a fourth stack output.
The concatenation layer 328 is configured to concatenate the pass-through output, the average pooling output, the third stack output, and the fourth stack output to generate a second module output of the second module. For example, the concatenation layer 328 concatenates the tensors generated by pass-through convolutional layer 310, average pooling stack 330, and stacks 332 and 334 along the depth dimension to generate a single tensor, i.e., the output representation of the second module. The output representation of the second module may serve as the input to the next module in sub-network B 302, which may process it in the manner described in more detail below with reference to Fig. 7.
In some implementations, sub-network B 302 may receive a 17x17x1024 input, and each second module may generate a 17x17x1024 output. Other input and output sizes are possible, however.
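A corresponding PyTorch sketch of one second module, assuming the 17x17x1024 sizes above; the asymmetric padding preserves the 17x17 spatial size, and the channel widths (384/128/256/256, summing to 1024 after concatenation) are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ModuleB(nn.Module):
    """Sketch of a second module (Fig. 3) with factorized 1x7/7x1 convolutions."""
    def __init__(self, in_channels: int = 1024):
        super().__init__()
        self.passthrough = nn.Conv2d(in_channels, 384, kernel_size=1)
        self.avg_pool_stack = nn.Sequential(
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_channels, 128, kernel_size=1),
        )
        # Third stack: 1x1 followed by two 1x7 convolutions, as stated above.
        self.stack3 = nn.Sequential(
            nn.Conv2d(in_channels, 192, kernel_size=1),
            nn.Conv2d(192, 224, kernel_size=(1, 7), padding=(0, 3)),
            nn.Conv2d(224, 256, kernel_size=(1, 7), padding=(0, 3)),
        )
        # Fourth stack: 1x1, then alternating 1x7 and 7x1 convolutions.
        self.stack4 = nn.Sequential(
            nn.Conv2d(in_channels, 192, kernel_size=1),
            nn.Conv2d(192, 192, kernel_size=(1, 7), padding=(0, 3)),
            nn.Conv2d(192, 224, kernel_size=(7, 1), padding=(3, 0)),
            nn.Conv2d(224, 224, kernel_size=(1, 7), padding=(0, 3)),
            nn.Conv2d(224, 256, kernel_size=(7, 1), padding=(3, 0)),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # 384 + 128 + 256 + 256 = 1024 output channels after concatenation.
        return torch.cat([self.passthrough(x), self.avg_pool_stack(x),
                          self.stack3(x), self.stack4(x)], dim=1)

x = torch.randn(1, 1024, 17, 17)
assert ModuleB()(x).shape == (1, 1024, 17, 17)
```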
Fig. 4 shows an example of sub-network C 402. Sub-network C 402 is depicted as a module sub-network comprising a third module. Although Fig. 4 shows only a single module, sub-network C 402 may include a plurality of third modules; for example, sub-network C 402 can include three third modules. The third module includes a pass-through convolutional layer, such as 1x1 pass-through convolutional layer 410; an average pooling stack of neural network layers, such as average pooling stack 432; one or more groups of neural network layers, such as group 434 and group 436; and a concatenation layer, such as concatenation layer 430. Module sub-network C 402 receives an input from a previous sub-network (e.g., previous sub-network 404) and generates an output representation from the received input.
The pass-through convolutional layer is configured to process the sub-network input of sub-network C 402, received from previous sub-network 404, to generate a pass-through output. The average pooling stack comprises a stack of neural network layers configured to collectively process the sub-network input of sub-network C 402 to generate an average pooling output. For example, as shown in Fig. 4, average pooling stack 432 includes average pooling layer 406 followed by 1x1 convolutional layer 408.
Each of the one or more groups of neural network layers in module sub-network C 402 includes two or more neural network layers, with an initial neural network layer followed by one or more other neural network layers. By way of example, as shown in Fig. 4, sub-network C 402 includes a first group of neural network layers 434, which includes 1x1 convolutional layer 412, 1x3 convolutional layer 414, and 3x1 convolutional layer 416. Other combinations of convolutional layer dimensions are possible, however. Layer 412 is configured to process the sub-network input of sub-network C 402 to generate a first intermediate output. Layers 414 and 416 are each configured to process the first intermediate output, generating a second intermediate output and a third intermediate output, respectively. The first group may include a first group concatenation layer (not shown) configured to concatenate the second intermediate output and the third intermediate output to generate a first group output.
In another example, sub-network C 402 includes a second group of neural network layers 436, which includes a fifth stack of neural network layers 438 configured to process the sub-network input of sub-network C 402 to generate a fifth stack output. The second group also includes 1x3 convolutional layer 428, configured to process the fifth stack output to generate a fourth intermediate output, and 3x1 convolutional layer 426, configured to process the fifth stack output to generate a fifth intermediate output. Other combinations of convolutional layer dimensions are possible, however. The second group 436 may include a second group concatenation layer (not shown) configured to concatenate the fourth intermediate output and the fifth intermediate output to generate a second group output.
The concatenation layer 430 is configured to concatenate the pass-through output, the average pooling output, the first group output, and the second group output to generate a third module output of the third module. For example, the concatenation layer 430 concatenates the tensors generated by pass-through convolutional layer 410, average pooling stack 432, and groups 434 and 436 along the depth dimension to generate a single tensor, i.e., the output representation of the third module. The output representation of the third module may serve as the input to the next module in sub-network C 402, which may process it in the manner described in more detail below with reference to Fig. 7.
In some implementations, sub-network C 402 may receive an 8x8x1536 input, and each third module may generate an 8x8x1536 output. Other input and output sizes are possible, however.
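A PyTorch sketch of one third module, assuming the 8x8x1536 sizes above. The distinguishing feature is that each group fans out into parallel 1x3 and 3x1 convolutions whose outputs are concatenated within the group; the channel widths (256/256/512/512, summing to 1536) are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ModuleC(nn.Module):
    """Sketch of a third module (Fig. 4) with split 1x3/3x1 branches."""
    def __init__(self, in_channels: int = 1536):
        super().__init__()
        self.passthrough = nn.Conv2d(in_channels, 256, kernel_size=1)
        self.avg_pool_stack = nn.Sequential(
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_channels, 256, kernel_size=1),
        )
        # First group: a 1x1 layer whose output feeds parallel 1x3 and 3x1 layers.
        self.g1_reduce = nn.Conv2d(in_channels, 384, kernel_size=1)
        self.g1_1x3 = nn.Conv2d(384, 256, kernel_size=(1, 3), padding=(0, 1))
        self.g1_3x1 = nn.Conv2d(384, 256, kernel_size=(3, 1), padding=(1, 0))
        # Second group: the fifth stack (1x1 -> 1x3 -> 3x1), then a 1x3/3x1 pair.
        self.g2_stack = nn.Sequential(
            nn.Conv2d(in_channels, 384, kernel_size=1),
            nn.Conv2d(384, 448, kernel_size=(1, 3), padding=(0, 1)),
            nn.Conv2d(448, 512, kernel_size=(3, 1), padding=(1, 0)),
        )
        self.g2_1x3 = nn.Conv2d(512, 256, kernel_size=(1, 3), padding=(0, 1))
        self.g2_3x1 = nn.Conv2d(512, 256, kernel_size=(3, 1), padding=(1, 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mid1 = self.g1_reduce(x)
        group1 = torch.cat([self.g1_1x3(mid1), self.g1_3x1(mid1)], dim=1)
        mid2 = self.g2_stack(x)
        group2 = torch.cat([self.g2_1x3(mid2), self.g2_3x1(mid2)], dim=1)
        # 256 + 256 + 512 + 512 = 1536 output channels after concatenation.
        return torch.cat([self.passthrough(x), self.avg_pool_stack(x),
                          group1, group2], dim=1)

x = torch.randn(1, 1536, 8, 8)
assert ModuleC()(x).shape == (1, 1536, 8, 8)
```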
Fig. 5 shows an example of a residual sub-module 550 of a residual module of residual sub-network 502. Although only one residual sub-module is depicted, residual sub-network 502 may include a plurality of residual modules, and each residual module may include a plurality of residual sub-modules. Residual sub-module 550 includes a pass-through convolutional layer, such as pass-through convolutional layer 506; one or more groups of neural network layers, such as group 524 and group 526; a filter expansion layer, such as filter expansion layer 512; a summing layer, such as summing layer 520; and an activation function layer, such as activation function layer 522. Residual sub-module 550 receives an input from a previous sub-network (e.g., previous sub-network 504) and generates an output representation from the received input.
The pass-through convolutional layer is configured to process the sub-network input of residual sub-network 502 to generate a pass-through output. For example, pass-through convolutional layer 506 is a 1x1 convolutional layer that processes the input from previous sub-network 504 to generate a pass-through output.
Each of the one or more groups of neural network layers is configured to process the sub-network input of the residual sub-network to generate a respective group output. In some implementations, the one or more groups include a first group that is a stack of convolutional neural network layers and a second group that is another stack of convolutional neural network layers. For example, residual sub-module 550 includes a stack of neural network layers 524 comprising 1x1 convolutional layer 508 followed by 1x1 convolutional layer 510, and another stack 526 comprising 1x1 convolutional layer 514, followed by 3x3 convolutional layer 516, followed by 3x3 convolutional layer 518. Each of these stacks receives the sub-network input from previous sub-network 504 and processes it to generate a corresponding group output.
The summing layer 520 is configured to generate a summed output from the sub-network input of the residual sub-network and an expanded output derived from the pass-through output and the group outputs. However, after the sub-network input (received from previous sub-network 504) has been processed by pass-through convolutional layer 506 and by stacks 524 and 526, the dimensions of the pass-through output and the group outputs may not match the dimensions of the original sub-network input (e.g., these layers may have reduced the depth of the sub-network input).
The filter expansion layer 512 is therefore configured to generate the expanded output by scaling up the dimensions of the pass-through output and each group output so that the dimensions of the expanded output match the dimensions of the original sub-network input. For example, as shown in Fig. 5, filter expansion layer 512 is configured to receive the pass-through output from pass-through convolutional layer 506 and the respective group outputs, and to apply a 1x1 convolution to these outputs to generate the expanded output.
The summing layer 520 may then be configured to sum the sub-network input of residual sub-network 502 and the expanded output to generate a summed output.
The activation function layer 522 is configured to apply an activation function to the summed output to generate the residual module output of the residual module. In some implementations, the activation function may be a rectified linear unit (ReLU) activation function.
After the residual module outputs have been generated by the residual modules, the residual sub-network is configured to combine them to generate a residual sub-network output of the residual sub-network.
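A PyTorch sketch of one residual sub-module following the description above: the branch outputs are concatenated, a 1x1 filter expansion convolution scales the depth back up to match the sub-network input, and the (optionally scaled) result is summed with the input before a ReLU. The channel widths and the 0.1 scaling factor are assumptions for illustration:

```python
import torch
import torch.nn as nn

class ResidualSubModule(nn.Module):
    """Sketch of a residual sub-module (Fig. 5)."""
    def __init__(self, in_channels: int = 384, scale: float = 0.1):
        super().__init__()
        self.scale = scale
        self.passthrough = nn.Conv2d(in_channels, 32, kernel_size=1)
        # First group: 1x1 followed by 1x1, as in stack 524.
        self.group1 = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=1),
            nn.Conv2d(32, 32, kernel_size=1),
        )
        # Second group: 1x1 followed by two 3x3 convolutions, as in stack 526.
        self.group2 = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=1),
            nn.Conv2d(32, 48, kernel_size=3, padding=1),
            nn.Conv2d(48, 64, kernel_size=3, padding=1),
        )
        # Filter expansion layer: a 1x1 convolution over the concatenated branch
        # outputs, restoring the depth to in_channels so the sum is well defined.
        self.expand = nn.Conv2d(32 + 32 + 64, in_channels, kernel_size=1)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        branches = torch.cat(
            [self.passthrough(x), self.group1(x), self.group2(x)], dim=1)
        expanded = self.expand(branches)
        # Summing layer with optional scaling of the expanded output, then ReLU.
        return self.relu(x + self.scale * expanded)

x = torch.randn(1, 384, 35, 35)
assert ResidualSubModule()(x).shape == (1, 384, 35, 35)
```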
In implementations in which the neural network system includes multiple residual sub-networks, the groups of neural network layers in different residual sub-networks may have different configurations, e.g., a different number of groups, a different configuration of the neural network layers within a group, or both.
FIG. 6 is a flow diagram of an exemplary process 600 for generating an output from a received input. For convenience, process 600 will be described as being performed by a system of one or more computers located in one or more locations. For example, a neural network system (e.g., neural network system 100 of FIG. 1), suitably programmed in accordance with this specification, can perform process 600.
The system receives data characterizing an input image (step 602).
The system processes the data using a deep neural network (e.g., deep neural network 150 of FIG. 1) comprising sub-networks to generate an alternative representation (step 604). The deep neural network includes a sequence of sub-networks arranged from a lowest sub-network in the sequence to a highest sub-network in the sequence. The system processes the data through each sub-network in the sequence to generate the alternative representation. The sub-networks in the sequence include a plurality of module sub-networks and, optionally, one or more sub-networks consisting of one or more conventional neural network layers, such as pass-through convolutional layers, average pooling layers, convolutional layers, concatenation layers, and the like. Processing an input through a module sub-network is described below with reference to Fig. 7.
The system processes the alternate representation through the output layer to generate an output of the input image (step 606). Generally, the output generated by the system depends on the image processing task that the system has been configured to perform. For example, if the system is configured to perform image classification or recognition tasks, the output generated by the output layer may be a respective score for each of a predetermined set of object categories, where the score for a given object category represents a likelihood that the input image contains an image of an object belonging to the object category.
FIG. 7 is a flow diagram of an exemplary process 700 for processing an input using a module sub-network. For convenience, process 700 will be described as being performed by a system of one or more computers located in one or more locations. For example, the neural network system 100 of FIG. 1, suitably programmed in accordance with this specification, may perform process 700.
The system receives an input (step 702). In particular, the input is a previous output representation, i.e., an output representation generated by the previous sub-network in the sequence of sub-networks, or an output representation generated by the previous module in the sequence of modules of the module sub-network.
The system processes the previous output representation through the pass-through convolution layer to generate a pass-through output (step 704). In some implementations, the pass-through convolutional layer is a 1x1 convolutional layer.
The system processes the previous output representation through an average pooling stack of neural network layers to generate an average pooling output (step 706). For example, the average pooling stack may include an average pooling layer that averages over the sub-network input, followed by a 1x1 convolutional layer.
The system processes the previous output representation through one or more groups of neural network layers (step 708). Each group of neural network layers includes an initial neural network layer followed by one or more additional neural network layers. To process the previous output representation through a given group, the system processes it through each neural network layer in the group to generate a group output for the group.
In some implementations, one or more of the groups include one convolutional layer followed by another convolutional layer. For example, one group may include a 1x1 convolutional layer followed by a 3x3 convolutional layer. As another example, another group may include a 1x1 convolutional layer, followed by a 3x3 convolutional layer, followed by another 3x3 convolutional layer. As described above, the 1x1 convolutional layer may serve as a dimensionality reduction module that reduces the dimensionality of the previous output representation before it is processed by the convolutional layer that follows. Other combinations of convolutional layer dimensions are possible, however.
The system concatenates the pass-through output, the average pooling output, and the group outputs through a concatenation layer to generate an output representation (step 710). For example, the system may concatenate the tensors generated by the pass-through convolutional layer, the average pooling stack, and the groups to generate a single tensor, i.e., the output representation. The system may then provide the output representation as input to the next module in the sequence of modules of the sub-network, to the next sub-network in the sequence of sub-networks, or to the output layer of the system.
Processes 600 and 700 may be performed to generate classification data for an image for which the desired classification (i.e., the output that the system should generate for the image) is unknown. Processes 600 and 700 may also be performed on images in a set of training images (i.e., images for which the output that the system should predict is known) in order to train the deep neural network, i.e., to determine trained values for the parameters of each layer in the deep neural network, including the layers in the module sub-networks and the other sub-networks. In particular, processes 600 and 700 may be performed repeatedly on images selected from the set of training images as part of a backpropagation training technique that determines trained values for the parameters of the layers of the deep neural network.
In some implementations, during training, the deep neural network is augmented with one or more other training sub-networks that are removed after the deep neural network has been trained. Each other training sub-network (also referred to as a "side tower") includes one or more conventional neural network layers, which may include, for example, one or more of an average pooling layer, a fully connected layer, a dropout layer, and so on, and an output layer configured to generate the same kind of classification as the output layer of the system. Each other training sub-network is configured to receive the output generated by one of the sub-networks of the deep neural network, in parallel with the sub-network in the stack that already receives that sub-network output, and to process the sub-network output to generate a training sub-network output for the training image. As part of the backpropagation training technique, the training sub-network outputs are also used to adjust the values of the parameters of each layer in the deep neural network. As described above, once the deep neural network has been trained, the training sub-networks are removed.
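A hedged PyTorch sketch of such a side tower and how its loss might enter training. The tower layout, the 512-unit hidden layer, and the 0.4 auxiliary-loss weight are illustrative assumptions; the text above only fixes the kinds of layers involved and that the tower is discarded after training:

```python
import torch
import torch.nn as nn

class SideTower(nn.Module):
    """Hypothetical auxiliary classifier attached to an intermediate sub-network output."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),       # average pooling layer
            nn.Flatten(),
            nn.Linear(in_channels, 512),   # fully connected layer
            nn.ReLU(),
            nn.Dropout(p=0.5),             # dropout layer
            nn.Linear(512, num_classes),   # same kind of classification output as the main head
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(x)

# During training, the side-tower loss is added to the main loss; after
# training, the tower is simply dropped from the model:
#   loss = criterion(main_logits, labels) + 0.4 * criterion(tower_logits, labels)
```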
By a system of one or more computers configured to perform a particular operation or action is meant that the system has installed thereon software, firmware, hardware, or a combination thereof that in operation causes the system to perform the operation or action. By one or more computer programs configured to perform particular operations or actions is meant that the one or more programs include instructions that, when executed by a data processing apparatus, cause the apparatus to perform the operations or actions.
Embodiments of the subject matter and the functional operations described in this specification can be implemented in digital electronic circuitry, in tangibly embodied computer software or firmware, in computer hardware (including the structures disclosed in this specification and their structural equivalents), or in combinations of one or more of them. Embodiments of the subject matter described in this specification can be implemented as one or more computer programs, i.e., one or more modules of computer program instructions encoded on a tangible, non-transitory program carrier for execution by, or to control the operation of, data processing apparatus. Alternatively or in addition, the program instructions may be encoded on an artificially generated propagated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus for execution by a data processing apparatus. The computer storage medium may be a machine-readable storage device, a machine-readable storage substrate, a random or serial access memory device, or a combination of one or more of them. However, computer storage media are not propagated signals.
The term "data processing apparatus" includes all types of apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can comprise special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit). The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (which may also be referred to or described as a program, software application, module, software module, script, or code) can be written in any form of programming language, including compiled or interpreted languages, declarative or procedural languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program may, but need not, correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language file), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, subprograms, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
As used in this specification, "engine" or "software engine" refers to a software-implemented input/output system that provides outputs that are distinct from inputs. The engine may be a coded function block such as a library, platform, software development kit ("SDK"), or object. Each engine may be implemented on any suitable type of computing device, such as a server, mobile phone, tablet computer, notebook computer, music player, e-book reader, laptop or desktop computer, PDA, smart phone, or other fixed or portable device that includes one or more processors and computer-readable media. Further, two or more engines may be implemented on the same computing device or on different computing devices.
The processes and logic flows described in this specification can be performed by one or more programmable computers executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Computers suitable for the execution of a computer program include, by way of example, central processing units that may be based on general or special purpose microprocessors, or both, or any other kind of central processing unit. Generally, a central processing unit will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a central processing unit for performing or executing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. However, the computer need not have such devices. Further, a computer may be embedded in another apparatus, e.g., a mobile telephone, a Personal Digital Assistant (PDA), a mobile audio or video player, a game player, a Global Positioning System (GPS) receiver, or a portable storage device (e.g., a Universal Serial Bus (USB) flash drive), to name a few.
Computer-readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media, and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, embodiments of the subject matter described in this specification can be implemented on a computer having a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computer. Other types of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user may be received in any form, including acoustic, speech, or tactile input. Further, the computer may interact with the user by sending documents to and receiving documents from the device used by the user; for example, by sending a web page to a web browser on the user's client device in response to a request received from the web browser.
Embodiments of the subject matter described in this specification can be implemented in a computing system that includes a back end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation of the subject matter described in this specification, or any combination of one or more such back end, middleware, or front end components. The components of the system can be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network ("LAN") and a wide area network ("WAN"), such as the Internet.
The computing system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
While this specification contains many specific implementation details, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Furthermore, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In some cases, multitasking and parallel processing may be advantageous. Moreover, the separation of various system modules and components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Specific embodiments of the subject matter have been described. Other embodiments are within the scope of the following claims. For example, the actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some implementations, multitasking and parallel processing may be advantageous.

Claims (42)

1. A neural network system implemented by one or more computers, wherein the neural network system is configured to receive an input image and to generate a classification output for the input image, and wherein the neural network system comprises:
a plurality of sub-networks arranged on top of each other in a stack, wherein each sub-network is configured to process a sub-network input to generate a sub-network output and to provide the sub-network output as an input to another sub-network above the sub-network in the stack, and wherein the plurality of sub-networks comprises:
a first sub-network comprising a plurality of first modules, each first module comprising:
a first pass-through convolutional layer configured to process a sub-network input of the first sub-network to generate a first pass-through output;
a first average pooling stack of neural network layers, wherein the layers in the first average pooling stack are configured to collectively process the sub-network input of the first sub-network to generate a first average pooling output;
a first stack of convolutional neural network layers, wherein the layers in the first stack are configured to collectively process the sub-network input of the first sub-network to generate a first stack output;
a second stack of convolutional neural network layers, wherein the second stack comprises a 1x1 convolutional layer, followed by a 3x3 convolutional layer, followed by a 3x3 convolutional layer, and wherein the layers in the second stack are configured to collectively process the sub-network input of the first sub-network to generate a second stack output; and
a first concatenation layer configured to concatenate the first pass-through output, the first average pooling output, the first stack output, and the second stack output to generate a first module output of the first module; and
a second sub-network comprising a plurality of second modules, each second module comprising:
a second pass-through convolutional layer configured to process a sub-network input of the second sub-network to generate a second pass-through output;
a second average pooling stack of neural network layers, wherein the layers in the second average pooling stack are configured to collectively process the sub-network input of the second sub-network to generate a second average pooling output;
a third stack of convolutional neural network layers, wherein the third stack comprises a 1x1 convolutional layer, followed by a 1x7 convolutional layer, followed by a 7x1 convolutional layer, and wherein the layers in the third stack are configured to collectively process the sub-network input of the second sub-network to generate a third stack output;
a fourth stack of convolutional neural network layers, wherein the layers in the fourth stack are configured to collectively process the sub-network input of the second sub-network to generate a fourth stack output; and
a second concatenation layer configured to concatenate the second pass-through output, the second average pooling output, the third stack output, and the fourth stack output to generate a second module output of the second module.
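To make the branch-and-concatenate structure recited in claim 1 concrete, the following PyTorch sketch builds the two module types. It is a minimal illustration rather than the patented implementation: the class names, per-branch channel widths, padding choices, and ReLU placement are all assumptions, chosen so that a 35x35x384 input yields a 35x35x384 output (claim 7) and a 17x17x1024 input yields a 17x17x1024 output (claim 13).

import torch
import torch.nn as nn

class ConvBranch(nn.Sequential):
    # A chain of convolutions, each padded so the spatial grid is preserved;
    # specs is a list of (out_channels, kernel_h, kernel_w) tuples.
    def __init__(self, in_ch, specs):
        layers, ch = [], in_ch
        for out_ch, kh, kw in specs:
            layers += [nn.Conv2d(ch, out_ch, (kh, kw), padding=(kh // 2, kw // 2)),
                       nn.ReLU(inplace=True)]
            ch = out_ch
        super().__init__(*layers)

class FirstModule(nn.Module):
    # The four parallel branches of a first module, depth-concatenated.
    def __init__(self, in_ch=384):
        super().__init__()
        self.pass_through = ConvBranch(in_ch, [(96, 1, 1)])  # pass-through 1x1 conv
        self.avg_pool = nn.Sequential(                       # average pooling stack
            nn.AvgPool2d(3, stride=1, padding=1), ConvBranch(in_ch, [(96, 1, 1)]))
        self.stack1 = ConvBranch(in_ch, [(64, 1, 1), (96, 3, 3)])              # first stack
        self.stack2 = ConvBranch(in_ch, [(64, 1, 1), (96, 3, 3), (96, 3, 3)])  # second stack

    def forward(self, x):  # 96 + 96 + 96 + 96 = 384 output channels
        return torch.cat([self.pass_through(x), self.avg_pool(x),
                          self.stack1(x), self.stack2(x)], dim=1)

class SecondModule(nn.Module):
    # Same pattern with factorized 1x7/7x1 convolutions (claims 1 and 11).
    def __init__(self, in_ch=1024):
        super().__init__()
        self.pass_through = ConvBranch(in_ch, [(384, 1, 1)])
        self.avg_pool = nn.Sequential(
            nn.AvgPool2d(3, stride=1, padding=1), ConvBranch(in_ch, [(128, 1, 1)]))
        self.stack3 = ConvBranch(in_ch, [(192, 1, 1), (224, 1, 7), (256, 7, 1)])
        self.stack4 = ConvBranch(in_ch, [(192, 1, 1), (192, 1, 7), (224, 7, 1),
                                         (224, 1, 7), (256, 7, 1)])

    def forward(self, x):  # 384 + 128 + 256 + 256 = 1024 output channels
        return torch.cat([self.pass_through(x), self.avg_pool(x),
                          self.stack3(x), self.stack4(x)], dim=1)

Because every branch pads to preserve the spatial grid, the depth concatenation is the only point where branch outputs interact, which is what lets each module be repeated in place as claims 2 and 8 require.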
2. The neural network system of claim 1, wherein the first sub-network comprises four first modules.
3. The neural network system of claim 1, wherein the first pass-through convolutional layer is a 1x1 convolutional layer.
4. The neural network system of claim 1, wherein the first average pooling stack comprises an average pooling layer followed by a 1x1 convolutional layer.
5. The neural network system of claim 1, wherein the first stack comprises a 1x1 convolutional layer followed by a 3x3 convolutional layer.
6. The neural network system of claim 1, wherein the first sub-network is configured to combine the first module outputs generated by the plurality of first modules to generate a first sub-network output of the first sub-network.
7. The neural network system of claim 1, wherein the first sub-network receives an input of 35x35x384, and each first module generates an output of 35x35x384.
8. The neural network system of claim 1, wherein the second sub-network comprises seven second modules.
9. The neural network system of claim 1, wherein the second pass-through convolutional layer is a 1x1 convolutional layer.
10. The neural network system of claim 1, wherein the second average pooling stack comprises an average pooling layer followed by a 1x1 convolutional layer.
11. The neural network system of claim 1, wherein the fourth stack comprises a 1x1 convolutional layer, followed by a 1x7 convolutional layer, followed by a 7x1 convolutional layer, followed by a 1x7 convolutional layer, followed by a 7x1 convolutional layer.
12. The neural network system of claim 1, wherein the second sub-network is configured to combine the second module outputs generated by the plurality of second modules to generate a second sub-network output of the second sub-network.
13. The neural network system of claim 1, wherein the second sub-network receives a 17x17x1024 input and each second module generates a 17x17x1024 output.
14. A neural network system implemented by one or more computers, wherein the neural network system is configured to receive an input image and to generate a classification output for the input image, and wherein the neural network system comprises:
a plurality of sub-networks arranged on top of each other in a stack, wherein each sub-network is configured to process a sub-network input to generate a sub-network output and to provide the sub-network output as an input to another sub-network above the sub-network in the stack, and wherein the plurality of sub-networks comprises:
a first sub-network comprising a plurality of first modules, each first module comprising:
a first pass-through convolutional layer configured to process a sub-network input of the first sub-network to generate a first pass-through output;
a first average pooling stack of neural network layers, wherein the layers in the first average pooling stack are configured to collectively process the sub-network input of the first sub-network to generate a first average pooling output;
a first stack of convolutional neural network layers, wherein the layers in the first stack are configured to collectively process the sub-network input of the first sub-network to generate a first stack output;
a second stack of convolutional neural network layers, wherein the second stack comprises a 1x1 convolutional layer, followed by a 3x3 convolutional layer, followed by a 3x3 convolutional layer, and wherein the layers in the second stack are configured to collectively process the sub-network input of the first sub-network to generate a second stack output; and
a first concatenation layer configured to concatenate the first pass-through output, the first average pooling output, the first stack output, and the second stack output to generate a first module output of the first module; and
a third sub-network comprising a plurality of third modules, each third module comprising:
a third pass-through convolutional layer configured to process a sub-network input of the third sub-network to generate a third pass-through output;
a third average pooling stack of neural network layers, wherein the layers in the third average pooling stack are configured to collectively process the sub-network input of the third sub-network to generate a third average pooling output;
a first set of convolutional neural network layers, wherein the first set comprises a 1x1 convolutional layer, a 1x3 convolutional layer, and a 3x1 convolutional layer, and wherein the layers in the first set are configured to collectively process the sub-network input of the third sub-network to generate a first set of outputs;
a second set of convolutional neural network layers, wherein the layers in the second set are configured to collectively process the sub-network input of the third sub-network to generate a second set of outputs; and
a third concatenation layer configured to concatenate the third pass-through output, the third average pooling output, the first set of outputs, and the second set of outputs to generate a third module output of the third module.
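The third module's two sets each fork into parallel 1x3 and 3x1 convolutions whose outputs are concatenated within the set (claims 23 through 25). A sketch in the same style, reusing the hypothetical ConvBranch helper above; the channel widths are again assumptions, chosen so that an 8x8x1536 input yields an 8x8x1536 output (claim 27).

class ThirdModule(nn.Module):
    def __init__(self, in_ch=1536):
        super().__init__()
        self.pass_through = ConvBranch(in_ch, [(256, 1, 1)])
        self.avg_pool = nn.Sequential(
            nn.AvgPool2d(3, stride=1, padding=1), ConvBranch(in_ch, [(256, 1, 1)]))
        # first set: a 1x1 conv whose output feeds parallel 1x3 and 3x1 convs (claim 23)
        self.set1_in = ConvBranch(in_ch, [(384, 1, 1)])
        self.set1_1x3 = ConvBranch(384, [(256, 1, 3)])
        self.set1_3x1 = ConvBranch(384, [(256, 3, 1)])
        # second set: the fifth stack (1x1 -> 1x3 -> 3x1, claim 25), then the same fork
        self.set2_in = ConvBranch(in_ch, [(384, 1, 1), (448, 1, 3), (512, 3, 1)])
        self.set2_1x3 = ConvBranch(512, [(256, 1, 3)])
        self.set2_3x1 = ConvBranch(512, [(256, 3, 1)])

    def forward(self, x):  # 256 + 256 + (256 + 256) + (256 + 256) = 1536 channels
        s1, s2 = self.set1_in(x), self.set2_in(x)
        return torch.cat([self.pass_through(x), self.avg_pool(x),
                          self.set1_1x3(s1), self.set1_3x1(s1),
                          self.set2_1x3(s2), self.set2_3x1(s2)], dim=1)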
15. The neural network system of claim 14, wherein the first sub-network includes four first modules.
16. The neural network system of claim 14, wherein the third sub-network includes three third modules.
17. The neural network system of claim 14, wherein at least one of the first pass-through convolutional layer or the third pass-through convolutional layer is a 1x1 convolutional layer.
18. The neural network system of claim 14, wherein the first average pooling stack comprises an average pooling layer followed by a 1x1 convolutional layer.
19. The neural network system of claim 14, wherein the first stack comprises a 1x1 convolutional layer followed by a 3x3 convolutional layer.
20. The neural network system of claim 14, wherein the first sub-network is configured to combine the first module outputs generated by the plurality of first modules to generate a first sub-network output of the first sub-network.
21. The neural network system of claim 14, wherein the first sub-network receives a 35x35x384 input and each first module generates a 35x35x384 output.
22. The neural network system of claim 14, wherein the third average pooling stack comprises an average pooling layer followed by a 1x1 convolutional layer.
23. The neural network system of claim 14, wherein:
the 1x1 convolutional layer in the first set is configured to process the sub-network input of the third sub-network to generate a first intermediate output;
the 1x3 convolutional layer in the first set is configured to process the first intermediate output to generate a second intermediate output;
the 3x1 convolutional layer in the first set is configured to process the first intermediate output to generate a third intermediate output; and
wherein the first set further comprises a first set concatenation layer configured to concatenate the second intermediate output and the third intermediate output to generate the first set of outputs.
24. The neural network system of claim 14, wherein the second set comprises:
a fifth stack of convolutional layers configured to process the sub-network input of the third sub-network to generate a fifth stack output;
a 1x3 convolutional layer configured to process the fifth stack output to generate a fourth intermediate output;
a 3x1 convolutional layer configured to process the fifth stack output to generate a fifth intermediate output; and
a second set concatenation layer configured to concatenate the fourth intermediate output and the fifth intermediate output to generate the second set of outputs.
25. The neural network system of claim 24, wherein the fifth stack comprises a 1x1 convolutional layer, followed by a 1x3 convolutional layer, followed by a 3x1 convolutional layer.
26. The neural network system of claim 14, wherein the third sub-network is configured to combine the third module outputs generated by the plurality of third modules to generate a third sub-network output of the third sub-network.
27. The neural network system of claim 14, wherein the third sub-network receives an 8x8x1536 input and each third module generates an 8x8x1536 output.
28. A neural network system implemented by one or more computers, wherein the neural network system is configured to receive an input image and to generate a classification output for the input image, and wherein the neural network system comprises:
a plurality of sub-networks arranged on top of each other in a stack, wherein each sub-network is configured to process a sub-network input to generate a sub-network output and to provide the sub-network output as an input to another sub-network above the sub-network in the stack, and wherein the plurality of sub-networks comprises:
a first sub-network according to any of claims 1-7;
a second sub-network according to any of claims 1-13; and
a third sub-network according to any of claims 14-27.
29. The neural network system of claim 28, wherein the neural network system further comprises:
a trunk sub-network below the first sub-network, the second sub-network and the third sub-network in the stack, wherein the trunk sub-network is configured to:
receive the image; and
process the image to generate a trunk sub-network output.
30. The neural network system of claim 28, wherein the neural network system further comprises:
a first reduction sub-network between the first sub-network and the second sub-network in the stack.
31. The neural network system of claim 28, wherein the neural network system further comprises:
a second reduction sub-network between the second sub-network and the third sub-network in the stack.
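Claims 28 through 31 recite the overall topology: a trunk sub-network at the bottom of the stack, the three module sub-networks above it, and reduction sub-networks between them. The following schematic composition reuses the module classes sketched above; the sequential chaining, the classification head, and the trunk/reduction placeholders are assumptions, since claims 29-31 do not recite those internals, while the module counts follow claims 2, 8, and 16.

def build_network(trunk, reduction1, reduction2, num_classes=1000):
    # trunk:      image -> 35x35x384 feature map (claims 7 and 29)
    # reduction1: 35x35x384 -> 17x17x1024 (claims 7, 13, and 30)
    # reduction2: 17x17x1024 -> 8x8x1536 (claims 13, 27, and 31)
    return nn.Sequential(
        trunk,
        *[FirstModule(384) for _ in range(4)],    # four first modules (claim 2)
        reduction1,
        *[SecondModule(1024) for _ in range(7)],  # seven second modules (claim 8)
        reduction2,
        *[ThirdModule(1536) for _ in range(3)],   # three third modules (claim 16)
        nn.AdaptiveAvgPool2d(1),                  # hypothetical classification head
        nn.Flatten(),
        nn.Linear(1536, num_classes),
    )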
32. One or more non-transitory storage media encoded with instructions that, when executed by one or more computers, cause the one or more computers to implement the neural network system of any one of claims 1-26.
33. A neural network system implemented by one or more computers, wherein the neural network system is configured to receive an input image and to generate a classification output for the input image, and wherein the neural network system comprises:
a plurality of sub-networks arranged on top of each other in a stack, wherein each sub-network is configured to process a sub-network input to generate a sub-network output and to provide the sub-network output as an input to another sub-network above the sub-network in the stack, and wherein the plurality of sub-networks comprises:
a first residual sub-network comprising a plurality of first residual modules, each first residual module comprising:
a first sub-module comprising:
a pass-through convolutional layer configured to process a sub-network input of the first residual sub-network to generate a pass-through output;
one or more sets of neural network layers, each of the one or more sets configured to process the sub-network input of the first residual sub-network to generate a respective set output; and
a filter expansion layer configured to generate an expanded output by scaling up the dimensionality of the pass-through output and each set output;
a summing layer configured to generate a summed output from the sub-network input of the first residual sub-network and the expanded output; and
an activation function layer configured to apply an activation function to the summed output to generate a first residual module output of the first residual module.
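Claim 33 recites the residual variant of the module: the branch outputs are concatenated, a 1x1 "filter expansion" convolution scales the depth back up to the input depth, and the result is added to the module input before a single activation. Below is a sketch reusing the hypothetical ConvBranch helper; the pass-through width, the branch specifications, and the default scale are assumptions, with scale implementing the optional scaling of claim 37.

class ResidualModule(nn.Module):
    def __init__(self, in_ch, branch_specs, scale=1.0):
        super().__init__()
        self.pass_through = ConvBranch(in_ch, [(32, 1, 1)])
        self.branches = nn.ModuleList(
            [ConvBranch(in_ch, spec) for spec in branch_specs])
        concat_ch = 32 + sum(spec[-1][0] for spec in branch_specs)
        # filter expansion layer: a 1x1 convolution over the concatenated
        # branch outputs that restores the input depth (claim 35)
        self.expand = nn.Conv2d(concat_ch, in_ch, 1)
        self.scale = scale                 # optional scaling (claim 37)
        self.act = nn.ReLU(inplace=True)   # ReLU activation (claim 38)

    def forward(self, x):
        outs = [self.pass_through(x)] + [branch(x) for branch in self.branches]
        expanded = self.expand(torch.cat(outs, dim=1))
        # summing layer: add the (scaled) expanded output to the module input
        return self.act(x + self.scale * expanded)

# Hypothetical usage over a 35x35x384 input:
# module = ResidualModule(384, [[(32, 1, 1), (32, 3, 3)],
#                               [(32, 1, 1), (32, 3, 3), (32, 3, 3)]], scale=0.2)

Because the expansion restores the input depth, the summation is well defined and the module can be stacked repeatedly without changing the feature-map shape.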
34. The neural network system of claim 33, wherein the pass-through convolutional layer is a 1x1 convolutional layer.
35. The neural network system of claim 33, wherein the filter expansion layer is configured to receive the pass-through output and the set outputs, and to apply a 1x1 convolution to the pass-through output and the set outputs to generate the expanded output.
36. The neural network system of claim 33, wherein the summing layer is configured to:
sum the sub-network input of the first residual sub-network and the expanded output to generate the summed output.
37. The neural network system of claim 33, wherein the summing layer is configured to:
scale the expanded output to generate a scaled expanded output; and
sum the sub-network input of the first residual sub-network and the scaled expanded output to generate the summed output.
38. The neural network system of claim 33, wherein the activation function is a rectified linear unit (ReLU) activation function.
39. The neural network system of claim 33, wherein the one or more sets of neural network layers comprise a first set that is a stack of a plurality of convolutional neural network layers.
40. The neural network system of claim 39, wherein the one or more sets of neural network layers further comprise a second set, the second set being a different stack of a plurality of convolutional neural network layers.
41. The neural network system of claim 33, wherein the first residual sub-network is configured to:
combine the first residual module outputs generated by the plurality of first residual modules to generate a first residual sub-network output of the first residual sub-network.
42. One or more non-transitory storage media encoded with instructions that, when executed by one or more computers, cause the one or more computers to implement the neural network system of any one of claims 33-41.
CN201680084514.7A 2016-02-18 2016-12-29 Image classification neural network Active CN108885713B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111545570.5A CN114386567A (en) 2016-02-18 2016-12-29 Image classification neural network

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US201662297101P 2016-02-18 2016-02-18
US62/297,101 2016-02-18
PCT/US2016/069279 WO2017142629A1 (en) 2016-02-18 2016-12-29 Image classification neural networks

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202111545570.5A Division CN114386567A (en) 2016-02-18 2016-12-29 Image classification neural network

Publications (2)

Publication Number Publication Date
CN108885713A CN108885713A (en) 2018-11-23
CN108885713B true CN108885713B (en) 2021-12-24

Family

ID=57822133

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201680084514.7A Active CN108885713B (en) 2016-02-18 2016-12-29 Image classification neural network
CN202111545570.5A Pending CN114386567A (en) 2016-02-18 2016-12-29 Image classification neural network

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202111545570.5A Pending CN114386567A (en) 2016-02-18 2016-12-29 Image classification neural network

Country Status (5)

Country Link
US (3) US10460211B2 (en)
CN (2) CN108885713B (en)
AU (1) AU2016393639B2 (en)
DE (1) DE202016008658U1 (en)
WO (1) WO2017142629A1 (en)

Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110110843B (en) 2014-08-29 2020-09-25 谷歌有限责任公司 Method and system for processing images
US10643124B2 (en) * 2016-08-12 2020-05-05 Beijing Deephi Intelligent Technology Co., Ltd. Method and device for quantizing complex artificial neural network
US10802992B2 (en) 2016-08-12 2020-10-13 Xilinx Technology Beijing Limited Combining CPU and special accelerator for implementing an artificial neural network
US11151361B2 (en) * 2017-01-20 2021-10-19 Intel Corporation Dynamic emotion recognition in unconstrained scenarios
CN110337644A (en) 2017-02-23 2019-10-15 谷歌有限责任公司 For assisting virologist to identify the method and system of the tumour cell in the organization chart picture of amplification
EP3586181B1 (en) 2017-06-13 2021-09-22 Google LLC Augmented reality microscope for pathology
JP6805984B2 (en) * 2017-07-06 2020-12-23 株式会社デンソー Convolutional neural network
CN108156519B (en) * 2017-12-25 2020-12-11 深圳Tcl新技术有限公司 Image classification method, television device and computer-readable storage medium
EP3625765B1 (en) * 2017-12-29 2024-03-20 Leica Biosystems Imaging, Inc. Processing of histology images with a convolutional neural network to identify tumors
US11379516B2 (en) 2018-03-29 2022-07-05 Google Llc Similar medical image search
CN110110120B (en) * 2018-06-11 2021-05-25 北方工业大学 Image retrieval method and device based on deep learning
US10380997B1 (en) 2018-07-27 2019-08-13 Deepgram, Inc. Deep learning internal state index-based search and classification
US11994664B2 (en) 2019-01-09 2024-05-28 Google Llc Augmented reality laser capture microdissection machine
US10936160B2 (en) 2019-01-11 2021-03-02 Google Llc System, user interface and method for interactive negative explanation of machine-learning localization models in health care applications
CN111626400B (en) * 2019-02-28 2024-03-15 佳能株式会社 Training and application method and device for multi-layer neural network model and storage medium
CN113748007A (en) 2019-03-13 2021-12-03 数字标记公司 Digital marking of recycled articles
US11126890B2 (en) * 2019-04-18 2021-09-21 Adobe Inc. Robust training of large-scale object detectors with a noisy dataset
US20200394458A1 (en) * 2019-06-17 2020-12-17 Nvidia Corporation Weakly-supervised object detection using one or more neural networks
CN110738622A (en) * 2019-10-17 2020-01-31 温州大学 Lightweight neural network single image defogging method based on multi-scale convolution
CN111601116B (en) * 2020-05-15 2021-05-14 浙江盘石信息技术股份有限公司 Live video advertisement insertion method and system based on big data
CN111947599B (en) * 2020-07-24 2022-03-22 南京理工大学 Three-dimensional measurement method based on learning fringe phase retrieval and speckle correlation
US20220331841A1 (en) 2021-04-16 2022-10-20 Digimarc Corporation Methods and arrangements to aid recycling
CN113436200B (en) * 2021-07-27 2023-05-30 西安电子科技大学 RGB image classification method based on lightweight segmentation convolutional network
CN113807363B (en) * 2021-09-08 2024-04-19 西安电子科技大学 Image classification method based on lightweight residual error network
US11978174B1 (en) * 2022-03-28 2024-05-07 Amazon Technologies, Inc. Virtual shoe try-on
US11804023B1 (en) * 2022-07-11 2023-10-31 Stylitics, Inc. Systems and methods for providing a virtual dressing room and a virtual stylist
WO2024015385A1 (en) 2022-07-14 2024-01-18 Digimarc Corporation Methods and arrangements to utilize end-of-life data generated during recycling and waste sortation for counterfeit deterrence and other actions

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9202144B2 (en) * 2013-10-30 2015-12-01 Nec Laboratories America, Inc. Regionlets with shift invariant neural patterns for object detection
US9530047B1 (en) * 2013-11-30 2016-12-27 Beijing Sensetime Technology Development Co., Ltd. Method and system for face image recognition

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Going deeper with convolutions; Christian Szegedy et al.; arXiv:1409.4842v1 [cs.CV]; 2014-09-17; pp. 1-12 *
Rethinking the Inception Architecture for Computer Vision; Christian Szegedy et al.; arXiv:1512.00567v3; 2015-12-11; pp. 1-10 *

Also Published As

Publication number Publication date
US10460211B2 (en) 2019-10-29
BR112018016884A2 (en) 2019-02-05
WO2017142629A1 (en) 2017-08-24
US20170243085A1 (en) 2017-08-24
US12125257B2 (en) 2024-10-22
AU2016393639B2 (en) 2019-11-28
CN108885713A (en) 2018-11-23
US20190377985A1 (en) 2019-12-12
US11062181B2 (en) 2021-07-13
US20210334605A1 (en) 2021-10-28
AU2016393639A1 (en) 2018-09-06
CN114386567A (en) 2022-04-22
DE202016008658U1 (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108885713B (en) Image classification neural network
US11462035B2 (en) Processing images using deep neural networks
US11775804B2 (en) Progressive neural networks
CN110088773B (en) Image processing neural network with separable convolutional layers
CN107969156B (en) Neural network for processing graphics data
CN110889416B (en) Salient object detection method based on cascade improved network
CN110728351A (en) Data processing method, related device and computer storage medium
CN109313718B (en) Systems, methods, and media for predicting a future state of an interactive object
Douch et al. Split Edge-Cloud Neural Networks For Better Adversarial Robustness
JP2024021697A (en) neural network system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant