A large-scale architecture for restricted Boltzmann machines
Kim et al., 2010 (Google Patents record)
- Document ID: 2120217413522357242
- Authors: Kim S; McMahon P; Olukotun K
- Publication year: 2010
- Publication venue: 2010 18th IEEE Annual International Symposium on Field-Programmable Custom Computing Machines
Snippet
Deep Belief Nets (DBNs) are an emerging application in the machine learning domain, which use Restricted Boltzmann Machines (RBMs) as their basic building block. Although small scale DBNs have shown great potential, the computational cost of RBM training has …
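The snippet refers to RBM training as the computational bottleneck that the paper's architecture targets. As background (not taken from the paper itself), the standard training kernel is one step of contrastive divergence (CD-1), which is dominated by the dense matrix products the classifications below (SIMD arrays, interconnection networks) exist to accelerate. A minimal NumPy sketch for a binary RBM, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, W, b, c, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM.

    v0: (batch, n_visible) binary data
    W:  (n_visible, n_hidden) weight matrix
    b:  (n_visible,) visible biases
    c:  (n_hidden,) hidden biases
    """
    # Positive phase: hidden probabilities given the data, then a sample.
    ph0 = sigmoid(v0 @ W + c)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step down to the visible layer and back up.
    pv1 = sigmoid(h0 @ W.T + b)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    ph1 = sigmoid(v1 @ W + c)
    # Update from the difference of data and model correlations.
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - v1.T @ ph1) / batch
    b += lr * (v0 - v1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return W, b, c

# Toy usage: 6 visible units, 3 hidden units, batch of 8.
n_v, n_h = 6, 3
W = 0.01 * rng.standard_normal((n_v, n_h))
b = np.zeros(n_v)
c = np.zeros(n_h)
data = (rng.random((8, n_v)) < 0.5).astype(float)
W, b, c = cd1_step(data, W, b, c)
```

Each update is two matrix-matrix products per phase, which is why scaling to large layer sizes motivates the parallel hardware designs classified under the CPC codes that follow.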
Classifications
- G06F15/8023—Two dimensional arrays, e.g. mesh, torus (SIMD multiprocessor architectures)
- G06N3/063—Physical realisation of neural networks, neurons or parts of neurons using electronic means
- G06F15/173—Interprocessor communication using an interconnection network, e.g. matrix, shuffle, pyramid, star, snowflake
- G06F15/78—Architectures of general purpose stored programme computers comprising a single central processing unit
- G06N3/04—Neural network architectures, e.g. interconnection topology
- G06F15/167—Interprocessor communication using a common memory, e.g. mailbox
- G06F9/5061—Partitioning or combining of resources
- G06N3/10—Simulation of neural networks on general purpose computers
- G06N3/08—Learning methods
- G06F17/50—Computer-aided design
- G06F17/10—Complex mathematical operations
- G06N99/00—Subject matter not provided for in other groups of this subclass
Similar Documents
| Publication | Title |
|---|---|
| Kim et al. | A large-scale architecture for restricted Boltzmann machines |
| Dai et al. | NeST: A neural network synthesis tool based on a grow-and-prune paradigm |
| Ankit et al. | ReSparc: A reconfigurable and energy-efficient architecture with memristive crossbars for deep spiking neural networks |
| Zaman et al. | Custom hardware architectures for deep learning on portable devices: a review |
| Chen et al. | NoC-based DNN accelerator: A future design paradigm |
| US20200026992A1 | Hardware neural network conversion method, computing device, compiling method and neural network software and hardware collaboration system |
| Coates et al. | Deep learning with COTS HPC systems |
| Forrest et al. | Implementing neural network models on parallel computers |
| JP2019522850A | Accelerator for deep neural networks |
| Hong et al. | Multi-dimensional parallel training of Winograd layer on memory-centric architecture |
| Chen et al. | EMAT: an efficient multi-task architecture for transfer learning using ReRAM |
| Hanif et al. | Resistive crossbar-aware neural network design and optimization |
| Shahhosseini et al. | Partition pruning: Parallelization-aware pruning for dense neural networks |
| Vannel et al. | SCALP: self-configurable 3-D cellular adaptive platform |
| Pietron et al. | Parallel implementation of spatial pooler in hierarchical temporal memory |
| Rice et al. | Scaling analysis of a neocortex inspired cognitive model on the Cray XD1 |
| Chen et al. | A survey of intelligent chip design research based on spiking neural networks |
| Martinez-Corral et al. | A fully configurable and scalable neural coprocessor IP for SoC implementations of machine learning applications |
| Kasabov et al. | From von Neumann, John Atanasoff and ABC to neuromorphic computation and the NeuCube spatio-temporal data machine |
| CN113645282A | Deep learning method based on server cluster |
| Duranton et al. | A general purpose digital architecture for neural network simulations |
| Sugiarto et al. | Understanding a deep learning technique through a neuromorphic system: a case study with SpiNNaker neuromorphic platform |
| Liang et al. | Static hardware task placement on multi-context FPGA using hybrid genetic algorithm |
| Kolinummi et al. | PARNEU: general-purpose partial tree computer |
| CN116349244A | Neural processing unit synchronization system and method |