

Design of reliable DNN accelerator with un-reliable ReRAM

Long et al., 2019

Document ID: 1905363083166362988
Authors: Long Y, She X, Mukhopadhyay S
Publication year: 2019
Publication venue: 2019 Design, Automation & Test in Europe Conference & Exhibition (DATE)

Snippet

This paper presents an algorithmic approach to design reliable ReRAM based Processing-in-Memory (PIM) architecture for Deep Neural Network (DNN) acceleration under intrinsic stochastic behavior of ReRAM devices. We employ the dynamical fixed point (DFP) data …
Full text available at par.nsf.gov (PDF).
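
The snippet mentions a dynamic fixed point (DFP) data representation for DNN acceleration on intrinsically stochastic ReRAM devices, but the abstract is truncated, so the paper's exact scheme is not given here. The Python sketch below is only a rough illustration of the general idea: a per-tensor dynamic fixed-point quantizer combined with a simple multiplicative (log-normal) conductance-variation model applied to the programmed weights. The quantizer details, the noise model, and the sigma value are assumptions for illustration, not the authors' method.

    import numpy as np

    def dynamic_fixed_point(x, total_bits=8):
        """Quantize a tensor with a shared per-tensor exponent (dynamic fixed point).
        The fractional bit width is chosen from the largest magnitude so the range fits."""
        max_val = np.max(np.abs(x)) + 1e-12
        int_bits = max(int(np.ceil(np.log2(max_val))) + 1, 1)   # sign + integer part
        frac_bits = total_bits - int_bits
        scale = 2.0 ** frac_bits
        q = np.clip(np.round(x * scale), -2 ** (total_bits - 1), 2 ** (total_bits - 1) - 1)
        return q / scale

    def program_reram(weights, sigma=0.05, rng=None):
        """Model intrinsic ReRAM stochasticity as multiplicative log-normal conductance
        variation; sigma is a hypothetical device-variation parameter, not from the paper."""
        rng = np.random.default_rng() if rng is None else rng
        noise = rng.normal(0.0, sigma, size=weights.shape)
        return weights * np.exp(noise)   # sign-preserving perturbation of each cell

    # Toy layer: quantize, "program" onto a noisy crossbar, run a matrix-vector product.
    rng = np.random.default_rng(0)
    w = rng.standard_normal((64, 128)) * 0.1
    x = rng.standard_normal(128)
    w_q = dynamic_fixed_point(w, total_bits=8)
    w_dev = program_reram(w_q, sigma=0.05, rng=rng)
    print("output error vs. ideal:", np.linalg.norm(w_dev @ x - w @ x))

Running this prints the output error of a noisy crossbar matrix-vector product against the ideal full-precision result, i.e. the kind of accuracy degradation the paper's algorithmic approach aims to contain.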

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06N - COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computer systems based on biological models
    • G06N3/02 - Computer systems based on biological models using neural network models
    • G06N3/06 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/0635 - Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06N - COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computer systems based on biological models
    • G06N3/02 - Computer systems based on biological models using neural network models
    • G06N3/08 - Learning methods
    • G06N3/082 - Learning methods modifying the architecture, e.g. adding or deleting nodes or connections, pruning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06N - COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computer systems based on biological models
    • G06N3/02 - Computer systems based on biological models using neural network models
    • G06N3/04 - Architectures, e.g. interconnection topology
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06N - COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00 - Subject matter not provided for in other groups of this subclass
    • G06N99/005 - Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06N - COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computer systems based on biological models
    • G06N3/12 - Computer systems based on biological models using genetic models
    • G06N3/126 - Genetic algorithms, i.e. information processing using digital simulations of the genetic system
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10 - Complex mathematical operations
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/58 - Random or pseudo-random number generators
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50 - Computer-aided design
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11C - STATIC STORES
    • G11C11/00 - Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/56 - Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using storage elements with more than two stable states represented by steps, e.g. of voltage, current, phase, frequency
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06K - RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62 - Methods or arrangements for recognition using electronic means
    • G06K9/6217 - Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06K9/6232 - Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods
    • G06K9/6247 - Extracting features by transforming the feature space, e.g. multidimensional scaling; Mappings, e.g. subspace methods based on an approximation criterion, e.g. principal component analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06K - RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62 - Methods or arrangements for recognition using electronic means
    • G06K9/6267 - Classification techniques
    • G06K9/6268 - Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06K - RECOGNITION OF DATA; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
    • G06K9/62 - Methods or arrangements for recognition using electronic means
    • G06K9/6217 - Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06K9/6261 - Design or setup of recognition systems and techniques; Extraction of features in feature space; Clustering techniques; Blind source separation partitioning the feature space
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06N - COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 - Computer systems utilising knowledge based models
    • G06N5/04 - Inference methods or devices
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING; COUNTING
    • G06F - ELECTRICAL DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11C - STATIC STORES
    • G11C11/00 - Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
    • G11C11/21 - Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using electric elements

Similar Documents

Long et al. Design of reliable DNN accelerator with un-reliable ReRAM
Long et al. ReRAM-based processing-in-memory architecture for recurrent neural network acceleration
US10346347B2 (en) Field-programmable crossbar array for reconfigurable computing
US11741188B2 (en) Hardware accelerated discretized neural network
Chen et al. Multiply accumulate operations in memristor crossbar arrays for analog computing
Lin et al. DL-RSIM: A simulation framework to enable reliable ReRAM-based accelerators for deep learning
Kim et al. A reconfigurable digital neuromorphic processor with memristive synaptic crossbar for cognitive computing
Zidan et al. Field-programmable crossbar array (FPCA) for reconfigurable computing
Kang et al. Deep in-memory architectures in SRAM: An analog approach to approximate computing
Jain et al. Neural network accelerator design with resistive crossbars: Opportunities and challenges
Ravichandran et al. Artificial neural networks based on memristive devices
Kang et al. S-FLASH: A NAND flash-based deep neural network accelerator exploiting bit-level sparsity
Zhang et al. Array-level boosting method with spatial extended allocation to improve the accuracy of memristor based computing-in-memory chips
Mondal et al. Energy-efficient design of MTJ-based neural networks with stochastic computing
Park et al. Unlocking wordline-level parallelism for fast inference on RRAM-based DNN accelerator
Bennett et al. Contrasting advantages of learning with random weights and backpropagation in non-volatile memory neural networks
Zhang et al. Neural network training with stochastic hardware models and software abstractions
Zhou et al. Energy‐Efficient Memristive Euclidean Distance Engine for Brain‐Inspired Competitive Learning
Bhattacharjee et al. Efficiency-driven hardware optimization for adversarially robust neural networks
Chen PUFFIN: an efficient DNN training accelerator for direct feedback alignment in FeFET
Xu et al. Tri-HD: Energy-efficient on-chip learning with in-memory hyperdimensional computing
Cao et al. Parasitic-aware modelling for neural networks implemented with memristor crossbar array
Lele et al. Neuromorphic swarm on rram compute-in-memory processor for solving qubo problem
Fu et al. Cycle-to-cycle variation enabled energy efficient privacy preserving technology in ann
Ma et al. Non-volatile memory array based quantization-and noise-resilient LSTM neural networks