Jain et al., 2020 - Google Patents
RxNN: A framework for evaluating deep neural networks on resistive crossbars
- Document ID: 6087572394379771502
- Authors: Jain S; Sengupta A; Roy K; Raghunathan A
- Publication year: 2020
- Publication venue: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Snippet
Resistive crossbars designed with nonvolatile memory devices have emerged as promising building blocks for deep neural network (DNN) hardware, due to their ability to compactly and efficiently realize vector-matrix multiplication (VMM), the dominant computational kernel …
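The snippet describes the core operation such frameworks model: a weight matrix is mapped onto device conductances, and applying input voltages to the crossbar rows produces column currents proportional to the vector-matrix product. A minimal sketch of that ideal (non-parasitic) crossbar model, assuming a differential conductance mapping with an illustrative conductance range; this is not the RxNN implementation itself:

```python
import numpy as np

# Ideal crossbar VMM: weights mapped to conductances, output currents
# follow Ohm's law + Kirchhoff's current law, i.e. i = v @ G.
rng = np.random.default_rng(0)
W = rng.uniform(-1, 1, size=(4, 3))   # signed weight matrix (4 inputs, 3 outputs)
g_min, g_max = 1e-6, 1e-4             # assumed device conductance range (siemens)

# Differential mapping: positive and negative weights go to separate columns.
w_max = np.abs(W).max()
scale = (g_max - g_min) / w_max
G_pos = g_min + scale * np.clip(W, 0, None)
G_neg = g_min + scale * np.clip(-W, 0, None)

v = rng.uniform(0, 0.2, size=4)       # input voltages (volts)

# Column currents of each polarity; their difference encodes W^T v.
# The g_min offset cancels in the subtraction.
i_out = v @ G_pos - v @ G_neg

# Agrees with the ideal digital VMM up to the conductance scale factor.
assert np.allclose(i_out, scale * (v @ W))
```

Non-idealities such as wire parasitics, driver/sense resistances, and device variations perturb this ideal result; evaluating their impact on DNN accuracy is precisely what crossbar evaluation frameworks like the one above are built for.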
Classifications
- G06N3/0635 — Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons, using analogue electronic means
- G06F17/5036 — Computer-aided design using simulation for analog modelling, e.g. for circuits, SPICE programs, direct methods, relaxation methods
- G06F17/10 — Complex mathematical operations
- G06F2217/78 — Power analysis and optimization (indexing scheme relating to computer-aided design)
- G06F17/30 — Information retrieval; database structures therefor; file system structures therefor
- G06N99/005 — Learning machines, i.e. computers in which a programme is changed according to experience gained by the machine itself during a complete run
- G06F7/58 — Random or pseudo-random number generators
- G06F2217/16 — Numerical modeling (indexing scheme relating to computer-aided design)
- G06G7/12 — Analogue computers: arrangements for performing computing operations, e.g. operational amplifiers
- G06F19/708 — Chemoinformatics: data visualisation, e.g. molecular structure representations, graphics generation, display of maps or networks
- G06N5/00 — Computer systems utilising knowledge based models
- G06J1/00 — Hybrid computing arrangements
Similar Documents

Publication | Title
---|---
Jain et al. | RxNN: A framework for evaluating deep neural networks on resistive crossbars
Cai et al. | Power-efficient combinatorial optimization using intrinsic noise in memristor Hopfield neural networks
Yi et al. | Activity-difference training of deep neural networks using memristor crossbars
Peng et al. | DNN+NeuroSim V2.0: An end-to-end benchmarking framework for compute-in-memory accelerators for on-chip training
Xia et al. | Stuck-at fault tolerance in RRAM computing systems
Xia et al. | MNSIM: Simulation platform for memristor-based neuromorphic computing system
Chakraborty et al. | GENIEx: A generalized approach to emulating non-ideality in memristive xbars using neural networks
Li et al. | RRAM-based analog approximate computing
Roy et al. | TxSim: Modeling training of deep neural networks on resistive crossbar systems
Zhang et al. | Design guidelines of RRAM based neural-processing-unit: A joint device-circuit-algorithm analysis
Giacomin et al. | A robust digital RRAM-based convolutional block for low-power image processing and learning applications
Du et al. | An analog neural network computing engine using CMOS-compatible charge-trap-transistor (CTT)
Yue et al. | STICKER-IM: A 65 nm computing-in-memory NN processor using block-wise sparsity optimization and inter/intra-macro data reuse
Kim et al. | Input voltage mapping optimized for resistive memory-based deep neural network hardware
Zhang et al. | Handling stuck-at-fault defects using matrix transformation for robust inference of DNNs
Onizawa et al. | In-hardware training chip based on CMOS invertible logic for machine learning
Ansari et al. | PHAX: Physical characteristics aware ex-situ training framework for inverter-based memristive neuromorphic circuits
Nourazar et al. | Code acceleration using memristor-based approximate matrix multiplier: Application to convolutional neural networks
Greenberg-Toledo et al. | Supporting the momentum training algorithm using a memristor-based synapse
Krishnan et al. | Exploring model stability of deep neural networks for reliable RRAM-based in-memory acceleration
Cao et al. | A non-idealities aware software–hardware co-design framework for edge-AI deep neural network implemented on memristive crossbar
Athreyas et al. | Memristor-CMOS analog coprocessor for acceleration of high-performance computing applications
Bhattacharya et al. | Computing high-degree polynomial gradients in memory
Shchanikov | Methodology for hardware-in-the-loop simulation of memristive neuromorphic systems
Nguyen et al. | Fully analog ReRAM neuromorphic circuit optimization using DTCO simulation framework