Li et al., 2022 - Google Patents
Mixed‐Precision Continual Learning Based on Computational Resistance Random Access Memory
- Document ID
- 393633518180477744
- Author
- Li Y
- Zhang W
- Xu X
- He Y
- Dong D
- Jiang N
- Wang F
- Guo Z
- Wang S
- Dou C
- Liu Y
- Wang Z
- Shang D
- Publication year
- 2022
- Publication venue
- Advanced Intelligent Systems
Snippet
Artificial neural networks have achieved remarkable success in the field of artificial intelligence. However, they suffer from catastrophic forgetting when dealing with continual learning problems, i.e., the loss of previously learned knowledge upon learning new …
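The snippet above describes catastrophic forgetting. The short NumPy sketch below is purely illustrative and uses a toy setup invented for this example (it is not the paper's RRAM-based mixed-precision method): a single linear classifier trained sequentially on two conflicting toy tasks loses its accuracy on the first task after fitting the second.

```python
# Illustrative sketch of catastrophic forgetting on a toy 2-D problem.
# Assumption: the tasks, model, and hyperparameters here are made up for
# the example; they do not come from the cited paper.
import numpy as np

rng = np.random.default_rng(0)

def make_task(center0, center1, n=200):
    """Two Gaussian blobs: points near center0 get label 0, near center1 label 1."""
    x0 = rng.normal(loc=center0, scale=0.5, size=(n, 2))
    x1 = rng.normal(loc=center1, scale=0.5, size=(n, 2))
    X = np.vstack([x0, x1])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return X, y

def train(w, b, X, y, lr=0.1, epochs=200):
    """Full-batch gradient descent on the logistic (cross-entropy) loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)        # gradient w.r.t. weights
        b -= lr * np.mean(p - y)                # gradient w.r.t. bias
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0) == (y == 1)))

# Task A and task B require opposite signs of the first weight, so fitting
# task B overwrites the solution learned for task A.
XA, yA = make_task(center0=[-2.0, 0.0], center1=[2.0, 0.0])
XB, yB = make_task(center0=[2.0, 3.0], center1=[-2.0, 3.0])

w, b = np.zeros(2), 0.0
w, b = train(w, b, XA, yA)
print("task A accuracy after learning A:", accuracy(w, b, XA, yA))

w, b = train(w, b, XB, yB)  # continue training on task B only
print("task A accuracy after learning B:", accuracy(w, b, XA, yA))  # forgotten
print("task B accuracy after learning B:", accuracy(w, b, XB, yB))
```

With this construction the first printout is close to 1.0 and the second close to 0.0: the weights that solved task A are overwritten while fitting task B, which is the failure mode the paper's continual-learning approach is aimed at.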
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
- G06N3/0635—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/30—Information retrieval; Database structures therefor; File system structures therefor
- G06F17/30286—Information retrieval; Database structures therefor; File system structures therefor in structured data stores
- G06F17/30312—Storage and indexing structures; Management thereof
- G06F17/30321—Indexing structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/04—Architectures, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/02—Computer systems based on biological models using neural network models
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N99/00—Subject matter not provided for in other groups of this subclass
- G06N99/005—Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06N—COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computer systems based on biological models
- G06N3/12—Computer systems based on biological models using genetic models
- G06N3/126—Genetic algorithms, i.e. information processing using digital simulations of the genetic system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F17/00—Digital computing or data processing equipment or methods, specially adapted for specific functions
- G06F17/50—Computer-aided design
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F7/00—Methods or arrangements for processing data by operating upon the order or content of the data handled
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRICAL DIGITAL DATA PROCESSING
- G06F19/00—Digital computing or data processing equipment or methods, specially adapted for specific applications
- G06F19/10—Bioinformatics, i.e. methods or systems for genetic or protein-related data processing in computational molecular biology
Similar Documents
Publication | Title |
---|---|
Yao et al. | Fully hardware-implemented memristor convolutional neural network |
Li et al. | Long short-term memory networks in memristor crossbar arrays |
Jaiswal et al. | 8T SRAM cell as a multibit dot-product engine for beyond von Neumann computing |
Jung et al. | A crossbar array of magnetoresistive memory devices for in-memory computing |
Yi et al. | Activity-difference training of deep neural networks using memristor crossbars |
Chen et al. | Multiply accumulate operations in memristor crossbar arrays for analog computing |
Nandakumar et al. | Mixed-precision deep learning based on computational memory |
Lim et al. | Adaptive learning rule for hardware-based deep neural networks using electronic synapse devices |
Li et al. | Analogue signal and image processing with large memristor crossbars |
Wang et al. | Integration and co-design of memristive devices and algorithms for artificial intelligence |
Li et al. | Mixed‐Precision Continual Learning Based on Computational Resistance Random Access Memory |
Agarwal et al. | Energy scaling advantages of resistive memory crossbar based computation and its application to sparse coding |
Mao et al. | Experimentally validated memristive memory augmented neural network with efficient hashing and similarity search |
Zidan et al. | Field-programmable crossbar array (FPCA) for reconfigurable computing |
Li et al. | In situ parallel training of analog neural network using electrochemical random-access memory |
Wang et al. | Efficient training of the memristive deep belief net immune to non‐idealities of the synaptic devices |
Ali et al. | RAMANN: in-SRAM differentiable memory computations for memory-augmented neural networks |
Pedretti et al. | Redundancy and analog slicing for precise in-memory machine learning—Part I: Programming techniques |
Pedretti et al. | Differentiable content addressable memory with memristors |
Zhou et al. | Energy‐Efficient Memristive Euclidean Distance Engine for Brain‐Inspired Competitive Learning |
Hoskins et al. | Streaming batch eigenupdates for hardware neural networks |
Wang et al. | SSM: a high-performance scheme for in situ training of imprecise memristor neural networks |
Li et al. | Restricted Boltzmann Machines Implemented by Spin–Orbit Torque Magnetic Tunnel Junctions |
Yu et al. | Distributed in-memory computing on binary memristor-crossbar for machine learning |
Liu et al. | Analog content-addressable memory from complementary FeFETs |