A limitation of gradient descent learning

Sum et al., 2019 (Google Patents)

Document ID: 6665364130076635085
Authors: Sum J; Leung C; Ho K
Publication year: 2019
Publication venue: IEEE Transactions on Neural Networks and Learning Systems

Snippet

Over decades, gradient descent has been applied to develop learning algorithms to train a neural network (NN). In this brief, a limitation of applying such an algorithm to train an NN with persistent weight noise is revealed. Let V(w) be the performance measure of an ideal NN. V …
Continue reading at ieeexplore.ieee.org (other versions)
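
The snippet states the result only at the level of the abstract. As a purely illustrative aid, and not the paper's construction or proof, the sketch below trains a single tanh neuron by gradient descent while additive Gaussian weight noise is re-drawn at every update; the data, the noise model, and all names (V, grad_V, train, sigma_b) are assumptions introduced here. It typically shows that the noisy-trained weights do not minimise the ideal measure V(w), which is the flavour of limitation the brief analyses.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data for a single tanh neuron: y = tanh(x . w_true) + small output noise.
    n, d = 2000, 3
    X = rng.normal(size=(n, d))
    w_true = np.array([1.5, -2.0, 0.5])
    y = np.tanh(X @ w_true) + 0.05 * rng.normal(size=n)

    def V(w):
        """Ideal performance measure: mean squared error of the noise-free network."""
        return np.mean((y - np.tanh(X @ w)) ** 2)

    def grad_V(w):
        """Gradient of V at w (standard backpropagation for one tanh neuron)."""
        z = np.tanh(X @ w)
        return -2.0 / n * X.T @ ((y - z) * (1.0 - z ** 2))

    def train(sigma_b, steps=20000, lr=0.05):
        """Gradient descent in which every gradient is evaluated at weights corrupted
        by persistent additive noise b ~ N(0, sigma_b^2 I), drawn fresh each step."""
        w = np.zeros(d)
        for _ in range(steps):
            b = rng.normal(scale=sigma_b, size=d)
            w -= lr * grad_V(w + b)  # the noise never averages away: it is re-drawn every step
        return w

    w_clean = train(sigma_b=0.0)
    w_noisy = train(sigma_b=0.5)
    print("V after noise-free training:", round(V(w_clean), 4))
    print("V after noisy training     :", round(V(w_noisy), 4))
    # The second value is typically larger: with persistent weight noise the update is
    # no longer, even on average, a descent direction of V, so the iterate settles near
    # the minimiser of a different, noise-dependent objective.

The nonlinearity matters for this toy picture: for a purely linear model the gradient is linear in the weights, so zero-mean additive weight noise perturbs the iterates without biasing the expected update, and the gap illustrated above largely disappears.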

Classifications

All classifications fall under CPC section G (Physics), class G06 (Computing; Calculating; Counting).

    • G06N: Computer systems based on specific computational models
        • G06N 3/0635: Physical realisation, i.e. hardware implementation, of neural networks, neurons or parts of neurons using analogue electronic means
        • G06N 3/082: Learning methods modifying the architecture, e.g. adding or deleting nodes or connections, pruning
        • G06N 3/0472: Architectures, e.g. interconnection topology, using probabilistic elements, e.g. p-RAMs, stochastic processors
        • G06N 3/049: Temporal neural nets, e.g. delay elements, oscillating neurons, pulsed inputs
        • G06N 3/0454: Architectures, e.g. interconnection topology, using a combination of multiple neural nets
        • G06N 3/126: Genetic algorithms, i.e. information processing using digital simulations of the genetic system
        • G06N 99/005: Learning machines, i.e. computers in which a programme is changed according to experience gained by the machine itself during a complete run
    • G06K: Recognition of data; presentation of data; record carriers; handling record carriers
        • G06K 9/6268: Classification techniques relating to the classification paradigm, e.g. parametric or non-parametric approaches
        • G06K 9/6217: Design or setup of recognition systems and techniques; extraction of features in feature space; clustering techniques; blind source separation
        • G06K 9/4604: Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes, intersections
    • G06F: Electrical digital data processing
        • G06F 7/38: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation

Similar Documents

Sum et al. A limitation of gradient descent learning
Yu et al. An overview of neuromorphic computing for artificial intelligence enabled hardware-based Hopfield neural network
Wilamowski et al. Improved computation for Levenberg–Marquardt training
De la Rosa et al. Randomized algorithms for nonlinear system identification with deep learning modification
Dundar et al. The effects of quantization on multilayer neural networks
Roy et al. Liquid state machine with dendritically enhanced readout for low-power, neuromorphic VLSI implementations
Sakemi et al. A supervised learning algorithm for multilayer spiking neural networks based on temporal coding toward energy-efficient VLSI processor design
Hu et al. Memristor crossbar based hardware realization of BSB recall function
Zhou et al. Discrete-time recurrent neural networks with complex-valued linear threshold neurons
Tang et al. A multilayer neural network merging image preprocessing and pattern recognition by integrating diffusion and drift memristors
Adhikari et al. Building cellular neural network templates with a hardware friendly learning algorithm
Goh et al. An augmented CRTRL for complex-valued recurrent neural networks
Wang et al. A time-domain analog weighted-sum calculation model for extremely low power VLSI implementation of multi-layer neural networks
Cho et al. An on-chip learning neuromorphic autoencoder with current-mode transposable memory read and virtual lookup table
Yamaguchi et al. An energy-efficient time-domain analog CMOS BinaryConnect neural network processor based on a pulse-width modulation approach
Singh et al. Multilayer feed forward neural networks for non-linear continuous bidirectional associative memory
Yeo et al. A hardware and energy-efficient online learning neural network with an RRAM crossbar array and stochastic neurons
Merkel et al. A stochastic learning algorithm for neuromemristive systems
Tripathi et al. Analog neuromorphic system based on multi input floating gate MOS neuron model
Nguyen et al. A low-power, high-accuracy with fully on-chip ternary weight hardware architecture for Deep Spiking Neural Networks
Bordanov et al. Simulation of calculation errors in memristive crossbars for artificial neural networks
Sum et al. Learning algorithm for Boltzmann machines with additive weight and bias noise
Smagulova et al. Who is the winner? Memristive-CMOS hybrid modules: CNN-LSTM versus HTM
Ho et al. Searching for minimal optimal neural networks
Quan et al. Training-free stuck-at fault mitigation for ReRAM-based deep learning accelerators