

An energy-efficient time-domain analog CMOS BinaryConnect neural network processor based on a pulse-width modulation approach

Yamaguchi et al., 2020

Document ID
5064645173526882636
Author
Yamaguchi M
Iwamoto G
Nishimura Y
Tamukoh H
Morie T
Publication year
2020
Publication venue
IEEE Access

Snippet

This paper proposes a time-domain analog calculation model based on a pulse-width modulation (PWM) approach for neural network calculations, including the weighted-sum (multiply-and-accumulate) calculation and the rectified-linear-unit operation. We also propose …
Continue reading at ieeexplore.ieee.org (PDF).

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/02 Computer systems based on biological models using neural network models
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06N3/0635 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means using analogue means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/02 Computer systems based on biological models using neural network models
    • G06N3/04 Architectures, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computer systems based on biological models
    • G06N3/02 Computer systems based on biological models using neural network models
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00 Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/50 Computer-aided design
    • G06F17/5009 Computer-aided design using simulation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06N COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N99/00 Subject matter not provided for in other groups of this subclass
    • G06N99/005 Learning machines, i.e. computer in which a programme is changed according to experience gained by the machine itself during a complete run
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38 Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06F ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00 Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/58 Random or pseudo-random number generators

Similar Documents

Publication and Title
Kaiser et al. Hardware-aware in situ learning based on stochastic magnetic tunnel junctions
Valavi et al. A 64-tile 2.4-Mb in-memory-computing CNN accelerator employing charge-domain compute
Yamaguchi et al. An energy-efficient time-domain analog CMOS BinaryConnect neural network processor based on a pulse-width modulation approach
Bankman et al. An always-on 3.8 µJ/86% CIFAR-10 mixed-signal binary CNN processor with all memory on chip in 28-nm CMOS
Kang et al. An on-chip-trainable Gaussian-kernel analog support vector machine
Liu et al. NS-CIM: A current-mode computation-in-memory architecture enabling near-sensor processing for intelligent IoT vision nodes
De et al. Read-optimized 28-nm HKMG multibit FeFET synapses for inference-engine applications
Sahay et al. Energy-efficient moderate precision time-domain mixed-signal vector-by-matrix multiplier exploiting 1T-1R arrays
Wang et al. A time-domain analog weighted-sum calculation model for extremely low power VLSI implementation of multi-layer neural networks
Pittala et al. Biasing techniques: validation of 3-to-8 decoder modules using 18-nm FinFET nodes
Singh et al. Quantum tunneling based ultra-compact and energy efficient spiking neuron enables hardware SNN
Ottati et al. To spike or not to spike: A digital hardware perspective on deep learning acceleration
Yamaguchi et al. An energy-efficient time-domain analog VLSI neural network processor based on a pulse-width modulation approach
Pittala et al. Energy Efficient Decoder Circuit Using Source Biasing Technique in CNTFET Technology
Sedighi et al. Nontraditional computation using beyond-CMOS tunneling devices
Vahdat et al. Interstice: Inverter-based memristive neural networks discretization for function approximation applications
Bashir et al. A single schottky barrier MOSFET-based leaky integrate and fire neuron for neuromorphic computing
Kilani et al. C3PU: Cross-coupling capacitor processing unit using analog-mixed signal for AI inference
Vohra et al. CMOS circuit implementation of spiking neural network for pattern recognition using on-chip unsupervised STDP learning
Rezaei et al. A reliable non‐volatile in‐memory computing associative memory based on spintronic neurons and synapses
Scott et al. A flash-based current-mode IC to realize quantized neural networks
Li et al. A CMOS rectified linear unit operating in weak inversion for memristive neuromorphic circuits
Butola et al. A Comprehensive Technique Based on Machine Learning for Device and Circuit Modeling of Gate-All-Around Nanosheet Transistors
Nasab et al. Process-in-Memory realized by nonvolatile Task-Scheduling and Resource-Sharing XNOR-Net hardware Accelerator architectures
Lou et al. An energy efficient all-digital time-domain compute-in-memory macro optimized for binary neural networks