
Oh et al., 2012 - Google Patents

1.2-mW online learning mixed-mode intelligent inference engine for low-power real-time object recognition processor

Oh et al., 2012

Document ID
5592639835762086372
Author
Oh J
Lee S
Yoo H
Publication year
2012
Publication venue
IEEE Transactions on Very Large Scale Integration (VLSI) Systems

Snippet

Object recognition is computationally intensive, and it is challenging to meet 30-f/s real-time processing demands under the sub-watt low-power constraints of mobile platforms, even for heterogeneous many-core architectures. In this paper, an intelligent inference engine (IIE) is …
Continue reading at www.researchgate.net (PDF)
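To put the headline numbers in perspective, the back-of-the-envelope sketch below (Python) converts the 1.2-mW power figure from the title and the 30-f/s target from the snippet into per-frame energy and latency budgets; the inputs are the paper's stated figures, not measurements reproduced here.

# Per-frame budget implied by the paper's headline figures:
# 1.2 mW average power (from the title) and a 30-f/s real-time target (from the snippet).
POWER_W = 1.2e-3        # 1.2 mW
FRAME_RATE_FPS = 30.0   # 30 frames per second

energy_per_frame_uj = POWER_W / FRAME_RATE_FPS * 1e6  # microjoules available per frame
latency_per_frame_ms = 1e3 / FRAME_RATE_FPS           # milliseconds available per frame

print(f"Energy budget per frame:  {energy_per_frame_uj:.0f} uJ")   # 40 uJ
print(f"Latency budget per frame: {latency_per_frame_ms:.1f} ms")  # 33.3 ms

At roughly 40 µJ and 33 ms per frame, this is the envelope within which the title's mixed-mode, online-learning engine has to operate.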

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computer systems based on biological models
    • G06N3/02: Computer systems based on biological models using neural network models
    • G06N3/06: Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00: Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/38: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation
    • G06F7/48: Methods or arrangements for performing computations using exclusively denominational number representation, e.g. using binary, ternary, decimal representation using non-contact-making devices, e.g. tube, solid state device; using unspecified devices
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06N: COMPUTER SYSTEMS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computer systems based on biological models
    • G06N3/02: Computer systems based on biological models using neural network models
    • G06N3/04: Architectures, e.g. interconnection topology
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F7/00: Methods or arrangements for processing data by operating upon the order or content of the data handled
    • G06F7/58: Random or pseudo-random number generators
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING; COUNTING
    • G06F: ELECTRICAL DIGITAL DATA PROCESSING
    • G06F17/00: Digital computing or data processing equipment or methods, specially adapted for specific functions
    • G06F17/10: Complex mathematical operations
    • H: ELECTRICITY
    • H03: BASIC ELECTRONIC CIRCUITRY
    • H03M: CODING; DECODING; CODE CONVERSION IN GENERAL
    • H03M1/00: Analogue/digital conversion; Digital/analogue conversion
    • H03M1/12: Analogue/digital converters

Similar Documents

Publication and title
Lee et al. Energy-efficient hybrid stochastic-binary neural networks for near-sensor computing
Alaghi et al. The promise and challenge of stochastic computing
Li et al. Merging the interface: Power, area and accuracy co-optimization for RRAM crossbar-based mixed-signal computing system
Canals et al. A new stochastic computing methodology for efficient neural network implementation
Shine et al. Low-power scheduling with resources operating at multiple voltages
Oh et al. 1.2-mW online learning mixed-mode intelligent inference engine for low-power real-time object recognition processor
Oh et al. A 57 mW 12.5 µJ/Epoch embedded mixed-mode neuro-fuzzy processor for mobile real-time object recognition
KR101965850B1 (en) A stochastic implementation method of an activation function for an artificial neural network and a system including the same
Chen et al. A time-domain computing accelerated image recognition processor with efficient time encoding and non-linear logic operation
Seva et al. Approximate stochastic computing (ASC) for image processing applications
Oh et al. A 57 mW embedded mixed-mode neuro-fuzzy accelerator for intelligent multi-core processor
Onizawa et al. Area/energy-efficient gammatone filters based on stochastic computation
Wang et al. Efficient spiking neural network training and inference with reduced precision memory and computing
Bai et al. Partial sum quantization for computing-in-memory-based neural network accelerator
Al Maharmeh et al. A comparative analysis of time-domain and digital-domain hardware accelerators for neural networks
Khan et al. Hardware accelerator for probabilistic inference in 65-nm CMOS
CN213934855U (en) A neural network random number generator shared circuit based on stochastic computing
US11475288B2 (en) Sorting networks using unary processing
Li et al. Spikesen: Low-latency in-sensor-intelligence design with neuromorphic spiking neurons
Akhtar et al. Stochastic computing: Systems, applications, challenges and solutions
Hu et al. High performance and hardware efficient stochastic computing elements for deep neural network
De Magistris et al. FPGA implementation of a parallel DDS for wide-band applications
Thangavel et al. Intrinsic evolution of truncated Puiseux series on a mixed-signal field-programmable SoC
Seo et al. A 44.2-TOPS/W CNN processor with variation-tolerant analog datapath and variation compensating circuit
Zyarah et al. Reservoir network with structural plasticity for human activity recognition