WO2019126758A1 - A unified memory organization for neural network processors - Google Patents
- Publication number
- WO2019126758A1 (PCT/US2018/067301)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- storage
- data
- unified
- shared
- private
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0207—Addressing or allocation; Relocation with multidimensional access, e.g. row/column, matrix
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0813—Multiuser, multiprocessor or multiprocessing cache systems with a network or matrix configuration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/0223—User address space allocation, e.g. contiguous or non contiguous base addressing
- G06F12/0284—Multiple user address space allocation, e.g. using different base addresses
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/06—Addressing a physical block of locations, e.g. base addressing, module addressing, memory dedication
- G06F12/0646—Configuration or reconfiguration
- G06F12/0692—Multiconfiguration, e.g. local and global addressing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/084—Multiuser, multiprocessor or multiprocessing cache systems with a shared cache
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F12/00—Accessing, addressing or allocating within memory systems or architectures
- G06F12/02—Addressing or allocation; Relocation
- G06F12/08—Addressing or allocation; Relocation in hierarchically structured memory systems, e.g. virtual memory systems
- G06F12/0802—Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
- G06F12/0806—Multiuser, multiprocessor or multiprocessing cache systems
- G06F12/0842—Multiuser, multiprocessor or multiprocessing cache systems for multiprocessing or multitasking
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/30—Arrangements for executing machine instructions, e.g. instruction decode
- G06F9/30003—Arrangements for executing specific machine instructions
- G06F9/3004—Arrangements for executing specific machine instructions to perform operations on memory
- G06F9/30043—LOAD or STORE instructions; Clear instruction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11C—STATIC STORES
- G11C11/00—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor
- G11C11/54—Digital stores characterised by the use of particular electric or magnetic storage elements; Storage elements therefor using elements simulating biological cells, e.g. neuron
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1016—Performance improvement
- G06F2212/1024—Latency reduction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2212/00—Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
- G06F2212/10—Providing a specific technical effect
- G06F2212/1028—Power efficiency
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Definitions
- FIG. 4A illustrates a schematic diagram of an exemplary hardware system 400 including unified organization of memory modules.
- Hardware system 400 includes a unified storage medium 405 and processing units 242, 244, 246, and 248.
- Unified storage medium 405 includes one or more storage modules 410, each including storage cells 430 configured to store input operands 270 and output data 280. Multiple storage modules 410 can be merged into a single medium to form unified storage medium 405.
- Each storage module 410 can include a private storage module 412 and a shared storage module 414.
- Hardware system 400 can include multiple processing units 242, 244, 246, and 248.
- Each of the multiple processing units of the processing unit array 240 is configured to communicate with one or more storage modules.
- processing unit 242 can receive private input operand 272 from private storage module 412.
- Processing unit 242 can also receive shared input operand 274 from one or more shared storage modules 414.
- processing unit array 240 is configured to receive private input operand 272 from private storage module 412, receive shared input operand 274 from shared storage module 414, and generate an output operand 280 based on private input operand 272 and shared input operand 274.
- each of the storage cells 430 can be uniquely identified by a unique identifier 440.
- Unique identifier 440 can be a bit address including high-order bits 442 and low-order bits 444, or a byte address including high-order and low-order bytes, or a combination thereof.
- high-order bits 442 can be the most significant bit (MSB).
- MSB can also be referred to as the left-most bit due to the convention in positional notation of writing more significant digits further to the left.
- Low-order bits 444 are referred to as bits in the right-most position.
- For example, in the identifier "2_E5", the high-order bits 442 refer to the left-most bit, i.e., "2", and the low-order bits 444 refer to the bits on the right side, i.e., "E5".
- storage cell 430 is a private storage cell 432 or a shared storage cell 434. Private storage cells 432 can be located within private storage module 412. Shared storage cells 434 can be located within shared storage module 414. High-order bits 442 of unique identifier 440 are configured to indicate a target storage module for operand (270, 280) and low-order bits 444 of unique identifier 440 are configured to indicate a target storage cell within the target storage module for operand (270, 280). For example, unique identifier 440 having a bit address "2_E5" refers to storage module "2", and storage cell "E5" within storage module "2". In other words, high-order bits 442 can also indicate the processing unit to which the storage module is "private", and low-order bits 444 indicate the location within the storage module.
- private storage cells 432 and shared storage cells 434 are physically indistinguishable storage cells and are not pre-labelled as such.
- the attribute of "private" and "shared" for a storage cell is determined based on the compiler-generated instructions programmed to address the data. For example, data can be stored in any cell.
- when the compiler-generated instructions refer to the data as "private," the data may be read out in parallel as private input operand 272.
- when the compiler-generated instructions refer to the data as "shared," the data may be read out as shared input operand 274.
- unique identifier 440 includes other characters, for example, numeric characters, alpha-numeric characters, hexadecimal numerals (e.g., shown in FIG. 4A), octal numerals, or the like, addressable by a software addressing mode.
- processing unit array 240 or each of the multiple processing units can generate output data 280.
- Output data 280 can be a private output data 282 or a shared output data 284, determined by the operations in the next layer of a multilayered algorithm for a neural network processor. As illustrated in FIG. 4A, output data 280 can be considered private output data 282 since it is written back to unified storage medium in parallel in each of the storage modules 410.
- neural network processors comprise a compiler (not shown).
- the compiler is a program or computer software that transforms computer code written in one programming language into another programming language to create an executable program.
- a compiler can perform a variety of operations, for example, pre-processing, lexical analysis, parsing, semantic analysis, conversion of input programs to an intermediate representation, code optimization, and code generation, or combinations thereof.
- FIG. 5 is a process flowchart of an exemplary data organization operation 500, consistent with embodiments of the present disclosure.
- data organization operation 500 can be performed by an on-chip communication system (e.g., on-chip communication system 1 10).
- Step 502 includes configuring a storage module (e.g., storage module 410) of a unified storage medium (e.g., unified storage medium 400) to include multiple storage cells (e.g. storage cells 430).
- step 502 includes configuring a private storage module (e.g., private storage module 412) to include private storage cells (e.g., private storage cell 432) and/or a shared storage module (e.g., shared storage module 414) to include shared storage cells (e.g., shared storage cell 434).
- Configuring a storage module to include storage cells can comprise allocating storage space based on the total storage space available, software programs or algorithms, hardware limitations, time restrictions, and the like. If a software application or an algorithm is multi-layered and requires multiple layers of computation including more shared data than private data, the storage module can be configured to comprise more shared storage cells or more shared storage modules.
- Step 504 includes configuring a storage medium (e.g., unified storage medium 400 of FIG. 4A) to communicate with a processing unit (e.g., processing unit array 240) or multiple processing units.
- the processing unit is an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a Central Processing Unit (CPU), or a Graphics Processing Unit (GPU).
- a single CPU can contain one or more ALUs.
- an ALU is a combinational digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers.
- the processing unit can include multiple processing units, for example, an array of processing units configured to operate in parallel.
- Communicating with a processing unit can include receiving data generated by the processing unit, or providing stored data to the processing unit.
- the storage medium can be the source of data to be computed on or the target of data storage.
- the hardware system comprises a single processing unit configured to receive data from multiple storage modules.
- the hardware system can also include a unique processing unit for each storage module, configured to receive data only from the corresponding storage module.
- processing unit e.g., processing unit array 240
- output data e.g., output data 280
- the compiler may be a program or computer software that transforms computer code written in one programming language into another programming language to create an executable program.
- the compiler can generate a set of instructions configured to access data from a storage medium, execute a desired operation on the accessed data, generate output data based on the operation, and store the generated output data back into the storage medium for subsequent processing.
- the instructions can also include assigning a characteristic to the input and the output data. The characteristic of the data can be private, shared, restricted, or the like.
- the set of instructions will be described with reference to FIG. 4A, in accordance with embodiments of the disclosure.
- the instructions in the aforementioned set of instructions generally comprise an operation on the data, characteristic of the data, and a target location within the storage medium.
- operation on the data includes load (reading), store (writing), arithmetic operations, (e.g., addition, subtraction, multiplication, division) copy, paste, and the like.
- Characteristic of the data can refer generally to the accessibility of the data within the storage medium. Characteristic of the data can include private, shared, restricted, allowed, global, local, or combinations thereof.
- Data, in general, is referred to as an operand. Data can be an input operand, for example, operand 1 (OP1) and operand 2 (OP2), or an output data based on the vector operation being performed.
- the subfield of load/store instructions implies how to load/store the data.
- Subfield ".SHARED" implies that the data should be read or written as shared data. In this mode, both high-order bits (e.g., 442 of FIG. 4B) and low-order bits (e.g., 444 of FIG. 4B) are utilized to determine the target location of the input operand or output data.
- Subfield ".SIMD" implies that the data should be read or written as private data in parallel, wherein the high-order bits can be disregarded by hardware and the low-order bits are utilized to determine the target location of the input operand or output data (an illustrative sketch of both access modes follows this list).
- each processing unit e.g., 242, 244, 246, and 248 of FIG. 4A
- input operand 1 e.g., private input operand 272
- the high-order bit "0" in bit address "0_00" is not utilized, and the low-order bits "00" indicate the storage cell and a characteristic of the storage cell (e.g., private storage cell 432)
- all data in row 1 of the "W1" array (W1_1j) is read out simultaneously but separately to each corresponding processing unit.
- The "LOAD.SIMD" field implies that the data should be read in parallel.
- input operand 2 (e.g., shared input operand 274) is read once and broadcast to all processing units, as illustrated in FIG. 4A.
- the high-order bit "0" in bit address "0_F0" indicates the storage module where the data is stored, and the low-order bits "F0" indicate the storage cell and a characteristic of the storage cell in which the data is stored (e.g., shared storage cell 434).
- the data in "x1" of the "X" array is read out once and broadcast to each corresponding processing unit.
- the "LOAD.SHARED" field implies that the data should be read as shared data between all processing units.
- processing unit performs multiplication of input operands 1 and 2, as defined by the vector operation, to generate an output data "A".
- the arithmetic operation can include basic arithmetic functions of addition, subtraction, multiplication, or division, or combinations thereof.
- processing unit is configured to perform complex arithmetic and algebraic functions, logarithmic functions, exponentiation, or the like.
- output data "A" in instruction i3 is stored in parallel back to the storage medium for further processing.
- Generated output data“A” (e.g., output data 280) can be used as the input operand in the next layer of the multi-layered algorithm.
- the high-order bit "0" in bit address "0_F1" is not utilized by hardware, and the low-order bits "F1" indicate the storage cell and a characteristic of the storage cell (e.g., shared storage cell 434) for the output data to be stored.
- output data 280 may be temporarily stored in a temporary storage (e.g., write buffer 260) before storing it in the shared or private storage module of the unified storage medium.
- step 508 generated output data is stored back in the unified storage medium for further processing.
- a neural network can be organized in multiple layers.
- the output of a layer can be passed onto a next layer for further processing.
- an output of a previous layer can be an input for the next layer.
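The following Python sketch is purely illustrative and is not part of the disclosure: it mimics how the LOAD.SIMD and LOAD.SHARED subfields described above could be interpreted against a unified storage medium. The four-module layout, the split of an address into a high-order module part and a low-order cell part, and the sample data values are assumptions made for the example only.

```python
# Illustrative sketch only (not the disclosed hardware): a unified storage medium
# modeled as one cell array per storage module, with private (LOAD.SIMD) and
# shared (LOAD.SHARED) read semantics selected by the instruction, not by the cell.

NUM_UNITS = 4                                   # assumed 4-way SIMD, as in FIG. 2A
medium = [dict() for _ in range(NUM_UNITS)]     # one storage module per processing unit

def store(module: int, cell: int, value) -> None:
    medium[module][cell] = value

def load_simd(cell: int):
    """Private read: each processing unit reads the same cell offset from its own
    module in parallel; the high-order (module) part of the address is ignored."""
    return [medium[unit][cell] for unit in range(NUM_UNITS)]

def load_shared(module: int, cell: int):
    """Shared read: one cell is read once and broadcast to every processing unit;
    both the module part and the cell part of the address are used."""
    value = medium[module][cell]
    return [value] * NUM_UNITS

# Row 1 of the W1 array spread across the four modules at offset 0x00 (private view),
# and x1 held once in module 0 at offset 0xF0 (shared view). Values are made up.
for unit, w in enumerate([0.1, 0.5, 0.9, 1.3]):
    store(unit, 0x00, w)
store(0, 0xF0, 2.0)

op1 = load_simd(0x00)        # analogous to LOAD.SIMD with address "0_00"
op2 = load_shared(0, 0xF0)   # analogous to LOAD.SHARED with address "0_F0"
print([a * b for a, b in zip(op1, op2)])   # each unit multiplies its private and shared operands
```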
Abstract
The present disclosure relates to a unified memory apparatus having a unified storage medium and one or more processing units. The unified memory apparatus can include a first storage module having a first plurality of storage cells, and a second storage module having a second plurality of storage cells, each of the first and second plurality of storage cells configured to store data and to be identified by a unique cell identifier. The one or more processing units are in communication with the unified storage medium and the processing units are configured to receive a first input data from one of the first plurality of storage cells, receive a second input data from one of the second plurality of storage cells, and generate an output data based on the first and second input data.
Description
A UNIFIED MEMORY ORGANIZATION FOR NEURAL NETWORK
PROCESSORS
CROSS REFERENCE TO RELATED APPLICATION
[001] The disclosure claims the benefits of priority to U.S. Provisional Application No. 62/610,119, filed December 22, 2017, and U.S. Patent Application No. 15/984,255, filed May 18, 2018, the entire contents of which are incorporated herein by reference.
BACKGROUND
[002] With the exponential growth of neural network based deep learning
applications such as image recognition, speech/voice recognition, and machine translation, the commodity Central Processing Unit/Graphics Processing Unit (CPU/GPU) based platform is no longer a suitable computing substrate to support the ever growing computation demands in terms of performance, power efficiency and economic scalability. Developing neural network processors to accelerate neural-network-based deep-learning applications has gained significant traction across many business segments, including established chip makers, start-up companies as well as large Internet companies. Single Instruction Multiple Data (SIMD) architecture can be applied to chips to accelerate calculations for applications of deep learning.
[003] In a computer with SIMD architecture, each of the multiple parallel processing units, Arithmetic Logic Units (ALUs) or small CPUs, computes simultaneously with its own data - generally 2 or 3 input operands and 1 output result. These data are stored in memory and are accessed independently in parallel. Thus, each processing unit can have a dedicated partition of memory and dedicated access ports to its partition of memory. In practice, many algorithms have some shared data, which can be stored in a shared memory (to save storage cost) and be broadcasted to all processing units as one of the operands.
[004] To enable parallel access in a SIMD architecture, hardware generally introduces physically separated private memory modules and shared memory modules to hold the corresponding types of data. However, such a memory organization has two issues.
[005] First, because the size of each hardware memory module is fixed while different software programs have different data sizes, these modules are inefficiently utilized, resulting in the waste of physical memory space. Second, dedicated memory copy operations have to be performed when previously considered "private" data becomes "shared" data in a later phase of the program. This causes extra power consumption and a drop in performance of the processing unit.
SUMMARY
[006] Embodiments of this disclosure provide a unified memory apparatus. The unified memory apparatus can include a unified storage medium including a first storage module having a first plurality of storage cells configured to store data, the first plurality of storage cells identified by a unique cell identifier, and a second storage module having a second plurality of storage cells configured to store data, the second plurality of storage cells identified by a unique cell identifier. The unified memory apparatus can also include a processing unit in communication with the unified storage medium. The processing unit can be configured to receive a first input data from one of the first plurality of storage cells, receive a second input data from one of the second plurality of storage cells, and generate an output data based on the first and second input data.
[007] Some embodiments of this disclosure provide a unified storage medium. The unified storage medium can include a first storage module having a first plurality of storage cells configured to store data, the first plurality of storage cells identified by a unique cell identifier, and a second storage module having a second plurality of storage cells configured to store data, the second plurality of storage cells identified by a unique cell identifier.
[008] Some embodiments of this disclosure provide a method for organizing data in a unified memory apparatus having a unified storage medium and one or more processing units. The method can include configuring a first storage module of the unified storage medium to communicate with the one or more processing units and to include a first plurality of storage cells that are configured to store data, the first plurality of storage cells identified by a unique cell identifier. The method can also include configuring a second storage module of the unified storage medium to communicate with the one or more processing units and to include a second plurality of storage cells that are configured to store data, the second plurality of storage cells identified by a unique cell identifier. The method further includes configuring a processing unit of the one or more processing units to receive a first input data from one of the first plurality of storage cells, receive a second input data from one of the second plurality of storage cells, and generate an output data based on the first and second input data.
[009] Some embodiments of this disclosure provide a method for organizing data in a unified storage medium having a first storage module and a second storage module. The method can include configuring the first storage module of the unified storage medium to communicate with one or more processing units and to include a first plurality of storage cells that are configured to store data, the first plurality of storage cells identified by a unique cell identifier, and configuring the second storage module of the unified storage medium to communicate with one or more processing units and to include a second plurality of storage cells that are configured to store data, the second plurality of storage cells identified by a unique cell identifier.
[010] The unique cell identifier of the first and second plurality of storage cells can comprise a bit address including a first plurality of bits and a second plurality of bits. The first plurality of bits can indicate a target storage module of the first and second storage
modules, and the second plurality of bits can indicate a target storage cell of the first and second plurality of storage cells within the target storage module. The second plurality of bits can further indicate a characteristic associated with the target storage cell, the characteristic of the target storage cell being one of private or shared. In some embodiments, the first and second storage modules are configured to communicate with a corresponding processing unit. The processing unit is configured to receive the first input data from a private storage cell, and the second input data from a shared storage cell. The unified storage medium and the processing unit are configured to be uniformly addressed by a software code or a software program. The unified storage medium is further configured to receive instructions from a compiler, the instructions including a characteristic associated with the data, wherein the characteristic associated with the data is one of private or shared. The private storage cell is configured to store private data and the shared storage cell is configured to store shared data that can be shared across the multiple processing units.
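As a purely illustrative aid, not part of the claimed apparatus, the sketch below decodes such a unique cell identifier under the assumption that the first (high-order) plurality of bits selects the target storage module and the second (low-order) plurality of bits selects the target storage cell within it; the field widths and the example value are arbitrary assumptions.

```python
# Illustrative sketch only: splitting a unique cell identifier into its two bit fields.
# The 4-bit module field and 8-bit cell field are assumed widths, not the disclosure's.

def decode_cell_identifier(address: int, module_bits: int = 4, cell_bits: int = 8):
    """Return (target storage module, target storage cell) from one bit address."""
    module = (address >> cell_bits) & ((1 << module_bits) - 1)   # first (high-order) bits
    cell = address & ((1 << cell_bits) - 1)                      # second (low-order) bits
    return module, cell

# Example: identifier 0x2E5 -> storage module 2, cell 0xE5 within that module.
print(decode_cell_identifier(0x2E5))   # (2, 229)
```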
BRIEF DESCRIPTION OF THE DRAWINGS
[011] FIG. 1 illustrates an exemplary neural network processing unit (NPU) architecture, consistent with embodiments of the present disclosure.
[012] FIG. 2A illustrates an exemplary functionality of a layer of neural network processor, consistent with embodiments of the present disclosure.
[013] FIG. 2B illustrates an exemplary hardware neural network processor, consistent with embodiments of the present disclosure.
[014] FIG. 3 illustrates data sharing in multi-layer networks, consistent with embodiments of the present disclosure.
[015] FIG. 4A illustrates a schematic diagram of an exemplary hardware including unified organization of memory modules, consistent with embodiments of the present disclosure.
[016] FIG. 4B illustrates an exemplary storage cell of a unified storage medium, consistent with embodiments of the present disclosure.
[017] FIG. 5 illustrates a process flowchart of an exemplary data organization operation, consistent with embodiments of the present disclosure.
DETAILED DESCRIPTION
[018] Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the invention as recited in the appended claims.
[019] The disclosed embodiments provide systems and methods for organizing data stored in a unified memory architecture and accessing the target data thereof. The disclosed embodiments can resolve the aforementioned issues of conventional SIMD architecture by organizing the physical private and shared memory in a unified way. The disclosed embodiments maintain a single module of physical memory for logical private and shared memory, and can switch the view of "private" or "shared" through the accessing instructions while keeping the data itself in its original location in the physical memory.
[020] FIG. 1 illustrates an exemplary neural network processing unit (NPU) architecture 100. NPU architecture 100 can include an on-chip communication system 110, an off-chip memory 120, a memory controller 130, a direct memory access (DMA) unit 140, a Joint Test Action Group (JTAG)/Test Access Port (TAP) controller 150, a peripheral component interconnect express (PCIe) interface 160, inter-chip links 170, and the like. It is appreciated that on-chip communication system 110 can perform algorithmic operations based on communicated data.
[021] On-chip communication system 110 can include a global manager 112 and a plurality of tiles 116. Global manager 112 can include one or more cluster managers 114 configured to coordinate with one or more tiles 116. Each cluster manager 114 can be associated with an array of tiles 116 that provide synapse/neuron circuitry for the neural network. For example, the top layer of tiles of FIG. 1 may provide circuitry representing an input layer to the neural network, while the second layer of tiles may provide circuitry representing a hidden layer of the neural network. As shown in FIG. 1, global manager 112 can include two cluster managers 114 configured to coordinate with two arrays of tiles 116. Tiles 116 can include one or more multipliers, adders, multiply-accumulators (e.g., a set of multiply-accumulators of a SIMD architecture) and corresponding memory and can be configured to perform an operation (e.g., one or more algorithmic calculations) on the communicated data under the control of global manager 112.
[022] Off-chip memory 120 can include read-only memory (ROM), erasable programmable read-only memory (EPROM) or the like. Off-chip memory 120 can be configured to store a large amount of data with a slower access speed compared to the on-chip memory integrated within one or more processors.
[023] Memory controller 130 can read, write, or refresh one or more memory devices. The memory devices can include on-chip memory and off-chip memory 120. For example, the memory device can be implemented as any type of volatile or non-volatile memory devices, or a combination thereof, such as a static random access memory (SRAM),
an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, or a magnetic or optical disk.
[024] DMA unit 140 can generate memory addresses and initiate memory read or write cycles. DMA unit 140 can contain several hardware registers that can be written and read by the one or more processors. The registers can include a memory address register, a byte-count register, and one or more control registers. These registers can specify some combination of the source, the destination, the direction of the transfer (reading from the input/output (I/O) device or writing to the I/O device), the size of the transfer unit, and/or the number of bytes to transfer in one burst.
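For illustration only, the sketch below models the software-visible registers described above as a simple descriptor; the field names, widths, and the burst size are assumptions for the example, not the actual register map of DMA unit 140.

```python
# Illustrative sketch only: a hypothetical software view of the DMA registers above.
from dataclasses import dataclass

@dataclass
class DmaDescriptor:
    source: int            # memory address register: where the transfer reads from
    destination: int       # where the transfer writes to
    byte_count: int        # byte-count register: total bytes to move
    to_device: bool        # control register bit: direction of the transfer
    burst_bytes: int = 64  # control register field: bytes transferred per burst

    def bursts_needed(self) -> int:
        """Number of bursts required to complete the transfer (ceiling division)."""
        return -(-self.byte_count // self.burst_bytes)

print(DmaDescriptor(source=0x1000, destination=0x8000,
                    byte_count=4096, to_device=True).bursts_needed())  # 64
```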
[025] JTAG/TAP controller 150 can specify a dedicated debug port implementing a serial communications interface (e.g., a JTAG interface) for low-overhead access without requiring direct external access to the system address and data buses. The JTAG/TAP controller 150 can also specify an on-chip test access interface (e.g., a TAP interface) that implements a protocol to access a set of test registers that present chip logic levels and device capabilities of various parts.
[026] Peripheral interface 160 can support full-duplex communication between any two endpoints, with no inherent limitation on concurrent access across multiple endpoints.
[027] Inter-chip links 170 can connect all the internal components of NPU architecture 100, such as on-chip communication system 110, off-chip memory 120, memory controller 130, DMA unit 140, JTAG/TAP controller 150, and PCIe interface 160 to each other.
[028] While NPU architecture 100 incorporates the embodiments of the present disclosure, it is appreciated that the disclosed embodiments can be applied to chips with
SIMD architecture for accelerating some applications such as deep learning. Such chips can be, for example, GPU, CPU with vector processing ability, or neural network accelerators for deep learning. SIMD or vector architecture is commonly used to support computing devices with data parallelism, such as graphics processing and deep learning. The SIMD architecture can include multiple processing elements, wherein each of the processing elements can perform the same operation on multiple data points simultaneously.
[029] For example, the private memory can be memory dedicated to serving data for each single processing element among multiple parallel processing elements, while shared memory can refer to memory dedicated to serving data for all parallel processing elements.
[030] FIG. 2A illustrates an exemplary functionality of a layer 200 of a neural network, including a software algorithm 210 and hardware 220. Hardware 220 can include a private memory module 230, a processing unit array 240, a shared memory module 250, a write buffer 260, input operands 270, output operand 280, and the like. In some embodiments, hardware 220 can be located in a tile (e.g., tile 116 of FIG. 1).
[031] In some embodiments, a processing unit of processing unit array 240 can be an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a CPU, a GPU, or the like. An ALU is a fundamental building block of a computing circuit, including the CPU of computers. A single CPU can contain one or more ALUs. Generally, an ALU is a combinational digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers. Processing unit array 240 can include multiple processing units 242, 244, 246, and 248, for example, an array of processing units, as illustrated in FIG. 2B.
[032] Private memory module 230 can be partitioned into separate private memory blocks, such that, each of the multiple processing units 242, 244, 246, and 248 has a corresponding private memory block 232, 234, 236, and 238, as shown in FIG. 2B.
[033] Input operands 270 can be the input data operated on by processing unit array 240. In some embodiments, input operands 270 of FIG. 2A can include one or more private input operand(s) 272 and one or more shared input operand(s) 274, as shown in FIG. 2B. Private input operand 272 can be stored in private memory module 230 and shared input operand 274 can be stored in shared memory module 250.
[034] In the application of neural networks, software algorithms 210 have shared data that can be stored in shared memory module 250 and can be broadcasted to each of the multiple processing units 242, 244, 246, and 248 of processing unit array 240 as a shared operand 274. For example, the algorithm illustrated in FIG. 2A is computing a vector operation of:
A = sigmoid(b + X * W1) (Eq. 1)
which is a representative operation in layer 200 of a neural network called out often in deep learning algorithms. With reference to equation 1, "b" can include a constant value, "X" can include a shared input operand 274, and "W1" can include a private input operand 272.
[035] With reference to FIG. 2A, the vector size can be set as any natural number. Here, a vector size of 4 is taken as an example, and 4-way SIMD hardware is used to compute the vector. The processing units 242, 244, 246, and 248 can compute, in parallel, the following operations:
a1 = sigmoid(b + x1 * W1_11 + x2 * W1_21 + x3 * W1_31 + x4 * W1_41) (Eq. 2)
a2 = sigmoid(b + x1 * W1_12 + x2 * W1_22 + x3 * W1_32 + x4 * W1_42) (Eq. 3)
a3 = sigmoid(b + x1 * W1_13 + x2 * W1_23 + x3 * W1_33 + x4 * W1_43) (Eq. 4)
a4 = sigmoid(b + x1 * W1_14 + x2 * W1_24 + x3 * W1_34 + x4 * W1_44) (Eq. 5)
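The short Python sketch below is only an illustration of Eqs. 2-5 as the 4-way SIMD hardware described above would see the data: each processing unit holds one column of W1 as a private operand, while the bias b and the x values are shared among all units. The numeric values are invented for the example.

```python
# Illustrative sketch only: Eqs. 2-5 with private W1 columns and shared b and x.
import math

def sigmoid(v: float) -> float:
    return 1.0 / (1.0 + math.exp(-v))

b = 0.5                               # shared constant
x = [1.0, 2.0, 3.0, 4.0]              # shared input operand (the X array), broadcast to all units
w1_columns = [                        # column j of W1 is private to processing unit j
    [0.1, 0.2, 0.3, 0.4],             # W1_11..W1_41 -> used to compute a1
    [0.5, 0.6, 0.7, 0.8],             # W1_12..W1_42 -> used to compute a2
    [0.9, 1.0, 1.1, 1.2],             # W1_13..W1_43 -> used to compute a3
    [1.3, 1.4, 1.5, 1.6],             # W1_14..W1_44 -> used to compute a4
]

# Each "processing unit" performs the same multiply-accumulate on its own private column.
a = [sigmoid(b + sum(xi * w for xi, w in zip(x, col))) for col in w1_columns]
print(a)   # [a1, a2, a3, a4]
```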
[036] The shaded blocks and dotted lines in FIG. 2A indicate how "a1" is calculated. From this calculation, it is appreciated that data in each column of the "W1" array is local to a corresponding processing unit of processing unit array 240 and the data can accordingly be stored in a corresponding memory block of private memory module 230, as a private input operand 272. For example, the data in each of the first, second, third, and fourth columns of the W1 array can be stored in their corresponding memory blocks 232, 234, 236, and 238 of private memory module 230 as private input operands.
[037] With reference to FIG. 2A, the W1 array can include a matrix of stored data, wherein each element of the matrix is represented as W1ij or W1_ij (as shown later), where "i" represents the row number and "j" represents the column number in the matrix. For example, in Eq. 2, W1_41 represents the data stored in the element located at row 4 and column 1 of the W1 array. Other commonly known notations to address elements in a matrix can be used as well.
[038] Simultaneously, data in the X-array is utilized by all processing units 242, 244, 246, and 248, and is accordingly stored in shared memory module 250, as shared input operand 274 and broadcasted to all components reading from shared memory module 250. Equations 2-5 represent exemplary operations performed in layer 200 of a neural network processor, designed to calculate a1, a2, a3, and a4.
[039] In some embodiments, machine learning or deep learning includes training the neural network processor to generate an end result based on input data, which is accomplished by implementing algorithms for one or more layers of neural processing. For example, layer 200 of FIG. 2A represents a first layer including an algorithm configured to perform an operation using a bias b, data stored in the X array, and data stored in the W1 array. A second and a third layer (not shown) can include algorithms using the bias b, data stored in the X array, and data stored in W2 and W3 arrays. Each layer can include a different value of bias b and different parameters stored in its "W" array.
[040] With reference to FIG. 2A, for example, array X can include an individual's scores in different classes. The value x1 of the array X can be student A's Math score, x2 can be the English score, x3 can be the History score, and x4 can be the Science score. The end result can be whether the individual will be granted admission to a school or rejected, based on the scores (input data). As shown in FIG. 2A, and described in Equations 2-5, the data x1-x4 is "shared" and common in calculating a1-a4.
[041] FIG. 3 illustrates data sharing in multi-layer networks. Data sharing, as described herein, refers to how previously private data can become shared data in a later phase of a program. In some embodiments, neural network architecture 300 includes multiple layers, for example, layers 310 and 320. In some embodiments, output operand 280 of layer 310 can be used as an input operand 270 for layer 320. In some embodiments, the output operand 280 of one layer can be utilized as input operand 270 by one or more layers.
[042] For example, in layer 310, a1 is calculated by processing unit 242 using data from private memory module 230. The data in a1 becomes a broadcast input for layer 320. Generally, a neural network can be organized in layers. Each layer can perform one or more calculations on its inputs and generate an output. The output of a layer can be passed onto a next layer for further processing. For example, an output of a previous layer can be an input for the next layer. Accordingly, the locally generated "a" values have to be either stored back to shared memory 250, or stored to private memory 230 and copied later to shared memory 250.
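A minimal sketch of this layer-to-layer data sharing is given below; the second-layer weights W2 and both bias values are assumed for illustration and do not come from the disclosure.

```python
# Sketch of data sharing across layers: the privately generated outputs of one
# layer become the shared (broadcast) input operand of the next layer.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X  = np.array([1.0, 2.0, 3.0, 4.0])   # shared input of layer 310
W1 = np.full((4, 4), 0.1)             # assumed weights for layer 310
W2 = np.full((4, 4), 0.2)             # assumed weights for layer 320
b1, b2 = 0.5, 0.1                     # assumed per-layer biases

a_layer1 = sigmoid(b1 + X @ W1)       # each a_j produced by one processing unit
# a_layer1 must now be visible to every unit of the next layer, i.e. it is
# written back to shared storage (directly, or via private storage and a copy).
a_layer2 = sigmoid(b2 + a_layer1 @ W2)
print(a_layer2)
```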
[043] As an alternative to storing in private memory 230 and copying to shared memory 250 later, output operand 280 for a1 can be stored back directly to shared memory 250, avoiding the memory copy. Nevertheless, this alternative solution could still slow down the program. Since a single processing unit, for example processing unit 242, can finish only one multiply-add operation per cycle, say xi * W1_ij, each calculation of "a" is performed over multiple cycles. For this reason, only one W1_ij operand is read out from private memory 230 in each cycle, and thus only one "x" value is needed from shared memory 250 per cycle. Consequently, a common design of each memory module is single-read/single-write per cycle. When all "a" values are generated simultaneously by the multiple processing units in the last cycle, shared memory 250 may not have the ability to write them all back.
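The cycle-by-cycle constraint described above can be sketched as follows, assuming a vector size of 4 and one private read, one shared read, and one multiply-add per unit per cycle; the weight and input values are placeholders.

```python
# Sketch of the cycle-by-cycle view: in each cycle every unit reads exactly one
# private W1 element and the single shared x value, performs one multiply-add,
# and all four results become ready in the same (last) cycle.
import numpy as np

X = [1.0, 2.0, 3.0, 4.0]
W1 = np.full((4, 4), 0.1)             # assumed weights; column j is private to unit j
acc = [0.0, 0.0, 0.0, 0.0]            # one accumulator per processing unit

for cycle in range(4):                # 4 multiply-add cycles for a vector size of 4
    x = X[cycle]                      # one shared read per cycle
    for unit in range(4):
        w = W1[cycle, unit]           # one private read per unit per cycle
        acc[unit] += x * w

# After the last cycle, acc[0..3] are all complete at once; a shared memory
# that accepts only one write per cycle cannot absorb all four in that cycle.
print(acc)
```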
[044] In some embodiments, a write buffer 260 is introduced to give shared memory 250 more time to consume these output operands 280 individually. However, when processing unit array 240 produces output data faster than write buffer 260 can accept it (e.g., when the size of A is greater than the size of X), write buffer 260 may propagate back pressure, forcing processing unit array 240 to slow down and thereby slowing the overall program execution.
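A rough sketch of this back-pressure effect is shown below; the buffer depth and drain rate are assumed example values and are not specified by the disclosure.

```python
# Sketch of back pressure: a bounded write buffer sits between the producing
# units and shared memory. When the buffer is full, the producer stalls, which
# is the slowdown described above. Sizes are illustrative.
from collections import deque

BUFFER_DEPTH = 2                      # assumed write-buffer capacity
DRAIN_PER_CYCLE = 1                   # shared memory accepts one write per cycle

buffer = deque()
pending = ["a1", "a2", "a3", "a4"]    # four outputs ready in the same cycle
stall_cycles = 0
cycle = 0

while pending or buffer:
    cycle += 1
    # Shared memory drains one entry per cycle.
    for _ in range(DRAIN_PER_CYCLE):
        if buffer:
            buffer.popleft()
    # Producer pushes whatever fits; anything left over means a stall.
    while pending and len(buffer) < BUFFER_DEPTH:
        buffer.append(pending.pop(0))
    if pending:
        stall_cycles += 1

print(f"drained in {cycle} cycles, producer stalled for {stall_cycles} cycles")
```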
[045] FIG. 4A illustrates a schematic diagram of an exemplary hardware system 400 including a unified organization of memory modules. Hardware system 400 includes a unified storage medium 405 and processing units 242, 244, 246, and 248. Unified storage medium 405 includes one or more storage modules 410, each including storage cells 430 configured to store input operands 270 and output data 280. Multiple storage modules 410 can be merged into a single medium to form unified storage medium 405. Each storage module 410 can include a private storage module 412 and a shared storage module 414.
[046] Hardware system 400 can include multiple processing units 242, 244, 246, and 248. Each of the multiple processing units of the processing unit array 240 is configured to communicate with one or more storage modules. For example, processing unit 242 can receive private input operand 272 from private storage module 412. Processing unit 242 can also receive shared input operand 274 from one or more shared storage modules 414. In some embodiments, processing unit array 240 is configured to receive private input operand 272 from private storage module 412, receive shared input operand 274 from shared storage module 414, and generate an output operand 280 based on private input operand 272 and shared input operand 274.
[047] As illustrated in FIG. 4B, each of the storage cells 430 can be uniquely identified by a unique identifier 440. Unique identifier 440 can be a bit address including high-order bits 442 and low-order bits 444, or a byte address including high-order and low-order bytes, or a combination thereof. In computing, high-order bits 442 can be the most significant bits (MSB). The MSB can also be referred to as the left-most bit due to the convention in positional notation of writing more significant digits further to the left. Low-order bits 444, on the other hand, are the bits in the right-most positions. For example, in a unique identifier 440 having a bit address "2_E5", the high-order bits 442 refer to the left-most bit, i.e., "2", and the low-order bits 444 refer to the bits on the right side, i.e., "E5".
[048] In some embodiments, storage cell 430 is a private storage cell 432 or a shared storage cell 434. Private storage cells 432 can be located within private storage module 412. Shared storage cells 434 can be located within shared storage module 414. High-order bits 442 of unique identifier 440 are configured to indicate a target storage module for operand (270, 280), and low-order bits 444 of unique identifier 440 are configured to indicate a target storage cell, within the target storage module, for operand (270, 280). For example, unique identifier 440 having a bit address "2_E5" refers to storage module "2" and to storage cell "E5" within storage module "2". In other words, high-order bits 442 can also indicate the processing unit to which the storage module is "private", and low-order bits 444 indicate the location within the storage module.
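As a sketch only, the decoding of such a unique identifier might look as follows; the underscore-separated string format and hexadecimal interpretation are assumptions made for illustration.

```python
# Sketch of decoding a unique identifier such as "2_E5": the high-order part
# selects the target storage module (and hence the unit it is private to), and
# the low-order part selects the storage cell within that module.
def decode_identifier(identifier: str):
    high, low = identifier.split("_")
    module_index = int(high, 16)      # high-order bits -> target storage module
    cell_offset = int(low, 16)        # low-order bits  -> cell within the module
    return module_index, cell_offset

print(decode_identifier("2_E5"))      # -> (2, 229): module 2, cell 0xE5
```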
[049] It is to be appreciated that private storage cells 432 and shared storage cells 434 are physically indistinguishable storage cells and are not pre-labelled as such. The attribute of "private" or "shared" for a storage cell is determined based on the compiler-generated instructions programmed to address the data. For example, data can be stored in any cell. During a read step, if the compiler-generated instructions refer to the data as "private," the data may be read out in parallel as private input operand 272. Alternatively, if the compiler-generated instructions refer to the data as "shared," the data may be read out as shared input operand 274.
[050] In some embodiments, unique identifier 440 includes other characters, for example, numeric characters, alpha-numeric characters, hexadecimal numerals (e.g., shown in FIG. 4A), octal numerals, or the like, addressable by a software addressing mode.
[051] Referring back to FIG. 4A, processing unit array 240, or each of the multiple processing units, can generate output data 280. Output data 280 can be private output data 282 or shared output data 284, as determined by the operations in the next layer of a multi-layered algorithm for a neural network processor. As illustrated in FIG. 4A, output data 280 can be considered private output data 282 since it is written back to the unified storage medium in parallel, into each of the storage modules 410.
[052] In some embodiments, neural network processors comprise a compiler (not shown). The compiler is a program or computer software that transforms computer code written in one programming language into another programming language to create an executable program. In machine learning applications, a compiler can perform a variety of operations, for example, pre-processing, lexical analysis, parsing, semantic analysis, conversion of input programs to an intermediate representation, code optimization, and code generation, or combinations thereof.
[053] FIG. 5 is a process flowchart of an exemplary data organization operation 500, consistent with embodiments of the present disclosure. For example, data organization operation 500 can be performed by an on-chip communication system (e.g., on-chip communication system 110).
[054] Step 502 includes configuring a storage module (e.g., storage module 410) of a unified storage medium (e.g., unified storage medium 405) to include multiple storage cells (e.g., storage cells 430). In some embodiments, step 502 includes configuring a private storage module (e.g., private storage module 412) to include private storage cells (e.g., private storage cell 432) and/or a shared storage module (e.g., shared storage module 414) to include shared storage cells (e.g., shared storage cell 434). Configuring a storage module to include storage cells can comprise allocating storage space based on the total storage space available, software programs or algorithms, hardware limitations, time restrictions, and the like. If a software application or an algorithm is multi-layered and requires multiple layers of computation involving more shared data than private data, the storage module can be configured to comprise more shared storage cells or more shared storage modules.
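A simple sketch of such an allocation decision is shown below; the cell counts and the shared fraction are assumed example values, not values prescribed by the disclosure.

```python
# Sketch of step 502 under assumed numbers: allocating the cells of a storage
# module between private and shared use based on how much shared versus private
# data the algorithm needs. Names and proportions are illustrative.
def configure_storage_module(total_cells: int, shared_fraction: float):
    shared_cells = int(total_cells * shared_fraction)
    private_cells = total_cells - shared_cells
    return {"private_cells": private_cells, "shared_cells": shared_cells}

# A multi-layered algorithm with more shared than private data gets a larger
# shared region, as described above.
print(configure_storage_module(total_cells=256, shared_fraction=0.75))
```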
[055] Step 504 includes configuring the storage medium (e.g., unified storage medium 405 of FIG. 4A) to communicate with a processing unit (e.g., processing unit array 240) or multiple processing units. In some embodiments, the processing unit is an Arithmetic Logic Unit (ALU), a Floating Point Unit (FPU), a Central Processing Unit (CPU), or a Graphics Processing Unit (GPU). A single CPU can contain one or more ALUs. Generally, an ALU is a combinational digital electronic circuit that performs arithmetic and bitwise operations on integer binary numbers. The processing unit can include multiple processing units, for example, an array of processing units configured to operate in parallel.
[056] Communicating with a processing unit can include receiving data generated by the processing unit, or providing stored data to the processing unit. The storage medium can be the source of data to be computed on or the target of data storage. In some embodiments, the hardware system comprises a single processing unit configured to receive data from multiple storage modules. The hardware system can also include a unique processing unit for each storage module, configured to receive data only from the corresponding storage module.
[057] In step 506, the processing unit (e.g., processing unit array 240) generates output data (e.g., output data 280) based on instructions generated by a compiler. In some embodiments, the compiler may be a program or computer software that transforms computer code written in one programming language into another programming language to create an executable program. The compiler can generate a set of instructions configured to access data from a storage medium, execute a desired operation on the accessed data, generate output data based on the operation, and store the generated output data back into the storage medium for subsequent processing. The instructions can also assign a characteristic to the input and the output data. The characteristic of the data can be private, shared, restricted, or the like.
[058] In the example discussed here, the compiler generates the following code for the vector operation "A = X * W1", where "X" can be considered operand 2 and "W1" can be considered operand 1. The set of instructions will be described with reference to FIG. 4A, in accordance with embodiments of the disclosure.
i1: LOAD.SIMD OP1 0x0_00
i2: LOAD.SHARED OP2 0x0_F0
i3: MUL RESULT OP1 OP2
i4: STORE.SIMD RESULT 0x0_F1
[059] The instructions in the aforementioned set of instructions generally comprise an operation on the data, a characteristic of the data, and a target location within the storage medium.
[060] In some embodiments, the operation on the data includes load (reading), store (writing), arithmetic operations (e.g., addition, subtraction, multiplication, division), copy, paste, and the like. The characteristic of the data generally refers to the accessibility of the data within the storage medium and can include private, shared, restricted, allowed, global, local, or combinations thereof. Data, in general, is referred to as an operand. Data can be an input operand, for example, operand 1 (OP1) and operand 2 (OP2), or output data based on the vector operation being performed.
[061] In the set of instructions i1-i4, the subfield of the load/store instructions specifies how to load/store the data. Subfield ".SHARED" implies that the data should be read or written as shared data. In this mode, both the high-order bits (e.g., 442 of FIG. 4B) and the low-order bits (e.g., 444 of FIG. 4B) are utilized to determine the target location of the input operand or output data. Subfield ".SIMD" implies that the data should be read or written as private data in parallel, wherein the high-order bits can be disregarded by hardware and the low-order bits are utilized to determine the target location of the input operand or output data.
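The two subfields can be sketched as follows, assuming a "<module>_<cell>" address string and a toy memory; the function name and data layout are illustrative only.

```python
# Sketch of the two load subfields: LOAD.SHARED uses both address parts to pick
# one cell in one module and broadcasts it; LOAD.SIMD ignores the module part
# and reads the same cell offset from every module in parallel.
def load(memory, address: str, mode: str, num_modules: int = 4):
    module_str, cell_str = address.split("_")
    cell = int(cell_str, 16)
    if mode == "SHARED":
        module = int(module_str, 16)          # high-order bits are used
        value = memory[module][cell]
        return [value] * num_modules          # broadcast one value to all units
    if mode == "SIMD":
        # high-order bits are disregarded; each unit reads its own module
        return [memory[m][cell] for m in range(num_modules)]
    raise ValueError(f"unknown subfield: {mode}")

# Toy memory: 4 modules of 256 cells each.
memory = [[(m << 8) | c for c in range(256)] for m in range(4)]
print(load(memory, "0_F0", "SHARED"))   # one cell, broadcast to all units
print(load(memory, "0_00", "SIMD"))     # cell 0x00 from every module, in parallel
```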
[062] In instruction i1, each processing unit (e.g., 242, 244, 246, and 248 of FIG. 4A) reads input operand 1 (e.g., private input operand 272) in parallel. The high-order bit "0" in bit address "0_00" is not utilized, and the low-order bits "00" indicate the storage cell and a characteristic of the storage cell (e.g., private storage cell 432). For example, with reference to FIG. 2A, all data in row 1 of the "W1" array (W1_1j) is read out simultaneously but separately to each corresponding processing unit. The "LOAD.SIMD" field implies that the data should be read in parallel.
[063] In instruction i2, input operand 2 (e.g., shared input operand 274) is read once and broadcast to all processing units, as illustrated in FIG. 4A. The high-order bit "0" in bit address "0_F0" indicates the storage module where the data is stored, and the low-order bits "F0" indicate the storage cell and a characteristic of the storage cell in which the data is stored (e.g., shared storage cell 434). For example, with reference to FIG. 2A, the data "x1" of the "X" array is read out once and broadcast to each corresponding processing unit. The "LOAD.SHARED" field implies that the data should be read as shared data among all processing units.
[064] In instruction i3, the processing unit performs multiplication of input operands 1 and 2, as defined by the vector operation, to generate output data "A". The arithmetic operation can include basic arithmetic functions of addition, subtraction, multiplication, or division, or combinations thereof. In some embodiments, the processing unit is configured to perform complex arithmetic and algebraic functions, logarithmic functions, exponentiation, or the like.
[065] In instruction i4, the output data "A" generated by instruction i3 is stored back to the storage medium in parallel for further processing. The generated output data "A" (e.g., output data 280) can be used as an input operand in the next layer of the multi-layered algorithm. The high-order bit "0" in bit address "0_F1" is not utilized by hardware, and the low-order bits "F1" indicate the storage cell and a characteristic of the storage cell (e.g., shared storage cell 434) in which the output data is to be stored. For example, with reference to FIG. 2B, output data 280 may be temporarily held in a temporary storage (e.g., write buffer 260) before being stored in the shared or private storage module of the unified storage medium.
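For illustration, the four instructions can be traced end to end on toy data as sketched below; the address format, the stored values, and the dictionary-based memory model are assumptions and not the actual hardware ISA.

```python
# Sketch of an end-to-end trace of instructions i1-i4 on toy data, assuming a
# "<module>_<cell>" address format and one cell read/written per instruction.
NUM_UNITS = 4
mem = [dict() for _ in range(NUM_UNITS)]       # one storage module per unit

# Pre-load toy data: W1 row 1 (one element per private module) and shared x1.
for m in range(NUM_UNITS):
    mem[m][0x00] = 0.1 * (m + 1)               # W1_1(m+1), private to unit m
mem[0][0xF0] = 2.0                             # x1, stored once as shared data

# i1: LOAD.SIMD OP1 0x0_00   -> each unit reads cell 0x00 of its own module.
op1 = [mem[m][0x00] for m in range(NUM_UNITS)]
# i2: LOAD.SHARED OP2 0x0_F0 -> one read from module 0, broadcast to all units.
op2 = [mem[0][0xF0]] * NUM_UNITS
# i3: MUL RESULT OP1 OP2     -> each unit multiplies its private and shared operands.
result = [a * b for a, b in zip(op1, op2)]
# i4: STORE.SIMD RESULT 0x0_F1 -> each unit writes its result back in parallel.
for m in range(NUM_UNITS):
    mem[m][0xF1] = result[m]

print(result)   # partial products x1 * W1_1j, one per unit j
```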
[066] In step 508, generated output data is stored back in the unified storage medium for further processing. Generally, a neural network can be organized in multiple layers. The output of a layer can be passed onto a next layer for further processing. For example, an output of a previous layer can be an input for the next layer.
[067] In the foregoing specification, embodiments have been described with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments can be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims. It is also intended that the sequence of steps shown in the figures is only for illustrative purposes and is not intended to be limited to any particular sequence of steps. As such, those skilled in the art can appreciate that these steps can be performed in a different order while implementing the same method.
Claims
1. A unified memory apparatus comprising:
a unified storage medium including:
a first storage module having a first plurality of storage cells configured to store data, the first plurality of storage cells identified by a unique cell identifier;
a second storage module having a second plurality of storage cells configured to store data, the second plurality of storage cells identified by a unique cell identifier; and
a processing unit in communication with the unified storage medium, the processing unit configured to:
receive a first input data from one of the first plurality of storage cells, receive a second input data from one of the second plurality of storage cells, and generate an output data based on the first and second input data.
2. The unified memory apparatus of claim 1, wherein the unique cell identifier of the first and second plurality of storage cells comprises a bit address including a first plurality of bits and a second plurality of bits.
3. The unified memory apparatus of claim 2, wherein the first plurality of bits indicates a target storage module of the first and second storage modules, and wherein the second plurality of bits indicates a target storage cell of the first and second plurality of storage cells within the target storage module.
4. The unified memory apparatus of claims 2 and 3, wherein the second plurality of bits further indicates a characteristic associated with the target storage cell, the characteristic of the target storage cell being one of private or shared.
5. The unified memory apparatus of any of claims 1-4, wherein the first and second storage modules are configured to communicate with a corresponding processing unit.
6. The unified memory apparatus of claim 1, wherein the processing unit is configured to receive the first input data from a private storage cell, and the second input data from a shared storage cell.
7. The unified memory apparatus of any of claims 1-6, wherein the unified storage medium and the processing unit are configured to be uniformly addressed by a software code or a software program.
8. The unified memory apparatus of claim 1, wherein the unified storage medium is further configured to receive instructions from a compiler, the instructions including a characteristic associated with the data, wherein the characteristic associated with the data is one of private or shared.
9. The unified memory apparatus of claim 5, wherein the private storage cell is configured to store private data and the shared storage cell is configured to store shared data that can be shared across the multiple processing units.
10. A unified storage medium comprising:
a first storage module having a first plurality of storage cells configured to store data, the first plurality of storage cells identified by a unique cell identifier; and
a second storage module having a second plurality of storage cells configured to store data, the second plurality of storage cells identified by a unique cell identifier.
11. The unified storage medium of claim 10, wherein the unique cell identifier of the first and second plurality of storage cells comprises a bit address including a first plurality of bits and a second plurality of bits.
12. The unified storage medium of any of claims 10 and 11, wherein the first plurality of bits indicates a target storage module of the first and second storage modules, and wherein the second plurality of bits indicates a target storage cell of the first and second plurality of storage cells within the target storage module.
13. The unified storage medium of claim 12, wherein the second plurality of bits further indicates a characteristic associated with the target storage cell, the characteristic of the target storage cell being one of private or shared.
14. The unified storage medium of claim 10, wherein the first and second storage modules are configured to communicate with a corresponding processing unit.
15. The unified storage medium of claim 10, configured to receive instructions from a compiler, the instructions including a characteristic associated with the data, wherein the characteristic associated with the data is one of private or shared.
16. The unified storage medium of claim 13, wherein the private storage cell is configured to store private data and the shared storage cell is configured to store shared data that can be shared across the multiple processing units.
17. A method for organizing data in a unified memory apparatus having a unified storage medium and one or more processing units, the method comprising:
configuring a first storage module of the unified storage medium to communicate with the one or more processing units and to include a first plurality of storage cells that are configured to store data, the first plurality of storage cells identified by a unique cell identifier;
configuring a second storage module of the unified storage medium to communicate with the one or more processing units and to include a second plurality of storage cells that are configured to store data, the second plurality of storage cells identified by a unique cell identifier; and configuring a processing unit of the one or more processing units to:
receive a first input data from one of the first plurality of storage cells, receive a second input data from one of the second plurality of storage cells, and generate an output data based on the first and second input data.
18. The method of claim 17, further comprising receiving instructions from a compiler, the instructions including a characteristic associated with the output data, wherein the characteristic associated with the output data is one of private or shared.
19. The method of claim 17, wherein the unique cell identifier of the first and second plurality of storage cells comprises a bit address including a first plurality of bits and a second plurality of bits.
20. The method of claim 19, wherein the first plurality of bits indicates a target storage module of the first and second storage modules, and wherein the second plurality of bits indicates a target storage cell of the first and second plurality of storage cells within the target storage module.
21. The method of claim 20, wherein the second plurality of bits further indicates a characteristic associated with the target storage cell, the characteristic of the target storage cell being one of private or shared.
22. The method of claim 17, wherein the first and second storage modules are configured to communicate with a corresponding processing unit.
23. A method for organizing data in a unified storage medium having a first storage module and a second storage module, the method comprising:
configuring the first storage module of the unified storage medium to communicate with one or more processing units and to include a first plurality of storage cells that are configured to store data, the first plurality of storage cells identified by a unique cell identifier; and
configuring the second storage module of the unified storage medium to communicate with one or more processing units and to include a second plurality of storage cells that are configured to store data, the second plurality of storage cells identified by a unique cell identifier.
24. The method of claim 23, wherein the one or more processing units are configured to:
receive a first input data from one of the first plurality of storage cells;
receive a second input data from one of the second plurality of storage cells; and generate an output data based on the first and second input data.
25. The method of claim 23, wherein the unique cell identifier of the first and second plurality of storage cells comprises a bit address including a first plurality of bits and a second plurality of bits.
26. The method of claim 25, wherein the first plurality of bits indicates a target storage module of the first and second storage modules, and wherein the second plurality of bits indicates a target storage cell of the first and second plurality of storage cells within the target storage module.
27. The method of claim 26, wherein the second plurality of bits further indicates a characteristic associated with the target storage cell, the characteristic of the target storage cell being one of private or shared.
28. The method of claim 24, further comprising receiving instructions from a compiler to store the output data, the instructions including a characteristic associated with the output data.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201880074349.6A CN111630502B (en) | 2017-12-22 | 2018-12-21 | Unified memory organization for neural network processors |
EP18890583.0A EP3729279A4 (en) | 2017-12-22 | 2018-12-21 | A unified memory organization for neural network processors |
JP2020532976A JP7266602B2 (en) | 2017-12-22 | 2018-12-21 | Unified memory structure for neural network processors |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201762610119P | 2017-12-22 | 2017-12-22 | |
US62/610,119 | 2017-12-22 | ||
US15/984,255 US11436143B2 (en) | 2017-12-22 | 2018-05-18 | Unified memory organization for neural network processors |
US15/984,255 | 2018-05-18 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019126758A1 true WO2019126758A1 (en) | 2019-06-27 |
Family
ID=66949585
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2018/067301 WO2019126758A1 (en) | 2017-12-22 | 2018-12-21 | A unified memory organization for neural network processors |
Country Status (5)
Country | Link |
---|---|
US (1) | US11436143B2 (en) |
EP (1) | EP3729279A4 (en) |
JP (1) | JP7266602B2 (en) |
CN (1) | CN111630502B (en) |
WO (1) | WO2019126758A1 (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10977338B1 (en) | 2018-04-20 | 2021-04-13 | Perceive Corporation | Reduced-area circuit for dot product computation |
US12093696B1 (en) | 2018-04-20 | 2024-09-17 | Perceive Corporation | Bus for transporting output values of a neural network layer to cores specified by configuration data |
US11531727B1 (en) | 2018-04-20 | 2022-12-20 | Perceive Corporation | Computation of neural network node with large input values |
US11586910B1 (en) * | 2018-04-20 | 2023-02-21 | Perceive Corporation | Write cache for neural network inference circuit |
US11468145B1 (en) | 2018-04-20 | 2022-10-11 | Perceive Corporation | Storage of input values within core of neural network inference circuit |
US11568227B1 (en) | 2018-04-20 | 2023-01-31 | Perceive Corporation | Neural network inference circuit read controller with multiple operational modes |
US11783167B1 (en) | 2018-04-20 | 2023-10-10 | Perceive Corporation | Data transfer for non-dot product computations on neural network inference circuit |
US11995533B1 (en) | 2018-12-05 | 2024-05-28 | Perceive Corporation | Executing replicated neural network layers on inference circuit |
FR3089649A1 (en) * | 2018-12-06 | 2020-06-12 | Stmicroelectronics (Rousset) Sas | Method and device for determining the global memory size of a global memory area allocated to the data of a neural network |
FR3094104A1 (en) | 2019-03-20 | 2020-09-25 | Stmicroelectronics (Rousset) Sas | Method and device for determining the overall memory size of a global memory area allocated to data from a neural network taking into account its topology |
US11615322B1 (en) | 2019-05-21 | 2023-03-28 | Perceive Corporation | Compiler for implementing memory shutdown for neural network implementation configuration |
KR20230168574A (en) | 2022-06-07 | 2023-12-14 | 리벨리온 주식회사 | Method for using shared page table of neural processing device and method for assigning physical page of the same |
KR102509472B1 (en) * | 2022-06-07 | 2023-03-14 | 리벨리온 주식회사 | Neural processing device and Method for using shared page table thereof |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH02292684A (en) * | 1989-05-06 | 1990-12-04 | Takayama:Kk | Picture recognizing system |
US5956703A (en) * | 1995-07-28 | 1999-09-21 | Delco Electronics Corporation | Configurable neural network integrated circuit |
JP2001290699A (en) | 2000-04-10 | 2001-10-19 | Matsushita Electric Ind Co Ltd | Access device for dual port ram |
US7073044B2 (en) * | 2001-03-30 | 2006-07-04 | Intel Corporation | Method and apparatus for sharing TLB entries |
GB0623276D0 (en) * | 2006-11-22 | 2007-01-03 | Transitive Ltd | Memory consistency protection in a multiprocessor computing system |
US8271763B2 (en) | 2009-09-25 | 2012-09-18 | Nvidia Corporation | Unified addressing and instructions for accessing parallel memory spaces |
US8990506B2 (en) * | 2009-12-16 | 2015-03-24 | Intel Corporation | Replacing cache lines in a cache memory based at least in part on cache coherency state information |
US8868848B2 (en) * | 2009-12-21 | 2014-10-21 | Intel Corporation | Sharing virtual memory-based multi-version data between the heterogenous processors of a computer platform |
US8982140B2 (en) * | 2010-09-24 | 2015-03-17 | Nvidia Corporation | Hierarchical memory addressing |
US9274960B2 (en) * | 2012-03-20 | 2016-03-01 | Stefanos Kaxiras | System and method for simplifying cache coherence using multiple write policies |
US9009419B2 (en) * | 2012-07-31 | 2015-04-14 | Advanced Micro Devices, Inc. | Shared memory space in a unified memory model |
EP2885708A4 (en) * | 2012-08-20 | 2016-11-09 | D Kevin Cameron | Processing resource allocation |
US9563425B2 (en) * | 2012-11-28 | 2017-02-07 | Intel Corporation | Instruction and logic to provide pushing buffer copy and store functionality |
US9733995B2 (en) * | 2014-12-17 | 2017-08-15 | Intel Corporation | Scalable synchronization mechanism for distributed memory |
US10664751B2 (en) * | 2016-12-01 | 2020-05-26 | Via Alliance Semiconductor Co., Ltd. | Processor with memory array operable as either cache memory or neural network unit memory |
US20170060736A1 (en) * | 2015-12-09 | 2017-03-02 | Mediatek Inc. | Dynamic Memory Sharing |
2018
- 2018-05-18 US US15/984,255 patent/US11436143B2/en active Active
- 2018-12-21 CN CN201880074349.6A patent/CN111630502B/en active Active
- 2018-12-21 JP JP2020532976A patent/JP7266602B2/en active Active
- 2018-12-21 EP EP18890583.0A patent/EP3729279A4/en active Pending
- 2018-12-21 WO PCT/US2018/067301 patent/WO2019126758A1/en unknown
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030200378A1 (en) | 2002-04-22 | 2003-10-23 | Micron Technology, Inc. | Providing a register file memory with local addressing in a SIMD parallel processor |
US20150248353A1 (en) | 2004-08-13 | 2015-09-03 | Rambus Inc. | Processor memory system |
EP3035204A1 (en) | 2014-12-19 | 2016-06-22 | Intel Corporation | Storage device and method for performing convolution operations |
US20160232107A1 (en) | 2015-02-05 | 2016-08-11 | Alberto Ros | Systems and methods for coherence in clustered cache hierarchies |
US20160283399A1 (en) | 2015-03-27 | 2016-09-29 | Intel Corporation | Pooled memory address translation |
GB2543520A (en) | 2015-10-20 | 2017-04-26 | Advanced Risc Mach Ltd | Memory access instructions |
Non-Patent Citations (1)
Title |
---|
See also references of EP3729279A4 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113204478A (en) * | 2021-04-06 | 2021-08-03 | 北京百度网讯科技有限公司 | Method, device and equipment for running test unit and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN111630502A (en) | 2020-09-04 |
US20190196970A1 (en) | 2019-06-27 |
EP3729279A4 (en) | 2021-03-03 |
JP2021507383A (en) | 2021-02-22 |
CN111630502B (en) | 2024-04-16 |
JP7266602B2 (en) | 2023-04-28 |
US11436143B2 (en) | 2022-09-06 |
EP3729279A1 (en) | 2020-10-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11436143B2 (en) | Unified memory organization for neural network processors | |
US11714780B2 (en) | Compiler flow logic for reconfigurable architectures | |
KR20220054357A (en) | Method for performing PROCESSING-IN-MEMORY (PIM) operations on serially allocated data, and related memory devices and systems | |
JP7264897B2 (en) | Memory device and method for controlling same | |
US10970043B2 (en) | Programmable multiply-add array hardware | |
US11921814B2 (en) | Method and device for matrix multiplication optimization using vector registers | |
CN114341802A (en) | Method for performing in-memory processing operations and related memory device and system | |
US20090172352A1 (en) | Dynamic reconfigurable circuit | |
US10915317B2 (en) | Multiple-pipeline architecture with special number detection | |
US7526632B1 (en) | System, apparatus and method for implementing multifunctional memory in reconfigurable data path processing | |
CN116431214A (en) | Instruction set device for reconfigurable deep neural network accelerator | |
CN115878075A (en) | Priority encoder based techniques to compute minimum or maximum of multiple values |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18890583 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2020532976 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2018890583 Country of ref document: EP Effective date: 20200722 |