US20150074680A1 - Method and apparatus for asynchronous processor with a token ring based parallel processor scheduler - Google Patents
Method and apparatus for asynchronous processor with a token ring based parallel processor scheduler
- Publication number
- US20150074680A1 (U.S. patent application Ser. No. 14/480,561)
- Authority
- US
- United States
- Prior art keywords: token, processing, processing component, component, accordance
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F9/30145—Instruction analysis, e.g. decoding, instruction word fields
- G06F1/08—Clock generators with changeable or programmable clock frequency
- G06F1/10—Distribution of clock signals, e.g. skew
- G06F15/8053—Vector processors
- G06F15/8092—Array of vector units
- G06F9/30036—Instructions to perform operations on packed data, e.g. vector, tile or matrix operations
- G06F9/30189—Instruction operation extension or modification according to execution mode, e.g. mode flag
- G06F9/3826—Bypassing or forwarding of data results, e.g. locally between pipeline stages or within a pipeline stage
- G06F9/3828—Bypassing or forwarding of data results, e.g. locally between pipeline stages or within a pipeline stage with global bypass, e.g. between pipelines, between clusters
- G06F9/3836—Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution
- G06F9/3851—Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution from multiple instruction streams, e.g. multistreaming
- G06F9/3853—Instruction issuing, e.g. dynamic instruction scheduling or out of order instruction execution of compound instructions
- G06F9/3871—Asynchronous instruction pipeline, e.g. using handshake signals between stages
- G06F9/3877—Concurrent instruction execution, e.g. pipeline or look ahead using a slave processor, e.g. coprocessor
- G06F9/3885—Concurrent instruction execution, e.g. pipeline or look ahead using a plurality of independent parallel functional units
- G06F9/3889—Concurrent instruction execution, e.g. pipeline or look ahead using a plurality of independent parallel functional units controlled by multiple instructions, e.g. MIMD, decoupled access or execute
- G06F9/3891—Concurrent instruction execution, e.g. pipeline or look ahead using a plurality of independent parallel functional units controlled by multiple instructions, e.g. MIMD, decoupled access or execute organised in groups of units sharing resources, e.g. clusters
- G06F9/5011—Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resources being hardware resources other than CPUs, Servers and Terminals
- G06F15/8007—Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors single instruction multiple data [SIMD] multiprocessors
- G06F2009/3883—Two-engine architectures, i.e. stand-alone processor acting as a slave processor
Definitions
- FIG. 6 illustrates token based pipelining with gating within an ALU, also referred to herein as token based pipelining for an intra-ALU token gating system 2800.
- The intra-ALU token gating system 2800 comprises a plurality of tokens, including a launch token 2802 associated with starting and decoding an instruction, a register access token 2804 associated with reading values from a register file, a jump token 2806 associated with a program counter jump, a memory access token 2808 associated with accessing a memory, an instruction pre-fetch token 2810 associated with fetching the next instruction, an other resources token 2812 associated with use of other resources, and a commit token 2814 associated with register and memory commit.
- Designated tokens are used to gate other designated tokens in a given order of the pipeline. This means that when a designated token passes through an ALU, a second designated token is then allowed to be processed and passed by the same ALU in the token ring architecture. In other words, releasing one token by the ALU becomes a condition for consuming (processing) another token in that ALU in that given order.
- A particular example of a token-gating relationship is illustrated in FIG. 6. It will be appreciated by one skilled in the art that other token-gating relationships may be used.
- In this example, the launch token (L) 2802 gates the register access token (R) 2804, which in turn gates the jump token (PC token) 2806.
- The jump token 2806 gates the memory access token (M) 2808, the instruction pre-fetch token (F) 2810, and possibly other resource tokens 2812 that may be used. This means that tokens M 2808, F 2810, and other resource tokens 2812 can only be consumed by the ALU after passing the jump token 2806.
- These tokens gate the commit token (W) 2814 to register or memory.
- The commit token 2814 is also referred to herein as a token for writing the instruction.
- The commit token 2814 in turn gates the launch token 2802.
- The gating signal from the gating token (a token in the pipeline) is used as input into the consumption condition logic of the gated token (the token in the next order of the pipeline).
- For example, the launch token (L) 2802 generates an active signal to the register access or read token (R) 2804 when the launch token (L) 2802 is released to the next ALU. This guarantees that an ALU does not read the register file until an instruction has actually been started by the launch token 2802.
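- The gating order above can be made concrete with a small software model. The following Python sketch is illustrative only: the class name, the gating table, and the seeded commit state are hypothetical conveniences (the "other resources" token 2812 is omitted), not structures taken from the patent.

```python
# Illustrative model of the intra-ALU token gating order (hypothetical names).
# A token may be consumed by an ALU only after this ALU has released the
# token that gates it: L -> R -> PC -> {M, F} -> W -> L. The "other
# resources" token 2812 is omitted for brevity.

GATING = {            # gated token -> token(s) that must be released first
    "R": "L",         # register access is gated by launch
    "PC": "R",        # jump (program counter) is gated by register access
    "M": "PC",        # memory access is gated by jump
    "F": "PC",        # instruction pre-fetch is gated by jump
    "W": ("M", "F"),  # commit is gated by the resource tokens
    "L": "W",         # launching the next instruction is gated by commit
}

class ALUTokenState:
    """Tracks which tokens this ALU has released for the current instruction."""

    def __init__(self):
        self.released = set()

    def can_consume(self, token):
        gate = GATING.get(token)
        if gate is None:
            return True
        gates = gate if isinstance(gate, tuple) else (gate,)
        return all(g in self.released for g in gates)

    def consume_and_release(self, token):
        # Consuming means using the associated resource and then passing the
        # token on; releasing it is what enables the token(s) it gates.
        assert self.can_consume(token), f"token {token} is still gated"
        self.released.add(token)
        print("consumed and released:", token)

alu = ALUTokenState()
alu.released.add("W")   # stands in for the previous instruction's commit
for token in ["L", "R", "PC", "M", "F", "W"]:
    alu.consume_and_release(token)   # proceeds strictly in the gated order
```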
- FIG. 7 illustrates token based pipelining for an inter-ALU token passing system 2900.
- The inter-ALU token passing system 2900 comprises a first ALU 2902 and a second ALU 2904.
- A consumed token signal triggers a pulse to a common resource.
- For example, the register access (read) token 2804 in the first ALU 2902 triggers a pulse to the register file (not shown).
- The token signal is delayed for a period of time before it is released to the next ALU (e.g., the second ALU 2904), such that there is no structural hazard on this common resource (e.g., the register file) between the first ALU 2902 and the second ALU 2904.
- In this manner, the tokens not only ensure that the multiple ALUs launch and commit (or write) instructions in program counter (PC) order, but also avoid structural hazards among the multiple ALUs.
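- A minimal timing sketch of this inter-ALU passing, under assumed numbers: the token release is delayed by at least the shared resource's busy time, so consecutive ALUs can never overlap their accesses to that resource. The delay values and function names are hypothetical.

```python
# Illustrative timing model of inter-ALU token passing (assumed numbers).
# The ALU that consumes a token pulses the shared resource, then delays the
# token release so the next ALU cannot create a structural hazard on it.

RESOURCE_BUSY_TIME = 5     # time units one access to the shared resource takes
TOKEN_RELEASE_DELAY = 5    # delay inserted before the token is passed onward

def pass_token_through_alus(num_alus, start_time=0):
    """Returns (alu, access_start, access_end, release_time) tuples."""
    schedule = []
    token_arrival = start_time
    for alu in range(num_alus):
        access_start = token_arrival               # pulse to the common resource
        access_end = access_start + RESOURCE_BUSY_TIME
        release_time = access_start + TOKEN_RELEASE_DELAY
        schedule.append((alu, access_start, access_end, release_time))
        token_arrival = release_time               # next ALU sees the token now
    return schedule

for alu, start, end, release in pass_token_through_alus(num_alus=4):
    print(f"ALU{alu}: resource busy {start}-{end}, token released at {release}")

# Because TOKEN_RELEASE_DELAY >= RESOURCE_BUSY_TIME, ALU n+1 can only start its
# access after ALU n's access has completed, so the resource is never shared.
```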
- FIG. 8 illustrates a block diagram of an exemplary token ring based array architecture 2700 .
- The token ring based array architecture 2700 comprises a plurality of processing units 2702, a token signal path or ring 2704 comprising a plurality of tokens, a multiplexor 2706, and a plurality of external resources 2708 shared between the processing units 2702.
- The processing units 2702 are identical in design and function to one another.
- The processing units 2702 implement arithmetic and logic units (ALUs).
- The ALUs 2702 may be asynchronous units.
- The token ring 2704 allows propagation of a token through the ALUs 2702.
- Token processing logic is provided (not shown) for propagating the token from one ALU to another ALU amongst the ALUs 2702 along the token ring 2704.
- The token processing logic is configured to propagate the token between the ALUs 2702 at a propagation rate that is related to a transaction rate of the shared external resource 2708. For example, the rate at which an ALU completes a transaction may vary depending on the specific transaction requested.
- Each token in the token ring 2704 is a signal indicator for the availability of one or more of the external resources 2708.
- The token is such that only one ALU amongst the ALUs 2702 can possess it at any given time.
- Possession of the token by a given ALU enables the given ALU to conduct a transaction with the shared external resource 2708.
- Lack of possession of the token by the given ALU prevents the given ALU from conducting a transaction with the shared external resource 2708. In this manner, the token prevents more than one ALU from conducting a transaction with the external resource 2708 at a given time.
- After a given ALU conducts a transaction with the shared external resource 2708, or if the given ALU does not wish to conduct a transaction with the shared external resource 2708, the ALU releases or "passes" the token to the next ALU. Serialized in this way, multiple ALUs can share a common external resource. As illustrated, multiple tokens may be required to control access to the shared external resources 2708 via an N-bit selection control signal 2712 and the multiplexor 2706.
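- The arbitration idea can be sketched as follows, with hypothetical names and requests: the selection control tracks the token holder, so the multiplexor connects exactly one ALU at a time to the shared external resource, and an ALU that has nothing to do simply passes the token on.

```python
# Illustrative sketch of token-based arbitration with a multiplexor
# (hypothetical names and requests). The selection follows the token, so
# exactly one ALU at a time is connected to the shared external resource.

def multiplexor(select, alu_request_lines):
    """Drives the shared resource from the selected ALU's request line."""
    return alu_request_lines[select]

def run_token_ring(alu_request_lines, turns):
    holder = 0                                   # index of the token-holding ALU
    for _ in range(turns):
        request = multiplexor(holder, alu_request_lines)
        if request is not None:                  # possession enables a transaction
            print(f"ALU{holder} transacts with the shared resource: {request}")
        else:                                    # nothing to do: just pass the token
            print(f"ALU{holder} passes the token without a transaction")
        holder = (holder + 1) % len(alu_request_lines)

run_token_ring(["read r3", None, "load 0x40", "write r7"], turns=4)
```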
- FIG. 9 illustrates an exemplary embodiment of a token ring based parallel processor asynchronous scheduler 3000 .
- A multiple token ring 3010, similar to the token ring 2704 described above with respect to FIG. 8, is used to control access to different external common resources 2708 among a first ALU (e.g., ALU 0) 2902, a second ALU (e.g., ALU 1) 2904, etc.
- Token dependency and gating, similar to the intra-ALU token gating system 2800 described above with respect to FIG. 6, is used to form a pipeline with different stages within a given ALU.
- In this manner, multiple asynchronous ALUs can be combined as a parallel processor 3030.
- Natural pipeline stages may be formed, unlike a synchronous processor that has fixed-period pipeline stages.
- FIG. 10 illustrates a more detailed view of the token ring based parallel processor asynchronous scheduler of FIG. 9, where token ring signal paths of the tokens are illustrated by "dashed" lines, and where token dependence signal paths of the tokens are illustrated by "solid" lines.
- Inter-ALU token passing as described above with respect to FIG. 7 is illustrated by the launch token 2802 being passed from ALU 0 2902 to ALU 1 2904 via token ring signal path 3104.
- Intra-ALU token passing as described above with respect to FIG. 6 is illustrated by the launch token dependency signal 3106 from the launch token 2802 gating the register access token 2804.
- The other tokens (e.g., the register access token (R) 2804, the jump token (PC token) 2806, etc.) are passed and gated in a similar manner along their respective token ring signal paths and token dependency signal paths.
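- A minimal scheduling sketch, assuming round-robin dispatch and one time unit per token hop (both assumptions, not details taken from the patent): because the launch token and the commit token each circulate the ring in order, instructions launch and commit in program counter order even though each self-timed ALU has a different execution latency; per-ALU occupancy limits are ignored for brevity.

```python
# Illustrative scheduling model (assumptions: round-robin dispatch, 1.0 time
# units per token hop, per-ALU occupancy ignored). The launch token and the
# commit token each visit the ALUs in ring order, so launches and commits
# stay in program counter (PC) order despite variable self-timed latencies.

def schedule(instructions, alu_latencies):
    num_alus = len(alu_latencies)
    launch_token_free = 0.0     # when the launch token reaches the next ALU
    commit_token_free = 0.0     # when the commit token reaches the next ALU
    timeline = []
    for i, instr in enumerate(instructions):
        alu = i % num_alus                      # assumed round-robin dispatch
        launch = launch_token_free              # must hold the launch token
        launch_token_free = launch + 1.0        # pass launch token onward
        done = launch + alu_latencies[alu]      # self-timed, variable latency
        commit = max(done, commit_token_free)   # must hold the commit token
        commit_token_free = commit + 1.0        # pass commit token onward
        timeline.append((instr, alu, launch, commit))
    return timeline

for instr, alu, launch, commit in schedule(["i0", "i1", "i2", "i3", "i4"],
                                           alu_latencies=[3.0, 7.0, 2.0]):
    print(f"{instr} on ALU{alu}: launch at t={launch}, commit at t={commit}")
```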
- FIG. 11 illustrates an example communication system 1400 that may be used for implementing the devices and methods disclosed herein.
- The system 1400 enables multiple wireless users to transmit and receive data and other content.
- The system 1400 may implement one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), or single-carrier FDMA (SC-FDMA).
- The communication system 1400 includes user equipment (UE) 1410 a - 1410 c, radio access networks (RANs) 1420 a - 1420 b, a core network 1430, a public switched telephone network (PSTN) 1440, the Internet 1450, and other networks 1460. While certain numbers of these components or elements are shown in FIG. 11, any number of these components or elements may be included in the system 1400.
- The UEs 1410 a - 1410 c are configured to operate and/or communicate in the system 1400.
- The UEs 1410 a - 1410 c are configured to transmit and/or receive wireless signals or wired signals.
- Each UE 1410 a - 1410 c represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device (UE), wireless transmit/receive unit (WTRU), mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, wireless sensor, or consumer electronics device.
- The RANs 1420 a - 1420 b here include base stations 1470 a - 1470 b, respectively.
- Each base station 1470 a - 1470 b is configured to wirelessly interface with one or more of the UEs 1410 a - 1410 c to enable access to the core network 1430 , the PSTN 1440 , the Internet 1450 , and/or the other networks 1460 .
- The base stations 1470 a - 1470 b may include (or be) one or more of several well-known devices, such as a base transceiver station (BTS), a Node-B (NodeB), an evolved NodeB (eNodeB), a Home NodeB, a Home eNodeB, a site controller, an access point (AP), a wireless router, or a server, router, switch, or other processing entity with a wired or wireless network.
- The base station 1470 a forms part of the RAN 1420 a, which may include other base stations, elements, and/or devices.
- The base station 1470 b forms part of the RAN 1420 b, which may include other base stations, elements, and/or devices.
- Each base station 1470 a - 1470 b operates to transmit and/or receive wireless signals within a particular geographic region or area, sometimes referred to as a “cell.”
- The base stations 1470 a - 1470 b communicate with one or more of the UEs 1410 a - 1410 c over one or more air interfaces 1490 using wireless communication links.
- The air interfaces 1490 may utilize any suitable radio access technology.
- The system 1400 may use multiple channel access functionality, including such schemes as described above.
- The base stations and UEs implement LTE, LTE-A, and/or LTE-B.
- The RANs 1420 a - 1420 b are in communication with the core network 1430 to provide the UEs 1410 a - 1410 c with voice, data, application, Voice over Internet Protocol (VoIP), or other services. Understandably, the RANs 1420 a - 1420 b and/or the core network 1430 may be in direct or indirect communication with one or more other RANs (not shown).
- The core network 1430 may also serve as a gateway access for other networks (such as the PSTN 1440, the Internet 1450, and the other networks 1460).
- Some or all of the UEs 1410 a - 1410 c may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols.
- Although FIG. 11 illustrates one example of a communication system, the communication system 1400 could include any number of UEs, base stations, networks, or other components in any suitable configuration, and can further include the EPC illustrated in any of the figures herein.
- FIGS. 12A and 12B illustrate example devices that may implement the methods and teachings according to this disclosure.
- FIG. 12A illustrates an example UE 1410, and FIG. 12B illustrates an example base station 1470.
- These components could be used in the system 1400 or in any other suitable system.
- The UE 1410 includes at least one processing unit 1500.
- The processing unit 1500 implements various processing operations of the UE 1410.
- The processing unit 1500 could perform signal coding, data processing, power control, input/output processing, or any other functionality enabling the UE 1410 to operate in the system 1400.
- The processing unit 1500 also supports the methods and teachings described in more detail above.
- Each processing unit 1500 includes any suitable processing or computing device configured to perform one or more operations.
- Each processing unit 1500 could, for example, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit.
- The processing unit 1500 may be an asynchronous processor as described herein.
- The UE 1410 also includes at least one transceiver 1502.
- The transceiver 1502 is configured to modulate data or other content for transmission by at least one antenna 1504.
- The transceiver 1502 is also configured to demodulate data or other content received by the at least one antenna 1504.
- Each transceiver 1502 includes any suitable structure for generating signals for wireless transmission and/or processing signals received wirelessly.
- Each antenna 1504 includes any suitable structure for transmitting and/or receiving wireless signals.
- One or multiple transceivers 1502 could be used in the UE 1410 , and one or multiple antennas 1504 could be used in the UE 1410 .
- A transceiver 1502 could also be implemented using at least one transmitter and at least one separate receiver.
- The UE 1410 further includes one or more input/output devices 1506.
- The input/output devices 1506 facilitate interaction with a user.
- Each input/output device 1506 includes any suitable structure for providing information to or receiving information from a user, such as a speaker, microphone, keypad, keyboard, display, or touch screen.
- The UE 1410 includes at least one memory 1508.
- The memory 1508 stores instructions and data used, generated, or collected by the UE 1410.
- The memory 1508 could store software or firmware instructions executed by the processing unit(s) 1500 and data used to reduce or eliminate interference in incoming signals.
- Each memory 1508 includes any suitable volatile and/or non-volatile storage and retrieval device(s). Any suitable type of memory may be used, such as random access memory (RAM), read only memory (ROM), hard disk, optical disc, subscriber identity module (SIM) card, memory stick, secure digital (SD) memory card, and the like.
- The base station 1470 includes at least one processing unit 1500, at least one transmitter 1552, at least one receiver 1554, one or more antennas 1556, one or more network interfaces 1560, and at least one memory 1558.
- The processing unit 1500 implements various processing operations of the base station 1470, such as signal coding, data processing, power control, input/output processing, or any other functionality.
- The processing unit 1500 can also support the methods and teachings described in more detail above.
- Each processing unit 1500 includes any suitable processing or computing device configured to perform one or more operations.
- Each processing unit 1500 could, for example, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit.
- The processing unit 1500 may be an asynchronous processor as described herein.
- Each transmitter 1552 includes any suitable structure for generating signals for wireless transmission to one or more UEs or other devices.
- Each receiver 1554 includes any suitable structure for processing signals received wirelessly from one or more UEs or other devices. Although shown as separate components, at least one transmitter 1552 and at least one receiver 1554 could be combined into a transceiver.
- Each antenna 1556 includes any suitable structure for transmitting and/or receiving wireless signals. While a common antenna 1556 is shown here as being coupled to both the transmitter 1552 and the receiver 1554 , one or more antennas 1556 could be coupled to the transmitter(s) 1552 , and one or more separate antennas 1556 could be coupled to the receiver(s) 1554 .
- Each memory 1558 includes any suitable volatile and/or non-volatile storage and retrieval device(s).
- In some embodiments, some or all of the functions or processes of the one or more of the devices are implemented or supported by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium.
- The phrase "computer readable program code" includes any type of computer code, including source code, object code, and executable code.
- The phrase "computer readable medium" includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory.
Abstract
A method of operating a clock-less asynchronous processing system comprising a plurality of successive asynchronous processing components. The method comprises providing a first token signal path in the plurality of processing components to allow propagation of a token through the processing components. Possession of the token by one of the processing components enables the processing component to conduct a transaction with a resource component that is shared among the processing components. The method comprises propagating the token from one processing component to another processing component along the token signal path.
Description
- This application claims priority under 35 USC 119(e) to U.S. Provisional Application Ser. Nos. 61/874,794, 61/874,810, 61/874,856, 61/874,914, 61/874,880, 61/874,889, and 61/874,866, all filed on Sep. 6, 2013, and all of which are incorporated herein by reference.
- This application is related to:
- U.S. patent application Ser. No. ______ entitled “METHOD AND APPARATUS FOR ASYNCHRONOUS PROCESSOR WITH FAST AND SLOW MODE” and filed on the same date herewith, and identified by attorney docket number HUAW07-06583, and which is incorporated herein by reference;
- U.S. patent application Ser. No. ______ entitled “METHOD AND APPARATUS FOR ASYNCHRONOUS PROCESSOR REMOVAL OF META-STABILITY” and filed on the same date herewith, and identified by attorney docket number HUAW07-06400, and which is incorporated herein by reference;
- U.S. patent application Ser. No. ______ entitled “METHOD AND APPARATUS FOR ASYNCHRONOUS PROCESSOR WITH A TOKEN RING BASED PARALLEL PROCESSOR SCHEDULER” and filed on the same date herewith, and identified by attorney docket number HUAW07-06376, and which is incorporated herein by reference;
- U.S. patent application Ser. No. ______ entitled “METHOD AND APPARATUS FOR ASYNCHRONOUS PROCESSOR PIPELINE AND BYPASS PASSING” and filed on the same date herewith, and identified by attorney docket number HUAW07-06364, and which is incorporated herein by reference; and
- U.S. patent application Ser. No. ______ entitled “METHOD AND APPARATUS FOR ASYNCHRONOUS PROCESSOR BASED ON CLOCK DELAY ADJUSTMENT” and filed on the same date herewith, and identified by attorney docket number HUAW07-06351, and which is incorporated herein by reference.
- The present disclosure relates generally to asynchronous processors, and more particularly to an asynchronous processor with a token ring based parallel processor scheduler.
- High performance synchronous digital processing systems utilize pipelining to increase parallel performance and throughput. In synchronous systems, pipelining results in many partitioned or subdivided smaller blocks or stages and a system clock is applied to registers between the blocks/stages. The system clock initiates movement of the processing and data from one stage to the next, and the processing in each stage must be completed during one fixed clock cycle. When certain stages take less time than a clock cycle to complete processing, the next processing stages must wait—increasing processing delays (which are additive).
- In contrast, asynchronous systems (i.e., clockless) do not utilize a system clock and each processing stage is intended, in general terms, to begin its processing upon completion of processing in the prior stage. Several benefits or features are present with asynchronous processing systems: each processing stage can have a different processing delay, the input data can be processed upon arrival, and power is consumed only on demand.
- FIG. 1 illustrates a prior art Sutherland asynchronous micro-pipeline architecture 100. The Sutherland asynchronous micro-pipeline architecture is one form of asynchronous micro-pipeline architecture that uses a handshaking protocol built by Muller-C elements to control the micro-pipeline building blocks. The architecture 100 includes a plurality of computing logic 102 linked in sequence via flip-flops or latches 104 (e.g., registers). Control signals are passed between the computing blocks via Muller C-elements 106 and delayed via delay logic 108. Further information describing this architecture 100 is published by Ivan Sutherland in Communications of the ACM Volume 32 Issue 6, June 1989, pages 720-738, ACM New York, N.Y., USA, which is incorporated herein by reference.
- Now turning to FIG. 2, there is illustrated a typical section or processing stage of a synchronous system 200. The system 200 includes flip-flops or registers 202, 204 for clocking an output signal (data) 206 from a logic block 210. On the right side of FIG. 2 there is shown an illustration of the concept of meta-stability. Set-up times and hold times must be considered to avoid meta-stability. In other words, the data must be valid and held during the set-up time and the hold time, otherwise a set-up violation 212 or a hold violation 214 may occur. If either of these violations occurs, the synchronous system may malfunction. The concept of meta-stability also applies to asynchronous systems. Therefore, it is important to design asynchronous systems to avoid meta-stability. In addition, like synchronous systems, asynchronous systems also need to address various potential data/instruction hazards, and should include a bypassing mechanism and pipeline interlock mechanism to detect and resolve hazards.
- Accordingly, there are needed asynchronous processing systems, asynchronous processors, and methods of asynchronous processing that are stable and detect and resolve potential hazards.
- According to one embodiment, there is provided a method of operating a clock-less asynchronous processing system comprising a plurality of successive asynchronous processing components. The method comprises providing a first token signal path in the plurality of processing components to allow propagation of a token through the processing components. Possession of the token by one of the processing components enables the processing component to conduct a transaction with a resource component that is shared among the processing components. The method comprises propagating the token from one processing component to another processing component along the token signal path.
- In another embodiment, there is provided a clock-less asynchronous processing system. The processing system comprises a plurality of successive asynchronous processing components, each processing component comprising token processing logic configured to receive, hold and pass a token from a given processing component to another processing component. The token processing logic comprises a token signal path in the plurality of processing components to allow propagation of the token through the processing components. Possession of the token by one of the processing components enables the processing component to conduct a transaction with a resource component that is shared among the processing components.
- For a more complete understanding of the present disclosure, and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, wherein like numbers designate like objects, and in which:
- FIG. 1 illustrates a prior art asynchronous micro-pipeline architecture;
- FIG. 2 is a block diagram illustrating the concept of meta-stability in a synchronous system;
- FIG. 3A illustrates an asynchronous processing system in accordance with disclosed embodiments of the present disclosure;
- FIG. 4 illustrates an example of a token ring architecture in accordance with disclosed embodiments of the present disclosure;
- FIG. 5 illustrates an example of an asynchronous processor architecture in accordance with disclosed embodiments of the present disclosure;
- FIG. 6 illustrates token based pipelining with gating within an ALU in accordance with disclosed embodiments of the present disclosure;
- FIG. 7 illustrates token based pipelining for an inter-ALU token passing system in accordance with disclosed embodiments of the present disclosure;
- FIG. 8 illustrates a block diagram of an exemplary token ring based array architecture in accordance with disclosed embodiments of the present disclosure;
- FIG. 9 illustrates an exemplary embodiment of a token ring based parallel processor asynchronous scheduler in accordance with disclosed embodiments of the present disclosure;
- FIG. 10 illustrates a more detailed view of the token ring based parallel processor asynchronous scheduler of FIG. 9 in accordance with disclosed embodiments of the present disclosure;
- FIG. 11 illustrates an example communication system in which the asynchronous processor and processing system may be utilized; and
- FIGS. 12A and 12B illustrate example devices in which the asynchronous processor and processing system may be utilized.
- Asynchronous technology seeks to eliminate the need of synchronous technology for a global clock-tree which not only consumes an important portion of the chip power and die area, but also reduces the speed(s) of the faster parts of the circuit to match the slower parts (i.e., the final clock-tree rate derives from the slowest part of a circuit). To remove the clock-tree (or minimize the clock-tree), asynchronous technology requires special logic to realize a handshaking protocol between two consecutive clock-less processing circuits. Once a clock-less processing circuit finishes its operation and enters into a stable state, a signal (e.g., a "Request" signal) is triggered and issued to its ensuing circuit. If the ensuing circuit is ready to receive the data, the ensuing circuit sends a signal (e.g., an "ACK" signal) to the preceding circuit. Although the processing latencies of the two circuits are different and varying with time, the handshaking protocol ensures the correctness of a circuit or a cascade of circuits.
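- A minimal sketch of this Request/ACK handshake, modeled with Python threads and events; the event objects and channel dictionary are hypothetical stand-ins for the handshake wires, not an implementation from the patent.

```python
# Minimal sketch of the Request/ACK handshake between two clock-less
# processing circuits, modeled with threads and events (hypothetical names).

import random
import threading
import time

request = threading.Event()    # raised by the producer when its result is stable
ack = threading.Event()        # raised by the consumer once it has taken the data
channel = {"data": None}       # stands in for the data wires between the circuits

def producer(items):
    for item in items:
        time.sleep(random.uniform(0.01, 0.05))   # variable, clock-less latency
        channel["data"] = item
        request.set()          # "Request": operation finished, output is stable
        ack.wait()             # block until the ensuing circuit acknowledges
        ack.clear()

def consumer(count):
    for _ in range(count):
        request.wait()         # wait for the preceding circuit's request
        print("consumed:", channel["data"])
        request.clear()
        ack.set()              # "ACK": data accepted, producer may continue

items = list(range(3))
t1 = threading.Thread(target=producer, args=(items,))
t2 = threading.Thread(target=consumer, args=(len(items),))
t1.start(); t2.start(); t1.join(); t2.join()
```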
- Hennessy and Patterson coined the term “hazard” for situations in which instructions in a pipeline would produce wrong answers. A structural hazard occurs when two instructions might attempt to use the same resources at the same time. A data hazard occurs when an instruction, scheduled blindly, would attempt to use data before the data is available in the register file.
- With reference to FIG. 3A, there is shown a block diagram of an asynchronous processing system 300 in accordance with the present disclosure. The system 300 includes an asynchronous scalar processor 310, an asynchronous vector processor 330, a cache controller 320 and L1/L2 cache memory 340. As will be appreciated, the term "asynchronous processor" may refer to the processor 310, the processor 330, or the processors 310, 330 in combination. Though only one processor 310, 330 is shown, the processing system 300 may include more than one of these processors. In addition, it will be understood that each processor may include therein multiple CPUs, control units, execution units and/or ALUs, etc. For example, the asynchronous scalar processor 310 may include multiple execution units with each execution unit having a desired number of pipeline stages. In one example, the processor 310 may include sixteen execution units with each execution unit having five stages. Similarly, the asynchronous vector processor 330 may include multiple execution units with each execution unit having a desired number of pipeline stages.
- The L1/L2 cache memory 340 may be subdivided into L1 and L2 cache, and may also be subdivided into instruction cache and data cache. Likewise, the cache controller 320 may be functionally subdivided.
- Aspects of the present disclosure provide architectures and techniques for a clock-less asynchronous processor architecture that utilizes a token ring based parallel processor scheduler. A token system is a two-dimensional system. Within a functional unit, tokens gate each other to form a closed loop. Across functional units, a token signal is delayed "deliberately" to avoid a structural hazard. A token-based asynchronous processor uses a token system to "emulate" a pipeline to yield instruction-level parallelism (ILP), to preserve the program order, and to avoid data/structural/control hazards.
-
FIG. 4 illustrates an example of a token ring architecture 600 as an alternative to the architecture above in FIG. 1. The components of this architecture are supported by standard function libraries for chip implementation. For example, the token ring architecture 600 comprises a token processing logic unit 610. The token processing logic 610 comprises token-sense-latch-logic 612 and a variable delay chain 614. In some embodiments, the token processing logic unit 610 may also comprise pulse/active generation logic 616. The token processing logic unit 610 may include any suitable circuitry for detecting reception of a token. The token processing logic unit 610 is configured to propagate the token from one processing component to other processing components along a token signal path. - As described above with respect to
FIG. 1, the Sutherland asynchronous micro pipeline architecture requires the handshaking protocol, which is realized by the non-standard Muller-C elements. In order to avoid using Muller-C elements (as in FIG. 1), a series of token processing logic units is used to control the processing of different computing logic (not shown), such as processing units on a chip (e.g., ALUs) or other functional calculation units, or the access of the computing logic to system resources, such as registers or memory. To cover the long latency of some computing logic, the token processing logic unit 610 is replicated into several copies and arranged in a series of token processing logic units, as shown at 620. Each token processing logic unit 610 in the series 620 controls the passing of one or more token signals 630 (associated with one or more resources). A token signal 630 passing through the token processing logic units in series 620 forms a token ring 640. The token ring 640 regulates the access of the computing logic (not shown) to the system resource (e.g., memory, register) associated with that token signal. The token processing logic units 610 accept, hold, and pass the token signal 630 from one to the next in a sequential manner. When the token signal 630 is held by the token processing logic 610, the computing logic associated with that token processing logic is granted exclusive access to the resource corresponding to that token signal, until the token signal is passed to the next token processing logic in the ring. Holding and passing the token signal concludes the computing logic's access or use of the corresponding resource, and is referred to herein as consuming the token. Once the token is consumed, it is released by the given token processing logic unit to a subsequent token processing logic unit in the ring. -
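As an illustrative software analogy of the serializing effect described above (not the circuit implementation; the class, method names, and unit count are assumptions for the example), a single token circulating through a series of token processing logic units can be modeled as follows:

```python
from collections import deque

# A minimal behavioral sketch of a token ring: the unit at the head of the deque
# holds the token, may perform at most one access to the associated shared
# resource, and then passes (releases) the token to the next unit in the ring.
class TokenRing:
    def __init__(self, num_units):
        self.ring = deque(range(num_units))   # unit ids in ring order; ring[0] holds the token

    def consume_and_pass(self, access_log, wants_access=True):
        holder = self.ring[0]
        if wants_access:
            access_log.append(holder)         # exclusive use of the resource while the token is held
        self.ring.rotate(-1)                  # release the token to the next unit in the ring

log = []
ring = TokenRing(num_units=4)
for _ in range(8):                            # two full revolutions of the token
    ring.consume_and_pass(log)
print(log)   # [0, 1, 2, 3, 0, 1, 2, 3] -- accesses to the resource are strictly serialized
```

-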
FIG. 5 illustrates an asynchronous processor architecture 3101. The architecture includes a plurality of self-timed (asynchronous) arithmetic and logic units (ALUs) 3122 coupled in parallel in a token ring architecture as described above with respect to FIG. 4. Each ALU 3122 may correspond to the token processing logic unit 610 of FIG. 4. The asynchronous processor architecture 3101 also includes a feedback engine 3120 for properly distributing incoming instructions between the ALUs 3122, an instruction/timing history table 3115 accessible by the feedback engine 3120 for determining the distribution of instructions, a register (memory) 3102 accessible by the ALUs 3122, and a crossbar 3124 for exchanging needed information between the ALUs 3122. The history table 3115 is used for indicating timing and dependency information between multiple input instructions to the processor system. Instructions from the instruction cache/memory are received by the feedback engine 3120, which detects or calculates the data dependencies and determines the timing for instructions using the history table 3115. The feedback engine 3120 pre-decodes each instruction to decide how many input operands the instruction requires. The feedback engine 3120 then looks up the history table 3115 to determine whether each required operand is available on the crossbar 3124 or in the register file 3102. If the data is found on the crossbar 3124, the feedback engine 3120 calculates which ALU produces the data. This information is tagged to the instruction dispatched to the ALUs 3122. The feedback engine 3120 also updates the history table 3115 accordingly. A more detailed explanation of the asynchronous architecture 3101 is provided in the co-pending application entitled "Method and Apparatus for Asynchronous Processor Pipeline and Bypass Passing", attorney docket number HUAW07-06364, filed concurrently herewith and incorporated herein by reference. -
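The bookkeeping performed by the feedback engine can be illustrated with a short sketch. This is only an illustration under assumed data structures; the dictionary-based history table, the round-robin distribution, and the field names are not taken from the disclosure.

```python
import itertools

def dispatch(instruction, history_table, alu_id):
    """Tag an instruction with where each of its source operands can be found."""
    tags = {}
    for operand in instruction["sources"]:               # pre-decode: which operands are needed
        producer = history_table.get(operand)
        if producer is not None:
            tags[operand] = ("crossbar", producer)       # value will arrive over the crossbar
        else:
            tags[operand] = ("register_file", operand)   # otherwise read the register file
    history_table[instruction["dest"]] = alu_id          # record which ALU will produce the result
    return {"op": instruction["op"], "alu": alu_id, "operand_tags": tags}

history = {}
program = [
    {"op": "add", "dest": "r3", "sources": ["r1", "r2"]},
    {"op": "mul", "dest": "r4", "sources": ["r3", "r2"]},   # r3 will be found on the crossbar
]
for instr, alu in zip(program, itertools.cycle(range(4))):  # round-robin over four ALUs
    print(dispatch(instr, history, alu))
```

-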
FIG. 6 illustrates token based pipelining with gating within an ALU, also referred to herein as token based pipelining for an intra-ALU token gating system 2800. The intra-ALU token gating system 2800 comprises a plurality of tokens, including a launch token 2802 associated with starting and decoding an instruction, a register access token 2804 associated with reading values from a register file, a jump token 2806 associated with a program counter jump, a memory access token 2808 associated with accessing a memory, an instruction pre-fetch token 2810 associated with fetching the next instruction, an other resources token 2812 associated with the use of other resources, and a commit token 2814 associated with register and memory commit. - Designated tokens are used to gate other designated tokens in a given order of the pipeline. This means that when a designated token passes through an ALU, a second designated token is then allowed to be processed and passed by the same ALU in the token ring architecture. In other words, releasing one token by the ALU becomes a condition for consuming (processing) another token in that ALU in that given order.
- A particular example of a token-gating relationship is illustrated in
FIG. 6. It will be appreciated by one skilled in the art that other token-gating relationships may be used. In the illustrated example, the launch token (L) 2802 gates the register access token (R) 2804, which in turn gates the jump token (PC token) 2806. The jump token 2806 gates the memory access token (M) 2808, the instruction pre-fetch token (F) 2810, and possibly other resource tokens 2812 that may be used. This means that tokens M 2808, F 2810, and other resource tokens 2812 can only be consumed by the ALU after it passes the jump token 2806. These tokens gate the commit token (W) 2814 to register or memory. The commit token 2814 is also referred to herein as a token for writing the instruction. The commit token 2814 in turn gates the launch token 2802. The gating signal from the gating token (a token in the pipeline) is used as an input to the consumption condition logic of the gated token (the token in the next order of the pipeline). For example, the launch token (L) 2802 generates an active signal to the register access or read token (R) 2804 when the launch token (L) 2802 is released to the next ALU. This guarantees that an ALU does not read the register file until an instruction has actually been started by the launch token 2802. -
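A compact way to express this gating order in software is shown below. This is a behavioral sketch only, under the assumption that a gated token may be consumed once all of the tokens gating it have been released; the dictionary encoding and function names are illustrative.

```python
# Gating order from the example above: L gates R, R gates PC, PC gates M/F/other,
# which gate W, and W gates the next L, closing the loop.
GATES = {
    "L": ["R"],                               # launch gates register access
    "R": ["PC"],                              # register access gates the jump token
    "PC": ["M", "F", "other"],                # jump gates memory access, pre-fetch, other resources
    "M": ["W"], "F": ["W"], "other": ["W"],   # these gate the commit (write) token
    "W": ["L"],                               # commit gates the next launch
}

def consume(token, released, log):
    """Consume `token` only if every token that gates it has already been released."""
    gated_by = [g for g, targets in GATES.items() if token in targets]
    if all(g in released for g in gated_by):
        released.add(token)                   # consuming also releases the token to the next ALU
        log.append(token)
        return True
    return False

released, log = {"W"}, []                     # a previous commit enables a new launch
for tok in ["R", "L", "R", "PC", "M", "F", "other", "W"]:
    consume(tok, released, log)
print(log)   # ['L', 'R', 'PC', 'M', 'F', 'other', 'W'] -- the first 'R' is blocked until 'L' is released
```

-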
FIG. 7 illustrates token based pipelining for an inter-ALU token passing system 2900. The inter-ALU token passing system 2900 comprises a first ALU 2902 and a second ALU 2904. A consumed token signal triggers a pulse to a common resource. For example, the register read token 2804 in the first ALU 2902 triggers a pulse to the register file (not shown). The token signal is delayed for a period of time before it is released to the next ALU (e.g., the second ALU 2904), such that there is no structural hazard on this common resource (e.g., the register file) between the first ALU 2902 and the second ALU 2904. The tokens not only keep the multiple ALUs launching and committing (or writing) instructions in program counter (PC) order, but also avoid structural hazards among the multiple ALUs. -
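The effect of delaying the token release can be seen in a small discrete-event sketch. The timing constant, the number of ALUs, and the variable names below are assumptions chosen for illustration, not values from the disclosure.

```python
import heapq

# Each ALU that consumes the token pulses the shared register file, then holds
# the token for `resource_busy` time units before releasing it to the next ALU,
# so two ALUs never drive the shared resource at the same time.
resource_busy = 3          # time the shared resource needs per transaction
num_alus = 4
events = [(0, 0)]          # (time at which an ALU receives the token, alu_id)
accesses = []              # (alu_id, start_time) of each register-file access

while events and len(accesses) < 8:
    t, alu = heapq.heappop(events)
    accesses.append((alu, t))                 # token consumed: pulse the register file now
    release_time = t + resource_busy          # delayed release avoids the structural hazard
    heapq.heappush(events, (release_time, (alu + 1) % num_alus))

print(accesses)
# [(0, 0), (1, 3), (2, 6), (3, 9), (0, 12), ...] -- consecutive accesses are spaced by at
# least `resource_busy`, so the register file is never requested by two ALUs at once.
```

-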
FIG. 8 illustrates a block diagram of an exemplary token ring based array architecture 2700. As illustrated, the token ring based array architecture 2700 comprises a plurality of processing units 2702, a token signal path or ring 2704 comprising a plurality of tokens, a multiplexor 2706, and a plurality of external resources 2708 shared between the processing units 2702. In the illustrated example, the processing units 2702 are identical in design and function to one another. In a non-limiting example, the processing units 2702 implement arithmetic and logic units (ALUs). The ALUs 2702 may be asynchronous units. - The
token ring 2704 allows propagation of a token through the ALUs 2702. Token processing logic (not shown) is provided for propagating the token from one ALU to another among the ALUs 2702 along the token ring 2704. The token processing logic is configured to propagate the token between the ALUs 2702 at a propagation rate that is related to a transaction rate of the shared external resource 2708. For example, the rate at which an ALU completes a transaction may vary depending on the specific transaction requested. - Each token in the token ring 2704 is a signal indicator of the availability of one or more of the external resources 2708. The token is such that only one ALU among the ALUs 2702 can possess it at any given time. In a specific example of implementation, possession of the token by a given ALU enables the given ALU to conduct a transaction with the shared external resource 2708. Conversely, lack of possession of the token by the given ALU prevents the given ALU from conducting a transaction with the shared external resource 2708. In this manner, the token prevents more than one ALU from conducting a transaction with the external resource 2708 at a given time. After a given ALU conducts a transaction with the shared external resource 2708, or if the given ALU does not wish to conduct a transaction with the shared external resource 2708, the ALU releases or "passes" the token to the next ALU. Serialized in this way, multiple ALUs can share a common external resource. As illustrated, multiple tokens may be required to control access to the shared external resources 2708, via an N-bit selection control signal 2712 and the multiplexor 2706. -
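The same idea extends to multiple tokens and multiple shared resources. The sketch below is only an analogy for the selection behavior; the resource names, the table layout, and the two-bit select encoding (for four ALUs) are assumptions, not details from the disclosure.

```python
NUM_ALUS = 4
RESOURCES = ["register_file", "memory", "crossbar"]

# token_holder[r] = id of the ALU currently holding the token for resource r.
token_holder = {r: 0 for r in RESOURCES}

def try_access(alu_id, resource, log):
    """Transact with `resource` only if this ALU holds its token, then pass the token on."""
    if token_holder[resource] != alu_id:
        return False                                   # no token, no transaction
    select = format(alu_id, "02b")                     # 2-bit multiplexor select for 4 ALUs
    log.append((resource, alu_id, select))             # the mux routes this ALU's port to the resource
    token_holder[resource] = (alu_id + 1) % NUM_ALUS   # release the token to the next ALU
    return True

log = []
try_access(1, "memory", log)          # rejected: ALU 1 does not yet hold the memory token
try_access(0, "memory", log)          # granted; the memory token moves on to ALU 1
try_access(1, "memory", log)          # now granted
try_access(0, "register_file", log)   # an independent token, so this is also granted
print(log)   # [('memory', 0, '00'), ('memory', 1, '01'), ('register_file', 0, '00')]
```

-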
FIG. 9 illustrates an exemplary embodiment of a token ring based parallel processor asynchronous scheduler 3000. As illustrated, a multiple token ring 3010, similar to the token ring 2704 described above with respect to FIG. 8, is used to control access to different external common resources 2708 among a first ALU (e.g., ALU 0) 2902, a second ALU (e.g., ALU 1) 2904, etc. In addition, token dependency and gating, similar to the intra-ALU token gating system 2800 described above with respect to FIG. 6, is used to form a pipeline with different stages within a given ALU. By using a multiple token ring to control access to different external common resources, and token dependency and gating to form a pipeline with different stages, multiple asynchronous ALUs can be combined into a parallel processor 3030. As a result, natural pipeline stages may be formed, unlike in a synchronous processor, which has fixed-period pipeline stages. -
FIG. 10 illustrates a more detailed view of the token ring based parallel processor asynchronous scheduler of FIG. 9, where token ring signal paths of the tokens are illustrated by "dashed" lines, and where token dependence signal paths of the tokens are illustrated by "solid" lines. For example, inter-ALU token passing as described above with respect to FIG. 7 is illustrated by the launch token 2802 being passed from ALU 0 2902 to ALU 1 2904 via token ring signal path 3104. In addition, intra-ALU token passing as described above with respect to FIG. 6 is illustrated by the launch token dependency signal 3106 from the launch token 2802 gating the register access token 2804. The other tokens (e.g., the register access token (R) 2804, the jump token (PC token) 2806, etc.) may be similarly passed between the ALUs and within the ALUs, respectively. -
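Putting the two mechanisms together, the scheduler's behavior can be approximated by the following sketch, in which each token type forms its own ring across the ALUs (inter-ALU passing) and each ALU consumes tokens in the gating order (intra-ALU gating). The five-token order, the three ALUs, and the stepped scheduling loop are simplifying assumptions for illustration, not the full design.

```python
TOKEN_ORDER = ["L", "R", "PC", "M", "W"]        # gating order inside each ALU
NUM_ALUS = 3

holder = {tok: 0 for tok in TOKEN_ORDER}        # ring position of each token (which ALU holds it)
stage = {alu: 0 for alu in range(NUM_ALUS)}     # index of the next token each ALU must consume
trace = []

for step in range(30):                          # run the scheduler for a fixed number of steps
    snapshot = dict(holder)                     # releases take effect on the following step
    for alu in range(NUM_ALUS):
        tok = TOKEN_ORDER[stage[alu]]
        if snapshot[tok] == alu:                # the ALU holds the token it is waiting for
            trace.append((step, alu, tok))      # consume it: one "natural" pipeline stage
            holder[tok] = (alu + 1) % NUM_ALUS  # inter-ALU passing: release to the next ALU
            stage[alu] = (stage[alu] + 1) % len(TOKEN_ORDER)   # intra-ALU gating: wait for the next token

for event in trace[:9]:
    print(event)
# (0, 0, 'L'), (1, 0, 'R'), (1, 1, 'L'), (2, 0, 'PC'), (2, 1, 'R'), (2, 2, 'L'), ...
# ALU 0 launches first and the other ALUs follow one stage behind, so launches and
# commits ('W') occur in program order even though no global clock is assumed.
```

-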
FIG. 11 illustrates an example communication system 1400 that may be used for implementing the devices and methods disclosed herein. In general, the system 1400 enables multiple wireless users to transmit and receive data and other content. The system 1400 may implement one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), or single-carrier FDMA (SC-FDMA).
- In this example, the communication system 1400 includes user equipment (UE) 1410a-1410c, radio access networks (RANs) 1420a-1420b, a core network 1430, a public switched telephone network (PSTN) 1440, the Internet 1450, and other networks 1460. While certain numbers of these components or elements are shown in FIG. 11, any number of these components or elements may be included in the system 1400.
- The UEs 1410a-1410c are configured to operate and/or communicate in the system 1400. For example, the UEs 1410a-1410c are configured to transmit and/or receive wireless signals or wired signals. Each UE 1410a-1410c represents any suitable end user device and may include such devices (or may be referred to) as a user equipment/device (UE), wireless transmit/receive unit (WTRU), mobile station, fixed or mobile subscriber unit, pager, cellular telephone, personal digital assistant (PDA), smartphone, laptop, computer, touchpad, wireless sensor, or consumer electronics device.
- The RANs 1420a-1420b here include base stations 1470a-1470b, respectively. Each base station 1470a-1470b is configured to wirelessly interface with one or more of the UEs 1410a-1410c to enable access to the core network 1430, the PSTN 1440, the Internet 1450, and/or the other networks 1460. For example, the base stations 1470a-1470b may include (or be) one or more of several well-known devices, such as a base transceiver station (BTS), a Node-B (NodeB), an evolved NodeB (eNodeB), a Home NodeB, a Home eNodeB, a site controller, an access point (AP), a wireless router, or a server, router, switch, or other processing entity for a wired or wireless network.
- In the embodiment shown in FIG. 11, the base station 1470a forms part of the RAN 1420a, which may include other base stations, elements, and/or devices. Also, the base station 1470b forms part of the RAN 1420b, which may include other base stations, elements, and/or devices. Each base station 1470a-1470b operates to transmit and/or receive wireless signals within a particular geographic region or area, sometimes referred to as a "cell." In some embodiments, multiple-input multiple-output (MIMO) technology may be employed, with multiple transceivers for each cell.
- The base stations 1470a-1470b communicate with one or more of the UEs 1410a-1410c over one or more air interfaces 1490 using wireless communication links. The air interfaces 1490 may utilize any suitable radio access technology.
- It is contemplated that the system 1400 may use multiple channel access functionality, including such schemes as described above. In particular embodiments, the base stations and UEs implement LTE, LTE-A, and/or LTE-B. Of course, other multiple access schemes and wireless protocols may be utilized.
- The RANs 1420a-1420b are in communication with the core network 1430 to provide the UEs 1410a-1410c with voice, data, application, Voice over Internet Protocol (VoIP), or other services. Understandably, the RANs 1420a-1420b and/or the core network 1430 may be in direct or indirect communication with one or more other RANs (not shown). The core network 1430 may also serve as a gateway access for other networks (such as the PSTN 1440, the Internet 1450, and the other networks 1460). In addition, some or all of the UEs 1410a-1410c may include functionality for communicating with different wireless networks over different wireless links using different wireless technologies and/or protocols.
- Although FIG. 11 illustrates one example of a communication system, various changes may be made to FIG. 11. For example, the communication system 1400 could include any number of UEs, base stations, networks, or other components in any suitable configuration, and can further include the EPC illustrated in any of the figures herein.
- FIGS. 12A and 12B illustrate example devices that may implement the methods and teachings according to this disclosure. In particular, FIG. 12A illustrates an example UE 1410, and FIG. 12B illustrates an example base station 1470. These components could be used in the system 1400 or in any other suitable system.
- As shown in FIG. 12A, the UE 1410 includes at least one processing unit 1500. The processing unit 1500 implements various processing operations of the UE 1410. For example, the processing unit 1500 could perform signal coding, data processing, power control, input/output processing, or any other functionality enabling the UE 1410 to operate in the system 1400. The processing unit 1500 also supports the methods and teachings described in more detail above. Each processing unit 1500 includes any suitable processing or computing device configured to perform one or more operations. Each processing unit 1500 could, for example, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit. The processing unit 1500 may be an asynchronous processor as described herein.
- The UE 1410 also includes at least one transceiver 1502. The transceiver 1502 is configured to modulate data or other content for transmission by at least one antenna 1504. The transceiver 1502 is also configured to demodulate data or other content received by the at least one antenna 1504. Each transceiver 1502 includes any suitable structure for generating signals for wireless transmission and/or processing signals received wirelessly. Each antenna 1504 includes any suitable structure for transmitting and/or receiving wireless signals. One or multiple transceivers 1502 could be used in the UE 1410, and one or multiple antennas 1504 could be used in the UE 1410. Although shown as a single functional unit, a transceiver 1502 could also be implemented using at least one transmitter and at least one separate receiver.
- The UE 1410 further includes one or more input/output devices 1506. The input/output devices 1506 facilitate interaction with a user. Each input/output device 1506 includes any suitable structure for providing information to or receiving information from a user, such as a speaker, microphone, keypad, keyboard, display, or touch screen.
- In addition, the UE 1410 includes at least one memory 1508. The memory 1508 stores instructions and data used, generated, or collected by the UE 1410. For example, the memory 1508 could store software or firmware instructions executed by the processing unit(s) 1500 and data used to reduce or eliminate interference in incoming signals. Each memory 1508 includes any suitable volatile and/or non-volatile storage and retrieval device(s). Any suitable type of memory may be used, such as random access memory (RAM), read only memory (ROM), hard disk, optical disc, subscriber identity module (SIM) card, memory stick, secure digital (SD) memory card, and the like.
- As shown in FIG. 12B, the base station 1470 includes at least one processing unit 1500, at least one transmitter 1552, at least one receiver 1554, one or more antennas 1556, one or more network interfaces 1560, and at least one memory 1558. The processing unit 1500 implements various processing operations of the base station 1470, such as signal coding, data processing, power control, input/output processing, or any other functionality. The processing unit 1500 can also support the methods and teachings described in more detail above. Each processing unit 1500 includes any suitable processing or computing device configured to perform one or more operations. Each processing unit 1500 could, for example, include a microprocessor, microcontroller, digital signal processor, field programmable gate array, or application specific integrated circuit. The processing unit 1500 may be an asynchronous processor as described herein.
- Each transmitter 1552 includes any suitable structure for generating signals for wireless transmission to one or more UEs or other devices. Each receiver 1554 includes any suitable structure for processing signals received wirelessly from one or more UEs or other devices. Although shown as separate components, at least one transmitter 1552 and at least one receiver 1554 could be combined into a transceiver. Each antenna 1556 includes any suitable structure for transmitting and/or receiving wireless signals. While a common antenna 1556 is shown here as being coupled to both the transmitter 1552 and the receiver 1554, one or more antennas 1556 could be coupled to the transmitter(s) 1552, and one or more separate antennas 1556 could be coupled to the receiver(s) 1554. Each memory 1558 includes any suitable volatile and/or non-volatile storage and retrieval device(s).
- Additional details regarding the UEs 1410 and the base stations 1470 are known to those of skill in the art. As such, these details are omitted here for clarity.
- In some embodiments, some or all of the functions or processes of one or more of the devices are implemented or supported by a computer program that is formed from computer readable program code and that is embodied in a computer readable medium. The phrase "computer readable program code" includes any type of computer code, including source code, object code, and executable code. The phrase "computer readable medium" includes any type of medium capable of being accessed by a computer, such as read only memory (ROM), random access memory (RAM), a hard disk drive, a compact disc (CD), a digital video disc (DVD), or any other type of memory.
- It may be advantageous to set forth definitions of certain words and phrases used throughout this patent document. The terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation. The term “or” is inclusive, meaning and/or. The phrases “associated with” and “associated therewith,” as well as derivatives thereof, mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like.
- While this disclosure has described certain embodiments and generally associated methods, alterations and permutations of these embodiments and methods will be apparent to those skilled in the art. Accordingly, the above description of example embodiments does not define or constrain this disclosure. Other changes, substitutions, and alterations are also possible without departing from the spirit and scope of this disclosure, as defined by the following claims.
Claims (20)
1. A method of operating a clock-less asynchronous processing system comprising a plurality of successive asynchronous processing components, the method comprising:
providing a first token signal path in the plurality of processing components to allow propagation of a token through the processing components, wherein possession of the token by one of the processing components enables the processing component to conduct a transaction with a resource component that is shared among the processing components; and
propagating the token from one processing component to another processing component along the first token signal path.
2. The method in accordance with claim 1 , wherein propagating the token is performed at a propagation rate that is related to a latency associated with the processing component.
3. The method in accordance with claim 2 , wherein the latency is variable and is based on an operation to be conducted by the processing component.
4. The method in accordance with claim 1 , wherein propagating the token is performed at a propagation rate that is related to a transaction rate associated with the shared resource component.
5. The method as defined in claim 4 , wherein the transaction rate is variable and is based on the transaction to be conducted with the shared resource component.
6. The method in accordance with claim 1 , wherein lack of possession of the token by the processing component prevents the processing component from conducting a transaction with the shared resource component.
7. The method in accordance with claim 1 , further comprising:
in response to determining that the processing component desires no transaction with the shared resource component, releasing the token so that the token is propagated along the token signal path to another processing component.
8. The method in accordance with claim 1 , further comprising:
providing a second token signal path in the plurality of processing components separate and distinct from the first token signal path to allow propagation of a second token through the processing components, wherein the first token signal path and the second token signal path form a multi-token ring.
9. The method in accordance with claim 8 , further comprising:
providing an intra-processing component gating system, wherein a first designated token of a plurality of tokens is used to gate other designated tokens in a given order.
10. The method in accordance with claim 9 , wherein releasing the designated token by the processing component becomes a condition to consume another token in the processing component in the given order.
11. The method in accordance with claim 8 , further comprising:
providing an inter-processing component passing system, wherein the first token is delayed from passing from a first processing component to a second processing component to avoid a structural hazard.
12. The method in accordance with claim 11 , further comprising:
providing an intra-processing component gating system, wherein a first designated token of a plurality of tokens is used to gate other designated tokens in a given order;
wherein the inter-processing component passing system and the intra-processing component gating system form a pipeline with different stages.
13. A clock-less asynchronous processing system comprising:
a plurality of successive asynchronous processing components, each processing component comprising token processing logic configured to receive, hold and pass a token from a given processing component to another processing component;
wherein the token processing logic comprises a token signal path in the plurality of processing components to allow propagation of the token through the processing components, wherein possession of the token by one of the processing components enables the processing component to conduct a transaction with a resource component that is shared among the processing components.
14. The processing system in accordance with claim 13 , wherein the token processing logic is configured to propagate the token at a propagation rate that is related to a latency associated with the processing component.
15. The processing system in accordance with claim 14 , wherein the latency is variable and is based on an operation to be conducted by the processing component.
16. The processing system in accordance with claim 13 , wherein lack of possession of the token by the processing component prevents the processing component from conducting a transaction with the shared resource component.
17. The processing system in accordance with claim 13 , wherein the token processing circuitry further comprises intra-processing component gating circuitry, where a first designated token of a plurality of tokens is used to gate other designated tokens in a given order.
18. The processing system in accordance with claim 17 , wherein releasing the first designated token by the processing component becomes a condition to consume another token in the processing component in the given order.
19. The processing system in accordance with claim 13 , wherein the token processing circuitry further comprises inter-processing component passing circuitry, wherein the token is delayed from passing from a first processing component to a second processing component to avoid a structural hazard.
20. The processing system in accordance with claim 19 , wherein the token processing circuitry further comprises intra-processing component gating circuitry, wherein a first designated token of a plurality of tokens is used to gate other designated tokens in a given order;
wherein the inter-processing component passing circuitry and the intra-processing component gating circuitry form a pipeline with different stages.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/480,561 US20150074680A1 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor with a token ring based parallel processor scheduler |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201361874856P | 2013-09-06 | 2013-09-06 | |
US201361874880P | 2013-09-06 | 2013-09-06 | |
US201361874866P | 2013-09-06 | 2013-09-06 | |
US201361874810P | 2013-09-06 | 2013-09-06 | |
US201361874794P | 2013-09-06 | 2013-09-06 | |
US201361874914P | 2013-09-06 | 2013-09-06 | |
US201361874889P | 2013-09-06 | 2013-09-06 | |
US14/480,561 US20150074680A1 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor with a token ring based parallel processor scheduler |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150074680A1 true US20150074680A1 (en) | 2015-03-12 |
Family
ID=52626716
Family Applications (6)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/480,522 Active 2035-02-18 US9740487B2 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor removal of meta-stability |
US14/480,491 Active 2035-05-02 US9489200B2 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor with fast and slow mode |
US14/480,556 Active 2036-01-01 US9846581B2 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor pipeline and bypass passing |
US14/480,561 Abandoned US20150074680A1 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor with a token ring based parallel processor scheduler |
US14/480,573 Active 2036-01-08 US10042641B2 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor with auxiliary asynchronous vector processor |
US14/480,531 Active 2035-04-09 US9606801B2 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor based on clock delay adjustment |
Family Applications Before (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/480,522 Active 2035-02-18 US9740487B2 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor removal of meta-stability |
US14/480,491 Active 2035-05-02 US9489200B2 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor with fast and slow mode |
US14/480,556 Active 2036-01-01 US9846581B2 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor pipeline and bypass passing |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/480,573 Active 2036-01-08 US10042641B2 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor with auxiliary asynchronous vector processor |
US14/480,531 Active 2035-04-09 US9606801B2 (en) | 2013-09-06 | 2014-09-08 | Method and apparatus for asynchronous processor based on clock delay adjustment |
Country Status (4)
Country | Link |
---|---|
US (6) | US9740487B2 (en) |
EP (3) | EP3014468A4 (en) |
CN (3) | CN105379121B (en) |
WO (6) | WO2015035338A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9400685B1 (en) | 2015-01-30 | 2016-07-26 | Huawei Technologies Co., Ltd. | Dividing, scheduling, and parallel processing compiled sub-tasks on an asynchronous multi-core processor |
US11321019B2 (en) * | 2019-09-13 | 2022-05-03 | Accemic Technologies Gmbh | Event processing |
US20220276983A1 (en) * | 2018-07-05 | 2022-09-01 | Mythic, Inc. | Systems and methods for implementing an intelligence processing computing architecture |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9325520B2 (en) * | 2013-09-06 | 2016-04-26 | Huawei Technologies Co., Ltd. | System and method for an asynchronous processor with scheduled token passing |
WO2015035338A1 (en) * | 2013-09-06 | 2015-03-12 | Futurewei Technologies, Inc. | Method and apparatus for asynchronous processor with a token ring based parallel processor scheduler |
US9520180B1 (en) | 2014-03-11 | 2016-12-13 | Hypres, Inc. | System and method for cryogenic hybrid technology computing and memory |
US9488692B2 (en) * | 2014-08-26 | 2016-11-08 | Apple Inc. | Mode based skew to reduce scan instantaneous voltage drop and peak currents |
US10108580B2 (en) * | 2015-05-21 | 2018-10-23 | Goldman Sachs & Co. LLC | General-purpose parallel computing architecture |
CN108605055A (en) | 2016-02-01 | 2018-09-28 | 高通股份有限公司 | Programmable distributed data processing in serial link |
US10159053B2 (en) * | 2016-02-02 | 2018-12-18 | Qualcomm Incorporated | Low-latency low-uncertainty timer synchronization mechanism across multiple devices |
US10185699B2 (en) | 2016-03-14 | 2019-01-22 | Futurewei Technologies, Inc. | Reconfigurable data interface unit for compute systems |
US20180121202A1 (en) * | 2016-11-02 | 2018-05-03 | Intel Corporation | Simd channel utilization under divergent control flow |
DE102017207876A1 (en) * | 2017-05-10 | 2018-11-15 | Robert Bosch Gmbh | Parallel processing |
CN107239276B (en) * | 2017-05-22 | 2021-01-12 | 广州安圣信息科技有限公司 | Asynchronous delay execution method and execution device based on C language |
RU2020102277A (en) * | 2017-06-22 | 2021-07-22 | АйКЭТ ЛЛК | HIGH PERFORMANCE PROCESSORS |
US10326452B2 (en) * | 2017-09-23 | 2019-06-18 | Eta Compute, Inc. | Synchronizing a self-timed processor with an external event |
EP3797355B1 (en) * | 2018-06-22 | 2024-09-25 | Huawei Technologies Co., Ltd. | Method of deadlock detection and synchronization-aware optimizations on asynchronous processor architectures |
CN109240981B (en) * | 2018-08-13 | 2023-03-24 | 中国科学院电子学研究所 | Method, device and computer readable storage medium for synchronous acquisition of multichannel data |
CN111090464B (en) | 2018-10-23 | 2023-09-22 | 华为技术有限公司 | Data stream processing method and related equipment |
US11556145B2 (en) * | 2020-03-04 | 2023-01-17 | Birad—Research & Development Company Ltd. | Skew-balancing algorithm for digital circuitry |
GB2592083B8 (en) * | 2020-03-27 | 2022-11-16 | Spatialbuzz Ltd | Network monitoring system |
US11720328B2 (en) * | 2020-06-26 | 2023-08-08 | Advanced Micro Devices, Inc. | Processing unit with small footprint arithmetic logic unit |
US11551120B2 (en) * | 2020-06-29 | 2023-01-10 | Paypal, Inc. | Systems and methods for predicting performance |
CN113190081B (en) * | 2021-04-26 | 2022-12-13 | 中国科学院近代物理研究所 | Method and device for adjusting time synchronism of power supply |
CN113505095B (en) * | 2021-07-30 | 2023-03-21 | 上海壁仞智能科技有限公司 | System-on-chip and integrated circuit with multi-core out-of-phase processing |
CN114253346B (en) * | 2021-12-09 | 2024-09-24 | 杭州长川科技股份有限公司 | Timing signal generator and calibration system and method thereof |
Family Cites Families (65)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4760518A (en) * | 1986-02-28 | 1988-07-26 | Scientific Computer Systems Corporation | Bi-directional databus system for supporting superposition of vector and scalar operations in a computer |
US4916652A (en) | 1987-09-30 | 1990-04-10 | International Business Machines Corporation | Dynamic multiple instruction stream multiple data multiple pipeline apparatus for floating-point single instruction stream single data architectures |
US5043867A (en) | 1988-03-18 | 1991-08-27 | Digital Equipment Corporation | Exception reporting mechanism for a vector processor |
US5197130A (en) * | 1989-12-29 | 1993-03-23 | Supercomputer Systems Limited Partnership | Cluster architecture for a highly parallel scalar/vector multiprocessor system |
US5021985A (en) | 1990-01-19 | 1991-06-04 | Weitek Corporation | Variable latency method and apparatus for floating-point coprocessor |
GB9014811D0 (en) * | 1990-07-04 | 1990-08-22 | Pgc Limited | Computer |
JPH05204634A (en) | 1991-08-29 | 1993-08-13 | Internatl Business Mach Corp <Ibm> | Microprocessor circuit |
JP3341269B2 (en) * | 1993-12-22 | 2002-11-05 | 株式会社ニコン | Projection exposure apparatus, exposure method, semiconductor manufacturing method, and projection optical system adjustment method |
US5758176A (en) | 1994-09-28 | 1998-05-26 | International Business Machines Corporation | Method and system for providing a single-instruction, multiple-data execution unit for performing single-instruction, multiple-data operations within a superscalar data processing system |
US5598113A (en) | 1995-01-19 | 1997-01-28 | Intel Corporation | Fully asynchronous interface with programmable metastability settling time synchronizer |
US6108769A (en) | 1996-05-17 | 2000-08-22 | Advanced Micro Devices, Inc. | Dependency table for reducing dependency checking hardware |
US5842034A (en) * | 1996-12-20 | 1998-11-24 | Raytheon Company | Two dimensional crossbar mesh for multi-processor interconnect |
GB2325535A (en) * | 1997-05-23 | 1998-11-25 | Aspex Microsystems Ltd | Data processor controller with accelerated instruction generation |
US6381692B1 (en) | 1997-07-16 | 2002-04-30 | California Institute Of Technology | Pipelined asynchronous processing |
US5987620A (en) | 1997-09-19 | 1999-11-16 | Thang Tran | Method and apparatus for a self-timed and self-enabled distributed clock |
US6049882A (en) | 1997-12-23 | 2000-04-11 | Lg Semicon Co., Ltd. | Apparatus and method for reducing power consumption in a self-timed system |
US6065126A (en) * | 1998-02-02 | 2000-05-16 | Tran; Thang Minh | Method and apparatus for executing plurality of operations per clock cycle in a single processing unit with a self-timed and self-enabled distributed clock |
DE69923769T2 (en) | 1998-04-01 | 2006-02-02 | Mosaid Technologies Incorporated, Kanata | ASYNCHRONES SEMICONDUCTOR MEMBER TAPE |
US6658581B1 (en) | 1999-03-29 | 2003-12-02 | Agency Of Industrial Science & Technology | Timing adjustment of clock signals in a digital circuit |
US6633971B2 (en) | 1999-10-01 | 2003-10-14 | Hitachi, Ltd. | Mechanism for forward data in a processor pipeline using a single pipefile connected to the pipeline |
EP1199629A1 (en) * | 2000-10-17 | 2002-04-24 | STMicroelectronics S.r.l. | Processor architecture with variable-stage pipeline |
KR100783687B1 (en) * | 2000-10-23 | 2007-12-07 | 더 트러스티스 오브 콜롬비아 유니버시티 인 더 시티 오브 뉴욕 | Asynchronous pipeline with latch controllers |
WO2002037264A2 (en) | 2000-11-06 | 2002-05-10 | Broadcom Corporation | Reconfigurable processing system and method |
US7681013B1 (en) | 2001-12-31 | 2010-03-16 | Apple Inc. | Method for variable length decoding using multiple configurable look-up tables |
US7376812B1 (en) | 2002-05-13 | 2008-05-20 | Tensilica, Inc. | Vector co-processor for configurable and extensible processor architecture |
EP1543390A2 (en) * | 2002-09-20 | 2005-06-22 | Koninklijke Philips Electronics N.V. | Adaptive data processing scheme based on delay forecast |
US7240231B2 (en) * | 2002-09-30 | 2007-07-03 | National Instruments Corporation | System and method for synchronizing multiple instrumentation devices |
US6889267B2 (en) * | 2002-11-26 | 2005-05-03 | Intel Corporation | Asynchronous communication protocol using efficient data transfer formats |
US20050213761A1 (en) | 2002-12-02 | 2005-09-29 | Walmsley Simon R | Storing number and a result of a function on an integrated circuit |
US7281050B2 (en) | 2003-04-08 | 2007-10-09 | Sun Microsystems, Inc. | Distributed token manager with transactional properties |
US8307194B1 (en) | 2003-08-18 | 2012-11-06 | Cray Inc. | Relaxed memory consistency model |
US7788332B2 (en) | 2004-05-06 | 2010-08-31 | Cornell Research Foundation, Inc. | Sensor-network processors using event-driven architecture |
US7089518B2 (en) | 2004-05-08 | 2006-08-08 | International Business Machines Corporation | Method and program product for modelling behavior of asynchronous clocks in a system having multiple clocks |
US7353364B1 (en) | 2004-06-30 | 2008-04-01 | Sun Microsystems, Inc. | Apparatus and method for sharing a functional unit execution resource among a plurality of functional units |
US7533248B1 (en) | 2004-06-30 | 2009-05-12 | Sun Microsystems, Inc. | Multithreaded processor including a functional unit shared between multiple requestors and arbitration therefor |
JP4906734B2 (en) * | 2004-11-15 | 2012-03-28 | エヌヴィディア コーポレイション | Video processing |
US8738891B1 (en) | 2004-11-15 | 2014-05-27 | Nvidia Corporation | Methods and systems for command acceleration in a video processor via translation of scalar instructions into vector instructions |
US7584449B2 (en) | 2004-11-22 | 2009-09-01 | Fulcrum Microsystems, Inc. | Logic synthesis of multi-level domino asynchronous pipelines |
US7536535B2 (en) * | 2005-04-22 | 2009-05-19 | Altrix Logic, Inc. | Self-timed processor |
US20070150697A1 (en) * | 2005-05-10 | 2007-06-28 | Telairity Semiconductor, Inc. | Vector processor with multi-pipe vector block matching |
EP1883045A4 (en) | 2005-05-20 | 2016-10-05 | Sony Corp | Signal processor |
US20060277425A1 (en) | 2005-06-07 | 2006-12-07 | Renno Erik K | System and method for power saving in pipelined microprocessors |
US7313673B2 (en) | 2005-06-16 | 2007-12-25 | International Business Machines Corporation | Fine grained multi-thread dispatch block mechanism |
US7622961B2 (en) | 2005-09-23 | 2009-11-24 | Intel Corporation | Method and apparatus for late timing transition detection |
US7669028B2 (en) | 2006-02-07 | 2010-02-23 | International Business Machines Corporation | Optimizing data bandwidth across a variable asynchronous clock domain |
US7698505B2 (en) * | 2006-07-14 | 2010-04-13 | International Business Machines Corporation | Method, system and computer program product for data caching in a distributed coherent cache system |
JP2008198003A (en) * | 2007-02-14 | 2008-08-28 | Nec Electronics Corp | Array type processor |
US7757137B2 (en) | 2007-03-27 | 2010-07-13 | International Business Machines Corporation | Method and apparatus for on-the-fly minimum power state transition |
US7936637B2 (en) * | 2008-06-30 | 2011-05-03 | Micron Technology, Inc. | System and method for synchronizing asynchronous signals without external clock |
US7605604B1 (en) * | 2008-07-17 | 2009-10-20 | Xilinx, Inc. | Integrated circuits with novel handshake logic |
US7928790B2 (en) | 2008-08-20 | 2011-04-19 | Qimonda Ag | Integrated circuit and programmable delay |
US8689218B1 (en) | 2008-10-15 | 2014-04-01 | Octasic Inc. | Method for sharing a resource and circuit making use of same |
US7986706B2 (en) | 2009-04-29 | 2011-07-26 | Telefonaktiebolaget Lm Ericsson | Hierarchical pipelined distributed scheduling traffic manager |
GB2470780B (en) | 2009-06-05 | 2014-03-26 | Advanced Risc Mach Ltd | A data processing apparatus and method for performing a predetermined rearrangement operation |
US20110072238A1 (en) | 2009-09-20 | 2011-03-24 | Mimar Tibet | Method for variable length opcode mapping in a VLIW processor |
US20110072236A1 (en) | 2009-09-20 | 2011-03-24 | Mimar Tibet | Method for efficient and parallel color space conversion in a programmable processor |
JP5565228B2 (en) * | 2010-09-13 | 2014-08-06 | ソニー株式会社 | Processor |
WO2012052774A2 (en) * | 2010-10-21 | 2012-04-26 | Bluwireless Technology Limited | Data processing units |
US9170638B2 (en) | 2010-12-16 | 2015-10-27 | Advanced Micro Devices, Inc. | Method and apparatus for providing early bypass detection to reduce power consumption while reading register files of a processor |
US8832412B2 (en) | 2011-07-20 | 2014-09-09 | Broadcom Corporation | Scalable processing unit |
JP5861354B2 (en) | 2011-09-22 | 2016-02-16 | 富士通株式会社 | Arithmetic processing device and control method of arithmetic processing device |
GB2503438A (en) | 2012-06-26 | 2014-01-01 | Ibm | Method and system for pipelining out of order instructions by combining short latency instructions to match long latency instructions |
US9569214B2 (en) | 2012-12-27 | 2017-02-14 | Nvidia Corporation | Execution pipeline data forwarding |
US9495154B2 (en) | 2013-03-13 | 2016-11-15 | Qualcomm Incorporated | Vector processing engines having programmable data path configurations for providing multi-mode vector processing, and related vector processors, systems, and methods |
WO2015035338A1 (en) | 2013-09-06 | 2015-03-12 | Futurewei Technologies, Inc. | Method and apparatus for asynchronous processor with a token ring based parallel processor scheduler |
-
2014
- 2014-09-08 WO PCT/US2014/054618 patent/WO2015035338A1/en active Application Filing
- 2014-09-08 WO PCT/US2014/054610 patent/WO2015035330A1/en active Application Filing
- 2014-09-08 EP EP14842884.0A patent/EP3014468A4/en not_active Ceased
- 2014-09-08 WO PCT/US2014/054607 patent/WO2015035327A1/en active Application Filing
- 2014-09-08 US US14/480,522 patent/US9740487B2/en active Active
- 2014-09-08 CN CN201480040506.3A patent/CN105379121B/en active Active
- 2014-09-08 US US14/480,491 patent/US9489200B2/en active Active
- 2014-09-08 WO PCT/US2014/054613 patent/WO2015035333A1/en active Application Filing
- 2014-09-08 US US14/480,556 patent/US9846581B2/en active Active
- 2014-09-08 WO PCT/US2014/054616 patent/WO2015035336A1/en active Application Filing
- 2014-09-08 CN CN201480040566.5A patent/CN105431819A/en active Pending
- 2014-09-08 CN CN201480041103.0A patent/CN105393240B/en active Active
- 2014-09-08 EP EP14841550.8A patent/EP3014429B1/en active Active
- 2014-09-08 US US14/480,561 patent/US20150074680A1/en not_active Abandoned
- 2014-09-08 EP EP14842900.4A patent/EP3031137B1/en active Active
- 2014-09-08 WO PCT/US2014/054620 patent/WO2015035340A1/en active Application Filing
- 2014-09-08 US US14/480,573 patent/US10042641B2/en active Active
- 2014-09-08 US US14/480,531 patent/US9606801B2/en active Active
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9400685B1 (en) | 2015-01-30 | 2016-07-26 | Huawei Technologies Co., Ltd. | Dividing, scheduling, and parallel processing compiled sub-tasks on an asynchronous multi-core processor |
US20220276983A1 (en) * | 2018-07-05 | 2022-09-01 | Mythic, Inc. | Systems and methods for implementing an intelligence processing computing architecture |
US12013807B2 (en) * | 2018-07-05 | 2024-06-18 | Mythic, Inc. | Systems and methods for implementing an intelligence processing computing architecture |
US11321019B2 (en) * | 2019-09-13 | 2022-05-03 | Accemic Technologies Gmbh | Event processing |
Also Published As
Publication number | Publication date |
---|---|
WO2015035340A1 (en) | 2015-03-12 |
EP3014429A4 (en) | 2016-09-21 |
US9489200B2 (en) | 2016-11-08 |
US20150074446A1 (en) | 2015-03-12 |
US9846581B2 (en) | 2017-12-19 |
US9740487B2 (en) | 2017-08-22 |
CN105379121B (en) | 2019-06-28 |
WO2015035330A1 (en) | 2015-03-12 |
US20150074374A1 (en) | 2015-03-12 |
US20150074445A1 (en) | 2015-03-12 |
US20150074443A1 (en) | 2015-03-12 |
EP3014429A1 (en) | 2016-05-04 |
WO2015035336A1 (en) | 2015-03-12 |
EP3014468A4 (en) | 2017-06-21 |
WO2015035338A1 (en) | 2015-03-12 |
US20150074380A1 (en) | 2015-03-12 |
WO2015035333A1 (en) | 2015-03-12 |
US10042641B2 (en) | 2018-08-07 |
CN105393240A (en) | 2016-03-09 |
CN105379121A (en) | 2016-03-02 |
CN105393240B (en) | 2018-01-23 |
WO2015035327A1 (en) | 2015-03-12 |
EP3014468A1 (en) | 2016-05-04 |
EP3031137A1 (en) | 2016-06-15 |
EP3014429B1 (en) | 2020-03-04 |
EP3031137B1 (en) | 2022-01-05 |
EP3031137A4 (en) | 2018-01-10 |
US9606801B2 (en) | 2017-03-28 |
CN105431819A (en) | 2016-03-23 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150074680A1 (en) | Method and apparatus for asynchronous processor with a token ring based parallel processor scheduler | |
US10445451B2 (en) | Processors, methods, and systems for a configurable spatial accelerator with performance, correctness, and power reduction features | |
US10515046B2 (en) | Processors, methods, and systems with a configurable spatial accelerator | |
US9928036B2 (en) | Random number generator | |
US10755242B2 (en) | Bitcoin mining hardware accelerator with optimized message digest and message scheduler datapath | |
US9529997B2 (en) | Centralized platform settings management for virtualized and multi OS systems | |
US20140075153A1 (en) | Reducing issue-to-issue latency by reversing processing order in half-pumped simd execution units | |
US20160011874A1 (en) | Silent memory instructions and miss-rate tracking to optimize switching policy on threads in a processing device | |
US9753832B2 (en) | Minimizing bandwith to compress output stream in instruction tracing systems | |
KR100730280B1 (en) | Apparatus and Method for Optimizing Loop Buffer in Reconfigurable Processor | |
US20190041895A1 (en) | Single clock source for a multiple die package | |
US20200366457A1 (en) | Phase locked loop switching in a communication system | |
US10133578B2 (en) | System and method for an asynchronous processor with heterogeneous processors | |
CN111352894B (en) | Single-instruction multi-core system, instruction processing method and storage medium | |
US10812075B2 (en) | Dynamic on-die termination | |
US9495316B2 (en) | System and method for an asynchronous processor with a hierarchical token system | |
Yan et al. | A reconfigurable processor architecture combining multi-core and reconfigurable processing units |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: HUAWEI TECHNOLOGIES CO., LTD., CHINA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHANG, QIFAN;GE, YIQUN;SHI, WUXIAN;AND OTHERS;SIGNING DATES FROM 20160127 TO 20160201;REEL/FRAME:037768/0720 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |