CN111079910B - Operation method, device and related product - Google Patents
- Publication number: CN111079910B
- Application number: CN201811220923.2A
- Authority
- CN
- China
- Prior art keywords
- instruction
- vector logic
- vector
- operating
- macro
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Theoretical Computer Science (AREA)
- Evolutionary Computation (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Neurology (AREA)
- Advance Control (AREA)
Abstract
The disclosure relates to an operation method, an operation device, and a related product. The device comprises a device determining module and an instruction generating module. The device determining module is configured to determine, according to a received vector logic computation macro-instruction, the running device that will execute the macro-instruction. The instruction generating module is configured to generate operation instructions according to the vector logic computation macro-instruction and the running device. The operation method, operation device, and related products provided by the embodiments of the disclosure can be used across platforms, and offer good applicability, fast instruction conversion, high processing efficiency, a low error probability, and low development cost in manpower and material resources.
Description
Technical Field
The present disclosure relates to the field of information processing technologies, and in particular, to a vector logic calculation instruction generation method, an apparatus, and a related product.
Background
With the continuous development of science and technology, neural network algorithms are used ever more widely, and are well applied in fields such as image recognition, speech recognition, and natural language processing. However, as the complexity of neural network algorithms grows, their scale keeps increasing. A large-scale neural network model based on a Graphics Processing Unit (GPU) or a Central Processing Unit (CPU) takes a great deal of computation time and consumes a large amount of power. In the related art, methods for accelerating the processing of neural network models suffer from problems such as the inability to work across platforms, low processing efficiency, high development cost, and proneness to error.
Disclosure of Invention
In view of this, the present disclosure provides a vector logic computation instruction generation method, device, and related product, which can be used across platforms, improve processing efficiency, and reduce the error probability and development cost.
According to a first aspect of the present disclosure, there is provided a vector logic computation instruction generation apparatus, the apparatus comprising:
a device determining module, configured to determine, according to a received vector logic computation macro-instruction, a running device for executing the vector logic computation macro-instruction;
an instruction generating module, configured to generate an operation instruction according to the vector logic computation macro-instruction and the running device,
wherein the vector logic computation macro-instruction refers to a macro-instruction for performing a logic operation on a vector,
the vector logic computation macro-instruction comprises an operation type, an input address, and an output address; the operation instruction comprises the operation type, an operation input address, and an operation output address, the operation input address and the operation output address being determined according to the input address and the output address, respectively.
According to a second aspect of the present disclosure, there is provided a machine learning arithmetic device, the device including:
one or more vector logic computation instruction generating devices according to the first aspect, configured to obtain data to be computed and control information from another processing device, execute a specified machine learning operation, and transmit an execution result to the other processing device through an I/O interface;
when the machine learning arithmetic device comprises a plurality of vector logic calculation instruction generating devices, the vector logic calculation instruction generating devices can be connected through a specific structure and transmit data;
the vector logic calculation instruction generation devices are interconnected through a Peripheral Component Interface Express (PCIE) bus and transmit data so as to support larger-scale machine learning operation; the vector logic calculation instruction generation devices share the same control system or own respective control systems; the vector logic calculation instruction generation devices share a memory or own memories; the interconnection mode of the vector logic calculation instruction generation devices is any interconnection topology.
According to a third aspect of the present disclosure, there is provided a combined processing apparatus, the apparatus comprising:
the machine learning arithmetic device, the universal interconnect interface, and the other processing device according to the second aspect;
and the machine learning arithmetic device interacts with the other processing device to jointly complete the computation operation specified by the user.
According to a fourth aspect of the present disclosure, there is provided a machine learning chip including the machine learning arithmetic device of the second aspect or the combined processing device of the third aspect.
According to a fifth aspect of the present disclosure, there is provided a machine learning chip package structure, which includes the machine learning chip of the fourth aspect.
According to a sixth aspect of the present disclosure, a board card is provided, which includes the machine learning chip packaging structure of the fifth aspect.
According to a seventh aspect of the present disclosure, there is provided an electronic device, which includes the machine learning chip of the fourth aspect or the board of the sixth aspect.
According to an eighth aspect of the present disclosure, there is provided a vector logic calculation instruction generation method, the method comprising:
determining, according to a received vector logic computation macro-instruction, a running device for executing the vector logic computation macro-instruction;
generating an operation instruction according to the vector logic computation macro-instruction and the running device,
wherein the vector logic computation macro-instruction refers to a macro-instruction for performing a logic operation on a vector,
the vector logic computation macro-instruction comprises an operation type, an input address, and an output address; the operation instruction comprises the operation type, an operation input address, and an operation output address, the operation input address and the operation output address being determined according to the input address and the output address, respectively.
In some embodiments, the electronic device comprises a data processing apparatus, a robot, a computer, a printer, a scanner, a tablet, a smart terminal, a cell phone, a tachograph, a navigator, a sensor, a camera, a server, a cloud server, a camera, a camcorder, a projector, a watch, a headset, a mobile storage, a wearable device, a vehicle, a household appliance, and/or a medical device.
In some embodiments, the vehicle comprises an aircraft, a ship, and/or a vehicle; the household appliances comprise a television, an air conditioner, a microwave oven, a refrigerator, an electric cooker, a humidifier, a washing machine, an electric lamp, a gas stove and a range hood; the medical equipment comprises a nuclear magnetic resonance apparatus, a B-ultrasonic apparatus and/or an electrocardiograph.
The device comprises a device determining module and an instruction generating module. The device determining module is configured to determine, according to a received vector logic computation macro-instruction, the running device that executes the macro-instruction. The instruction generating module is configured to generate operation instructions according to the vector logic computation macro-instruction and the running device. The method, device, and related products can be used across platforms, with good applicability, fast instruction conversion, high processing efficiency, a low error probability, and low development cost in manpower and material resources.
Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments, features, and aspects of the disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 illustrates a block diagram of a vector logic compute instruction generation apparatus according to an embodiment of the present disclosure.
Fig. 2 illustrates a block diagram of a vector logic compute instruction generation apparatus according to an embodiment of the present disclosure.
Fig. 3a and 3b are schematic diagrams illustrating application scenarios of a vector logic calculation instruction generation apparatus according to an embodiment of the present disclosure.
Fig. 4a, 4b show block diagrams of a combined processing device according to an embodiment of the present disclosure.
Fig. 5 shows a schematic structural diagram of a board card according to an embodiment of the present disclosure.
FIG. 6 shows a flow diagram of a vector logic compute instruction generation method according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used herein to mean "serving as an example, embodiment, or illustration." Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
Fig. 1 illustrates a block diagram of a vector logic compute instruction generation apparatus according to an embodiment of the present disclosure. As shown in fig. 1, the apparatus includes a device determination module 11 and an instruction generation module 12. The device determination module 11 is configured to determine, according to the received vector logic computation macro-instruction, a running device that executes the vector logic computation macro-instruction. The instruction generation module 12 is configured to generate an operation instruction according to the vector logic computation macro-instruction and the running device.
The vector logic computation macro-instruction refers to a macro-instruction for performing a logic operation on a vector. The vector logic computation macro-instruction contains an operation type, an input address, and an output address; the operation instruction contains the operation type, an operation input address, and an operation output address. The operation input address and the operation output address are determined according to the input address and the output address, respectively.
In this implementation, a macro is a term for batch processing: a rule, pattern, or syntactic replacement that is carried out automatically whenever the macro is encountered. A vector logic computation macro-instruction can be formed by integrating the commonly used vector logic computation instructions to be executed for computing, controlling, transporting, and otherwise processing data.
In one possible implementation, the vector logic computation macro-instruction may include at least one of the following: a vector AND computation macro-instruction, a vector OR computation macro-instruction, a vector NOT computation macro-instruction, and a vector comparison computation macro-instruction.
In one possible implementation, the vector logic computation macro-instruction may include at least one of the following options: the identifier of a specified device for executing the vector logic computation macro-instruction, an input quantity, an output quantity, operands, and instruction parameters. The instruction parameters may include at least one of the address and the length of a second operand.
The identifier of the specified device may be its physical address, IP address, name, number, and the like, and may comprise one or any combination of numbers, letters, and symbols. When the position of the specified-device identifier in the vector logic computation macro-instruction is empty, or when the macro-instruction contains no "identifier of specified device" field at all, it is determined that the macro-instruction has no specified device. The operation type refers to the type of operation the macro-instruction performs on the data and identifies the specific kind of vector logic computation macro-instruction; for example, when the operation type of a macro-instruction is "XXX", the specific operation it performs on the data can be determined from "XXX". The instruction set required to execute the macro-instruction can likewise be determined from the operation type: when the operation type is "XXX", the required instruction set is the full set of instructions needed to carry out the processing corresponding to "XXX". The input address may be an input address of the data, i.e., an address from which data is obtained, such as a read address; the output address may be an output address for the processed data, i.e., an address at which data is stored, such as a write address. The input quantity may be information indicating the size of the input data, such as its input size or input length.
The output quantity may be information indicating the size of the output data, such as its output size or output length. The operands may include the length of a register, the address of a register, the identifier of a register, an immediate, and the like, where an immediate is a number given directly in an instruction using the immediate addressing mode. Instruction parameters refer to parameters associated with executing the vector logic computation macro-instruction; for example, they may be the address and length of a second operand, or the size, stride, and padding of a convolution kernel.
In this implementation, a vector logic computation macro-instruction must include an opcode, i.e., the operation type, and at least one operation domain, where the operation domain includes the identifier of the specified device, the input address, the output address, the input quantity, the output quantity, the operands, and the instruction parameters. The opcode is the part of an instruction or field (usually denoted by a code) specified in a computer program that performs an operation; it is the instruction's sequence number, and tells the device executing the instruction which instruction specifically needs to be executed. The operation domain is the source of all data required for executing the corresponding instruction, including parameter data, the data to be operated on or processed, and the corresponding operation method, or the addresses storing the parameter data, the data to be operated on or processed, and the corresponding operation method.
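As a non-authoritative sketch, the opcode-plus-operation-domain structure described above might be modelled as follows. All field names, and operation-type strings such as "VAND", are illustrative assumptions; the patent does not fix a concrete encoding:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VectorLogicMacro:
    """Hypothetical model of a vector logic computation macro-instruction:
    an opcode (op_type) plus an operation domain (all remaining fields)."""
    op_type: str                       # opcode, e.g. "VAND", "VOR", "VNOT", "VCMP"
    input_addr: int                    # address from which the input vector is read
    output_addr: int                   # address to which the result is written
    device_id: Optional[str] = None    # identifier of a specified device, may be absent
    input_size: Optional[int] = None   # input quantity (e.g. vector length)
    output_size: Optional[int] = None  # output quantity
    operands: List[int] = field(default_factory=list)  # register ids, immediates, ...
    params: List[int] = field(default_factory=list)    # e.g. address/length of 2nd operand

macro = VectorLogicMacro("VAND", 0x1000, 0x2000, device_id="09", input_size=256)
assert macro.device_id is not None  # identifier present: a specified device exists
```

When `device_id` is `None`, the macro-instruction has no specified device, matching the empty-identifier case discussed above.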
It should be understood that the instruction format and content of vector logic computation macro-instructions may be configured as desired by those skilled in the art, and the present disclosure is not limited in this respect.
In this embodiment, the device determining module 11 may determine one or more running devices according to the vector logic computation macro-instruction, and the instruction generation module 12 may generate one or more operation instructions. When multiple operation instructions are generated, they may be executed on the same running device or on different running devices; the present disclosure is not limited in this respect.
The vector logic computation instruction generation device provided by the embodiments of the disclosure comprises a device determining module and an instruction generating module. The device determining module is configured to determine, according to a received vector logic computation macro-instruction, a running device for executing the macro-instruction. The instruction generating module is configured to generate an operation instruction according to the vector logic computation macro-instruction and the running device. The device can be used across platforms, with good applicability, fast instruction conversion, high processing efficiency, a low error probability, and low development cost in manpower and material resources.
Fig. 2 illustrates a block diagram of a vector logic compute instruction generation apparatus according to an embodiment of the present disclosure. In a possible implementation, as shown in fig. 2, the apparatus may further include a macro instruction generation module 13. The macro instruction generation module 13 is configured to receive a vector logic computation instruction to be executed, and to generate a vector logic computation macro-instruction according to the determined identifier of the specified device and the vector logic computation instruction to be executed.
In this implementation, the specified device may be determined according to the operation type, input quantity, output quantity, and the like of the vector logic computation instruction to be executed. There may be one or more received vector logic computation instructions to be executed.
The vector logic computation instruction to be executed may include at least one of: a vector AND computation instruction to be executed, a vector OR computation instruction to be executed, a vector NOT computation instruction to be executed, and a vector comparison computation instruction to be executed.
The vector logic computation instruction to be executed may include at least one of the following options: operation type, input address, output address, input quantity, output quantity, operands, and instruction parameters.
In this implementation, when there is one vector logic computation instruction to be executed, the determined identifier of the specified device may be added to it to generate a vector logic computation macro-instruction. For example, suppose a vector logic computation instruction m to be executed is "XXX … … param", where XXX is the operation type and param is an instruction parameter. Its specified device m-1 may be determined from the operation type "XXX". Then the identifier (e.g., 09) of specified device m-1 is added to the instruction m, generating the corresponding vector logic computation macro-instruction M, "XXX 09, … … param". When there are multiple vector logic computation instructions to be executed, the identifier of the specified device determined for each instruction may be added to that instruction, and one vector logic computation macro-instruction, or multiple corresponding macro-instructions, may be generated from the multiple instructions carrying specified-device identifiers.
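The generation step in the example above ("XXX … param" becoming "XXX 09, … param") can be sketched as follows. The lookup-table policy and every name here are hypothetical; the patent leaves the device-determination rule open:

```python
def determine_specified_device(op_type: str) -> str:
    # Hypothetical policy: map an operation type to the identifier of the
    # device designated to execute it.
    table = {"XXX": "09", "VAND": "02"}
    return table[op_type]

def to_macro(instruction: str) -> str:
    """Insert the specified-device identifier right after the operation type."""
    op_type, _, rest = instruction.partition(" ")
    device_id = determine_specified_device(op_type)
    return f"{op_type} {device_id}, {rest}"

# "XXX addr_in, addr_out, param" -> "XXX 09, addr_in, addr_out, param",
# matching the textual example above.
assert to_macro("XXX addr_in, addr_out, param") == "XXX 09, addr_in, addr_out, param"
```

For multiple instructions to be executed, the same transformation would simply be applied per instruction before merging them into one or several macro-instructions.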
It should be understood that the instruction format and content of the vector logic computation instruction to be executed may be configured as desired by those skilled in the art, and the present disclosure is not limited in this respect.
In one possible implementation, as shown in fig. 2, the device determining module 11 may include a first determining sub-module 111. The first determining submodule 111 is configured to determine the specified device as an operating device when it is determined that the vector logic calculation macro includes an identifier of the specified device and a resource of the specified device meets an execution condition for executing the vector logic calculation macro. Wherein, the execution condition may include: the designated device contains a set of instructions corresponding to the vector logic compute macroinstructions.
In this implementation, the vector logic compute macroinstruction may contain therein an identification of one or more specified devices that execute the vector logic compute macroinstruction. When the vector logic calculation macro instruction includes the identifier of the specified device and the resource of the specified device meets the execution condition, the first determining sub-module 111 may directly determine the specified device as the operating device, so as to save the generation time for generating the operating instruction based on the vector logic calculation macro instruction and ensure that the generated operating instruction can be executed by the corresponding operating device.
In one possible implementation, as shown in fig. 2, the apparatus may further include a resource acquisition module 14, and the device determination module 11 may further include a second determination submodule 112. The resource acquisition module 14 is configured to obtain resource information of the alternative devices. The second determination submodule 112 is configured to, when it is determined that the vector logic computation macro-instruction does not include the identifier of a specified device, determine, from the alternative devices, a running device for executing the macro-instruction according to the received macro-instruction and the resource information of the alternative devices. The resource information may comprise the instruction sets contained by an alternative device, where an instruction set is a set of instructions corresponding to one or more operation types of vector logic computation macro-instructions. The more instruction sets an alternative device contains, the more types of vector logic computation macro-instructions it is capable of executing.
In this implementation, the second determining submodule 112 may determine, from the candidate devices, one or more running devices capable of executing the vector logic computation macro instruction when it is determined that the vector logic computation macro instruction does not include the identifier of the specified device. Wherein the determined instruction set of the running device comprises an instruction set corresponding to the vector logic computation macroinstruction. For example, where the received vector logic calculates the macro instruction as a vector and calculate macro instruction, an alternative device containing an instruction set corresponding to the vector and calculate macro instruction may be determined as the executing device to ensure that it can execute the generated executing instruction.
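The selection rule above — keep the candidate devices whose instruction set covers the macro-instruction — might be sketched as follows. The device names, the operation-type strings, and the flat set representation of an instruction set are all illustrative assumptions:

```python
# Hypothetical resource information: each alternative device is described
# by the operation types its instruction sets can execute.
candidates = {
    "device_a": {"VAND", "VOR"},
    "device_b": {"VAND", "VNOT", "VCMP"},
    "device_c": {"VCMP"},
}

def select_running_devices(op_type: str, devices: dict) -> list:
    """Return every alternative device whose instruction set contains the
    set corresponding to the macro-instruction's operation type."""
    return [name for name, isa in devices.items() if op_type in isa]

# A vector-AND macro-instruction can run on device_a or device_b.
assert select_running_devices("VAND", candidates) == ["device_a", "device_b"]
```

The same lookup serves the third determination submodule's fallback: when a specified device lacks the required instruction set, a running device is chosen from the candidates instead.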
In one possible implementation, as shown in fig. 2, the device determining module 11 may further include a third determining sub-module 113. When it is determined that the vector logic calculation macro instruction includes the identifier of the specified device and the resource of the specified device does not satisfy the execution condition for executing the vector logic calculation macro instruction, the third determining submodule 113 determines the operating device according to the vector logic calculation macro instruction and the resource information of the alternative device.
In this implementation, when it is determined that the vector logic calculation macro instruction includes an identifier of a specific device and the resource of the specific device does not satisfy the execution condition, the third determining sub-module 113 may determine that the specific device of the vector logic calculation macro instruction does not have the capability of executing the vector logic calculation macro instruction. The third determination submodule 113 may determine an execution device from among the candidate devices, and may determine, as the execution device, a candidate device containing an instruction set corresponding to an operation type of the vector logic calculation macro instruction.
In a possible implementation, as shown in fig. 2, the vector logic computation macro-instruction may include at least one of an input quantity and an output quantity, and the instruction generation module 12 is further configured to determine the data amount of the macro-instruction and to generate the operation instruction according to that data amount, the macro-instruction itself, and the resource information of the operating device. The data amount of the vector logic computation macro-instruction may be determined according to at least one of the input quantity and the output quantity, and the resource information of the operating device may further include at least one of its storage capacity and its remaining storage capacity.
The storage capacity of the operating device may refer to the amount of binary information that the memory of the operating device can accommodate. The remaining storage capacity of the operating device may refer to the storage capacity that the operating device is currently available for instruction execution after the occupied storage capacity is removed. The resource information of the running device can characterize the running capability of the running device. The larger the storage capacity and the larger the remaining storage capacity are, the stronger the operation capability of the operation device is.
In this implementation, the instruction generating module 12 may determine a specific manner of splitting the vector logic computing macro instruction according to resource information of each operating device, a data amount of the vector logic computing macro instruction, and the like, so as to split the vector logic computing macro instruction and generate an operating instruction corresponding to the operating device.
In one possible implementation, as shown in fig. 2, the instruction generating module 12 may include a first instruction generating submodule 121. The first instruction generation submodule 121 is configured to, when it is determined that there is one execution device and the amount of operation data of the execution device is smaller than the amount of data of the vector logic calculation macro instruction, split the vector logic calculation macro instruction into a plurality of execution instructions according to the amount of operation data and the amount of data of the execution device, so that the execution device sequentially executes the plurality of execution instructions. The operation data amount of the operation device may be determined according to the resource information of the operation device, each operation instruction may include at least one of an operation input amount and an operation output amount, and the operation input amount and the operation output amount may be determined according to the operation data amount.
In this implementation, the operation data amount of the operation device may be determined according to the storage capacity or the remaining storage capacity of the operation device. The operation input quantity and the operation output quantity are required to be less than or equal to the operation data quantity so as to ensure that the generated operation instruction can be executed by the operation equipment. The operation input amount (or operation output amount) of different operation instructions in the plurality of operation instructions may be the same or different, and the disclosure does not limit this.
In this implementation, when it is determined that there is one operating device and its operation data amount is greater than or equal to the data amount of the vector logic computation macro-instruction, the first instruction generation sub-module 121 may directly convert the macro-instruction into a single operation instruction, or may still split it into multiple operation instructions; the present disclosure is not limited in this respect.
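A minimal sketch of this splitting, assuming a fixed element size and contiguous addressing (both assumptions; the patent does not fix an address layout): each generated operation instruction covers at most the device's operation data amount and carries its own operation input address and operation input quantity.

```python
def split_macro(total: int, capacity: int, base_addr: int, elem_size: int = 4):
    """Split a macro-instruction whose data amount (total) exceeds the
    operating device's operation data amount (capacity) into several
    operation instructions, executed in sequence by that device."""
    pieces = []
    offset = 0
    while offset < total:
        amount = min(capacity, total - offset)  # never exceed the capacity
        pieces.append({"in_addr": base_addr + offset * elem_size,
                       "in_amount": amount})
        offset += amount
    return pieces

# A 1000-element macro-instruction on a device whose operation data amount
# is 256 elements yields four operation instructions: 256 + 256 + 256 + 232.
parts = split_macro(total=1000, capacity=256, base_addr=0x1000)
assert [p["in_amount"] for p in parts] == [256, 256, 256, 232]
```

For the multi-device case handled by the second instruction generation submodule, the same loop would be driven by each operating device's own operation data amount.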
In one possible implementation, as shown in FIG. 2, the instruction generation module 12 may include a second instruction generation submodule 122. The second instruction generation submodule 122 is configured to, when it is determined that there are multiple operating devices, split the vector logic computation macro-instruction according to the operation data amount of each operating device and the data amount of the macro-instruction, and generate the operation instruction corresponding to each operating device. The operation data amount of each operating device may be determined according to its resource information; the operation instruction may include at least one of an operation input quantity and an operation output quantity, which are determined according to the operation data amount of the operating device that executes the instruction.
In this implementation, the operation input amount and the operation output amount must be less than or equal to the operation data amount, to ensure that the generated operation instruction can be executed by its operating device. The second instruction generation sub-module 122 may generate one or more operation instructions for each operating device according to that device's operation data amount, to be executed by the corresponding operating device.
In the above implementation, having the operation instruction carry at least one of the operation input amount and the operation output amount not only limits the data amount of the operation instruction so that it can be executed by the corresponding operating device, but also satisfies any special requirements that different operation instructions place on the operation input amount and/or operation output amount.
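The multi-device case can be sketched in the same spirit. This is an illustrative round-robin division, an assumption rather than the disclosed splitting rule; its only guarantee, matching the text above, is that every generated operation instruction's data amount fits the operating device it is assigned to.

```python
def generate_for_multiple_devices(macro_data_amount, device_capacity):
    """Sketch of the second instruction generation sub-module.

    device_capacity maps an operating-device id to its operation data
    amount. Chunks are handed out round-robin; each chunk is at most the
    receiving device's operation data amount.
    """
    instructions = {dev: [] for dev in device_capacity}
    remaining = macro_data_amount
    while remaining > 0:
        for dev, cap in device_capacity.items():
            if remaining <= 0:
                break
            chunk = min(remaining, cap)
            instructions[dev].append(chunk)
            remaining -= chunk
    return instructions
```

For a macro instruction with data amount 10 and devices NPU-2 (capacity 4) and NPU-n (capacity 3), NPU-2 receives instructions of sizes 4 and 3 and NPU-n one of size 3.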
In a possible implementation, some operation instructions place no special requirement on the operation input amount and/or operation output amount and may omit them. For such instructions, a default operation input amount and a default operation output amount may be preset, so that when the operating device determines that a received operation instruction carries no operation input amount or operation output amount, it uses the defaults instead. Presetting default amounts simplifies the generation of operation instructions and saves generation time.
In one possible implementation, a default input amount and a default output amount may be set in advance for each type of vector logic calculation macro instruction. When a macro instruction carries no input amount and output amount, the corresponding defaults may be used in their place. The data amount of the macro instruction is then determined from the default input amount and/or default output amount, and the operation instruction is generated according to that data amount and the resource information of the operating device. When the macro instruction includes neither an input amount nor an output amount, the generated operation instruction may likewise omit both, or may include at least one of them; when an operation instruction omits the operation input amount and/or operation output amount, the operating device executes it using the preset defaults.
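The default-lookup behaviour can be shown with a small table. The specific default values, the fallback for unknown types, and the function name are all hypothetical; only the fall-back-to-preset-defaults logic mirrors the text.

```python
# Hypothetical preset defaults per operation type: (input amount, output amount)
DEFAULT_SIZES = {"VAND": (64, 64), "VOR": (64, 64)}
GENERIC_DEFAULT = (32, 32)  # assumed fallback for types without an entry

def resolve_sizes(op_type, input_size=None, output_size=None):
    """Use the macro instruction's own sizes when present; otherwise fall
    back to the defaults preset for that operation type."""
    d_in, d_out = DEFAULT_SIZES.get(op_type, GENERIC_DEFAULT)
    return (input_size if input_size is not None else d_in,
            output_size if output_size is not None else d_out)
```

So a VAND macro instruction with no sizes resolves to the preset (64, 64), while one carrying only an input amount of 16 resolves to (16, 64).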
In a possible implementation, the instruction generation module 12 may also split the vector logic calculation macro instruction according to a preset splitting rule to generate the operation instructions. The splitting rule may be determined from a conventional splitting manner (for example, splitting along the processing procedure of the macro instruction) combined with an operation data amount threshold for instructions executable by all the candidate devices: the storage capacities (or remaining storage capacities) of all candidate devices are compared, and the minimum among them is taken as the threshold. The macro instruction is split into operation instructions whose operation input amount and operation output amount are less than or equal to this threshold, so that each generated operation instruction can be executed by its corresponding operating device (which may be any of the candidate devices).
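The threshold described above is simply the minimum over the candidates' capacities; a one-line sketch (function and data-shape names assumed):

```python
def operation_data_threshold(candidate_capacities):
    """candidate_capacities maps a candidate-device id to its storage
    capacity (or remaining storage capacity). The minimum bounds every
    generated instruction, so the instruction can run on whichever
    candidate device is ultimately chosen."""
    return min(candidate_capacities.values())
```

With candidates CPU-1 (256), GPU-n (128), and NPU-2 (512), any operation instruction whose input and output amounts stay within 128 is executable by all three.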
It should be understood that those skilled in the art can set the manner of generating operation instructions according to actual needs, which is not limited by this disclosure.
In this embodiment, the operation instruction generated by the instruction generation module from the vector logic calculation macro instruction may be a to-be-executed vector logic calculation instruction, or may be one or more parsed instructions obtained by parsing that to-be-executed instruction, which is not limited by this disclosure.
In one possible implementation, as shown in fig. 2, the apparatus may further include a queue building module 15. The queue building module 15 is configured to sort the operation instructions according to a queue sorting rule, and build an instruction queue corresponding to the operation device according to the sorted operation instructions.
In this implementation, an instruction queue uniquely corresponding to each operating device may be built for that device. The operation instructions may be sent one by one, in queue order, to the operating device uniquely corresponding to the instruction queue; alternatively, the whole instruction queue may be sent to the operating device, which then executes the operation instructions in queue order. In this way, the operating device executes operation instructions according to the instruction queue, which avoids mis-ordered or delayed execution and prevents operation instructions from being missed.
In this implementation, the queue sorting rule may be determined according to information related to the operation instruction itself, such as the predicted execution time of the operation instruction, its generation time, its operation input amount and operation output amount, and its operation type, which is not limited by this disclosure.
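One plausible queue sorting rule from the information listed above can be sketched as a sort key. The field names and the particular key (predicted execution time first, then generation time) are assumptions for illustration, not the disclosed rule.

```python
def build_instruction_queue(instructions):
    """Sort operation instructions into an instruction queue.

    Each instruction is a dict; here we assume it carries a
    'predicted_time' and a 'generated_at' field and sort by them,
    which is just one possible queue sorting rule.
    """
    return sorted(instructions,
                  key=lambda ins: (ins["predicted_time"], ins["generated_at"]))
```

For instance, two instructions with equal predicted time are ordered by generation time, and both precede a slower instruction.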
In one possible implementation, as shown in FIG. 2, the apparatus may also include an instruction dispatch module 16. The instruction dispatch module 16 is configured to send the execution instruction to the execution device, so that the execution device executes the execution instruction.
In this implementation, when there is only one operation instruction to be executed by the operating device, the operation instruction may be sent to it directly. When there are multiple operation instructions, all of them may be sent to the operating device at once, so that the operating device executes them in sequence; alternatively, the operation instructions may be sent one at a time, the next instruction being sent only after the operating device has completed the current one. Those skilled in the art can set the manner of sending operation instructions to the operating device according to actual needs, which is not limited by this disclosure.
In one possible implementation, as shown in FIG. 2, the instruction dispatch module 16 may include an instruction assembly submodule 161, an assembly translation submodule 162, and an instruction issue submodule 163. The instruction assembling sub-module 161 is used for generating an assembling file according to the operation instruction. The assembly translation sub-module 162 is used to translate the assembly file into a binary file. The instruction sending submodule 163 is configured to send the binary file to the operating device, so that the operating device executes the operating instruction according to the binary file.
By this method, the data amount of the operation instruction can be reduced, the time for sending the operation instruction to the operating device saved, and the conversion and execution of the vector logic calculation macro instruction sped up.
In this implementation manner, after the binary file is sent to the running device, the running device may decode the received binary file to obtain a corresponding running instruction, and execute the obtained running instruction to obtain an execution result.
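The assemble-translate-decode round trip of sub-modules 161–163 can be illustrated with a toy fixed-width encoding. The opcode table and the byte layout (one opcode byte plus four 32-bit little-endian fields) are entirely hypothetical; the sketch only shows that an operation instruction can be packed into a binary file and decoded back by the operating device.

```python
import struct

OPCODES = {"VAND": 0x01, "VOR": 0x02}  # hypothetical opcode assignments

def assemble(op_type, input_addr, output_addr, input_size, output_size):
    """Pack one operation instruction into an assumed binary layout:
    1-byte opcode followed by four little-endian 32-bit fields."""
    return struct.pack("<BIIII", OPCODES[op_type],
                       input_addr, output_addr, input_size, output_size)

def disassemble(blob):
    """Decode the binary file back into an operation instruction tuple,
    as the operating device would before executing it."""
    op, in_addr, out_addr, in_sz, out_sz = struct.unpack("<BIIII", blob)
    name = {v: k for k, v in OPCODES.items()}[op]
    return name, in_addr, out_addr, in_sz, out_sz
```

A VAND instruction with input address 501, output address 7, input size 33, and output size 4 packs into 17 bytes and decodes back unchanged.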
In a possible implementation, the operating device may be a CPU, a GPU, an embedded neural-network processing unit (NPU), or any combination thereof. In this way, the speed at which the apparatus generates operation instructions from vector logic calculation macro instructions is improved.
In one possible implementation, the apparatus may be provided in a CPU and/or an NPU, so that the process of generating operation instructions from vector logic calculation macro instructions is carried out by the CPU and/or NPU, providing more possible ways to implement the apparatus.
In one possible implementation, a vector logic calculation macro instruction may refer to a macro instruction used to perform a logical operation on vectors, for example a macro instruction performing a logical AND, OR, or comparison of vectors. Different types of vector logic calculation macro instructions correspond to different operation types; for example, the operation type corresponding to a vector-AND calculation macro instruction may be VAND, and that corresponding to a vector-OR calculation macro instruction may be VOR.
The present disclosure further provides an operating device for executing the operation instructions generated by the above vector logic calculation instruction generation apparatus. The operating device includes a control module and an execution module. The control module is configured to acquire data, a neural network model, and the operation instruction; it may also parse the operation instruction to obtain a plurality of parsed instructions and send the parsed instructions and the data to the execution module. The execution module is configured to execute the plurality of parsed instructions on the data to obtain an execution result.
In one possible implementation, the operating device further includes a storage module. The storage module may include at least one of a register and a cache; the cache, which may include a scratchpad cache, may be used to store the data, and the register may be used to store scalar data within the data.
In one possible implementation, the control module may include an instruction storage sub-module and an instruction processing sub-module. The instruction storage submodule is used for storing the operation instruction. The instruction processing submodule is used for analyzing the operation instruction to obtain a plurality of analysis instructions.
In one possible implementation, the control module may further include a storage queue sub-module. The storage queue sub-module is configured to store an operation instruction queue containing the operation instructions and the plurality of parsed instructions to be executed by the operating device, all arranged in order of execution.
In one possible implementation, the execution module may further include a dependency processing sub-module. When it determines that a first parsed instruction has an association relationship with a zeroth parsed instruction preceding it, the dependency processing sub-module caches the first parsed instruction in the instruction storage sub-module, and after the zeroth parsed instruction has finished executing, extracts the first parsed instruction from the instruction storage sub-module and sends it to the execution module.
The association relationship between the first parsed instruction and the preceding zeroth parsed instruction may be that the first storage address interval, which stores the data required by the first parsed instruction, overlaps the zeroth storage address interval, which stores the data required by the zeroth parsed instruction. Conversely, there is no association relationship when the first storage address interval and the zeroth storage address interval have no overlapping area.
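The overlap test described above is a standard interval-intersection check; a minimal sketch, assuming half-open (start, end) address intervals:

```python
def has_dependency(first_interval, zeroth_interval):
    """Return True when the first parsed instruction's storage address
    interval overlaps the zeroth's, i.e. the first instruction must wait
    until the zeroth has executed. Intervals are (start, end), end exclusive."""
    f_start, f_end = first_interval
    z_start, z_end = zeroth_interval
    return f_start < z_end and z_start < f_end
```

For example, intervals (100, 200) and (150, 300) overlap, so the later instruction is held back; (100, 200) and (200, 300) merely touch and carry no dependency.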
The present disclosure provides a vector logic calculation instruction processing system, which includes the above vector logic calculation instruction generation apparatus and the above operation device.
It should be noted that, although the vector logic calculation instruction generation apparatus, the operating device, and the vector logic calculation instruction processing system have been described above by way of the embodiments, those skilled in the art will understand that the present disclosure is not limited thereto. Users can flexibly configure each module according to personal preference and/or the actual application scenario, as long as the technical scheme of the disclosure is followed.
Application example
An application example according to the embodiment of the present disclosure is given below in conjunction with "a work process of the vector logic calculation instruction generation apparatus generating the execution instruction according to the vector logic calculation macro instruction" as an exemplary application scenario to facilitate understanding of the flow of the vector logic calculation instruction generation apparatus. It is to be understood by those skilled in the art that the following application examples are for the purpose of facilitating understanding of the embodiments of the present disclosure only and are not to be construed as limiting the embodiments of the present disclosure.
First, the instruction format of a vector logic calculation macro instruction, the instruction format of a to-be-executed vector logic calculation instruction, and the process by which an operating device executes an operation instruction are described, with specific examples as follows.
The instruction format for the vector logic to compute the macro-instructions may be:
Type device_id,input_addr,output_addr,input_size,output_size,[param1,param2,…]
Here, Type is the operation type, device_id is the identifier of the specified device, input_addr is the input address, output_addr is the output address, input_size is the size of the input vector (i.e., the input amount), output_size is the size of the output vector (i.e., the output amount), and param1, param2 are instruction parameters, which may be the address and length of a second operand.
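A parser for the format above can be sketched as follows. The tokenization details (a space after Type, commas between the remaining fields) follow the format string shown, but the dictionary field names and the handling of optional parameters are assumptions.

```python
def parse_macro(text):
    """Parse a vector logic calculation macro instruction of the form
    'Type device_id,input_addr,output_addr,input_size,output_size[,param...]'."""
    op_type, rest = text.split(None, 1)
    fields = [f.strip() for f in rest.split(",")]
    return {
        "type": op_type,
        "device_id": fields[0],
        "input_addr": fields[1],
        "output_addr": fields[2],
        "input_size": int(fields[3]),   # size of the input vector
        "output_size": int(fields[4]),  # size of the output vector
        "params": fields[5:],           # optional instruction parameters
    }
```

For example, `parse_macro("VAND 01,501,7,33,4")` yields operation type VAND on specified device 01 with input address 501, output address 7, input size 33, and output size 4.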
A vector logic calculation macro instruction must include an operation type, an input address, and an output address, and the operation instruction generated from it must likewise include an operation type, an operation input address, and an operation output address, where the operation input address and operation output address are determined from the input address and output address, respectively.
For example, suppose the operation instruction generated from a certain vector logic calculation macro instruction is "@VAND #501, #7, #33, #4". After the operating device receives this operation instruction, the execution process is as follows: an input vector of size 33 is read from input address 501, the logical operation is performed on it to obtain an output vector of size 4, and that output vector is stored at output address 7 as the execution result.
The instruction format of the vector logic computation instruction to be executed may be:
Type input_addr,output_addr,input_size,output_size,[param1,param2,…]
Here, Type is the operation type, input_addr is the input address, output_addr is the output address, input_size is the size of the input vector (i.e., the input amount), output_size is the size of the output vector (i.e., the output amount), and param1, param2 are instruction parameters, which may be the address and length of a second operand.
Fig. 3a and 3b are schematic diagrams illustrating application scenarios of a vector logic calculation instruction generation apparatus according to an embodiment of the present disclosure. As shown in FIGS. 3a and 3b, the number of alternative devices for executing vector logic calculation macro instructions may be multiple, and the alternative devices may be CPU-1, CPU-2, …, CPU-n, NPU-1, NPU-2, …, NPU-n and GPU-1, GPU-2, … and GPU-n. The working process and principle of computing macro instructions according to certain vector logic to generate operation instructions are as follows.
The resource acquisition module 14 acquires resource information of the candidate devices, including the remaining storage capacity and storage capacity of each candidate device and the instruction set it contains, and sends the acquired resource information to the device determination module 11 and the instruction generation module 12.
The device determination module 11 (including a first determination sub-module 111, a second determination sub-module 112, and a third determination sub-module 113)
When a vector logic calculation macro instruction is received, the operating device for executing it is determined according to the received macro instruction. For example, suppose the following vector logic calculation macro instructions, possibly from different platforms, are received.
Vector logic calculation macro instruction 1: @XXX #01 ……
Vector logic calculation macro instruction 2: @SSS #02 ……
Vector logic calculation macro instruction 3: @DDD #04 ……
Vector logic calculation macro instruction 4: @NNN ……
When the first determination sub-module 111 determines that a vector logic calculation macro instruction contains the identifier of a specified device and that the specified device contains the instruction set corresponding to the macro instruction, it may determine the specified device as the operating device for executing the macro instruction and send the identifier of the determined operating device to the instruction generation module 12. For example, it may determine the device corresponding to identifier 01, say CPU-2 (which contains the instruction set corresponding to vector logic calculation macro instruction 1), as the operating device for executing macro instruction 1, and the device corresponding to identifier 02, say CPU-1 (which contains the instruction set corresponding to vector logic calculation macro instruction 2), as the operating device for executing macro instruction 2.
When the third determination sub-module 113 determines that a vector logic calculation macro instruction contains the identifier of a specified device but that the specified device does not contain the corresponding instruction set, it may determine a candidate device that does contain the corresponding instruction set as the operating device and send that device's identifier to the instruction generation module 12. For example, upon determining that the specified device corresponding to identifier 04 does not contain the instruction set for vector logic calculation macro instruction 3, the third determination sub-module 113 may determine a candidate device containing the instruction set for operation type DDD, such as NPU-n or NPU-2, as the operating device for executing macro instruction 3.
When the second determination sub-module 112 determines that a vector logic calculation macro instruction carries no identifier of a specified device (the position corresponding to the identifier is empty, or the macro instruction has no "identifier of the specified device" field), it may determine the operating device from the candidate devices according to the macro instruction and the resource information of the candidate devices (see the related description of the second determination sub-module 112 for the specific process), and send the determined operating device's identifier to the instruction generation module 12. For example, since vector logic calculation macro instruction 4 has no specified-device identifier, the second determination sub-module 112 may determine the operating device for executing it, say GPU-n (which contains the instruction set corresponding to operation type NNN), from the candidate devices according to the macro instruction's operation type NNN and the candidate devices' resource information (the instruction sets they contain).
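The three determination paths above can be condensed into one sketch. This is an illustrative simplification under stated assumptions: candidates are keyed by device name rather than numeric identifier, instruction sets are modelled as sets of supported operation types, and the fallback simply picks the first supporting candidate.

```python
def determine_device(macro, candidates):
    """Pick the operating device for a macro instruction.

    candidates maps a candidate-device name to the set of operation types
    its instruction set supports. Mirrors the three sub-modules: use the
    specified device when it supports the operation (sub-module 111);
    otherwise, or when no device is specified, fall back to any candidate
    that supports it (sub-modules 113 and 112, simplified).
    """
    op = macro["type"]
    dev = macro.get("device_id")
    if dev is not None and op in candidates.get(dev, set()):
        return dev
    for cand, ops in candidates.items():
        if op in ops:
            return cand
    raise LookupError(f"no candidate device supports operation type {op}")
```

With candidates {CPU-2: {XXX}, NPU-n: {DDD}, GPU-n: {NNN}}, an XXX macro naming CPU-2 stays on CPU-2, a DDD macro naming CPU-2 is redirected to NPU-n, and an NNN macro with no specified device lands on GPU-n.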
Instruction generation module 12 (including a first instruction generation sub-module 121 and a second instruction generation sub-module 122)
When there is one operating device and its data amount is smaller than the data amount of the vector logic calculation macro instruction, the first instruction generation sub-module 121 splits the macro instruction into a plurality of operation instructions according to the macro instruction's data amount and the operating device's operation data amount, and sends them to the queue building module 15. For example, a plurality of operation instructions 2-1, 2-2, …, 2-n are generated according to the data amount of vector logic calculation macro instruction 2 and the operation data amount of the operating device CPU-1, and a plurality of operation instructions 4-1, 4-2, …, 4-n are generated according to the data amount of vector logic calculation macro instruction 4 and the operation data amount of the operating device GPU-n.
When it is determined that there is one operating device and its data amount is greater than or equal to the data amount of the vector logic calculation macro instruction, the first instruction generation sub-module 121 may generate one operation instruction from the macro instruction and send it to the queue building module 15. For example, a single operation instruction 1-1 is generated according to the data amount of vector logic calculation macro instruction 1 and the operation data amount of the operating device CPU-2.
When it is determined that there are multiple operating devices, the second instruction generation sub-module 122 splits the vector logic calculation macro instruction according to the operation data amount of each operating device and the data amount of the macro instruction, generates operation instructions corresponding to each operating device, and sends them to the queue building module 15. For example, according to the data amount of vector logic calculation macro instruction 3, the operation data amount of operating device NPU-n, and the operation data amount of operating device NPU-2, a plurality of operation instructions 3-1, 3-2, …, 3-n are generated for NPU-n, and a plurality of operation instructions 3'-1, 3'-2, …, 3'-n are generated for NPU-2.
The queue building module 15, upon receiving the operation instructions, sorts all the operation instructions to be executed by each operating device according to the queue sorting rule, builds a uniquely corresponding instruction queue for each operating device from the sorted operation instructions, and sends the instruction queues to the instruction dispatch module 16. Specifically:
For the operation instruction 1-1 executed by the operating device CPU-2, the instruction queue CPU-2" built for CPU-2 contains only the operation instruction 1-1.
For the plurality of operation instructions 2-1, 2-2, …, 2-n executed by the operating device CPU-1, the instructions are sorted according to the queue sorting rule, and the instruction queue CPU-1' corresponding to CPU-1 is built from the sorted operation instructions 2-1, 2-2, …, 2-n.
For the plurality of operation instructions 3-1, 3-2, …, 3-n executed by the operating device NPU-n, the instructions are sorted according to the queue sorting rule, and the instruction queue NPU-n' corresponding to NPU-n is built from the sorted operation instructions 3-n, …, 3-2, 3-1.
For the plurality of operation instructions 3'-1, 3'-2, …, 3'-n executed by the operating device NPU-2, the instructions are sorted according to the queue sorting rule, and the instruction queue NPU-2" corresponding to NPU-2 is built from the sorted operation instructions 3'-n, …, 3'-2, 3'-1.
For the plurality of operation instructions 4-1, 4-2, …, 4-n executed by the operating device GPU-n, the instructions are sorted according to the queue sorting rule, and the instruction queue GPU-n' corresponding to GPU-n is built from the sorted operation instructions 4-1, 4-2, …, 4-n.
After receiving the instruction queues, the instruction dispatch module 16 sends the operation instructions in each queue, in order, to the corresponding operating device for execution. For example, the operation instruction 1-1 in instruction queue CPU-2" is sent to its corresponding operating device CPU-2; the operation instructions 2-1, 2-2, …, 2-n in instruction queue CPU-1' are sent in order to CPU-1; the operation instructions 3-n, …, 3-2, 3-1 in instruction queue NPU-n' are sent in order to NPU-n; the operation instructions 3'-n, …, 3'-2, 3'-1 in instruction queue NPU-2" are sent in order to NPU-2; and the operation instructions 4-1, 4-2, …, 4-n in instruction queue GPU-n' are sent in order to GPU-n.
After receiving their instruction queues, the operating devices CPU-2, CPU-1, NPU-n, and NPU-2 execute the operation instructions in the order in which they are arranged in the queue. Taking the operating device CPU-2 as an example, the specific process of executing a received operation instruction is as follows. CPU-2 includes a control module, an execution module, and a storage module; the control module includes an instruction storage sub-module, an instruction processing sub-module, and a storage queue sub-module, and the execution module includes a dependency processing sub-module (see the related description of the operating device above).
Assume that the operation instruction 1-1 generated from vector logic calculation macro instruction 1 is "@XXX ……". After receiving operation instruction 1-1, the operating device CPU-2 executes it as follows:
The control module of the operating device CPU-2 acquires the data, the neural network model, and operation instruction 1-1. The instruction storage sub-module stores operation instruction 1-1. The instruction processing sub-module parses operation instruction 1-1 to obtain a plurality of parsed instructions, such as parsed instruction 0, parsed instruction 1, and parsed instruction 2, and sends them to the storage queue sub-module and the execution module. The storage queue sub-module stores the operation instruction queue, which contains parsed instruction 0, parsed instruction 1, parsed instruction 2, and the other operation instructions to be executed by CPU-2, all arranged in order of execution. For example, the execution order of the parsed instructions is parsed instruction 0, parsed instruction 1, parsed instruction 2, and an association relationship exists between parsed instruction 1 and parsed instruction 0.
After the execution module of the operating device CPU-2 receives the parsed instructions, the dependency processing sub-module determines whether association relationships exist among them. Having determined that parsed instruction 1 is associated with parsed instruction 0, it caches parsed instruction 1 in the instruction storage sub-module, and after determining that parsed instruction 0 has finished executing, extracts parsed instruction 1 from the cache and sends it to the execution module for execution.
The execution module receives and executes parsed instruction 0, parsed instruction 1, and parsed instruction 2, completing the operation of operation instruction 1-1.
The working process of the above modules can refer to the above related description.
Therefore, the apparatus can be used across platforms, with good applicability, fast instruction conversion, high processing efficiency, low error probability, and low cost in development manpower and material resources.
The present disclosure provides a machine learning arithmetic device, which may include one or more of the above vector logic calculation instruction generation apparatuses, and is configured to acquire data to be operated on and control information from other processing devices and execute a specified machine learning operation. The machine learning arithmetic device can obtain vector logic calculation macro instructions or to-be-executed vector logic calculation instructions from other machine learning arithmetic devices or non-machine-learning arithmetic devices, and transmit the execution result through an I/O interface to peripheral equipment (also called other processing devices) such as cameras, displays, mice, keyboards, network cards, Wi-Fi interfaces, and servers. When more than one vector logic calculation instruction generation apparatus is included, the apparatuses can be linked and transmit data through a specific structure, for example interconnected over a PCIE bus, to support larger-scale neural network operations. In that case they may share one control system or have separate control systems, and may share memory or have separate memories for each accelerator; in addition, their interconnection may follow any interconnection topology.
The machine learning arithmetic device has high compatibility and can be connected with various types of servers through PCIE interfaces.
Fig. 4a shows a block diagram of a combined processing device according to an embodiment of the present disclosure. As shown in fig. 4a, the combined processing device includes the machine learning arithmetic device, the universal interconnection interface, and other processing devices. The machine learning arithmetic device interacts with other processing devices to jointly complete the operation designated by the user.
Other processing devices include one or more of general-purpose/special-purpose processors such as central processing units (CPUs), graphics processing units (GPUs) and neural network processors. The number of processors included in the other processing devices is not limited. The other processing devices serve as the interface between the machine learning arithmetic device and external data and control, performing data transport and completing basic control such as starting and stopping the machine learning arithmetic device; the other processing devices may also cooperate with the machine learning arithmetic device to complete computing tasks.
And the universal interconnection interface is used for transmitting data and control instructions between the machine learning arithmetic device and other processing devices. The machine learning arithmetic device acquires required input data from other processing devices and writes the input data into a storage device on the machine learning arithmetic device; control instructions can be obtained from other processing devices and written into a control cache on a machine learning arithmetic device chip; the data in the storage module of the machine learning arithmetic device can also be read and transmitted to other processing devices.
Fig. 4b shows a block diagram of a combined processing device according to an embodiment of the present disclosure. In a possible implementation manner, as shown in fig. 4b, the combined processing device may further include a storage device, and the storage device is connected to the machine learning operation device and the other processing device respectively. The storage device is used for storing data stored in the machine learning arithmetic device and the other processing device, and is particularly suitable for data which is required to be calculated and cannot be stored in the internal storage of the machine learning arithmetic device or the other processing device.
The combined processing device can serve as the SoC (system on chip) of equipment such as mobile phones, robots, unmanned aerial vehicles and video monitoring devices, effectively reducing the core area of the control part, increasing the processing speed and reducing the overall power consumption. In this case, the universal interconnection interface of the combined processing device is connected to certain components of the equipment, such as a camera, a display, a mouse, a keyboard, a network card or a Wi-Fi interface.
The present disclosure provides a machine learning chip, which includes the above machine learning arithmetic device or combined processing device.
The present disclosure provides a machine learning chip package structure, which includes the above machine learning chip.
Fig. 5 shows a schematic structural diagram of a board card according to an embodiment of the present disclosure. As shown in fig. 5, the board includes the above-mentioned machine learning chip package structure or the above-mentioned machine learning chip. The board may include, in addition to the machine learning chip 389, other kits including, but not limited to: memory device 390, interface device 391 and control device 392.
The memory device 390 is coupled to the machine learning chip 389 (or the machine learning chip within the machine learning chip package structure) via a bus and is used for storing data. The memory device 390 may include multiple groups of memory cells 393, each group coupled to the machine learning chip 389 via a bus. It is understood that each group of memory cells 393 may be DDR SDRAM (Double Data Rate Synchronous Dynamic Random Access Memory).
DDR can double the speed of SDRAM without increasing the clock frequency. DDR allows data to be read out on the rising and falling edges of the clock pulse. DDR is twice as fast as standard SDRAM.
In one embodiment, the memory device 390 may include 4 groups of memory cells 393, and each group may include a plurality of DDR4 chips. In one embodiment, the machine learning chip 389 may include four 72-bit DDR4 controllers, in which 64 bits are used for data transmission and 8 bits for ECC checking. It is appreciated that when DDR4-3200 chips are used in each group of memory cells 393, the theoretical bandwidth of data transfer may reach 25600 MB/s.
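The 25600 MB/s figure follows from simple arithmetic. The sketch below assumes the configuration stated above: DDR4-3200 chips (3200 mega-transfers per second) behind a 72-bit controller whose 64-bit payload carries data and whose remaining 8 bits carry ECC, with 1 MB taken as 10^6 bytes.

```python
# Theoretical peak bandwidth of one group of DDR4-3200 memory cells.
transfers_per_second = 3200 * 10**6  # DDR4-3200: 3200 MT/s
payload_bits = 64                    # 72-bit controller minus 8 ECC bits
bytes_per_transfer = payload_bits // 8
bandwidth_mb_s = transfers_per_second * bytes_per_transfer // 10**6
print(bandwidth_mb_s)  # matches the 25600 MB/s figure above
```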
In one embodiment, each group of memory cells 393 includes a plurality of double data rate synchronous dynamic random access memories arranged in parallel. DDR can transfer data twice in one clock cycle. A controller for controlling the DDR is provided in the machine learning chip 389 to control the data transfer and data storage of each memory cell 393.
The control device 392 is electrically connected to the machine learning chip 389 and is used to monitor its state. Specifically, the machine learning chip 389 and the control device 392 may be connected through an SPI interface. The control device 392 may include a single-chip microcomputer (MCU). The machine learning chip 389 may include multiple processing chips, multiple processing cores or multiple processing circuits, which may carry multiple loads; the chip can therefore be in different working states such as heavy load and light load. The control device can regulate the working states of the multiple processing chips, processing cores and/or processing circuits in the machine learning chip.
The present disclosure provides an electronic device, which includes the above machine learning chip or board card.
The electronic device may include a data processing apparatus, a robot, a computer, a printer, a scanner, a tablet, a smart terminal, a mobile phone, a driving recorder, a navigator, a sensor, a camera, a server, a cloud server, a video camera, a projector, a watch, an earphone, a mobile storage device, a wearable device, a vehicle, a household appliance, and/or a medical device.
The vehicle may include an aircraft, a ship, and/or a vehicle. The household appliances may include televisions, air conditioners, microwave ovens, refrigerators, electric rice cookers, humidifiers, washing machines, electric lamps, gas cookers, and range hoods. The medical device may include a nuclear magnetic resonance apparatus, a B-mode ultrasound apparatus and/or an electrocardiograph.
FIG. 6 shows a flow diagram of a vector logic calculation instruction generation method according to an embodiment of the present disclosure. As shown in fig. 6, the method is applied to the above vector logic calculation instruction generation apparatus and includes step S41 and step S42. In step S41, an operating device for executing the vector logic calculation macro instruction is determined according to the received vector logic calculation macro instruction. In step S42, an operation instruction is generated according to the vector logic calculation macro instruction and the operating device. The vector logic calculation macro instruction refers to a macro instruction for performing a logic operation on a vector. The vector logic calculation macro instruction contains an operation type, an input address and an output address, and the operation instruction contains the operation type, an operation input address and an operation output address, where the operation input address and the operation output address are determined according to the input address and the output address, respectively.
In one possible implementation, step S41 may include: when it is determined that the vector logic calculation macro instruction contains the identifier of a specified device and the resources of the specified device meet the execution condition for executing the vector logic calculation macro instruction, determining the specified device as the operating device. The execution condition may include: the specified device contains an instruction set corresponding to the vector logic calculation macro instruction.
In one possible implementation, the method may further include: acquiring resource information of the alternative devices. Step S41 may further include: when it is determined that the vector logic calculation macro instruction does not contain the identifier of a specified device, determining the operating device for executing the vector logic calculation macro instruction from among the alternative devices according to the received vector logic calculation macro instruction and the resource information of the alternative devices. The resource information may include the instruction set contained by each alternative device.
In one possible implementation, step S41 may further include: when it is determined that the vector logic calculation macro instruction contains the identifier of a specified device but the resources of the specified device do not meet the execution condition for executing the vector logic calculation macro instruction, determining the operating device according to the vector logic calculation macro instruction and the resource information of the alternative devices.
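The three branches of step S41 described above (a specified device that qualifies, no specified device, and a specified device that fails the execution condition) can be sketched as follows. The dictionary layout, field names and the `op_type` matching rule are illustrative assumptions, not part of the disclosure.

```python
def determine_operating_device(macro, alternative_devices):
    """Pick the operating device for a vector logic calculation macro
    instruction. `macro` is a dict with an 'op_type' and an optional
    'device_id'; `alternative_devices` maps device identifiers to
    resource info holding each device's instruction set (assumed shape).
    """
    specified = macro.get("device_id")
    if specified in alternative_devices:
        # Execution condition: the specified device's instruction set
        # contains an instruction corresponding to the macro instruction.
        if macro["op_type"] in alternative_devices[specified]["instruction_set"]:
            return specified
    # No identifier, or the specified device fails the condition:
    # choose from the alternative devices by their resource information.
    for dev_id, info in alternative_devices.items():
        if macro["op_type"] in info["instruction_set"]:
            return dev_id
    raise ValueError("no device can execute this macro instruction")
```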
In one possible implementation, the vector logic calculation macro instruction may also contain at least one of an input quantity and an output quantity. Step S42 may include: determining the data volume of the vector logic calculation macro instruction, and generating the operation instruction according to that data volume, the vector logic calculation macro instruction and the resource information of the operating device. The data volume can be determined according to at least one of the input quantity and the output quantity, and the resource information of the operating device can further include at least one of its storage capacity and remaining storage capacity.
In one possible implementation, generating the operation instruction according to the data volume of the vector logic calculation macro instruction, the macro instruction itself and the resource information of the operating device may include: when it is determined that there is one operating device and its operation data volume is smaller than the data volume of the vector logic calculation macro instruction, splitting the macro instruction into a plurality of operation instructions according to the operation data volume of the operating device and the data volume of the macro instruction, so that the operating device executes the plurality of operation instructions in sequence. The operation data volume of the operating device may be determined according to its resource information; each operation instruction may further include at least one of an operation input quantity and an operation output quantity, both determined according to the operation data volume.
In one possible implementation, generating the operation instruction according to the data volume of the vector logic calculation macro instruction, the macro instruction itself and the resource information of the operating device may include: when it is determined that there are multiple operating devices, splitting the vector logic calculation macro instruction according to the operation data volume of each operating device and the data volume of the macro instruction, and generating the operation instruction corresponding to each operating device. The operation data volume of each operating device may be determined according to its resource information; the operation instruction may further include at least one of an operation input quantity and an operation output quantity, both determined according to the operation data volume of the operating device that executes the operation instruction.
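As a rough illustration of the splitting described in the last two paragraphs, the sketch below divides a macro instruction's data volume across one or more operating devices according to each device's operation data volume. Offsets stand in for the operation input/output addresses derived from the macro instruction's input/output addresses; all names are hypothetical.

```python
def split_macro(total_data_volume, operation_data_volumes):
    """Split a vector logic calculation macro instruction whose data
    volume exceeds the devices' capacity into per-device operation
    instructions. `operation_data_volumes` maps a device id to the
    operation data volume derived from its resource information.
    Returns (device_id, input_offset, operation_amount) tuples.
    """
    instructions = []
    offset = 0
    while offset < total_data_volume:
        # Hand each device a chunk no larger than its capacity,
        # cycling through the devices until the volume is covered.
        for dev_id, capacity in operation_data_volumes.items():
            if offset >= total_data_volume:
                break
            amount = min(capacity, total_data_volume - offset)
            instructions.append((dev_id, offset, amount))
            offset += amount
    return instructions
```

With a single device of capacity 4 and a macro instruction covering 10 elements, this yields three operation instructions to be executed in sequence.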
In one possible implementation, the method may further include: and sequencing the operating instructions according to a queue sequencing rule, and constructing an instruction queue corresponding to the operating equipment according to the sequenced operating instructions.
In one possible implementation, the method may further include: and receiving a vector logic calculation instruction to be executed, and generating a vector logic calculation macro instruction according to the determined identifier of the specified equipment and the vector logic calculation instruction to be executed.
In one possible implementation, the method may further include: and sending the operation instruction to the operation equipment so as to enable the operation equipment to execute the operation instruction.
In one possible implementation, sending the operation instruction to the operating device so that the operating device executes it includes: generating an assembly file according to the operation instruction; translating the assembly file into a binary file; and sending the binary file to the operating device so that the operating device executes the operation instruction according to the binary file.
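The assemble-and-translate dispatch path can be sketched as below. The textual assembly format, opcode numbering and binary record layout are invented for illustration; a real operating device would define its own encoding.

```python
import struct

# Hypothetical opcodes for the four vector logic macro instruction kinds.
OPCODES = {"VAND": 1, "VOR": 2, "VNOT": 3, "VCMP": 4}

def dispatch(operation_instructions, send):
    """Render operation instructions as assembly text, translate the
    text into binary records, and send the binary to the device."""
    asm_lines = [
        f"{ins['op']} {ins['in_addr']:#x}, {ins['out_addr']:#x}"
        for ins in operation_instructions
    ]
    binary = b"".join(
        struct.pack("<BII",                    # opcode, input, output
                    OPCODES[parts[0]],
                    int(parts[1].rstrip(","), 16),
                    int(parts[2], 16))
        for parts in (line.split() for line in asm_lines)
    )
    send(binary)                               # ship to the operating device
    return asm_lines, binary
```

Each record here is 9 bytes: one opcode byte plus two 32-bit addresses.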
In one possible implementation, the resource information may include at least one of a storage capacity of the alternative device, a remaining storage capacity, and an instruction set included in the alternative device.
In one possible implementation, the running device may be one or any combination of a CPU, a GPU and an NPU.
In one possible implementation, the method may be applied in a CPU and/or NPU.
In one possible implementation, the vector logic compute macro instruction may include at least one of the following instructions: vector and compute macro, vector or compute macro, vector non-compute macro, vector compare compute macro.
In one possible implementation, the vector logic calculation macro instruction may include at least one of the following options: the identifier of a specified device for executing the vector logic calculation macro instruction, an input quantity, an output quantity, an operand, and an instruction parameter. The instruction parameter includes at least one of an address and a length of the second operand.
According to the vector logic calculation instruction generation method provided by the embodiments of the present disclosure, an operating device for executing a vector logic calculation macro instruction is determined according to the received macro instruction, and an operation instruction is generated according to the macro instruction and the operating device. The method can be used across platforms, with good applicability, high instruction conversion speed, high processing efficiency, low error probability, and low labor and material cost of development.
The present disclosure also provides a vector logic computation instruction execution method, which is applied to the above operating device, and the method includes: the data, the neural network model and the operation instruction are obtained through the operation equipment, the operation instruction is analyzed to obtain a plurality of analysis instructions, and the plurality of analysis instructions are executed according to the data to obtain an execution result.
In one possible implementation, the method may further include: storing, by the operating device, the data and the scalar data in the data. The operating device includes a storage module, which includes any combination of a register and a cache, the cache including a scratchpad cache. The cache is used for storing the data, and the register is used for storing the scalar data in the data.
In one possible implementation, the method may further include:
storing the operation instruction through the operation equipment;
analyzing the operation instruction through the operation equipment to obtain a plurality of analysis instructions;
and storing, by the operating device, an operation instruction queue, wherein the operation instruction queue includes the operation instruction and the plurality of analysis instructions, arranged in the order in which they are to be executed.
In one possible implementation, the method may further include:
the method comprises the steps that when the running equipment determines that the first analysis instruction and a zero analysis instruction before the first analysis instruction have an incidence relation, the first analysis instruction is cached, and after the execution of the zero analysis instruction is finished, the cached first analysis instruction is executed.
The method for analyzing the data comprises the following steps that an incidence relation exists between a first analysis instruction and a zeroth analysis instruction before the first analysis instruction: the first storage address interval for storing the data required by the first resolving instruction and the zeroth storage address interval for storing the data required by the zeroth resolving instruction have an overlapped area.
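The overlap condition above amounts to an interval-intersection test. A minimal sketch, assuming each storage address interval is a half-open `(start, end)` pair:

```python
def has_association(first_interval, zeroth_interval):
    """True when the storage address interval of the data required by
    the first analysis instruction overlaps that of the zeroth one,
    i.e. when the first instruction must wait for the zeroth."""
    f_start, f_end = first_interval
    z_start, z_end = zeroth_interval
    # Two half-open intervals overlap iff each starts before the
    # other ends.
    return f_start < z_end and z_start < f_end
```

When this returns True, the first analysis instruction is cached and only executed after the zeroth finishes, as described above.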
According to the vector logic calculation instruction execution method provided by the embodiments of the present disclosure, the operating device acquires the data, the neural network model and the operation instruction, parses the operation instruction to obtain a plurality of analysis instructions, and executes the plurality of analysis instructions according to the data to obtain an execution result. The method can be used across platforms, with good applicability, high instruction conversion speed, high processing efficiency, low error probability, and low labor and material cost of development.
The present disclosure also provides a vector logic calculation instruction processing method, which is applied to a vector logic calculation instruction processing system, where the vector logic calculation instruction processing system includes the vector logic calculation instruction generation apparatus and the running device. The method comprises the vector logic calculation instruction generation method applied to the vector logic calculation instruction generation device and the vector logic calculation instruction execution method applied to the running equipment. The method can be used in a cross-platform mode, and is good in applicability, high in instruction conversion speed, high in processing efficiency, low in error probability, and low in development labor and material cost.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present disclosure, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, a division of modules is merely a division of logical functions, and an actual implementation may have another division, for example, a plurality of modules may be combined or may be integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or modules through some interfaces, and may be in an electrical or other form.
Modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present disclosure may be integrated into one processing unit, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a form of hardware or a form of a software program module.
The integrated modules, if implemented in the form of software program modules and sold or used as a stand-alone product, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a memory and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present disclosure. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk or an optical disk, and other media capable of storing program code.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (25)
1. An apparatus for generating vector logic compute instructions, the apparatus comprising:
the device determining module is used for determining, according to the received vector logic calculation macro instruction, the operating device for executing the vector logic calculation macro instruction;
an instruction generating module, configured to generate an operation instruction according to the vector logic calculation macro instruction and the operating device, so that the operating device executes the received operation instruction,
wherein the vector logic computation macro-instruction refers to a macro-instruction for performing a logic operation on a vector,
the vector logic calculation macro instruction comprises an operation type, an input address and an output address, the operation instruction comprises the operation type, an operation input address and an operation output address, and the operation input address and the operation output address are determined according to the input address and the output address respectively.
2. The apparatus of claim 1, wherein the device determination module comprises:
a first determining submodule, configured to determine, when it is determined that the vector logic computation macro instruction includes an identifier of a specified device and a resource of the specified device meets an execution condition for executing the vector logic computation macro instruction, the specified device as the running device,
wherein the execution condition includes: the designated device includes a set of instructions corresponding to the vector logic compute macroinstructions.
3. The apparatus of claim 2, further comprising:
a resource obtaining module for obtaining resource information of the alternative device,
the device determination module further includes:
a second determining submodule, configured to determine, when it is determined that the vector logic computation macro instruction does not include the identifier of the specified device, an operating device for executing the vector logic computation macro instruction from the candidate device according to the received vector logic computation macro instruction and the resource information of the candidate device,
wherein the resource information comprises a set of instructions contained by the alternative device.
4. The apparatus of claim 3, wherein the device determination module further comprises:
and the third determining submodule determines the operating equipment according to the vector logic calculation macro instruction and the resource information of the alternative equipment when it is determined that the vector logic calculation macro instruction contains the identifier of the specified equipment and the resource of the specified equipment does not meet the execution condition for executing the vector logic calculation macro instruction.
5. The apparatus of any of claims 2-4, wherein the vector logic compute macro instructions further comprise at least one of an input quantity and an output quantity,
the instruction generating module is further configured to determine a data volume of the vector logic calculation macro instruction, generate an operation instruction according to the data volume, the vector logic calculation macro instruction, and the resource information of the operation device,
wherein the data amount is determined according to at least one of the input amount and the output amount, and the resource information of the operating device further includes at least one of a storage capacity and a remaining storage capacity.
6. The apparatus of claim 5, wherein the instruction generation module comprises:
a first instruction generation submodule, configured to split the vector logic calculation macro instruction into multiple operation instructions according to the operation data amount of the operation device and the data amount when it is determined that the number of the operation devices is one and the operation data amount of the operation device is smaller than the data amount of the vector logic calculation macro instruction, so that the operation device sequentially executes the multiple operation instructions,
the operation data volume of the operation equipment is determined according to the resource information of the operation equipment, each operation instruction further comprises at least one of operation input volume and operation output volume, and the operation input volume and the operation output volume are determined according to the operation data volume.
7. The apparatus of claim 5, wherein the instruction generation module comprises:
a second instruction generation submodule, configured to split the vector logic computation macroinstruction according to the operation data amount of each operating device and the data amount when it is determined that the number of the operating devices is multiple, and generate an operation instruction corresponding to each operating device,
the operation data volume of each operation device is determined according to the resource information of each operation device, the operation instruction further comprises at least one of operation input volume and operation output volume, and the operation input volume and the operation output volume are determined according to the operation data volume of the operation device executing the operation instruction.
8. The apparatus of claim 1, further comprising:
and the queue construction module is used for sequencing the operating instructions according to a queue sequencing rule and constructing an instruction queue corresponding to the operating equipment according to the sequenced operating instructions.
9. The apparatus of claim 2, further comprising:
and the macro instruction generating module is used for receiving the vector logic calculation instruction to be executed and generating the vector logic calculation macro instruction according to the determined identifier of the specified equipment and the vector logic calculation instruction to be executed.
10. The apparatus of claim 1, further comprising:
an instruction dispatching module for sending the operation instruction to the operation device,
wherein the instruction dispatch module comprises:
the instruction assembly submodule is used for generating an assembly file according to the operation instruction;
the assembly translation submodule is used for translating the assembly file into a binary file;
and the instruction sending submodule is used for sending the binary file to the operating equipment so as to enable the operating equipment to execute the operating instruction according to the binary file.
11. The apparatus of claim 1,
the operating device is one or any combination of a CPU, a GPU and an NPU;
the apparatus is arranged in a CPU and/or an NPU;
the vector logic compute macroinstructions include at least one of: vector and compute macroinstruction, vector or compute macroinstruction, vector non-compute macroinstruction, vector compare compute macroinstruction;
the vector logic calculation macro instruction further includes at least one of the following: the identifier of a specified device for executing the vector logic calculation macro instruction, an input quantity, an output quantity, an operand, and an instruction parameter, wherein the instruction parameter includes at least one of an address and a length of the second operand.
12. A machine learning arithmetic device, the device comprising:
one or more vector logic computation instruction generation devices as claimed in any one of claims 1 to 11, configured to obtain data to be computed and control information from other processing devices, perform specified machine learning operations, and transmit execution results to other processing devices through an I/O interface;
when the machine learning arithmetic device comprises a plurality of vector logic calculation instruction generating devices, the vector logic calculation instruction generating devices can be connected through a specific structure and transmit data;
the vector logic calculation instruction generation devices are interconnected through a Peripheral Component Interface Express (PCIE) bus and transmit data so as to support larger-scale machine learning operation; the vector logic calculation instruction generation devices share the same control system or own respective control systems; the vector logic calculation instruction generation devices share a memory or own memories; the interconnection mode of the vector logic calculation instruction generation devices is any interconnection topology.
13. A combined processing device, characterized in that the device comprises:
the machine learning operation device of claim 12, a universal interconnect interface, and another processing device;
the machine learning operation device interacts with the other processing device to jointly complete a computation operation specified by a user,
wherein the combined processing device further comprises: a storage device connected to the machine learning operation device and the other processing device, respectively, and configured to store data of the machine learning operation device and the other processing device.
14. A board card, characterized in that the board card comprises: a storage device, an interface device, a control device, and a machine learning chip comprising the machine learning operation device of claim 12 or the combined processing device of claim 13;
wherein the machine learning chip is connected to the storage device, the control device, and the interface device, respectively;
the storage device is configured to store data;
the interface device is configured to implement data transmission between the machine learning chip and an external device;
and the control device is configured to monitor the state of the machine learning chip.
15. A vector logic computation instruction generation method, characterized in that the method comprises:
determining, according to a received vector logic computation macro-instruction, an operating device for executing the vector logic computation macro-instruction;
generating an operation instruction according to the vector logic computation macro-instruction and the operating device, so that the operating device executes the received operation instruction,
wherein the vector logic computation macro-instruction refers to a macro-instruction for performing a logic operation on a vector,
the vector logic computation macro-instruction comprises an operation type, an input address, and an output address; the operation instruction comprises the operation type, an operation input address, and an operation output address; and the operation input address and the operation output address are determined according to the input address and the output address, respectively.
16. The method of claim 15, wherein determining, according to the received vector logic computation macro-instruction, an operating device for executing the vector logic computation macro-instruction comprises:
determining a specified device as the operating device when it is determined that the vector logic computation macro-instruction contains an identifier of the specified device and the resources of the specified device satisfy an execution condition for executing the vector logic computation macro-instruction,
wherein the execution condition comprises that the specified device contains an instruction set corresponding to the vector logic computation macro-instruction.
17. The method of claim 16, further comprising:
acquiring resource information of candidate devices,
wherein determining, according to the received vector logic computation macro-instruction, an operating device for executing the vector logic computation macro-instruction comprises:
when it is determined that the vector logic computation macro-instruction does not contain an identifier of a specified device, determining, from the candidate devices, an operating device for executing the vector logic computation macro-instruction according to the received vector logic computation macro-instruction and the resource information of the candidate devices,
wherein the resource information comprises the instruction set contained in each candidate device.
18. The method of claim 17, wherein determining, according to the received vector logic computation macro-instruction, an operating device for executing the vector logic computation macro-instruction comprises:
determining the operating device according to the vector logic computation macro-instruction and the resource information of the candidate devices when it is determined that the vector logic computation macro-instruction contains an identifier of a specified device but the resources of the specified device do not satisfy the execution condition for executing the vector logic computation macro-instruction.
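The device-selection order described in claims 16 to 18 can be sketched as a small function. This is a minimal illustration under assumed data shapes: the dictionary keys (`op_type`, `device_id`, `instruction_set`) and the fallback policy of taking the first qualifying candidate are assumptions, not details fixed by the patent.

```python
def determine_operating_device(macro, specified=None, candidates=()):
    """Pick the device that will execute a vector logic macro-instruction.

    Selection order mirrors the claims: prefer the specified device when the
    macro carries its identifier and its instruction set covers the required
    operation (claim 16); otherwise fall back to the candidate devices whose
    resource information includes that instruction (claims 17/18).
    """
    needed = macro["op_type"]  # the instruction the macro requires
    if macro.get("device_id") is not None and specified is not None:
        if needed in specified["instruction_set"]:
            return specified   # specified device satisfies the execution condition
    for dev in candidates:     # otherwise scan the candidate devices
        if needed in dev["instruction_set"]:
            return dev
    return None                # no device can execute the macro-instruction

cpu = {"name": "cpu0", "instruction_set": {"VAND", "VOR", "VNOT"}}
npu = {"name": "npu0", "instruction_set": {"VAND", "VOR", "VNOT", "VCMP"}}

# The specified device lacks VCMP, so selection falls back to a candidate.
dev = determine_operating_device({"op_type": "VCMP", "device_id": 0},
                                 specified=cpu, candidates=[npu])
```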
19. The method of any one of claims 16 to 18, wherein the vector logic computation macro-instruction further comprises at least one of an input quantity and an output quantity,
and generating an operation instruction according to the vector logic computation macro-instruction and the operating device comprises:
determining the data amount of the vector logic computation macro-instruction, and generating the operation instruction according to the data amount, the vector logic computation macro-instruction, and the resource information of the operating device,
wherein the data amount is determined according to at least one of the input quantity and the output quantity, and the resource information of the operating device further comprises at least one of a storage capacity and a remaining storage capacity.
20. The method of claim 19, wherein generating the operation instruction according to the data amount of the vector logic computation macro-instruction, the vector logic computation macro-instruction, and the resource information of the operating device comprises:
when it is determined that there is one operating device and the operation data amount of the operating device is smaller than the data amount of the vector logic computation macro-instruction, splitting the vector logic computation macro-instruction into a plurality of operation instructions according to the operation data amount of the operating device and the data amount, so that the operating device executes the plurality of operation instructions in sequence,
wherein the operation data amount of the operating device is determined according to the resource information of the operating device, each operation instruction comprises at least one of an operation input quantity and an operation output quantity, and the operation input quantity and the operation output quantity are determined according to the operation data amount.
21. The method of claim 19, wherein generating the operation instruction according to the data amount of the vector logic computation macro-instruction, the vector logic computation macro-instruction, and the resource information of the operating device comprises:
when it is determined that there are a plurality of operating devices, splitting the vector logic computation macro-instruction according to the operation data amount of each operating device and the data amount, to generate an operation instruction corresponding to each operating device,
wherein the operation data amount of each operating device is determined according to the resource information of that operating device, the operation instruction comprises at least one of an operation input quantity and an operation output quantity, and the operation input quantity and the operation output quantity are determined according to the operation data amount of the operating device that executes the operation instruction.
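The splitting described in claims 20 and 21 can be sketched as follows. The round-robin chunking policy, the tuple layout of `devices`, and the keys of each generated operation instruction are illustrative assumptions; the patent only requires that each operation instruction's data amount fits its device's operation data amount.

```python
def split_macro(total, devices):
    """Split a macro-instruction whose data amount exceeds what the device(s)
    can run at once into per-device operation instructions.

    total   -- data amount of the macro-instruction (e.g. vector elements),
               derived from its input/output quantities
    devices -- list of (name, operation_data_amount) pairs, each amount
               derived from that device's resource information
    """
    operation_instructions, offset = [], 0
    while offset < total:
        for name, capacity in devices:
            if offset >= total:
                break
            amount = min(capacity, total - offset)
            # each operation instruction carries its own input/output quantity
            operation_instructions.append(
                {"device": name, "offset": offset, "amount": amount})
            offset += amount
    return operation_instructions

# One device with operation data amount 100 and a macro data amount of 256
# yields three operation instructions executed in sequence (claim 20).
parts = split_macro(256, [("npu0", 100)])
```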
22. The method of claim 15, further comprising:
sorting the operation instructions according to a queue sorting rule, and constructing an instruction queue corresponding to the operating device from the sorted operation instructions.
23. The method of claim 16, further comprising:
receiving a vector logic computation instruction to be executed, and generating the vector logic computation macro-instruction according to the determined identifier of the specified device and the vector logic computation instruction to be executed.
24. The method of claim 15, further comprising:
sending the operation instruction to the operating device,
wherein sending the operation instruction to the operating device comprises:
generating an assembly file according to the operation instruction;
translating the assembly file into a binary file;
and sending the binary file to the operating device, so that the operating device executes the operation instruction according to the binary file.
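The three-step dispatch of claim 24 (assemble, translate to binary, send) can be sketched as a pipeline. The callables below are placeholders standing in for a real assembler, translator, and transport; none of them is an API defined by the patent.

```python
def dispatch(operation_instructions, assemble, to_binary, send):
    """Send operation instructions to the operating device as a binary file.

    assemble  -- generates an assembly file from the operation instructions
    to_binary -- translates the assembly file into a binary file
    send      -- ships the binary file to the operating device
    """
    asm = assemble(operation_instructions)  # step 1: generate an assembly file
    binary = to_binary(asm)                 # step 2: translate into a binary file
    send(binary)                            # step 3: send to the operating device

sent = []  # stand-in for the operating device's receive channel
dispatch(
    [{"op": "VAND", "in": 0x1000, "out": 0x2000}],
    # toy "assembler": one text line per operation instruction
    assemble=lambda insts: "\n".join(
        f'{i["op"]} {i["in"]:#x}, {i["out"]:#x}' for i in insts),
    to_binary=lambda asm: asm.encode("utf-8"),
    send=sent.append,
)
```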
25. The method of claim 15, wherein:
the operating device is one of, or any combination of, a CPU, a GPU, and an NPU;
the method is applied to a CPU and/or an NPU;
the vector logic computation macro-instruction comprises at least one of: a vector AND computation macro-instruction, a vector OR computation macro-instruction, a vector NOT computation macro-instruction, and a vector comparison computation macro-instruction;
the vector logic computation macro-instruction further comprises at least one of: an identifier of a specified device for the vector logic computation macro-instruction, an input quantity, an output quantity, an operand, and an instruction parameter, wherein the instruction parameter comprises at least one of an address and a length of a second operand.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811220923.2A CN111079910B (en) | 2018-10-19 | 2018-10-19 | Operation method, device and related product |
PCT/CN2019/111852 WO2020078446A1 (en) | 2018-10-19 | 2019-10-18 | Computation method and apparatus, and related product |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811220923.2A CN111079910B (en) | 2018-10-19 | 2018-10-19 | Operation method, device and related product |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111079910A CN111079910A (en) | 2020-04-28 |
CN111079910B true CN111079910B (en) | 2021-01-26 |
Family
ID=70309211
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811220923.2A Active CN111079910B (en) | 2018-10-19 | 2018-10-19 | Operation method, device and related product |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111079910B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106209682A (en) * | 2016-07-08 | 2016-12-07 | 北京百度网讯科技有限公司 | Business scheduling method, device and system |
CN107229463A (en) * | 2016-03-24 | 2017-10-03 | 联发科技股份有限公司 | Computing device and corresponding computational methods |
CN107315568A (en) * | 2016-04-26 | 2017-11-03 | 北京中科寒武纪科技有限公司 | Device for performing vector logic operations |
CN108268423A (en) * | 2016-12-31 | 2018-07-10 | 英特尔公司 | Microarchitecture enabling enhanced parallelism for sparse linear algebra operations having write-to-read dependencies |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3035204B1 (en) * | 2014-12-19 | 2018-08-15 | Intel Corporation | Storage device and method for performing convolution operations |
CN110298443B (en) * | 2016-09-29 | 2021-09-17 | 中科寒武纪科技股份有限公司 | Neural network operation device and method |
CN107016175B (en) * | 2017-03-23 | 2018-08-31 | 中国科学院计算技术研究所 | Automated design method, apparatus, and optimization method applicable to a neural network processor |
CN107103113B (en) * | 2017-03-23 | 2019-01-11 | 中国科学院计算技术研究所 | Automated design method, apparatus, and optimization method for a neural network processor |
CN107450972B (en) * | 2017-07-04 | 2020-10-16 | 创新先进技术有限公司 | Scheduling method and device and electronic equipment |
CN108416431B (en) * | 2018-01-19 | 2021-06-01 | 上海兆芯集成电路有限公司 | Neural network microprocessor and macroinstruction processing method |
- 2018-10-19: Application CN201811220923.2A filed in China (CN); granted as CN111079910B (status: Active)
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107229463A (en) * | 2016-03-24 | 2017-10-03 | 联发科技股份有限公司 | Computing device and corresponding computational methods |
CN107315568A (en) * | 2016-04-26 | 2017-11-03 | 北京中科寒武纪科技有限公司 | Device for performing vector logic operations |
CN106209682A (en) * | 2016-07-08 | 2016-12-07 | 北京百度网讯科技有限公司 | Business scheduling method, device and system |
CN108268423A (en) * | 2016-12-31 | 2018-07-10 | 英特尔公司 | Microarchitecture enabling enhanced parallelism for sparse linear algebra operations having write-to-read dependencies |
Also Published As
Publication number | Publication date |
---|---|
CN111079910A (en) | 2020-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111079909B (en) | Operation method, system and related product | |
CN111078291B (en) | Operation method, system and related product | |
CN111078284B (en) | Operation method, system and related product | |
CN111079916B (en) | Operation method, system and related product | |
CN111079925B (en) | Operation method, device and related product | |
CN111079910B (en) | Operation method, device and related product | |
CN111079912B (en) | Operation method, system and related product | |
CN111078283B (en) | Operation method, device and related product | |
CN111079907B (en) | Operation method, device and related product | |
CN111079913B (en) | Operation method, device and related product | |
CN111078280B (en) | Operation method, device and related product | |
CN111079911B (en) | Operation method, system and related product | |
CN111078281B (en) | Operation method, system and related product | |
CN111078125B (en) | Operation method, device and related product | |
CN111078282B (en) | Operation method, device and related product | |
CN111079915B (en) | Operation method, device and related product | |
CN111078285B (en) | Operation method, system and related product | |
CN111079914B (en) | Operation method, system and related product | |
CN111078293B (en) | Operation method, device and related product | |
CN111079924B (en) | Operation method, system and related product | |
CN111381872A (en) | Operation method, device and related product | |
CN111381873A (en) | Operation method, device and related product | |
CN111401536A (en) | Operation method, device and related product | |
CN111325331B (en) | Operation method, device and related product | |
CN111400341B (en) | Scalar lookup instruction processing method and device and related product |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||