
WO2024188247A1 - Model quantization method and apparatus, electronic device, and storage medium - Google Patents

Model quantization method and apparatus, electronic device, and storage medium

Info

Publication number
WO2024188247A1
WO2024188247A1 (PCT/CN2024/081245; CN2024081245W)
Authority
WO
WIPO (PCT)
Prior art keywords
module
model
modules
adjacent
quantization
Prior art date
Application number
PCT/CN2024/081245
Other languages
English (en)
French (fr)
Inventor
李慧霞
马跃萧
郑侠武
肖学锋
王睿
文石磊
潘欣
晁飞
纪荣嵘
Original Assignee
北京字跳网络技术有限公司
脸萌有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司 and 脸萌有限公司
Publication of WO2024188247A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/0464Convolutional networks [CNN, ConvNet]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Definitions

  • The embodiments of the present disclosure relate to the field of computer technology, and in particular to a model quantization method and apparatus, an electronic device, and a storage medium.
  • Post-training quantization is one of the common approaches to neural network quantization. It maps the floating-point weights and activations of a trained neural network to low-bit fixed-point numbers to achieve neural network compression. Because post-training quantization does not include a quantization training process, the quantized model it produces has a large accuracy gap compared with the full-precision model.
  • The embodiments of the present disclosure provide a model quantization method and apparatus, an electronic device, and a storage medium, which can improve the quantization accuracy of post-training quantization.
  • An embodiment of the present disclosure provides a model quantization method, including: determining a capability index of each module in a model to be quantized, where the capability index of each module represents the capability of that module to process data; determining the differences in the capability indexes between adjacent modules, and determining adjacent module pairs to be combined according to the differences; and performing joint quantization on the determined adjacent module pairs to be combined in the model to be quantized.
  • The present disclosure also provides a model quantization apparatus, including:
  • a capability index evaluation module, configured to determine a capability index of each module in the model to be quantized, where the capability index of each module represents the capability of that module to process data;
  • a joint information determination module, configured to determine the differences in the capability indexes between adjacent modules, and determine adjacent module pairs to be combined according to the differences;
  • a joint quantization module, configured to perform joint quantization on the determined adjacent module pairs to be combined in the model to be quantized.
  • An embodiment of the present disclosure further provides an electronic device, the electronic device comprising:
  • one or more processors;
  • a storage device for storing one or more programs,
  • where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the model quantization method described in any one of the embodiments of the present disclosure.
  • The embodiments of the present disclosure further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute the model quantization method described in any one of the embodiments of the present disclosure.
  • The technical solution of the embodiments of the present disclosure is to determine a capability index of each module in the model to be quantized, where the capability index of each module represents the capability of that module to process data; determine the differences in the capability indexes between adjacent modules, and determine adjacent module pairs to be combined according to the differences; and jointly quantize the determined adjacent module pairs to be combined in the model to be quantized.
  • FIG. 1 is a schematic diagram of the relationship between the final quantization loss and the maximum quantization loss in a post-training quantization process;
  • FIG. 2 is a schematic flowchart of a model quantization method provided by an embodiment of the present disclosure;
  • FIG. 3 is a block diagram of the quantization process of a model quantization method provided by an embodiment of the present disclosure;
  • FIG. 4 is a block diagram of the process of determining the capability index of each module in a data-free scenario in a model quantization method provided by an embodiment of the present disclosure;
  • FIG. 5 is a block diagram of the process of determining the capability index of each module in a data scenario in a model quantization method provided by an embodiment of the present disclosure;
  • FIG. 6 is a schematic structural diagram of a model quantization apparatus provided by an embodiment of the present disclosure;
  • FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • The following further trend of the quantization loss is obtained: if the module capability of the latter of two adjacent modules is significantly greater than that of the former, the quantization loss decreases; conversely, if the module capability of the latter module is smaller than that of the former, the accumulation effect of the quantization loss is aggravated on top of the original accumulation effect.
  • FIG. 1 is a schematic diagram of the relationship between the final quantization loss and the maximum quantization loss in a post-training quantization process.
  • Post-training quantization is performed using the method of "pushing the limit of post-training quantization by block reconstruction" (denoted BRECQ in FIG. 1) and the method of "randomly dropping quantization for extremely low-bit post-training quantization" (denoted QDROP in FIG. 1), respectively. Both show a positive correlation between the final quantization loss and the maximum quantization loss, which, for a model of L modules, can be written as $\mathrm{Loss}(W_L, X_L) \propto \max\big(\mathrm{Loss}(W_1, X_1), \ldots, \mathrm{Loss}(W_{L-1}, X_{L-1})\big)$.
  • $\mathrm{Loss}(W_i, X_i)$ denotes the quantization loss of the i-th module of the model to be quantized after quantization.
  • FIG. 2 is a schematic flowchart of a model quantization method provided by an embodiment of the present disclosure.
  • The embodiment of the present disclosure is applicable to quantizing a model by a post-training quantization method. The method can be executed by a model quantization apparatus, which can be implemented in the form of software and/or hardware and can be configured in an electronic device, such as a computer.
  • The model quantization method provided in this embodiment may include: S210, determining a capability index of each module in the model to be quantized, where the capability index of each module represents the capability of that module to process data.
  • The model to be quantized may be a pre-trained model; each module in the model to be quantized may be composed of at least one convolutional layer and may be used to perform at least one data processing operation.
  • Before post-training quantization, the module capability of each module in the model can be evaluated.
  • The module capability can be evaluated from different dimensions: for example, from the module's own dimensions (such as structure and parameter count), or from the dimensions of the module's data processing (such as accuracy and speed). The evaluation dimensions and the evaluation method for each dimension can be preset according to the specific model to be quantized.
  • The results of evaluating a module's capability from different dimensions can all be regarded as capability indexes of the module.
  • When the capability index of a module includes capability indexes of at least two dimensions, those indexes can also be fused (for example, by weighted fusion) to determine the module's final capability index, thereby representing the module capability more accurately.
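  • As a minimal sketch of such a weighted fusion (the dimension names and weights below are illustrative assumptions, not values from the disclosure):

        # Minimal sketch: fuse per-dimension capability indexes by weighted average.
        def fuse_capability_indexes(indexes, weights):
            total_weight = sum(weights[dim] for dim in indexes)
            return sum(indexes[dim] * weights[dim] for dim in indexes) / total_weight

        # Example: a module evaluated on a structural dimension and a speed dimension.
        fused = fuse_capability_indexes({"structure": 0.8, "speed": 0.6},
                                        {"structure": 0.5, "speed": 0.5})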
  • FIG. 3 is a block diagram of the quantization process of a model quantization method provided by an embodiment of the present disclosure.
  • The model to be quantized can be composed of module 1 to module L connected in sequence, and the capability index of each module can be evaluated to obtain capability index 1 to capability index L, respectively.
  • S220: Determine the differences in the capability indexes between adjacent modules, and determine adjacent module pairs to be combined according to the differences.
  • The adjacent modules in FIG. 3 may include: module 1 and module 2, module 2 and module 3, ..., module L-1 and module L.
  • The difference in the capability index between adjacent modules can be calculated separately; for example, the smaller capability index value can be subtracted from the larger one, or the latter module's capability index can be subtracted from the former module's, and so on, without limitation here.
  • Determining the adjacent module pairs to be combined according to the differences may include: taking the absolute value of each difference, and determining the adjacent module pairs to be combined according to the absolute values.
  • For example, the two adjacent modules with the largest absolute value may constitute an adjacent module pair to be combined; as another example, any two adjacent modules whose absolute value is greater than a preset value may constitute an adjacent module pair to be combined. The adjacent module pairs may include at least one pair.
  • In some optional implementations, determining the adjacent module pairs to be combined according to the differences may include: determining a preset number of adjacent module pairs to be combined according to the descending order of the differences.
  • The absolute value of each difference can be taken first to ensure that every difference is positive; the absolute differences can then be arranged in descending order; finally, a preset number of adjacent module pairs to be combined can be determined following that order.
  • For example, the adjacent module pairs corresponding to the top preset number of differences can be used as the adjacent module pairs to be combined; as another example, the pairs corresponding to a preset number of differences randomly selected from the differences greater than the median of the arrangement can be used as the adjacent module pairs to be combined.
  • The preset number can be set according to experience or experiment; by controlling the number of adjacent module pairs, optimal quantization accuracy can be achieved.
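  • A minimal sketch of this selection, assuming the capability indexes are held in a plain list (function and variable names are illustrative):

        # Sketch: pick the top-k adjacent module pairs by absolute capability-index
        # difference, following the descending-order rule described above.
        def select_adjacent_pairs(capability, k):
            diffs = [abs(capability[l] - capability[l + 1])
                     for l in range(len(capability) - 1)]
            order = sorted(range(len(diffs)), key=lambda l: diffs[l], reverse=True)
            return [(l, l + 1) for l in order[:k]]

        # Example: indexes 5, 1, 4, 4, 9 yield the pairs (3, 4) and (0, 1),
        # whose absolute differences 5 and 4 are the two largest.
        print(select_adjacent_pairs([5.0, 1.0, 4.0, 4.0, 9.0], k=2))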
  • Alternatively, the adjacent module pairs to be combined can be determined by constructing a formula of the following form:

    $$\max_{m \in \{0,1\}^{L-1}} \; \sum_{l=1}^{L-1} m_l \big(CM_l - CM_{l+1}\big)^2 - \lambda \big(\mathbf{1}^{\top} m - k\big)^2$$

  • The solution target is the value of m for which the result is largest, where m denotes a binary vector of length L-1 and m_l denotes the value of the l-th element of m: when the value is 0, the l-th and (l+1)-th modules of the model to be quantized do not constitute an adjacent module pair to be combined; when the value is 1, they do. CM_l and CM_{l+1} denote the capability indexes of the l-th and (l+1)-th modules, respectively, the model to be quantized having L modules in total; 1 denotes a vector of the same length as m whose elements are all 1, k denotes the preset number, and λ denotes a preset regularization coefficient. Based on this formula, the adjacent module pairs to be combined corresponding to the largest k squared differences can be determined in one pass, which is highly efficient.
  • Suppose that by arranging the differences in descending order and selecting the pairs corresponding to the top preset number of differences, the pair consisting of module 1 and module 2 and the pair consisting of module 3 and module 4 are determined as the adjacent module pairs to be combined in the model to be quantized.
  • The determined pair consisting of module 1 and module 2 can then be combined as one whole module for quantization, and the determined pair consisting of module 3 and module 4 can be combined as one whole module for quantization.
  • Jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized can include: first, module 1 and module 2 are jointly quantized as a whole to obtain quantized module 1 and quantized module 2; then, module 3 and module 4 are jointly quantized as a whole to obtain quantized module 3 and quantized module 4.
  • After that, module 5, module 6, ... can be quantized module by module until module L is quantized, completing the post-training quantization process of the model to be quantized.
  • Each module can be quantized based on its input data and output data according to an existing module quantization method.
  • By jointly quantizing adjacent module pairs with large capability-index differences, the quantization-loss oscillation in the post-training quantization process can be reduced, which helps improve the quantization accuracy of post-training quantization.
  • If there are overlapping modules among the adjacent module pairs to be combined, jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized can include: taking each module in the adjacent module pairs to be combined to which each overlapping module belongs as a target module corresponding to that overlapping module; determining target module groups from the target modules corresponding to each overlapping module, based on the continuity of the position order of the overlapping modules in the model to be quantized; and jointly quantizing the modules in each target module group.
  • Suppose the adjacent module pairs to be combined include pair 1 consisting of module 1 and module 2, pair 2 consisting of module 2 and module 3, pair 3 consisting of module 3 and module 4, pair 4 consisting of module 5 and module 6, and pair 5 consisting of module 6 and module 7.
  • The overlapping modules among these pairs are module 2, module 3, and module 6.
  • Accordingly, the modules of pair 1 and pair 2, to which module 2 belongs (i.e., module 1, module 2, and module 3), can be taken as the target modules corresponding to module 2;
  • the modules of pair 2 and pair 3, to which module 3 belongs (i.e., module 2, module 3, and module 4), can be taken as the target modules corresponding to module 3;
  • and the modules of pair 4 and pair 5, to which module 6 belongs (i.e., module 5, module 6, and module 7), can be taken as the target modules corresponding to module 6.
  • The order in which an overlapping module processes data in the model to be quantized can be regarded as its position order in the model to be quantized.
  • For example, the position order of overlapping module 2 can be regarded as 2,
  • and the position order of overlapping module 3 can be regarded as 3, and so on.
  • The continuity of the position order can include continuous position order and discontinuous position order,
  • and overlapping modules with continuous position order include at least two overlapping modules.
  • For example, if the overlapping modules include module 2, module 3, module 4, and module 6, then the position orders of module 2, module 3, and module 4 in the model to be quantized are continuous, while the position order of module 6 is discontinuous with respect to each of module 2, module 3, and module 4.
  • Determining the target module groups from the target modules corresponding to each overlapping module, based on the continuity of the position order of the overlapping modules in the model to be quantized, may include at least one of the following: if the position orders of at least two overlapping modules in the model to be quantized are continuous, the target modules corresponding to those overlapping modules are merged to obtain one target module group; if the position order of the current overlapping module is not continuous with that of any other overlapping module, the target modules corresponding to the current overlapping module constitute a target module group by themselves.
  • For example, if the position orders of overlapping modules 2 and 3 are continuous, the target modules corresponding to module 2 and those corresponding to module 3 can be merged, i.e., module 1, module 2, and module 3 are merged with module 2, module 3, and module 4 to obtain a target module group consisting of module 1, module 2, module 3, and module 4.
  • If overlapping module 6 is discontinuous in position order with every other overlapping module, the target module group can be composed of the target modules corresponding to module 6 (i.e., module 5, module 6, and module 7).
  • Jointly quantizing the modules in each target module group may include: jointly quantizing all modules of a target module group as a whole. For example, when a target module group consists of module 1, module 2, module 3, and module 4, these four modules can be jointly quantized as a whole to obtain quantized modules 1 to 4, which serve as the final quantization results of module 1 to module 4, respectively.
  • By exploiting the continuity of the position order of the overlapping modules in the model to be quantized, target module groups containing different numbers of modules can be determined and the modules within each group jointly quantized. This makes the joint quantization of modules more flexible and reduces the quantization-loss oscillation of the post-training quantization process at the global level, which is conducive to improving quantization accuracy.
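  • A minimal sketch of this grouping rule (names are illustrative; pairs are given as (position, position + 1) tuples over module positions):

        # Sketch: merge adjacent module pairs that share overlapping modules into
        # target module groups, per the position-order continuity rule above.
        def build_target_groups(pairs):
            groups = []
            for a, b in sorted(pairs):
                if groups and a in groups[-1]:
                    groups[-1].append(b)   # pair overlaps the previous group: merge
                else:
                    groups.append([a, b])  # no overlap: start a new target group
            return groups

        # Example from the text: pairs 1-2, 2-3, 3-4 merge into one group and
        # 5-6, 6-7 into another, giving [[1, 2, 3, 4], [5, 6, 7]].
        print(build_target_groups([(1, 2), (2, 3), (3, 4), (5, 6), (6, 7)]))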
  • Jointly quantizing the adjacent module pairs in the model to be quantized may include: jointly quantizing the determined adjacent module pairs to be combined based on a preset batch of sample data.
  • When the amount of sample data is large, the sample data can be used for model quantization in batches.
  • The batch size of each batch of sample data is related to the quantization accuracy: enlarging the batch size reduces the approximation error of the expectation in the optimization objective, so a larger batch can improve quantization accuracy to a certain extent. Beyond a certain batch size, however, the gain is limited by diminishing marginal returns.
  • Therefore, sample data of a suitable batch size (i.e., a preset batch) can be selected in advance to jointly quantize the determined adjacent module pairs to be combined, so as to achieve optimal quantization accuracy.
  • The technical solution of the disclosed embodiments determines a capability index of each module in the model to be quantized, where the capability index of each module represents the capability of that module to process data; determines the differences in the capability indexes between adjacent modules and determines the adjacent module pairs to be combined according to the differences; and jointly quantizes the determined adjacent module pairs to be combined in the model to be quantized.
  • By determining the adjacent module pairs to be combined from the capability-index differences and jointly quantizing them during post-training quantization, the capability-index differences between the modules to be quantized can be eliminated, thereby reducing the quantization-loss oscillation and improving the quantization accuracy.
  • The model quantization method provided in this embodiment describes in detail how the capability index of each module is determined in different scenarios.
  • In a data-free scenario, the capability index can be evaluated from the module's own dimensions; that is, determining the capability index of each module in the model to be quantized may include:
  • when each module meets a preset condition, determining the capability index of each module according to the parameter quantity and bit width of each module, where the parameter quantity and bit width of each module are each positively correlated with the capability index of the corresponding module.
  • The data-free scenario may refer to a scenario in which the model to be quantized does not need to process sample data in order to measure the capability indexes of its modules.
  • The parameter quantity of a module includes, for example, the number of parameters such as the weight values and activation values of each network layer in the module; the bit width of a module may refer to the data width of each parameter, for example 32 bits or 16 bits. Experience and experiment show that the parameter quantity and bit width of a module clearly affect its capability to process data: in general, the larger the parameter quantity and the bit width, the stronger the module capability.
  • FIG. 4 is a block diagram of the process of determining the capability index of each module in a data-free scenario in a model quantization method provided by an embodiment of the present disclosure.
  • In some implementations, the capability index of each module can be calculated based on a formula of the following form:

    $$\mathrm{ModCap} = \sum_{i=1}^{n} \alpha_i \cdot \mathrm{params}(W_i) \cdot b_i$$

  • where ModCap denotes the capability index; i indexes the i-th convolutional layer of the current module, which has n convolutional layers in total; W_i denotes the parameters of the i-th convolutional layer, and params(W_i) is the function that determines its parameter quantity; b_i denotes the bit width of the i-th convolutional layer; and α_i ∈ [1, +∞) is a preset proportional coefficient.
  • However, other factors can also affect module capability, and their influence is hard to quantify; for example, whether an intermediate layer of the module receives a residual-connection input, or the number of groups of the module's convolutional layers. Such factors can make the capability index impossible to determine. Therefore, a premise is imposed on the calculation of the capability index: only when each module meets the preset condition is the capability index determined from the parameter quantity and bit width of each module.
  • The preset condition can be set in advance according to experience or experiment, for example, conditions that each module needs to meet in terms of network structure, number of network layers, and the like.
  • In the data-free scenario, when each module meets the preset condition, directly calculating the capability index from the parameter quantity and bit width of each module makes the determination of the capability index highly efficient.
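  • A minimal sketch of this data-free index, under the assumption (consistent with the definitions above, though the exact functional form in the disclosure may differ) that ModCap accumulates each convolutional layer's parameter count weighted by its bit width and its coefficient α_i:

        # Sketch: ModCap = sum_i alpha_i * params(W_i) * b_i over the module's
        # n convolutional layers; all inputs are per-layer lists.
        def mod_cap(param_counts, bit_widths, alphas):
            assert all(a >= 1.0 for a in alphas), "alpha_i must lie in [1, +inf)"
            return sum(a * p * b for a, p, b in zip(alphas, param_counts, bit_widths))

        # Example: a module with two conv layers (4608 and 9216 weights) at 8 bits.
        print(mod_cap([4608, 9216], [8, 8], [1.0, 1.0]))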
  • In some optional implementations, the preset condition includes that the number of convolutional layers of each module is the same. Determining the adjacent module pairs to be combined according to the differences may then include: determining the adjacent module pair with the largest difference as an adjacent module pair to be combined; and, following the descending order of the differences, whenever the adjacent module pair corresponding to a difference has no overlapping module with the already determined pairs to be combined, determining that pair as an adjacent module pair to be combined, until the number of determined pairs reaches the preset number.
  • The number of convolutional layers being the same may mean that the hyperparameters of the corresponding convolutional layers are identical except for kernel size and channel count; under this preset condition the modules can be called topologically homogeneous. Jointly quantizing several modules as a whole implicitly first combines the initial two modules for joint quantization and then jointly quantizes the combined module with the next module. Under the premise of topological homogeneity, if adjacent pairs shared an overlapping module, the combined module and the next module would no longer be topologically homogeneous, violating the premise.
  • The process of determining the pairs to be combined may therefore include: first, the absolute value of each difference is taken to ensure that every difference is positive; then, the absolute differences are arranged in descending order, and the adjacent module pair corresponding to the foremost (i.e., largest) difference is used as a pair to be combined; next, a target difference is taken from the differences in order from front to back, and when the adjacent module pair corresponding to the target difference shares no overlapping module with the already determined pairs, it is determined as a pair to be combined; the selection stops once the number of pairs to be combined reaches the preset number.
  • For example, suppose that under topological homogeneity the absolute differences in descending order are 10, 9, 8, 7, ..., corresponding to the adjacent module pairs module 1 and module 2, module 2 and module 3, module 5 and module 6, module 7 and module 8, ..., and the preset number is 3.
  • The process then runs as follows: the pair corresponding to the maximum difference 10 (module 1 and module 2) becomes pair 1 to be combined; the pair corresponding to difference 9 (module 2 and module 3) shares an overlapping module (module 2) with pair 1, so it cannot be used; the pair corresponding to difference 8 (module 5 and module 6) shares no overlapping module with pair 1 and becomes pair 2; and the pair corresponding to difference 7 (module 7 and module 8) shares no overlapping module with either pair 1 or pair 2 and becomes pair 3. The number of determined pairs has now reached the preset number 3, and the selection stops.
  • When the modules must satisfy the preset condition of topological homogeneity, no overlapping module may exist between the pairs to be combined; this avoids violating the premise of topological homogeneity and ensures a reasonable selection of adjacent module pairs.
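  • A minimal sketch of this greedy, overlap-free selection (names are illustrative):

        # Sketch: walk the differences in descending order and keep a pair only if
        # neither of its modules already belongs to a selected pair.
        def select_disjoint_pairs(capability, k):
            diffs = [abs(capability[l] - capability[l + 1])
                     for l in range(len(capability) - 1)]
            order = sorted(range(len(diffs)), key=lambda l: diffs[l], reverse=True)
            selected, used = [], set()
            for l in order:
                if l in used or l + 1 in used:
                    continue  # would share an overlapping module; skip this pair
                selected.append((l, l + 1))
                used.update((l, l + 1))
                if len(selected) == k:
                    break
            return selected

        # Example: with indexes [9, 0, 8, 8, 0, 7] and k=2 this returns
        # [(0, 1), (3, 4)]; the pair (1, 2) is skipped because module 1 is taken.
        print(select_disjoint_pairs([9.0, 0.0, 8.0, 8.0, 0.0, 7.0], k=2))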
  • In a data scenario, the capability index can be evaluated from the dimension of the module's data processing; that is, determining the capability index of each module in the model to be quantized may include:
  • performing an initial module-by-module quantization of the model to be quantized based on sample data; determining the module quantization loss of each module according to the first feature output by each initially quantized module and the second feature output by the corresponding module of the model to be quantized; and determining the capability index of each module according to the module quantization loss, where the module quantization loss is inversely correlated with the capability index of each module in the model to be quantized.
  • The data scenario may refer to a scenario in which the model to be quantized needs to process sample data, for example where an initial quantization must first be performed based on the sample data.
  • The initial quantization process may include quantizing the model to be quantized module by module.
  • FIG. 5 is a block diagram of the process of determining the capability index of each module in a data scenario in a model quantization method provided by an embodiment of the present disclosure.
  • The quantized module in FIG. 5 can be regarded as any one of the modules after initial quantization; accordingly, the feature it outputs can be called the first feature.
  • The feature output by the module of the model to be quantized from which the quantized module in FIG. 5 was obtained (i.e., the corresponding module of the model to be quantized) can be called the second feature.
  • The loss between the first feature and the second feature can be calculated with an existing feature-loss algorithm to obtain the module quantization loss of the corresponding module of the model to be quantized.
  • For example, the second-order norm loss between the first feature and the second feature can be calculated to obtain the module quantization loss of the corresponding module.
  • The module quantization loss of every module in the model to be quantized can be obtained in the same way.
  • The larger the module quantization loss, the smaller the module capability of the corresponding module can be considered to be. An inverse correlation between the module quantization loss and the capability index of the corresponding module can be preset; accordingly, the capability index of the module can be determined from the module quantization loss according to this inverse correlation.
  • In the data scenario, the module's quantization loss can thus be used to determine the module's capability index.
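  • A minimal sketch of this data-scenario evaluation (the reciprocal mapping from loss to index is one illustrative choice of inverse correlation, not prescribed by the disclosure):

        # Sketch: module quantization loss as the second-order (L2) norm between
        # the first feature (initially quantized module output) and the second
        # feature (full-precision module output), mapped to an index that shrinks
        # as the loss grows.
        import numpy as np

        def module_quantization_loss(first_feature, second_feature):
            return float(np.linalg.norm(first_feature - second_feature))

        def capability_index_from_loss(loss, eps=1e-12):
            return 1.0 / (loss + eps)  # larger loss -> smaller capability index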
  • The technical solution of the embodiments of the present disclosure describes in detail how the capability index of each module is determined in different scenarios.
  • In the data-free scenario, when each module meets the preset condition, the capability index of each module is calculated directly from its parameter quantity and bit width, so the capability index can be determined with high efficiency.
  • In the data scenario, the quantization loss of a module can be used to determine its capability index.
  • The model quantization method provided in this embodiment and the model quantization method provided in the above embodiment belong to the same disclosed concept. For technical details not described in detail in this embodiment, reference can be made to the above embodiment, and the same technical features have the same beneficial effects in both embodiments.
  • FIG. 6 is a schematic structural diagram of a model quantization apparatus provided in an embodiment of the present disclosure.
  • The model quantization apparatus provided in this embodiment is suitable for quantizing a model by a post-training quantization method.
  • The model quantization apparatus may include:
  • a capability index evaluation module 610, configured to determine a capability index of each module in the model to be quantized, where the capability index of each module represents the capability of that module to process data;
  • a joint information determination module 620, configured to determine the differences in the capability indexes between adjacent modules, and determine adjacent module pairs to be combined according to the differences;
  • a joint quantization module 630, configured to perform joint quantization on the determined adjacent module pairs to be combined in the model to be quantized.
  • The capability index evaluation module may be configured to:
  • when each module meets a preset condition, determine the capability index of each module according to the parameter quantity and bit width of each module,
  • where the parameter quantity and bit width of each module are each positively correlated with the capability index of the corresponding module.
  • The capability index evaluation module may also be configured to:
  • perform an initial module-by-module quantization of the model to be quantized based on sample data, determine the module quantization loss of each module, and determine the capability index of each module in the model to be quantized according to the module quantization loss,
  • where the module quantization loss is inversely correlated with the capability index of each module in the model to be quantized.
  • The joint information determination module may be configured to:
  • determine a preset number of adjacent module pairs to be combined according to the descending order of the differences.
  • When the preset condition includes that the number of convolutional layers of each module is the same, the joint information determination module may be configured to:
  • determine the adjacent module pair with the largest difference as an adjacent module pair to be combined; and, following the descending order of the differences, whenever the adjacent module pair corresponding to a difference has no overlapping module with the already determined pairs to be combined, determine that pair as an adjacent module pair to be combined, until the number of determined pairs reaches the preset number.
  • The joint quantization module may be configured to:
  • take each module in the adjacent module pairs to be combined to which each overlapping module belongs as a target module corresponding to that overlapping module;
  • determine target module groups from the target modules corresponding to each overlapping module, based on the continuity of the position order of the overlapping modules in the model to be quantized;
  • and jointly quantize the modules in each target module group.
  • The joint quantization module may be configured for at least one of the following:
  • if the position orders of at least two overlapping modules in the model to be quantized are continuous, merging the target modules corresponding to those overlapping modules to obtain a target module group;
  • if the position order of the current overlapping module is not continuous with that of the other overlapping modules, composing a target module group of the target modules corresponding to the current overlapping module.
  • The joint quantization module may be configured to:
  • jointly quantize the determined adjacent module pairs to be combined in the model to be quantized based on a preset batch of sample data.
  • The model quantization apparatus provided in the embodiments of the present disclosure can execute the model quantization method provided in any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to the executed method.
  • FIG. 7 shows a schematic structural diagram of an electronic device (e.g., the terminal device or server in FIG. 7) 700 suitable for implementing the embodiments of the present disclosure.
  • The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., vehicle-mounted navigation terminals), as well as fixed terminals such as digital TVs and desktop computers.
  • The electronic device shown in FIG. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
  • The electronic device 700 may include a processing device (e.g., a central processing unit, a graphics processing unit, etc.) 701, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700.
  • The processing device 701, the ROM 702, and the RAM 703 are connected to each other via a bus 704.
  • An input/output (I/O) interface 705 is also connected to the bus 704.
  • The following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 707 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 708 including, for example, a magnetic tape and a hard disk; and a communication device 709.
  • The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data.
  • Although FIG. 7 shows an electronic device 700 with various devices, it should be understood that it is not required to implement or provide all the devices shown; more or fewer devices may alternatively be implemented or provided.
  • An embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart.
  • The computer program may be downloaded and installed from a network through the communication device 709, or installed from the storage device 708, or installed from the ROM 702.
  • When the computer program is executed by the processing device 701, the above-mentioned functions defined in the model quantization method of the embodiments of the present disclosure are executed.
  • The electronic device provided in the embodiments of the present disclosure and the model quantization method provided in the above embodiment belong to the same disclosed concept.
  • For technical details not fully described in this embodiment, reference can be made to the above embodiment, and this embodiment has the same beneficial effects as the above embodiment.
  • An embodiment of the present disclosure provides a computer storage medium on which a computer program is stored.
  • When the program is executed by a processor, the model quantization method provided by the above embodiment is implemented.
  • The computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above.
  • Computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory (FLASH), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • A computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, in which a computer-readable program code is carried.
  • This propagated data signal may take a variety of forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above.
  • the computer readable signal medium may also be any computer readable medium other than a computer readable storage medium, which may send, propagate or transmit a program for use by or in conjunction with an instruction execution system, apparatus or device.
  • the program code contained on the computer readable medium may be transmitted using any suitable medium, including but not limited to: wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and server may communicate using any currently known or future developed network protocol such as HTTP (Hyper Text Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network).
  • Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internet (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
  • The computer-readable medium may be included in the electronic device, or may exist separately without being installed in the electronic device.
  • The computer-readable medium carries one or more programs.
  • When the one or more programs are executed by the electronic device, the electronic device is caused to: determine a capability index of each module in a model to be quantized, where the capability index of each module represents the capability of that module to process data; determine the differences in the capability indexes between adjacent modules, and determine adjacent module pairs to be combined according to the differences; and perform joint quantization on the determined adjacent module pairs to be combined in the model to be quantized.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including, but not limited to, object-oriented programming languages, such as Java, Smalltalk, C++, and conventional procedural programming languages, such as "C" or similar programming languages.
  • the program code may be executed entirely on the user's computer, partially on the user's computer, as a separate software package, partially on the user's computer and partially on a remote computer, or entirely on a remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., via the Internet using an Internet service provider).
  • each box in the flowchart or block diagram may represent a module, a program segment, or a portion of a code, which contains one or more executable instructions for implementing a specified logical function.
  • The functions marked in the blocks may also occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved.
  • each block in the block diagram and/or flow chart, and the combination of blocks in the block diagram and/or flow chart can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or can be implemented by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or hardware, wherein the names of the units and modules do not, in certain circumstances, limit the units and modules themselves.
  • For example, and without limitation, exemplary types of hardware logic components that may be used include: field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard parts (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
  • A machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device.
  • A machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a more specific example of a machine-readable storage medium may include an electrical connection based on one or more lines, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • A model quantization method, comprising: determining a capability index of each module in a model to be quantized, where the capability index of each module represents the capability of that module to process data; determining the differences in the capability indexes between adjacent modules, and determining adjacent module pairs to be combined according to the differences; and performing joint quantization on the determined adjacent module pairs to be combined in the model to be quantized.
  • The model quantization method above, where determining the capability index of each module in the model to be quantized includes: when each module meets a preset condition, determining the capability index of each module according to the parameter quantity and bit width of each module, the parameter quantity and bit width of each module each being positively correlated with the capability index of the corresponding module.
  • The model quantization method above, where determining the capability index of each module in the model to be quantized includes: performing an initial module-by-module quantization of the model to be quantized based on sample data; determining the module quantization loss of each module according to the first feature output by each initially quantized module and the second feature output by the corresponding module of the model to be quantized; and determining the capability index of each module according to the module quantization loss, the module quantization loss being inversely correlated with the capability index of each module in the model to be quantized.
  • The model quantization method above, where determining the adjacent module pairs to be combined according to the differences includes: determining a preset number of adjacent module pairs to be combined according to the descending order of the differences.
  • The model quantization method above, where the preset condition includes that the number of convolutional layers of the modules is the same, and determining the adjacent module pairs to be combined according to the differences includes: determining the adjacent module pair with the largest difference as an adjacent module pair to be combined; and, following the descending order of the differences, whenever the adjacent module pair corresponding to a difference has no overlapping module with the already determined pairs to be combined, determining that pair as an adjacent module pair to be combined, until the number of determined pairs reaches the preset number.
  • The model quantization method above, where, if there are overlapping modules among the adjacent module pairs to be combined, jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized includes: taking each module in the adjacent module pairs to be combined to which each overlapping module belongs as a target module corresponding to that overlapping module; determining target module groups from the target modules corresponding to each overlapping module, based on the continuity of the position order of the overlapping modules in the model to be quantized; and jointly quantizing the modules in each target module group.
  • The model quantization method above, where determining the target module groups includes at least one of the following: if the position orders of at least two overlapping modules in the model to be quantized are continuous, merging the target modules corresponding to those overlapping modules to obtain a target module group; if the position order of the current overlapping module is not continuous with that of the other overlapping modules, composing a target module group of the target modules corresponding to the current overlapping module.
  • The model quantization method above, where performing joint quantization on the determined adjacent module pairs to be combined in the model to be quantized includes: jointly quantizing the determined adjacent module pairs to be combined based on a preset batch of sample data.
  • A model quantization apparatus, comprising: a capability index evaluation module, configured to determine a capability index of each module in a model to be quantized, where the capability index of each module represents the capability of that module to process data; a joint information determination module, configured to determine the differences in the capability indexes between adjacent modules, and determine adjacent module pairs to be combined according to the differences; and a joint quantization module, configured to perform joint quantization on the determined adjacent module pairs to be combined in the model to be quantized.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

Embodiments of the present disclosure disclose a model quantization method and apparatus, an electronic device, and a storage medium. The method includes: determining a capability index of each module in a model to be quantized, where the capability index of each module represents the capability of that module to process data; determining the differences in the capability indexes between adjacent modules, and determining adjacent module pairs to be combined according to the differences; and performing joint quantization on the determined adjacent module pairs to be combined in the model to be quantized.

Description

Model quantization method and apparatus, electronic device, and storage medium
Cross-Reference to Related Applications
This application claims priority to Chinese Patent Application No. 202310259827.3, filed with the China National Intellectual Property Administration on March 13, 2023 and entitled "Model quantization method and apparatus, electronic device, and storage medium", the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of computer technology, and in particular to a model quantization method and apparatus, an electronic device, and a storage medium.
Background
Post-training quantization is one of the common approaches to neural network quantization: it maps the floating-point weights and activations of a trained neural network to low-bit fixed-point numbers to achieve network compression. Because post-training quantization does not include a quantization training process, the quantized model obtained by post-training quantization has a large accuracy gap compared with the full-precision model.
Summary
Embodiments of the present disclosure provide a model quantization method and apparatus, an electronic device, and a storage medium, which can improve the quantization accuracy of post-training quantization.
In a first aspect, an embodiment of the present disclosure provides a model quantization method, including:
determining a capability index of each module in a model to be quantized, where the capability index of each module represents the capability of that module to process data;
determining the differences in the capability indexes between adjacent modules, and determining adjacent module pairs to be combined according to the differences; and
performing joint quantization on the determined adjacent module pairs to be combined in the model to be quantized.
In a second aspect, an embodiment of the present disclosure further provides a model quantization apparatus, including:
a capability index evaluation module, configured to determine a capability index of each module in a model to be quantized, where the capability index of each module represents the capability of that module to process data;
a joint information determination module, configured to determine the differences in the capability indexes between adjacent modules, and determine adjacent module pairs to be combined according to the differences; and
a joint quantization module, configured to perform joint quantization on the determined adjacent module pairs to be combined in the model to be quantized.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
one or more processors; and
a storage device configured to store one or more programs,
where the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the model quantization method according to any embodiment of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the model quantization method according to any embodiment of the present disclosure.
According to the technical solution of the embodiments of the present disclosure, a capability index of each module in a model to be quantized is determined, where the capability index of each module represents the capability of that module to process data; the differences in the capability indexes between adjacent modules are determined, and adjacent module pairs to be combined are determined according to the differences; and joint quantization is performed on the determined adjacent module pairs to be combined in the model to be quantized.
Brief Description of the Drawings
The above and other features, advantages, and aspects of the embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numerals denote the same or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a schematic diagram of the relationship between the final quantization loss and the maximum quantization loss in a post-training quantization process;
FIG. 2 is a schematic flowchart of a model quantization method provided by an embodiment of the present disclosure;
FIG. 3 is a block diagram of the quantization process of a model quantization method provided by an embodiment of the present disclosure;
FIG. 4 is a block diagram of the process of determining the capability index of each module in a data-free scenario in a model quantization method provided by an embodiment of the present disclosure;
FIG. 5 is a block diagram of the process of determining the capability index of each module in a data scenario in a model quantization method provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of a model quantization apparatus provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. In addition, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and its variants are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order or interdependence of the functions performed by these apparatuses, modules, or units.
It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
It can be understood that the data involved in this technical solution (including but not limited to the data itself and the acquisition or use of the data) shall comply with the requirements of applicable laws, regulations, and relevant provisions.
Experiments show that, when a trained model is quantized by post-training quantization, a marked oscillation of the quantization loss appears. To explore the correlation between quantization-loss oscillation and quantization accuracy, the following study was carried out:
First, a trend of the quantization loss under module equivalence was proposed: given a pre-trained model and input data, if two adjacent modules in the pre-trained model are equivalent, the quantization loss of the latter module is larger than that of the former module.
This trend can be understood as follows: under the condition that two adjacent modules are equivalent, the quantization loss increases monotonically due to an accumulation effect. In an actual pre-trained model, however, the above condition that adjacent modules are equivalent is hard to satisfy. For example, changes in the hyperparameters of some convolutional layers make adjacent modules no longer equivalent. When adjacent modules are not equivalent, the capability of the modules to process data (referred to as module capability for short) can be considered to have changed.
Then, the following further trend of the quantization loss was obtained: if the module capability of the latter of two adjacent modules is significantly greater than that of the former, the quantization loss decreases; conversely, if the module capability of the latter module is smaller than that of the former, the accumulation effect of the quantization loss is aggravated on top of the original accumulation effect.
This further trend indicates that the quantization-loss oscillation appearing during post-training quantization is caused by capability differences between modules, and the larger the capability difference between two adjacent modules, the more pronounced the oscillation. Since in practical models the latter module usually needs to output a more highly compressed low-dimensional vector, i.e., the latter module's capability tends to be smaller than the former's, the quantization-loss oscillation usually increases the maximum quantization loss beyond that of the capability-equivalent case.
Next, a large number of quantization losses arising during post-training quantization were randomly sampled, the final quantization loss was compared statistically with the maximum quantization loss, and a scatter plot was generated.
For example, FIG. 1 is a schematic diagram of the relationship between the final quantization loss and the maximum quantization loss in a post-training quantization process. Referring to FIG. 1, for a trained mobile network model (the MobileNetV2 model in FIG. 1) and a residual network (the ResNet-18 model in FIG. 1), post-training quantization was performed using the method of "pushing the limit of post-training quantization by block reconstruction" (denoted BRECQ in FIG. 1) and the method of "randomly dropping quantization for extremely low-bit post-training quantization" (denoted QDROP in FIG. 1), respectively; both show a positive correlation between the final quantization loss and the maximum quantization loss. Assuming the model to be quantized contains L modules, this positive correlation can be described by the following formula:

$$\mathrm{Loss}(W_L, X_L) \propto \max\big(\mathrm{Loss}(W_1, X_1), \ldots, \mathrm{Loss}(W_{L-1}, X_{L-1})\big)$$

where $\mathrm{Loss}(W_i, X_i)$ denotes the quantization loss of the i-th module of the model to be quantized after quantization.
That is, statistics show that during post-training quantization the final quantization loss is positively correlated with the maximum quantization loss.
In summary, capability differences between the modules of the model to be quantized cause quantization-loss oscillation, the oscillation increases the maximum quantization loss, and this in turn raises the final quantization loss. Since the final quantization loss is highly correlated with the post-quantization accuracy, it can be concluded that quantization-loss oscillation during post-training quantization harms quantization accuracy.
On the basis of the above study, embodiments of the present disclosure provide a model quantization method that can improve the quantization accuracy of post-training quantization. FIG. 2 is a schematic flowchart of a model quantization method provided by an embodiment of the present disclosure. The embodiment of the present disclosure is applicable to quantizing a model by post-training quantization. The method may be performed by a model quantization apparatus, which may be implemented in software and/or hardware and may be configured in an electronic device, for example a computer.
As shown in FIG. 2, the model quantization method provided by this embodiment may include:
S210: Determine a capability index of each module in a model to be quantized, where the capability index of each module represents the capability of that module to process data.
In the embodiments of the present disclosure, the model to be quantized may be a pre-trained model; each module in the model to be quantized may consist of at least one convolutional layer and may be used to perform at least one data processing operation.
Before post-training quantization of the model to be quantized, the module capability of each module in the model can first be evaluated. The module capability can be evaluated from different dimensions; for example, from the module's own dimensions (such as structure and parameter count), or from the dimensions of the module's data processing (such as accuracy and speed). The evaluation dimensions and the evaluation method for each dimension can be preset according to the specific model to be quantized.
The results of evaluating a module's capability from different dimensions can all be regarded as capability indexes of the module. Moreover, when a module's capability index includes capability indexes of at least two dimensions, those indexes can also be fused (for example, by weighted fusion) to determine the module's final capability index, thereby representing the module capability more accurately.
For example, FIG. 3 is a block diagram of the quantization process of a model quantization method provided by an embodiment of the present disclosure. Referring to FIG. 3, the model to be quantized may consist of sequentially connected module 1 to module L, and a capability index may be evaluated for each module, yielding capability index 1 to capability index L, respectively.
S220: Determine the differences in the capability indexes between adjacent modules, and determine adjacent module pairs to be combined according to the differences.
For example, referring again to FIG. 3, adjacent modules may include: module 1 and module 2, module 2 and module 3, ..., module L-1 and module L.
In this embodiment, the difference in the capability index between adjacent modules can be computed separately; for example, the smaller capability index can be subtracted from the larger one, or the latter module's index can be subtracted from the former module's, without limitation here.
Then, determining the adjacent module pairs to be combined according to the differences may include: taking the absolute value of each difference, and determining the adjacent module pairs to be combined according to the absolute values. For example, the two adjacent modules with the largest absolute value may constitute an adjacent module pair to be combined; or any two adjacent modules whose absolute value exceeds a preset value may constitute an adjacent module pair to be combined. The adjacent module pairs may include at least one pair.
Determining the adjacent module pairs to be combined according to the capability-index differences makes it possible to select adjacent module pairs with large capability differences, i.e., the pairs that cause large quantization-loss oscillation, which helps reduce the oscillation during post-training quantization.
In some optional implementations, determining the adjacent module pairs to be combined according to the differences may include: determining a preset number of adjacent module pairs to be combined according to the descending order of the differences.
First, the absolute values of the differences can be taken to ensure that every difference is positive; then the absolute differences can be arranged in descending order; and finally a preset number of adjacent module pairs to be combined can be determined following that order. For example, the pairs corresponding to the top preset number of differences may be used as the pairs to be combined; or the pairs corresponding to a preset number of differences randomly selected among the differences greater than the median may be used. The preset number can be set according to experience or experiment; controlling the number of adjacent module pairs helps achieve optimal quantization accuracy.
Alternatively, the adjacent module pairs to be combined can be determined by constructing a formula of the following form:

$$\max_{m \in \{0,1\}^{L-1}} \; \sum_{l=1}^{L-1} m_l \big(CM_l - CM_{l+1}\big)^2 - \lambda \big(\mathbf{1}^{\top} m - k\big)^2$$

The solution target of the above formula is the value of m for which the result is largest, where m denotes a binary vector of length L-1 and m_l denotes the value of the l-th element of m: when the value is 0, the l-th and (l+1)-th modules of the model to be quantized are considered not to constitute an adjacent module pair to be combined; when the value is 1, they are considered to constitute one. CM_l and CM_{l+1} denote the capability indexes of the l-th and (l+1)-th modules, respectively, the model to be quantized having L modules in total; 1 denotes a vector of the same length as m whose elements are all 1, k denotes the preset number, and λ denotes a preset regularization coefficient. Based on the above formula, the adjacent module pairs to be combined corresponding to the largest k squared differences can be determined in one pass, which is highly efficient.
In these optional implementations, forming the pairs to be combined from the adjacent modules corresponding to the largest preset number of absolute differences helps further reduce the quantization-loss oscillation during post-training quantization.
S230: Perform joint quantization on the determined adjacent module pairs to be combined in the model to be quantized.
For example, referring again to FIG. 3, suppose that by arranging the differences in descending order and selecting the adjacent module pairs corresponding to the top preset number of differences (abbreviated as sorting and selection in FIG. 3), the pair consisting of module 1 and module 2 and the pair consisting of module 3 and module 4 are determined as the adjacent module pairs to be combined in the model to be quantized. During post-training quantization, the determined pair of module 1 and module 2 can then be quantized jointly as one whole module, and the determined pair of module 3 and module 4 can be quantized jointly as one whole module.
Taking the model in FIG. 3 as an example, jointly quantizing the determined pairs may include: first jointly quantizing module 1 and module 2 as a whole to obtain quantized module 1 and quantized module 2; then jointly quantizing module 3 and module 4 as a whole to obtain quantized module 3 and quantized module 4. After that, module 5, module 6, ... can be quantized module by module until module L is quantized, completing the post-training quantization of the model to be quantized. Each module can be quantized based on its input and output data according to an existing module quantization method.
In the embodiments of the present disclosure, jointly quantizing adjacent module pairs with large capability-index differences reduces the quantization-loss oscillation during post-training quantization and thus helps improve the accuracy of post-training quantization.
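As a minimal end-to-end sketch of S210 to S230 (evaluate_capability and quantize_group are hypothetical stand-ins for the capability evaluation of S210 and for an existing module quantization routine; they are not part of the disclosure, and the selected pairs are assumed to be disjoint):

    # Sketch of the overall flow: S210 capability indexes, S220 top-k adjacent
    # pairs by absolute difference, S230 front-to-back joint quantization.
    def post_training_quantize(modules, evaluate_capability, quantize_group, k):
        caps = [evaluate_capability(m) for m in modules]                      # S210
        diffs = [abs(caps[i] - caps[i + 1]) for i in range(len(caps) - 1)]
        starts = set(sorted(range(len(diffs)),
                            key=lambda i: diffs[i], reverse=True)[:k])        # S220
        i = 0
        while i < len(modules):                                               # S230
            if i in starts:
                quantize_group([modules[i], modules[i + 1]])  # pair as one whole
                i += 2
            else:
                quantize_group([modules[i]])                  # single module
                i += 1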
In some optional implementations, if there are overlapping modules among the adjacent module pairs to be combined, jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized may include: taking each module in the adjacent module pairs to be combined to which each overlapping module belongs as a target module corresponding to that overlapping module; determining target module groups from the target modules corresponding to each overlapping module, based on the continuity of the position order of the overlapping modules in the model to be quantized; and jointly quantizing the modules in each target module group.
At least one overlapping module may exist among the adjacent modules to be combined. Taking the model in FIG. 3 as an example, suppose the adjacent module pairs to be combined include pair 1 consisting of module 1 and module 2, pair 2 consisting of module 2 and module 3, pair 3 consisting of module 3 and module 4, pair 4 consisting of module 5 and module 6, and pair 5 consisting of module 6 and module 7. The overlapping modules among these pairs are then module 2, module 3, and module 6. Accordingly, the modules of pair 1 and pair 2, to which module 2 belongs (i.e., module 1, module 2, and module 3), can be taken as the target modules corresponding to module 2; the modules of pair 2 and pair 3, to which module 3 belongs (i.e., module 2, module 3, and module 4), can be taken as the target modules corresponding to module 3; and the modules of pair 4 and pair 5, to which module 6 belongs (i.e., module 5, module 6, and module 7), can be taken as the target modules corresponding to module 6.
The order in which an overlapping module processes data in the model to be quantized can be regarded as its position order in the model. For example, the position order of overlapping module 2 can be regarded as 2, that of overlapping module 3 as 3, and so on. The continuity of the position order can include continuous and discontinuous position order, and overlapping modules with continuous position order include at least two overlapping modules. For example, if the overlapping modules include module 2, module 3, module 4, and module 6, the position orders of module 2, module 3, and module 4 in the model are continuous, while the position order of module 6 is discontinuous with respect to each of module 2, module 3, and module 4.
Determining the target module groups from the target modules corresponding to each overlapping module, based on the continuity of the position order of the overlapping modules in the model to be quantized, may include at least one of the following: if the position orders of at least two overlapping modules are continuous, merging the target modules corresponding to those overlapping modules to obtain a target module group; if the position order of the current overlapping module is not continuous with that of the other overlapping modules, composing a target module group of the target modules corresponding to the current overlapping module.
For example, if the overlapping modules whose position orders are continuous include module 2 and module 3, the target modules corresponding to module 2 and those corresponding to module 3 can be merged, i.e., module 1, module 2, and module 3 are merged with module 2, module 3, and module 4 to obtain a target module group consisting of module 1, module 2, module 3, and module 4.
For example, if overlapping module 6 is discontinuous in position order with every other overlapping module, the target module group can be composed of the target modules corresponding to module 6 (i.e., module 5, module 6, and module 7).
Jointly quantizing the modules in each target module group may include: jointly quantizing all modules of a target module group as a whole. For example, when a target module group consists of module 1, module 2, module 3, and module 4, these four modules can be jointly quantized as a whole to obtain quantized module 1, quantized module 2, quantized module 3, and quantized module 4, which serve as the final quantization results of module 1, module 2, module 3, and module 4, respectively.
In these optional implementations, the continuity of the position order of the overlapping modules makes it possible to determine target module groups containing different numbers of modules and to jointly quantize the modules within each group. This makes joint quantization more flexible and reduces the quantization-loss oscillation of post-training quantization at the global level, which is conducive to improving quantization accuracy.
In some optional implementations, jointly quantizing the adjacent module pairs in the model to be quantized may include: jointly quantizing the determined adjacent module pairs to be combined based on a preset batch of sample data.
In an actual quantization process, the amount of sample data may be large, in which case the sample data can be used for model quantization in batches. It has been found that the batch size of each batch of sample data is related to the quantization accuracy: enlarging the batch size reduces the approximation error of the expectation in the optimization objective, so a larger batch can improve quantization accuracy to a certain extent. At the same time, once the batch grows beyond a certain size, the improvement in quantization accuracy is limited by diminishing marginal returns. Therefore, in these optional implementations, sample data of a suitable batch size (i.e., a preset batch) can be selected in advance to jointly quantize the determined adjacent module pairs to be combined, so as to achieve optimal quantization accuracy.
According to the technical solution of the embodiments of the present disclosure, a capability index of each module in the model to be quantized is determined, where the capability index of each module represents the capability of that module to process data; the differences in the capability indexes between adjacent modules are determined, and adjacent module pairs to be combined are determined according to the differences; and joint quantization is performed on the determined adjacent module pairs to be combined. In the process of quantizing a model by post-training quantization, determining the pairs to be combined from the capability-index differences and jointly quantizing them during quantization eliminates the capability-index differences between the modules to be quantized, thereby reducing the quantization-loss oscillation and improving the quantization accuracy.
本实施例与上述实施例中所提供的模型量化方法中各个可选方案可以结合。本实施例所提供的模型量化方法,对不同场景下各模块的能力指标的确定进行了详细描述。
本实施例所提供的一种模型量化方法中,在无数据场景下可以从模块自身维度进行能力指标的评估,即确定待量化模型中各模块的能力指标,可以包括:
在各模块满足预设条件的情况下,根据各模块的参数量和比特宽度,确定各模块的能力指标;其中,各模块的参数量和比特宽度,分别与对应模块的能力指标正相关。
其中,无数据场景可指待量化模型无需通过样本数据处理来衡量模型中各模块的能力指标的场景。其中,模块的参数量例如包括模块中各网络层的权重值、激活值等参数的数量;模块的比特宽度可指各参数的数据宽度,例如可以为32比特、16比特等。基于经验和实验可知,模块的参数量和比特宽度显然影响着模块处理数据的能力,且通常参数量和比特宽度越大,模块能力相应越强。
示例性的,图4为本公开实施例所提供的一种模型量化方法中无数据场景下各模块能力指标的确定过程框图。如图4所示,在一些实现方式中,可以基于下述公式计算各模块的能力指标:
其中,ModCap表示能力指标;i表示当前模块中第i个卷积层,且当前模块中卷积层可以共n层;Wi表示第i个卷积层的参数,params(Wi)为确定第i个卷积层的参数量的函数;bi表示第i个卷积层的比特宽度;αi∈[1,+∞),为预设的比例系数。
但是,一些其他的因素同样也可影响模块的能力,并且这些因素对模块能力的影响难以量化。比如,模块中间的层有无残差链接的输入、模块卷积层的分组数等等。这些因素的存在使得模块的能力指标无法确定。因此,本实施例中,对计算能力指标的前提进行了限制,即在各模块满足预设条件的情况下,可根据各模块的参数量和比特宽度,确定各模块的能力指标。其中,预设条件可根据经验或实验进行预先设置。例如,各模块在网络结构、网络层数量等方面需要满足的条件。
在无数据场景、且在各模块满足预设条件的情况下,通过使用各模块的参数量和比特宽度直接计算各模块的能力指标,可具备较高的能力指标确定效率。
在一些可选的实现方式中,预设条件包括各模块的卷积层的数量相同,根据各差值确定待联合的相邻模块对,可以包括:确定差值最大的相邻模块对作为待联合的相邻模块对;根据差值的降序排列,依次确定差值对应的相邻模块对与已确定的待联合的相邻模块对不存在重合模块时,将差值对应的相邻模块对确定为待联合的相邻模块对,直至已确定的待联合的相邻模块对的数量达到预设数量。
其中,各模块的卷积层的数量相同,可以指各模块对应的卷积层的超参数除了内核大小和通道数外都相同,该种预设条件下可称各模块是拓扑同质的。由于将多个模块作为整体进行联合量化的过程中,存在隐性的先结合初始的两个模块进行联合量化,再将结合后的模块与下一模块进行联合量化的过程,在拓扑同质的前提下,如果相邻模块存在重合模块,那么结合后的模块与下一模块将不再拓扑同质,从而违背了拓扑同质的前提。
因此,在这些可选的实现方式中,确定的各相邻模块对之间不可存在重合模块。在此基础上,确定待联合的相邻模块对的过程可以包括:首先,可将各 差值取绝对值,以保证各差值为正数;然后,可将取绝对值后的差值按由大到小的降序顺序进行排列,并将排列最靠前(即最大的差值)对应的相邻模块对作为待联合的相邻模块对;接着,可以按排列由前到后的顺序依次从各差值中确定目标差值,并在目标差值对应的相邻模块对与已确定的待联合的相邻模块对之间不存在重合模块时,将目标差值对应的相邻模块对确定为待联合的相邻模块对;直至待联合的相邻模块对的数量达到预设数量时,可以停止待联合的相邻模块对的选取过程。
示例性的,假设在拓扑同质的前提下,取绝对值后的差值降序排列为10、9、8、7..,这些差值分别对应相邻模块对模块1和模块2、模块2和模块3、模块5和模块6、模块7和模块8...,且预设数量为3。
那么,确定待联合的相邻模块对的过程可以包括:将最大差值(即差值10)对应的相邻模块对(即模块1和模块2)作为待联合的相邻模块对1;接着,可以依次确定差值9对应的相邻模块对(即模块2和模块3)与已确定的待联合的相邻模块对1存在重合模块(即模块2),那么差值9对应的相邻模块对不能作为待联合的相邻模块对;接着,可依次确定差值8对应的相邻模块对(即模块5和模块6)与已确定的待联合的相邻模块对1不存在重合模块,可将差值8对应的相邻模块对确定为待联合的相邻模块对2;继续可确定差值7对应的相邻模块对(即模块7和模块8)与已确定的待联合的相邻模块对1和相邻模块对2皆不存在重合模块,可将差值7对应的相邻模块对确定为待联合的相邻模块对3。此时,已确定的待联合的相邻模块对的数量达到了预设数量3,可停止待联合的相邻模块的确定过程。
在这些可选的实现方式中,在各模块需要满足拓扑同质的预设条件下确定待联合的相邻模块对时,待联合的相邻模块对之间不可存在重合模块,以避免 违背拓扑同质前提,保证相邻模块对的合理选取。
In addition, in another model quantization method provided in this embodiment, in the data-present scenario the capability index can be evaluated along the dimension of how the module processes data; that is, determining the capability index of each module in the model to be quantized can include:
performing initial module-by-module quantization of the model to be quantized based on sample data; determining the module quantization loss of each module in the model to be quantized from the first feature output by each initially quantized module and the second feature output by the corresponding module in the model to be quantized; and determining the capability index of each module in the model to be quantized from the module quantization loss, where the module quantization loss is inversely correlated with the capability index of each module in the model to be quantized.
The data-present scenario can refer to a scenario in which the model to be quantized needs to process sample data, for example where an initial quantization based on sample data must be performed first. This initial quantization process can include quantizing the model to be quantized module by module.
For example, Fig. 5 is a block diagram of the process of determining each module's capability index in the data-present scenario in a model quantization method provided by an embodiment of the present disclosure. As shown in Fig. 5, the quantized module in Fig. 5 can be regarded as any one of the initially quantized modules; correspondingly, the feature output by the quantized module can be called the first feature. The feature output by the module in the model to be quantized from which the quantized module in Fig. 5 was obtained (i.e., the corresponding module in the model to be quantized) can be called the second feature. The loss between the first feature and the second feature can be computed with an existing feature-loss algorithm to obtain the module quantization loss of the corresponding module in the model to be quantized. For example, in Fig. 5 the second-order norm loss between the first feature and the second feature can be computed to obtain that module's quantization loss. Likewise, the module quantization loss of every module in the model to be quantized can be obtained in the same way.
The larger the module quantization loss, the smaller the capability of the corresponding module in the model to be quantized can be considered to be. An inverse correlation between the module quantization loss and the capability index of the corresponding module can be set in advance; correspondingly, the capability index of the corresponding module can be determined from the module quantization loss according to this inverse correlation.
In the data-present scenario, a module's quantization loss can be used to determine its capability index. In this case the determined adjacent module pairs may contain overlapping modules, which makes a globally optimal joint quantization attainable and improves quantization accuracy.
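A minimal PyTorch-style sketch of the loss-based index follows. The L2 (second-order norm) loss matches the example of Fig. 5, while the mapping capability = 1 / (loss + ε) is only one possible preset inverse correlation; neither the module interfaces nor this mapping are fixed by the patent.

```python
import torch

def module_quant_loss(fp_module, quant_module, x):
    """L2 loss between the corresponding module's output (second feature)
    and the initially quantized module's output (first feature)."""
    with torch.no_grad():
        second = fp_module(x)    # feature of the module in the fp model
        first = quant_module(x)  # feature of the initially quantized module
    return torch.norm(first - second, p=2).item()

def capability_from_loss(loss, eps=1e-8):
    # One possible inverse correlation: larger loss -> smaller index.
    return 1.0 / (loss + eps)
```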
In the technical solution of the embodiments of the present disclosure, how the capability index of each module is determined in different scenarios is described in detail. In the data-free scenario, when each module satisfies the preset condition, computing each module's capability index directly from its parameter count and bit width makes the determination of capability indices highly efficient. In the data-present scenario, a module's quantization loss can be used to determine its capability index; the adjacent module pairs determined in this case may contain overlapping modules, which makes a globally optimal joint quantization attainable and improves quantization accuracy. In addition, the model quantization method provided by the embodiments of the present disclosure belongs to the same disclosed concept as the model quantization method provided by the above embodiments; technical details not described exhaustively in this embodiment can be found in the above embodiments, and the same technical features have the same beneficial effects in this embodiment and the above embodiments.
Fig. 6 is a schematic structural diagram of a model quantization apparatus provided by an embodiment of the present disclosure. The model quantization apparatus provided in this embodiment is suitable for quantizing a model by post-training quantization.
As shown in Fig. 6, the model quantization apparatus provided by the embodiments of the present disclosure can include:
a capability index evaluation module 610, configured to determine the capability index of each module in a model to be quantized, where the capability index of each module represents the module's capability to process data;
a joint information determination module 620, configured to determine the differences between the capability indices of adjacent modules among the modules and to determine the adjacent module pairs to be combined from the differences;
a joint quantization module 630, configured to jointly quantize the determined adjacent module pairs to be combined in the model to be quantized.
In some optional implementations, the capability index evaluation module can be configured to:
when each module satisfies a preset condition, determine the capability index of each module from the module's parameter count and bit width;
where the parameter count and bit width of each module are each positively correlated with the capability index of the corresponding module.
In some optional implementations, the capability index evaluation module can also be configured to:
perform initial module-by-module quantization of the model to be quantized based on sample data;
determine the module quantization loss of each module in the model to be quantized from the first feature output by each initially quantized module and the second feature output by the corresponding module in the model to be quantized;
determine the capability index of each module in the model to be quantized from the module quantization loss;
where the module quantization loss is inversely correlated with the capability index of each module in the model to be quantized.
In some optional implementations, the joint information determination module can be configured to:
determine a preset number of adjacent module pairs to be combined according to the descending order of the differences.
In some optional implementations, the preset condition includes the modules having the same number of convolutional layers, and the joint information determination module can be configured to:
determine the adjacent module pair with the largest difference as an adjacent module pair to be combined;
following the descending order of the differences, whenever the adjacent module pair corresponding to a difference shares no overlapping module with the already determined adjacent module pairs to be combined, determine that pair as an adjacent module pair to be combined, until the number of determined adjacent module pairs to be combined reaches a preset number.
In some optional implementations, the joint quantization module can be configured to:
if overlapping modules exist among the adjacent module pairs to be combined, take the modules of the adjacent module pairs to be combined to which each overlapping module belongs as the target modules corresponding to that overlapping module;
determine the target module groups from the target modules corresponding to each overlapping module, based on the positional-order continuity of the overlapping modules in the model to be quantized;
jointly quantize the modules in each target module group.
In some optional implementations, the joint quantization module can be configured for at least one of the following:
if at least two overlapping modules are positionally consecutive in the model to be quantized, merge the target modules corresponding to the at least two overlapping modules to obtain a target module group;
if the current overlapping module is not positionally consecutive with the other overlapping modules in the model to be quantized, form a target module group from the target modules corresponding to the current overlapping module.
In some optional implementations, the joint quantization module can be configured to:
jointly quantize the determined adjacent module pairs to be combined in the model to be quantized based on a preset batch of sample data.
The model quantization apparatus provided by the embodiments of the present disclosure can execute the model quantization method provided by any embodiment of the present disclosure, and has the functional modules and beneficial effects corresponding to the executed method.
It is worth noting that the units and modules included in the above apparatus are divided only according to functional logic; the division is not limited to the above, as long as the corresponding functions can be realized. In addition, the specific names of the functional units are only intended to distinguish them from one another and are not intended to limit the protection scope of the embodiments of the present disclosure.
Referring now to Fig. 7, it shows a schematic structural diagram of an electronic device (e.g., the terminal device or server in Fig. 7) 700 suitable for implementing the embodiments of the present disclosure. The terminal device in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in Fig. 7 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in Fig. 7, the electronic device 700 may include a processing apparatus (e.g., a central processing unit, a graphics processing unit, etc.) 701, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage apparatus 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processing apparatus 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
Generally, the following apparatuses may be connected to the I/O interface 705: an input apparatus 706 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; an output apparatus 707 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; a storage apparatus 708 including, for example, a magnetic tape and a hard disk; and a communication apparatus 709. The communication apparatus 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. Although Fig. 7 shows the electronic device 700 with various apparatuses, it should be understood that it is not required to implement or provide all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to the embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, the embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowcharts. In such embodiments, the computer program may be downloaded and installed from a network through the communication apparatus 709, installed from the storage apparatus 708, or installed from the ROM 702. When the computer program is executed by the processing apparatus 701, the above functions defined in the model quantization method of the embodiments of the present disclosure are executed.
The electronic device provided by the embodiments of the present disclosure belongs to the same disclosed concept as the model quantization method provided by the above embodiments; technical details not described exhaustively in this embodiment can be found in the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
The embodiments of the present disclosure provide a computer storage medium on which a computer program is stored; when executed by a processor, the program implements the model quantization method provided by the above embodiments.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory (FLASH), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any suitable medium, including but not limited to: an electric wire, an optical cable, RF (radio frequency), or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and a peer-to-peer network (e.g., an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be contained in the above electronic device, or it may exist alone without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
determine the capability index of each module in a model to be quantized, where the capability index of each module represents the module's capability to process data; determine the differences between the capability indices of adjacent modules among the modules and determine the adjacent module pairs to be combined from the differences; and jointly quantize the determined adjacent module pairs to be combined in the model to be quantized.
Computer program code for executing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The names of the units and modules do not in some cases constitute a limitation on the units or modules themselves.
The functions described herein above may be executed at least in part by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. More specific examples of the machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, a model quantization method is provided, including:
determining a capability index of each module in a model to be quantized, wherein the capability index of each module represents the capability of each module to process data;
determining differences between the capability indices of adjacent modules among the modules, and determining adjacent module pairs to be combined from the differences;
jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized.
According to one or more embodiments of the present disclosure, a model quantization method is provided, further including:
In some optional implementations, determining the capability index of each module in the model to be quantized includes:
when each module satisfies a preset condition, determining the capability index of each module from the parameter count and bit width of each module;
wherein the parameter count and bit width of each module are each positively correlated with the capability index of the corresponding module.
According to one or more embodiments of the present disclosure, a model quantization method is provided, further including:
In some optional implementations, determining the capability index of each module in the model to be quantized includes:
performing initial module-by-module quantization of the model to be quantized based on sample data;
determining a module quantization loss of each module in the model to be quantized from a first feature output by each initially quantized module and a second feature output by the corresponding module in the model to be quantized;
determining the capability index of each module in the model to be quantized from the module quantization loss;
wherein the module quantization loss is inversely correlated with the capability index of each module in the model to be quantized.
According to one or more embodiments of the present disclosure, a model quantization method is provided, further including:
In some optional implementations, determining the adjacent module pairs to be combined from the differences includes:
determining a preset number of adjacent module pairs to be combined according to a descending order of the differences.
According to one or more embodiments of the present disclosure, a model quantization method is provided, further including:
In some optional implementations, the preset condition includes the modules having the same number of convolutional layers, and determining the adjacent module pairs to be combined from the differences includes:
determining the adjacent module pair with the largest difference as an adjacent module pair to be combined;
following the descending order of the differences, when the adjacent module pair corresponding to a difference shares no overlapping module with the already determined adjacent module pairs to be combined, determining the adjacent module pair corresponding to the difference as an adjacent module pair to be combined, until the number of determined adjacent module pairs to be combined reaches a preset number.
According to one or more embodiments of the present disclosure, a model quantization method is provided, further including:
In some optional implementations, if overlapping modules exist among the adjacent module pairs to be combined, jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized includes:
taking the modules of the adjacent module pairs to be combined to which each overlapping module belongs as target modules corresponding to each overlapping module;
determining target module groups from the target modules corresponding to each overlapping module, based on a positional-order continuity of the overlapping modules in the model to be quantized;
jointly quantizing the modules in each target module group.
According to one or more embodiments of the present disclosure, a model quantization method is provided, further including:
In some optional implementations, determining the target module groups from the target modules corresponding to each overlapping module based on the positional-order continuity of the overlapping modules in the model to be quantized includes at least one of the following:
if at least two of the overlapping modules are positionally consecutive in the model to be quantized, merging the target modules corresponding to the at least two overlapping modules to obtain a target module group;
if a current overlapping module is not positionally consecutive with the other overlapping modules in the model to be quantized, forming a target module group from the target modules corresponding to the current overlapping module.
According to one or more embodiments of the present disclosure, a model quantization method is provided, further including:
In some optional implementations, jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized includes:
jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized based on a preset batch of sample data.
According to one or more embodiments of the present disclosure, a model quantization apparatus is provided, including:
a capability index evaluation module, configured to determine a capability index of each module in a model to be quantized, wherein the capability index of each module represents the capability of each module to process data;
a joint information determination module, configured to determine differences between the capability indices of adjacent modules among the modules and to determine adjacent module pairs to be combined from the differences;
a joint quantization module, configured to jointly quantize the determined adjacent module pairs to be combined in the model to be quantized.
The above description is only a preferred embodiment of the present disclosure and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of disclosure involved in the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, technical solutions formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
In addition, although operations are depicted in a specific order, this should not be understood as requiring that these operations be performed in the specific order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several specific implementation details are included in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment; conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical acts, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above; rather, the specific features and acts described above are merely example forms of implementing the claims.

Claims (11)

  1. A model quantization method, comprising:
    determining a capability index of each module in a model to be quantized, wherein the capability index of each module represents a capability of each module to process data;
    determining differences between the capability indices of adjacent modules among the modules, and determining adjacent module pairs to be combined from the differences;
    jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized.
  2. The method according to claim 1, wherein determining the capability index of each module in the model to be quantized comprises:
    when each module satisfies a preset condition, determining the capability index of each module from a parameter count and a bit width of each module;
    wherein the parameter count and bit width of each module are each positively correlated with the capability index of the corresponding module.
  3. The method according to claim 1, wherein determining the capability index of each module in the model to be quantized comprises:
    performing initial module-by-module quantization of the model to be quantized based on sample data;
    determining a module quantization loss of each module in the model to be quantized from a first feature output by each initially quantized module and a second feature output by the corresponding module in the model to be quantized;
    determining the capability index of each module in the model to be quantized from the module quantization loss;
    wherein the module quantization loss is inversely correlated with the capability index of each module in the model to be quantized.
  4. The method according to claim 1, wherein determining the adjacent module pairs to be combined from the differences comprises:
    determining a preset number of adjacent module pairs to be combined according to a descending order of the differences.
  5. The method according to claim 2, wherein the preset condition comprises the modules having the same number of convolutional layers, and determining the adjacent module pairs to be combined from the differences comprises:
    determining the adjacent module pair with the largest difference as an adjacent module pair to be combined;
    following the descending order of the differences, when the adjacent module pair corresponding to a difference shares no overlapping module with the already determined adjacent module pairs to be combined, determining the adjacent module pair corresponding to the difference as an adjacent module pair to be combined, until the number of determined adjacent module pairs to be combined reaches a preset number.
  6. The method according to claim 1, wherein, if overlapping modules exist among the adjacent module pairs to be combined, jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized comprises:
    taking the modules of the adjacent module pairs to be combined to which each overlapping module belongs as target modules corresponding to each overlapping module;
    determining target module groups from the target modules corresponding to each overlapping module, based on a positional-order continuity of the overlapping modules in the model to be quantized;
    jointly quantizing the modules in each target module group.
  7. The method according to claim 6, wherein determining the target module groups from the target modules corresponding to each overlapping module based on the positional-order continuity of the overlapping modules in the model to be quantized comprises at least one of the following:
    if at least two of the overlapping modules are positionally consecutive in the model to be quantized, merging the target modules corresponding to the at least two overlapping modules to obtain a target module group;
    if a current overlapping module is not positionally consecutive with the other overlapping modules in the model to be quantized, forming a target module group from the target modules corresponding to the current overlapping module.
  8. The method according to any one of claims 1-7, wherein jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized comprises:
    jointly quantizing the determined adjacent module pairs to be combined in the model to be quantized based on a preset batch of sample data.
  9. A model quantization apparatus, comprising:
    a capability index evaluation module, configured to determine a capability index of each module in a model to be quantized, wherein the capability index of each module represents a capability of each module to process data;
    a joint information determination module, configured to determine differences between the capability indices of adjacent modules among the modules and to determine adjacent module pairs to be combined from the differences;
    a joint quantization module, configured to jointly quantize the determined adjacent module pairs to be combined in the model to be quantized.
  10. An electronic device, comprising:
    one or more processors;
    a storage apparatus configured to store one or more programs,
    wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the model quantization method according to any one of claims 1-8.
  11. A storage medium containing computer-executable instructions which, when executed by a computer processor, are used to execute the model quantization method according to any one of claims 1-8.
PCT/CN2024/081245 2023-03-13 2024-03-12 Model quantization method and apparatus, electronic device, and storage medium WO2024188247A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202310259827.3A CN118643872A (zh) 2023-03-13 2023-03-13 Model quantization method and apparatus, electronic device, and storage medium
CN202310259827.3 2023-03-13

Publications (1)

Publication Number Publication Date
WO2024188247A1 true WO2024188247A1 (zh) 2024-09-19

Family

ID=92661674

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2024/081245 WO2024188247A1 (zh) 2023-03-13 2024-03-12 Model quantization method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN118643872A (zh)
WO (1) WO2024188247A1 (zh)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110852439A * 2019-11-20 2020-02-28 字节跳动有限公司 Neural network model compression and acceleration method, data processing method, and apparatus
CN113537470A * 2021-07-21 2021-10-22 Oppo广东移动通信有限公司 Model quantization method and apparatus, storage medium, and electronic device
CN113902114A * 2021-09-29 2022-01-07 南京后摩智能科技有限公司 Quantization method, apparatus, and system for a neural network model, electronic device, and storage medium
CN114048853A * 2021-11-29 2022-02-15 上海阵量智能科技有限公司 Neural network quantization method and apparatus, computer device, and storage medium
CN114861886A * 2022-05-30 2022-08-05 阿波罗智能技术(北京)有限公司 Quantization method for a neural network model and apparatus thereof
WO2023029349A1 * 2021-09-03 2023-03-09 上海商汤智能科技有限公司 Model quantization method and apparatus, device, storage medium, computer program product, and computer program

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MA YUEXIAO; LI HUIXIA; ZHENG XIAWU; XIAO XUEFENG; WANG RUI; WEN SHILEI; PAN XIN; CHAO FEI; JI RONGRONG: "Solving Oscillation Problem in Post-Training Quantization Through a Theoretical Perspective", 2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), IEEE, 17 June 2023 (2023-06-17), pages 7950 - 7959, XP034401625, DOI: 10.1109/CVPR52729.2023.00768 *

Also Published As

Publication number Publication date
CN118643872A (zh) 2024-09-13

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 24769938

Country of ref document: EP

Kind code of ref document: A1