CN112163677B - Method, device and equipment for applying machine learning model - Google Patents
- Publication number
- CN112163677B (application CN202011096940.7A)
- Authority
- CN
- China
- Prior art keywords
- machine learning
- learning model
- processing function
- output
- parameter configuration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/448—Execution paradigms, e.g. implementations of programming paradigms
- G06F9/4482—Procedural
Landscapes
- Engineering & Computer Science (AREA)
- Software Systems (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Mathematical Physics (AREA)
- Artificial Intelligence (AREA)
- Image Analysis (AREA)
Abstract
The application discloses a method, an apparatus, and a device for applying a machine learning model, and belongs to the technical field of machine learning. The method comprises the following steps: acquiring the execution code, processing type, and parameter configuration information of a target machine learning model; acquiring, based on the processing type, an output processing function of the target machine learning model that has not yet been parameter-configured; performing parameter configuration on that function, based on the parameter configuration information, to obtain the output processing function of the target machine learning model; and acquiring target input data to be input into the target machine learning model, obtaining target output data based on the execution code and the target input data, and processing the target output data with the output processing function to obtain the processing result corresponding to the target input data. The application avoids the problem that a machine learning model cannot be used normally because technicians mix up machine learning models and their corresponding parameters.
Description
Technical Field
The present application relates to the field of machine learning technologies, and in particular, to a method, an apparatus, and a device for applying a machine learning model.
Background
With the continuous development of machine learning technology, its application scenarios and application platforms keep growing, as do the functions it can realize, such as face recognition and voice recognition. These functions can be implemented by trained machine learning models.
Before using a machine learning model, a technician needs to write an output processing function for the model according to the model's corresponding parameters, for example, writing the function according to the model's processing type. The output processing function determines, from the vector output by the machine learning model, the processing result corresponding to the data input into the model; for example, it determines a classification model's classification result for the input data from the vector of per-category confidences that the model outputs.
In carrying out the present application, the inventors have found that the related art has at least the following problems:
Different machine learning models can correspond to different parameters. When a technician works with many machine learning models, the models and their corresponding parameters are easily mixed up, so that an output processing function written by the technician according to the wrong parameters does not match the machine learning model, and the model cannot be used normally.
Disclosure of Invention
The embodiments of the application provide a method, an apparatus, and a device for applying a machine learning model, which can avoid the problem that a machine learning model cannot be used normally because a technician mixes up the model and its corresponding parameters. The technical scheme is as follows:
in one aspect, a method of applying a machine learning model is provided, the method comprising:
acquiring an execution code, a processing type and parameter configuration information of a target machine learning model, wherein the parameter configuration information comprises configuration values of preset parameters;
based on the processing type, acquiring an output processing function of the target machine learning model, which is not subjected to parameter configuration;
based on the parameter configuration information, performing parameter configuration on the output processing function which is not subjected to parameter configuration to obtain the output processing function of the target machine learning model;
acquiring target input data to be input into the target machine learning model, obtaining target output data based on the execution code of the target machine learning model and the target input data, and processing the target output data based on the output processing function to obtain a processing result corresponding to the target input data.
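The four steps above can be sketched as follows. This is an illustrative sketch only: the names `apply_model`, `OUTPUT_FN_LIBRARY`, and the package field names are assumptions for demonstration, not defined by the patent.

```python
# A preset library mapping each processing type to an output processing
# function that has not yet been parameter-configured (illustrative).
OUTPUT_FN_LIBRARY = {
    "classification": lambda raw, num_classes: max(range(num_classes),
                                                   key=lambda i: raw[i]),
}

def apply_model(package):
    """Carry out the four claimed steps for one target machine learning model."""
    exec_code = package["execution_code"]               # step 1: execution code,
    processing_type = package["processing_type"]        # processing type, and
    param_config = package["parameter_configuration"]   # parameter configuration

    # Step 2: obtain the output processing function not yet parameter-configured.
    unconfigured_fn = OUTPUT_FN_LIBRARY[processing_type]

    # Step 3: parameter-configure it with the packaged configuration values.
    def output_fn(raw):
        return unconfigured_fn(raw, **param_config)

    # Step 4: run the model on target input data and post-process its output.
    def run(target_input):
        return output_fn(exec_code(target_input))
    return run

runner = apply_model({
    "execution_code": lambda x: [0.1, 0.7, 0.2],  # stand-in for the model code
    "processing_type": "classification",
    "parameter_configuration": {"num_classes": 3},
})
```

Calling `runner` on any input then yields the post-processed result (here, the index of the most confident category) rather than the raw output vector.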
Optionally, the acquiring the execution code, the processing type and the parameter configuration information of the target machine learning model includes:
obtaining a model encapsulation file of the target machine learning model, and decapsulating the model encapsulation file to obtain the execution code, the processing type, and the parameter configuration information of the target machine learning model.
Optionally, the method further comprises:
based on the processing type, acquiring an input processing function of the target machine learning model, which is not subjected to parameter configuration;
based on the parameter configuration information, performing parameter configuration on the input processing function which is not subjected to parameter configuration to obtain the input processing function of the target machine learning model;
the target output data is obtained based on the execution code of the target machine learning model and the target input data, and the method comprises the following steps:
and processing the target input data based on the input processing function to obtain processed target input data, and inputting the processed target input data into the target machine learning model to obtain corresponding target output data.
Optionally, the acquiring, based on the processing type, an input processing function of the target machine learning model that is not configured with parameters includes:
Acquiring an input processing function which is not subjected to parameter configuration of the target machine learning model from a preset function library based on the processing type and the corresponding relation between the processing type and the input processing function which is not subjected to parameter configuration;
the obtaining, based on the processing type, an output processing function of the target machine learning model that is not configured with parameters includes:
acquiring an output layer identification of the target machine learning model in the model packaging file;
and acquiring, from a preset function library, the output data processing function of the target machine learning model that is not subjected to parameter configuration, based on the processing type, the output layer identification of the target machine learning model, and the correspondence between processing types, output layer identifications, and output processing functions not subjected to parameter configuration.
Optionally, the preset parameters include an input parameter and an output parameter;
and performing parameter configuration on the input processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the input processing function of the target machine learning model, wherein the input processing function comprises the following components:
based on the configuration value of the input parameters, performing parameter configuration on the input processing function which is not subjected to parameter configuration to obtain an input data processing function of the target machine learning model;
And performing parameter configuration on the output processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the output processing function of the target machine learning model, wherein the method comprises the following steps:
and carrying out parameter configuration on the output processing function which is not subjected to parameter configuration based on the configuration value of the output parameter, and obtaining the output data processing function of the target machine learning model.
Optionally, the performing parameter configuration on the input processing function without parameter configuration based on the configuration value of the input parameter to obtain an input data processing function of the target machine learning model includes:
based on the prestored insertion bit of each input parameter in the input processing function which is not subjected to parameter configuration, inserting a configuration value of each input parameter into a corresponding insertion bit to obtain an input data processing function of the target machine learning model;
and performing parameter configuration on the output processing function which is not subjected to parameter configuration based on the configuration value of the output parameter to obtain an output data processing function of the target machine learning model, wherein the output data processing function comprises:
and inserting the configuration value of each output parameter into a corresponding insertion bit based on the pre-stored insertion bit of each output parameter in the output processing function which is not configured with the parameters, so as to obtain the output data processing function of the target machine learning model.
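The "insertion bit" mechanism above can be sketched by modeling each insertion bit as a pre-stored keyword slot of the unconfigured function; `functools.partial` then inserts each configuration value into its slot. All names here are illustrative assumptions, not the patent's actual implementation.

```python
from functools import partial

def unconfigured_output_fn(raw_vector, *, num_classes, out_resolution):
    """An output processing function before parameter configuration."""
    best = max(range(num_classes), key=lambda i: raw_vector[i])
    return {"class": best, "resolution": out_resolution}

# Pre-stored insertion bits: which configuration value goes to which slot.
INSERTION_BITS = {"num_classes": "num_classes",
                  "resolution_out": "out_resolution"}

def configure(fn, insertion_bits, config_values):
    """Insert each configuration value into its corresponding insertion bit."""
    kwargs = {slot: config_values[name] for name, slot in insertion_bits.items()}
    return partial(fn, **kwargs)

output_fn = configure(unconfigured_output_fn, INSERTION_BITS,
                      {"num_classes": 3, "resolution_out": (224, 224)})
result = output_fn([0.2, 0.5, 0.3])
```

The same `configure` helper works unchanged for input processing functions, since both cases differ only in which insertion bits are pre-stored.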
Optionally, the configuration values of the preset parameters include a resolution of the input data and a resolution of the output data.
In another aspect, an apparatus for applying a machine learning model is provided, the apparatus comprising:
the acquisition module is used for acquiring the execution code, the processing type and the parameter configuration information of the target machine learning model, wherein the parameter configuration information comprises configuration values of preset parameters; based on the processing type, acquiring an output processing function of the target machine learning model, which is not subjected to parameter configuration;
the configuration module is used for carrying out parameter configuration on the output processing function which is not subjected to parameter configuration based on the parameter configuration information to obtain the output processing function of the target machine learning model;
the processing module is used for acquiring target input data to be input into the target machine learning model, obtaining target output data based on the execution code of the target machine learning model and the target input data, and processing the target output data based on the output processing function to obtain a processing result corresponding to the target input data.
Optionally, the acquiring module is configured to:
And obtaining a model encapsulation file of the target machine learning model, and performing decapsulation processing on the model encapsulation file to obtain the execution code, the processing type and the parameter configuration information of the target machine learning model.
Optionally, the acquiring module is further configured to:
based on the processing type, acquiring an input processing function of the target machine learning model, which is not subjected to parameter configuration;
the configuration module is further configured to: based on the parameter configuration information, performing parameter configuration on the input processing function which is not subjected to parameter configuration to obtain the input processing function of the target machine learning model;
the processing module is further configured to: process the target input data based on the input processing function to obtain processed target input data, and input the processed target input data into the target machine learning model to obtain the corresponding target output data.
Optionally, the acquiring module is configured to:
Acquiring an input processing function which is not subjected to parameter configuration of the target machine learning model from a preset function library based on the processing type and the corresponding relation between the processing type and the input processing function which is not subjected to parameter configuration;
Acquiring an output layer identification of the target machine learning model in the model packaging file;
and acquiring the output data processing function which is not subjected to parameter configuration of the target machine learning model from a preset function library based on the processing type, the output layer identification of the target machine learning model and the corresponding relation between the processing type, the output layer identification and the output processing function which is not subjected to parameter configuration.
Optionally, the preset parameters include an input parameter and an output parameter;
the configuration module is used for: based on the configuration value of the input parameters, performing parameter configuration on the input processing function which is not subjected to parameter configuration to obtain an input data processing function of the target machine learning model; and carrying out parameter configuration on the output processing function which is not subjected to parameter configuration based on the configuration value of the output parameter, and obtaining the output data processing function of the target machine learning model.
Optionally, the configuration module is configured to:
based on the prestored insertion bit of each input parameter in the input processing function which is not subjected to parameter configuration, inserting a configuration value of each input parameter into a corresponding insertion bit to obtain an input data processing function of the target machine learning model;
And inserting the configuration value of each output parameter into a corresponding insertion bit based on the pre-stored insertion bit of each output parameter in the output processing function which is not configured with the parameters, so as to obtain the output data processing function of the target machine learning model.
In yet another aspect, a computer device is provided, characterized in that the computer device comprises a processor and a memory, the memory having stored therein at least one instruction that is loaded and executed by the processor to implement the operations performed by the method of applying a machine learning model as described above.
In yet another aspect, a computer readable storage medium is provided, wherein at least one instruction is stored in the storage medium, the at least one instruction being loaded and executed by a processor to implement operations performed by a method of applying a machine learning model as described above.
The technical scheme provided by the embodiment of the application has the beneficial effects that:
After the execution code, processing type, and parameter configuration information of the target machine learning model are acquired, the output processing function that corresponds to the target machine learning model and is not yet parameter-configured can be determined from the processing type. That function is then parameter-configured according to the parameter configuration information to obtain the output processing function of the target machine learning model. Finally, the data output by the target machine learning model is processed by this output processing function, and the processing result corresponding to the target input data is obtained directly. Because the output processing function is selected and configured automatically rather than written by hand, the problem that a machine learning model cannot be used normally because technicians mix up models and their corresponding parameters is avoided.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of a method for applying a machine learning model provided by an embodiment of the present application;
FIG. 2 is a schematic diagram of a method for applying a machine learning model according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a device for applying a machine learning model according to an embodiment of the present application;
fig. 4 is a schematic diagram of a terminal structure according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
The method for applying the machine learning model provided by the embodiment of the application can be realized by a terminal, and the terminal can be intelligent equipment such as a mobile phone, a tablet computer, a notebook computer, a desktop computer and the like. The device may be provided with a processor and a memory, where the memory may store an execution program and execution data corresponding to a method of applying the machine learning model, and the processor may execute the execution program stored in the memory to process the execution data, so as to implement the method of applying the machine learning model provided by the embodiment of the present application.
The machine learning model may be stored in the form of a file after training is completed. The technical manufacturer for developing the machine learning model can provide the trained machine learning model to technicians of other manufacturers or other departments, and the technicians of other manufacturers or other departments can set other functions on the basis of the functions realized by the trained machine learning model. For example, the machine learning model is a gender recognition model, and other manufacturers can recognize the gender of the user through the face image of the user according to the gender recognition model, and can beautify the corresponding user according to the gender. The method for applying the machine learning model provided by the embodiment of the application can enable technicians of other manufacturers or other departments to directly apply the machine learning model without configuring parameters related to the machine learning model after acquiring the machine learning model.
Fig. 1 is a flowchart of a method for applying a machine learning model according to an embodiment of the present application. Referring to fig. 1, this embodiment includes:
step 101, acquiring an execution code, a processing type and parameter configuration information of a target machine learning model, wherein the parameter configuration information comprises configuration values of preset parameters.
The target machine learning model may be any trained machine learning model, such as a face detection model or an image classification model. The execution code of the target machine learning model is the execution code of the program corresponding to the trained model. The processing type is the task type of the target machine learning model, such as a face detection task or a target classification task. The parameter configuration information comprises the parameter values of the output parameters in the output processing function corresponding to the target machine learning model. For example: the output processing function corresponding to an image processing model may process the image output by the model to a preset resolution, in which case the parameter value may be a data resolution (image resolution); the output processing function corresponding to a classification model may process each category output by the model and its confidence according to the number of categories the model can identify, to obtain the classification result, in which case the parameter value may be the number of identifiable categories; the output processing function corresponding to a semantic segmentation model may arrange the per-pixel class probabilities output by the model into a matrix of a preset size, from which the image to be segmented is then segmented, in which case the parameter value may be the data size (the preset size of the matrix).
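The semantic-segmentation case described above can be illustrated as follows: the flat per-pixel probabilities output by the model are reshaped into a matrix of the preset size carried in the parameter configuration. The function name and shapes are illustrative assumptions.

```python
def segmentation_output_fn(flat_probs, matrix_size):
    """Reshape flat per-pixel class probabilities into a (rows, cols) matrix."""
    rows, cols = matrix_size
    if len(flat_probs) != rows * cols:
        raise ValueError("output length does not match the configured matrix size")
    # Slice the flat vector row by row into the configured matrix shape.
    return [flat_probs[r * cols:(r + 1) * cols] for r in range(rows)]

# The preset matrix size (2, 2) stands in for the configured parameter value.
mask = segmentation_output_fn([0.9, 0.1, 0.2, 0.8], (2, 2))
```

The resulting matrix can then be thresholded to segment the input image.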
Optionally, the execution code, the processing type and the parameter configuration information of the target machine learning model may be encapsulated in a file, and a technician may obtain the execution code, the processing type and the parameter configuration information of the target machine learning model by obtaining a model encapsulation file of the target machine learning model and decapsulating the model encapsulation file.
In implementation, the execution code, processing type, and parameter configuration information of the target machine learning model may be provided in an encrypted binary file. The binary file may be provided by the developers of the target machine learning model to the users or technicians who apply it. After obtaining the encrypted binary file, a technician applying the target machine learning model can decrypt and decapsulate it to obtain the execution code, the processing type, the parameter configuration information, and so on.
The binary file may be generated as follows:
Obtain the model file corresponding to the trained target machine learning model; the model file may record the execution code corresponding to the target machine learning model. Also obtain the model configuration file corresponding to the target machine learning model, which may comprise auxiliary reasoning information, business application information, binding embedded information, and the like. The auxiliary reasoning information may include the parameter values corresponding to the data input to the target machine learning model, such as the resolution of the data (resolution of the image), the size of the data, and the type of the data. That is, the target machine learning model may also correspond to an input processing function, which adjusts the data to be input to a resolution, size, type, and so on that the model can process, according to the parameter values of the input parameters in the auxiliary reasoning information. The business application information may include the parameter values of the output parameters, such as the resolution of the output data, the size of the output data, and the number of identifiable categories, and may further include the processing type, the operation platform type, the output layer identifier, and so on.
The binding embedded information may include description information related to the model, the ID (identity document, i.e. serial number) of the model, the version number of the model, developer information, and the like. When the corresponding machine learning model behaves abnormally, error checking may be performed according to the binding embedded information, for example by determining from the model ID and version number whether the model is correct, or by contacting the developer according to the developer information. The parameter configuration information is the parameter values corresponding to the output processing function in the business application information, and may further include the parameter values corresponding to the auxiliary reasoning information. A technician can write the auxiliary reasoning information, business application information, and binding embedded information of the target machine learning model in a text file according to a preset format; the text format may be txt, json, xml, and so on. The information in the text file is then converted into structured data according to the corresponding relations. For example, a text file may record "input resolution: 224*224", "input data type: unsigned char", "model task type: detection", "detection category: face", "output layer position: last layer", "model provider: XXX", "model ID: Face_det"; the corresponding structured data may be:
InputShape:{1,3,1,224,224};
InputDataType:U08;
taskType:DETECTION;
categoryInfo:FACE;
layerPosition:-1;
modelAuthor:“XXX”;
modelId:“Face_det”;
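The conversion from the text configuration file to structured data can be sketched as a simple key mapping over "key: value" lines. The mapping table mirrors the example fields above; the function name and mapping rules are illustrative assumptions.

```python
# Map the text-file keys from the example above to structured-data field names.
KEY_MAP = {
    "input resolution": "InputShape",
    "input data type": "InputDataType",
    "model task type": "taskType",
    "detection category": "categoryInfo",
    "output layer position": "layerPosition",
    "model provider": "modelAuthor",
    "model ID": "modelId",
}

def parse_config_text(text):
    """Convert 'key: value' lines into a structured-data dictionary."""
    structured = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(":")
        field = KEY_MAP.get(key.strip())
        if field:
            structured[field] = value.strip()
    return structured

cfg = parse_config_text("""model task type: detection
detection category: face
model ID: Face_det""")
```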
The technician may encapsulate the model file and the structured data corresponding to the model configuration file using an existing encapsulation protocol to obtain an encapsulated file, and then encrypt the encapsulated file, for example with the AES, SM4, or 3DES algorithm, to obtain the encrypted binary file.
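A minimal encapsulation/decapsulation round trip might look like the following. The length-prefixed layout is an assumption (the patent does not specify the protocol), and a toy XOR "cipher" stands in for the real AES/SM4/3DES encryption step.

```python
import json
import struct

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Placeholder cipher; in practice AES, SM4, or 3DES would be used."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def encapsulate(model_bytes: bytes, config: dict, key: bytes) -> bytes:
    """Pack the model file and structured config, then encrypt the package."""
    cfg = json.dumps(config).encode()
    payload = struct.pack(">II", len(model_bytes), len(cfg)) + model_bytes + cfg
    return xor_cipher(payload, key)

def decapsulate(blob: bytes, key: bytes):
    """Decrypt the binary file and recover the model bytes and configuration."""
    payload = xor_cipher(blob, key)
    m_len, c_len = struct.unpack(">II", payload[:8])
    model = payload[8:8 + m_len]
    config = json.loads(payload[8 + m_len:8 + m_len + c_len])
    return model, config

blob = encapsulate(b"\x00model", {"taskType": "DETECTION"}, b"secret")
model, config = decapsulate(blob, b"secret")
```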
Step 102, based on the processing type, obtaining an output processing function of the target machine learning model, which is not subjected to parameter configuration.
The output processing function is a function for processing the output vector of the machine learning model. Since the value output by a typical machine learning model is a vector that cannot be applied directly, an output processing function is generally provided to process the model's output data. For example, if the machine learning model is an age recognition model, then after a face image is input, the model outputs a vector composed of age values and the confidence corresponding to each age value; this vector is input to the corresponding output processing function, which yields the age value the age recognition model finally recognizes for the face image. The output processing function that is not subjected to parameter configuration is the output processing function whose processing parameters have not yet been given values; the parameters corresponding to the output processing function may include the resolution of the data output by the machine learning model, the number of identifiable categories, and so on. Machine learning models with the same processing task may correspond to the same unconfigured output processing function but to different processing parameters. For example, if different classification models can identify different numbers of categories, the confidences that the corresponding output processing functions need to compute differ, and the corresponding parameter may be the number of categories each model can identify.
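The age-recognition case above can be sketched as follows: the unconfigured output function only becomes usable once the recognizable age range is configured, mirroring how the same function template serves models with different parameters. The function names and the age-range parameterization are illustrative assumptions.

```python
def make_age_output_fn(min_age, max_age):
    """Parameter configuration step: fix the age range the model covers."""
    def output_fn(confidences):
        ages = range(min_age, max_age + 1)
        if len(confidences) != len(ages):
            raise ValueError("confidence vector does not match configured age range")
        # Return the age value with the highest confidence.
        return max(zip(ages, confidences), key=lambda pair: pair[1])[0]
    return output_fn

# A toy model covering ages 0..3; the real range would come from the
# parameter configuration information in the model package.
age_fn = make_age_output_fn(0, 3)
predicted = age_fn([0.1, 0.2, 0.6, 0.1])
```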
In implementation, the processing type of the target machine learning model is recorded in the model configuration file. The processing type may be divided into a model task, a detection category, and so on; for example, the model task may be a detection task and the detection category may be faces, i.e. the processing type of the machine learning model is face detection. After acquiring the processing type of the target machine learning model, the terminal can determine the corresponding output processing function from it. In addition, the model configuration file may further include identification information of the operation platform corresponding to the target machine learning model; the operation platform may be a GPU (Graphics Processing Unit), an ARM (Advanced RISC Machine) processor, a HiSilicon (HISI) processor, and so on. The identification information of the operation platform indicates the platform on which the target machine learning model runs.
Optionally, the target machine learning model may also correspond to an input processing function that is not subjected to parameter configuration, and this input processing function may likewise be obtained based on the processing type.
The input processing function is a function for processing the data to be input. Because the input data format of a typical machine learning model, such as the resolution and type of the data, is fixed, the input processing function may be set by the technicians who trained the machine learning model. Before the data to be processed by the target machine learning model is input into it, the data can be processed by the input processing function so that it meets the model's input data format; for example, an image to be input into an image recognition model is adjusted to a preset image size. In addition, whether for an image recognition model or a voice recognition model, data such as images and voices must be decoded into binary codes and then converted into input vectors of a fixed size; the input processing function can process the binary codes into an input matrix of a fixed size. The input processing function that is not subjected to parameter configuration is the input processing function whose processing parameters have not yet been given values; the processing parameters are, for example, the type of input data the model accepts, the resolution of the image, and the size of the input matrix. Machine learning models with the same processing task may correspond to the same unconfigured input processing function but to different parameter values.
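An input processing function that adjusts an image to the fixed resolution a model expects can be sketched with nearest-neighbour sampling. This is a simplified assumption: a real implementation would also handle channels, data type, and normalization according to the auxiliary reasoning information.

```python
def make_input_fn(target_w, target_h):
    """Parameter configuration step: fix the resolution the model expects."""
    def input_fn(image):
        # image is a list of rows of pixel values (single channel for brevity).
        src_h, src_w = len(image), len(image[0])
        # Nearest-neighbour sampling to the configured target resolution.
        return [
            [image[y * src_h // target_h][x * src_w // target_w]
             for x in range(target_w)]
            for y in range(target_h)
        ]
    return input_fn

resize = make_input_fn(2, 2)
small = resize([[1, 2, 3, 4],
                [5, 6, 7, 8],
                [9, 10, 11, 12],
                [13, 14, 15, 16]])
```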
In implementation, the processing type of the target machine learning model is recorded in the model configuration file. The processing type can be divided into a model task, a detection category, and the like; for example, the model task may be a detection task and the detection category may be faces, that is, the processing type of the machine learning model is face detection. After acquiring the processing type of the target machine learning model, the terminal can determine the corresponding input processing function according to the processing type.
Optionally, for the input processing function, the input processing function of the target machine learning model that is not subjected to parameter configuration can be obtained from a preset function library based on the processing type and the correspondence between processing types and input processing functions that are not subjected to parameter configuration.
After the terminal obtains the processing type of the target machine learning model from the model configuration file, it can determine the input processing function without parameter configuration that corresponds to the target machine learning model according to the pre-stored correspondence between processing types and input processing functions without parameter configuration.
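The lookup described above can be sketched as a plain mapping from processing type to an unconfigured input processing function; the library contents and all names below are hypothetical:

```python
def _face_detection_input(data):
    # Illustrative unconfigured input processing function.
    return {"task": "face_detection", "data": data}

# Hypothetical preset function library: processing type -> input
# processing function that is not subjected to parameter configuration.
FUNCTION_LIBRARY = {
    "face_detection": _face_detection_input,
}

def get_unconfigured_input_fn(processing_type):
    """Return the unconfigured input processing function recorded for
    this processing type, mirroring the pre-stored correspondence."""
    try:
        return FUNCTION_LIBRARY[processing_type]
    except KeyError:
        raise ValueError(f"no input processing function for {processing_type!r}")
```

A real function library would hold one entry per supported processing type; the dict merely stands in for that pre-stored correspondence.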
Optionally, the output layer identifier of the target machine learning model may be obtained in the model package file, and for the output processing function, the output data processing function of the target machine learning model, which is not configured with parameters, may be obtained in a preset function library based on the processing type, the output layer identifier of the target machine learning model, and the correspondence between the processing type, the output layer identifier and the output processing function, which is not configured with parameters.
A machine learning model may contain multiple convolutional neural network layers, where the output of each layer may serve as the input of the next layer, and the functions realizable from the data output by different layers may differ. For example, in an age-gender detection model, gender detection and age detection may be determined by vectors output by different layers: gender detection may be achieved from the data output by the penultimate layer, and age detection from the data output by the last layer. Different layers are different output layers, so different output layers can correspond to different output processing functions, and different output processing functions process the data output by the network to realize different functions. After the terminal decapsulates the model encapsulation file and obtains, from the model configuration file, the processing type of the model together with the output layer identifier of the target machine learning model, it can look up, in a preset function library, the output processing function of the target machine learning model that is not subjected to parameter configuration, according to the preset correspondence among the processing type, the output layer identifier, and the output processing functions that are not subjected to parameter configuration.
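The correspondence among processing type, output layer identifier, and unconfigured output processing function can be sketched as a dictionary keyed by both values; the age-gender decoders and identifiers below are illustrative stand-ins, not the patent's actual functions:

```python
def _decode_gender(vector):
    # Illustrative: interpret a 2-way score vector from the penultimate layer.
    return "male" if vector[0] >= vector[1] else "female"

def _decode_age(vector):
    # Illustrative: take the arg-max age bucket from the last layer.
    return max(range(len(vector)), key=vector.__getitem__)

# Hypothetical preset function library keyed by
# (processing type, output layer identifier).
OUTPUT_FUNCTION_LIBRARY = {
    ("age_gender", "layer_-2"): _decode_gender,
    ("age_gender", "layer_-1"): _decode_age,
}

def get_unconfigured_output_fn(processing_type, output_layer_id):
    """Look up the unconfigured output processing function for this
    (processing type, output layer) pair."""
    return OUTPUT_FUNCTION_LIBRARY[(processing_type, output_layer_id)]
```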
Step 103: based on the parameter configuration information, perform parameter configuration on the output processing function that is not subjected to parameter configuration, to obtain the output processing function of the target machine learning model.
The parameter configuration information comprises configuration values of preset parameters, and the preset parameters comprise output parameters.
In implementation, the output processing function which is not subjected to parameter configuration can be subjected to parameter configuration based on the configuration value of the output parameter, so that the output processing function of the target machine learning model is obtained.
Specifically, based on the pre-stored insertion bit of each output parameter in the output processing function that is not subjected to parameter configuration, the configuration value of each output parameter can be inserted into the corresponding insertion bit to obtain the output processing function of the target machine learning model.
In implementation, the configuration value of each parameter may correspond to a parameter name. For example, in "Output Size: 200×200", the configuration value is 200×200 and the parameter name is Output Size. The terminal can store the correspondence between each parameter name and the position (i.e., the insertion bit) where the parameter value is to be inserted in the code of the output processing function. After the terminal obtains the configuration values of the output parameters, it can insert the configuration value of each output parameter into the insertion bit of the corresponding output processing function, thereby obtaining the output processing function of the target machine learning model.
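The insertion-bit mechanism can be sketched as filling named slots in a stored code template; the template text, slot syntax, and parameter names are assumptions for illustration only:

```python
# Hypothetical stored template for an output processing function that
# has not been parameter-configured; "{output_size}" marks an insertion bit.
UNCONFIGURED_OUTPUT_FN = (
    "def output_fn(raw):\n"
    "    return reshape(raw, {output_size})\n"
)

# Pre-stored correspondence: parameter name -> insertion bit in the code.
INSERTION_BITS = {"Output Size": "{output_size}"}

def configure(template, config_values):
    """Insert each configuration value at the insertion bit matching
    its parameter name, yielding configured function code."""
    code = template
    for name, value in config_values.items():
        code = code.replace(INSERTION_BITS[name], repr(value))
    return code

configured = configure(UNCONFIGURED_OUTPUT_FN, {"Output Size": (200, 200)})
```

The same `configure` helper applies unchanged to input parameters such as "Input Size", since the mechanism only depends on the name-to-insertion-bit correspondence.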
Optionally, the parameter configuration information includes a configuration value of a preset parameter, and the preset parameter may include an input parameter in addition to an output parameter.
Based on the configuration value of the input parameters, parameter configuration is performed on the input processing function that is not subjected to parameter configuration, to obtain the input processing function of the target machine learning model; specifically, based on the pre-stored insertion bit of each input parameter in the input processing function that is not subjected to parameter configuration, the configuration value of each input parameter is inserted into the corresponding insertion bit.
In implementation, the configuration value of each parameter may correspond to a parameter name. For example, in "Input Size: 200×200", the configuration value is 200×200 and the parameter name is Input Size. The terminal can store the correspondence between each parameter name and the position (i.e., the insertion bit) where the parameter value is to be inserted in the code of the corresponding input processing function. After the terminal obtains the configuration values of the input parameters, it can insert the configuration value of each input parameter into the insertion bit of the corresponding input processing function, thereby obtaining the input processing function of the target machine learning model.
Step 104, obtaining target input data to be input into the target machine learning model, obtaining target output data based on the execution code of the target machine learning model and the target input data, and processing the target output data based on an output data processing function to obtain a processing result corresponding to the target input data.
In practice, after the output processing function of the target machine learning model is obtained, the target input data may be input into the target machine learning model, and the target output data corresponding to the target input data may be output by the target machine learning model. And then processing the target output data according to an output processing function of the target machine learning model to obtain a processing result of the target machine learning model on the target input data.
Alternatively, when the target machine learning model has an input processing function, the corresponding processing may be as follows: processing the target input data based on the input processing function to obtain processed target input data, inputting the processed target input data into the target machine learning model to obtain corresponding target output data, and processing the target output data based on the output data processing function to obtain a processing result corresponding to the target input data.
In implementation, after the input processing function and the output processing function of the target machine learning model are configured, as shown in Fig. 2, the target input data to be input into the target machine learning model is first processed by the input processing function, which is built from the unconfigured input processing function determined in the function library and the parameter configuration information, to obtain the processed target input data. The processed target input data is then input into the target machine learning model to obtain the corresponding target output data. Finally, the target output data is processed by the output processing function, built analogously from the unconfigured output processing function determined in the function library and the parameter configuration information, to obtain the processing result corresponding to the target input data.
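The overall flow of Fig. 2 can be sketched as follows, with a stand-in lambda in place of the model's execution code and hypothetical scale/threshold parameters standing in for the parameter configuration information:

```python
from functools import partial

def unconfigured_input_fn(data, scale):
    # Illustrative unconfigured input processing function.
    return [x * scale for x in data]

def unconfigured_output_fn(raw, threshold):
    # Illustrative unconfigured output processing function.
    return [x > threshold for x in raw]

def run_model(model, data, input_cfg, output_cfg):
    """Configure the input/output processing functions by binding the
    configuration values, then run input -> model -> output."""
    input_fn = partial(unconfigured_input_fn, **input_cfg)
    output_fn = partial(unconfigured_output_fn, **output_cfg)
    return output_fn(model(input_fn(data)))

# Stand-in "execution code": adds 1 to every element.
result = run_model(lambda xs: [x + 1 for x in xs],
                   [0.1, 0.4],
                   input_cfg={"scale": 2.0},
                   output_cfg={"threshold": 1.5})
```

Here `functools.partial` plays the role of parameter configuration: it turns an unconfigured function plus configuration values into the configured processing function.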
After the execution code, the processing type, and the parameter configuration information of the target machine learning model are acquired, the output processing function that corresponds to the target machine learning model and is not subjected to parameter configuration can be determined according to the processing type. The output processing function without parameter configuration is then parameter-configured according to the parameter configuration information to obtain the output processing function of the target machine learning model. Finally, the data output by the target machine learning model is processed by the output processing function, so that the processing result corresponding to the target input data can be obtained directly.
Any combination of the above-mentioned optional solutions may be adopted to form an optional embodiment of the present disclosure, which is not described herein in detail.
Fig. 3 is a device for applying a machine learning model according to an embodiment of the present application, where the device may be a terminal in the foregoing embodiment, and the device includes:
an obtaining module 310, configured to obtain an execution code, a processing type, and parameter configuration information of a target machine learning model, where the parameter configuration information includes a configuration value of a preset parameter; based on the processing type, acquiring an output processing function of the target machine learning model, which is not subjected to parameter configuration;
a configuration module 320, configured to perform parameter configuration on the output processing function that is not subjected to parameter configuration based on the parameter configuration information, so as to obtain an output processing function of the target machine learning model;
the processing module 330 is configured to obtain target input data to be input to the target machine learning model, obtain target output data based on the execution code of the target machine learning model and the target input data, and process the target output data based on the output processing function to obtain a processing result corresponding to the target input data.
Optionally, the obtaining module 310 is configured to:
and obtaining a model encapsulation file of the target machine learning model, and performing decapsulation processing on the model encapsulation file to obtain the execution code, the processing type and the parameter configuration information of the target machine learning model.
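One possible decapsulation sketch, assuming (hypothetically, since the patent does not fix a package format) that the model encapsulation file is a zip archive containing the execution code and a JSON model configuration file:

```python
import io
import json
import zipfile

def decapsulate(package_bytes):
    """Unpack a model encapsulation file into execution code,
    processing type, and parameter configuration information.
    File names and config keys are illustrative assumptions."""
    with zipfile.ZipFile(io.BytesIO(package_bytes)) as zf:
        code = zf.read("model_code.bin")
        config = json.loads(zf.read("model_config.json"))
    return code, config["processing_type"], config["parameters"]

# Build a tiny in-memory package to exercise the helper.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("model_code.bin", b"\x00\x01")
    zf.writestr("model_config.json", json.dumps({
        "processing_type": "face_detection",
        "parameters": {"Output Size": [200, 200]},
    }))
code, ptype, params = decapsulate(buf.getvalue())
```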
Optionally, the obtaining module 310 is further configured to:
based on the processing type, acquiring an input processing function of the target machine learning model, which is not subjected to parameter configuration;
the configuration module 320 is further configured to: based on the parameter configuration information, performing parameter configuration on the input processing function which is not subjected to parameter configuration to obtain the input processing function of the target machine learning model;
the processing module 330 is further configured to: process the target input data based on the input processing function to obtain processed target input data, and input the processed target input data into the target machine learning model to obtain corresponding target output data.
Optionally, the obtaining module 310 is configured to:
Acquiring an input processing function which is not subjected to parameter configuration of the target machine learning model from a preset function library based on the processing type and the corresponding relation between the processing type and the input processing function which is not subjected to parameter configuration;
Acquiring an output layer identification of the target machine learning model in the model packaging file;
and acquiring the output data processing function which is not subjected to parameter configuration of the target machine learning model from a preset function library based on the processing type, the output layer identification of the target machine learning model and the corresponding relation between the processing type, the output layer identification and the output processing function which is not subjected to parameter configuration.
Optionally, the preset parameters include an input parameter and an output parameter;
the configuration module 320 is configured to: based on the configuration value of the input parameters, performing parameter configuration on the input processing function which is not subjected to parameter configuration to obtain an input data processing function of the target machine learning model; and carrying out parameter configuration on the output processing function which is not subjected to parameter configuration based on the configuration value of the output parameter, and obtaining the output data processing function of the target machine learning model.
Optionally, the configuration module 320 is configured to:
based on the prestored insertion bit of each input parameter in the input processing function which is not subjected to parameter configuration, inserting a configuration value of each input parameter into a corresponding insertion bit to obtain an input data processing function of the target machine learning model;
And inserting the configuration value of each output parameter into a corresponding insertion bit based on the pre-stored insertion bit of each output parameter in the output processing function which is not configured with the parameters, so as to obtain the output data processing function of the target machine learning model.
It should be noted that, when the apparatus for applying a machine learning model provided in the above embodiment applies a machine learning model, the division into the above functional modules is merely illustrative; in practical applications, the above functions may be allocated to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the apparatus for applying a machine learning model provided in the above embodiment belongs to the same concept as the method embodiment for applying a machine learning model; for its detailed implementation process, refer to the method embodiment, which is not repeated here.
Fig. 4 shows a block diagram of a terminal 400 according to an exemplary embodiment of the present application. The terminal 400 may be: a smart phone, a tablet computer, an MP3 player (Moving Picture Experts Group Audio Layer III), an MP4 player (Moving Picture Experts Group Audio Layer IV), a notebook computer, or a desktop computer. The terminal 400 may also be referred to by other names such as user equipment, portable terminal, laptop terminal, or desktop terminal.
In general, the terminal 400 includes: a processor 401 and a memory 402.
The processor 401 may include one or more processing cores, such as a 4-core processor or an 8-core processor. The processor 401 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), and a PLA (Programmable Logic Array). The processor 401 may also include a main processor and a coprocessor; the main processor is a processor for processing data in an awake state, also called a CPU (Central Processing Unit), and the coprocessor is a low-power processor for processing data in a standby state. In some embodiments, the processor 401 may be integrated with a GPU (Graphics Processing Unit) responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 401 may also include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 402 may include one or more computer-readable storage media, which may be non-transitory. Memory 402 may also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 402 is used to store at least one instruction for execution by processor 401 to implement the method of applying a machine learning model provided by the method embodiments of the present application.
In some embodiments, the terminal 400 may further optionally include: a peripheral interface 403 and at least one peripheral. The processor 401, memory 402, and peripheral interface 403 may be connected by a bus or signal line. The individual peripheral devices may be connected to the peripheral device interface 403 via buses, signal lines or a circuit board. Specifically, the peripheral device includes: at least one of radio frequency circuitry 404, a touch display 405, a camera 406, audio circuitry 407, a positioning component 408, and a power supply 409.
Peripheral interface 403 may be used to connect at least one Input/Output (I/O) related peripheral to processor 401 and memory 402. In some embodiments, processor 401, memory 402, and peripheral interface 403 are integrated on the same chip or circuit board; in some other embodiments, either or both of the processor 401, memory 402, and peripheral interface 403 may be implemented on separate chips or circuit boards, which is not limited in this embodiment.
The Radio Frequency circuit 404 is configured to receive and transmit RF (Radio Frequency) signals, also known as electromagnetic signals. The radio frequency circuitry 404 communicates with a communication network and other communication devices via electromagnetic signals. The radio frequency circuit 404 converts an electrical signal into an electromagnetic signal for transmission, or converts a received electromagnetic signal into an electrical signal. Optionally, the radio frequency circuit 404 includes: antenna systems, RF transceivers, one or more amplifiers, tuners, oscillators, digital signal processors, codec chipsets, subscriber identity module cards, and so forth. The radio frequency circuitry 404 may communicate with other terminals via at least one wireless communication protocol. The wireless communication protocol includes, but is not limited to: metropolitan area networks, various generations of mobile communication networks (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity ) networks. In some embodiments, the radio frequency circuitry 404 may also include NFC (Near Field Communication ) related circuitry, which is not limiting of the application.
The display screen 405 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 405 is a touch display screen, the display screen 405 also has the ability to collect touch signals at or above the surface of the display screen 405. The touch signal may be input as a control signal to the processor 401 for processing. At this time, the display screen 405 may also be used to provide virtual buttons and/or a virtual keyboard, also referred to as soft buttons and/or a soft keyboard. In some embodiments, the display 405 may be one, providing a front panel of the terminal 400; in other embodiments, the display 405 may be at least two, and disposed on different surfaces of the terminal 400 or in a folded design; in still other embodiments, the display 405 may be a flexible display disposed on a curved surface or a folded surface of the terminal 400. Even more, the display screen 405 may be arranged in an irregular pattern that is not rectangular, i.e. a shaped screen. The display 405 may be made of LCD (Liquid Crystal Display ), OLED (Organic Light-Emitting Diode) or other materials.
The camera assembly 406 is used to capture images or video. Optionally, the camera assembly 406 includes a front camera and a rear camera. Typically, the front camera is disposed on the front panel of the terminal and the rear camera is disposed on the rear surface of the terminal. In some embodiments, there are at least two rear cameras, each being any one of a main camera, a depth camera, a wide-angle camera, and a telephoto camera, so that the main camera and the depth camera can be fused to realize a background blurring function, and the main camera and the wide-angle camera can be fused to realize panoramic shooting and virtual reality (VR) shooting or other fused shooting functions. In some embodiments, the camera assembly 406 may also include a flash. The flash may be a single-color-temperature flash or a dual-color-temperature flash. A dual-color-temperature flash is a combination of a warm-light flash and a cold-light flash, and can be used for light compensation under different color temperatures.
The audio circuit 407 may include a microphone and a speaker. The microphone is used for collecting sound waves of users and environments, converting the sound waves into electric signals, and inputting the electric signals to the processor 401 for processing, or inputting the electric signals to the radio frequency circuit 404 for realizing voice communication. For the purpose of stereo acquisition or noise reduction, a plurality of microphones may be respectively disposed at different portions of the terminal 400. The microphone may also be an array microphone or an omni-directional pickup microphone. The speaker is used to convert electrical signals from the processor 401 or the radio frequency circuit 404 into sound waves. The speaker may be a conventional thin film speaker or a piezoelectric ceramic speaker. When the speaker is a piezoelectric ceramic speaker, not only the electric signal can be converted into a sound wave audible to humans, but also the electric signal can be converted into a sound wave inaudible to humans for ranging and other purposes. In some embodiments, audio circuit 407 may also include a headphone jack.
The positioning component 408 is used to locate the current geographic position of the terminal 400 to enable navigation or LBS (Location Based Service). The positioning component 408 may be a positioning component based on the GPS (Global Positioning System) of the United States, the BeiDou system of China, the GLONASS system of Russia, or the Galileo system of the European Union.
The power supply 409 is used to power the various components in the terminal 400. The power supply 409 may be an alternating current, a direct current, a disposable battery, or a rechargeable battery. When power supply 409 comprises a rechargeable battery, the rechargeable battery may support wired or wireless charging. The rechargeable battery may also be used to support fast charge technology.
In some embodiments, the terminal 400 further includes one or more sensors 410. The one or more sensors 410 include, but are not limited to: acceleration sensor 411, gyroscope sensor 412, pressure sensor 413, fingerprint sensor 414, optical sensor 415, and proximity sensor 416.
The acceleration sensor 411 may detect the magnitudes of accelerations on three coordinate axes of the coordinate system established with the terminal 400. For example, the acceleration sensor 411 may be used to detect components of gravitational acceleration on three coordinate axes. The processor 401 may control the touch display screen 405 to display a user interface in a lateral view or a longitudinal view according to the gravitational acceleration signal acquired by the acceleration sensor 411. The acceleration sensor 411 may also be used for the acquisition of motion data of a game or a user.
The gyro sensor 412 may detect a body direction and a rotation angle of the terminal 400, and the gyro sensor 412 may collect a 3D motion of the user to the terminal 400 in cooperation with the acceleration sensor 411. The processor 401 may implement the following functions according to the data collected by the gyro sensor 412: motion sensing (e.g., changing UI according to a tilting operation by a user), image stabilization at shooting, game control, and inertial navigation.
The pressure sensor 413 may be disposed at a side frame of the terminal 400 and/or at a lower layer of the touch display 405. When the pressure sensor 413 is disposed at a side frame of the terminal 400, a grip signal of the terminal 400 by a user may be detected, and the processor 401 performs a left-right hand recognition or a shortcut operation according to the grip signal collected by the pressure sensor 413. When the pressure sensor 413 is disposed at the lower layer of the touch display screen 405, the processor 401 controls the operability control on the UI interface according to the pressure operation of the user on the touch display screen 405. The operability controls include at least one of a button control, a scroll bar control, an icon control, and a menu control.
The fingerprint sensor 414 is used to collect a fingerprint of the user, and the processor 401 identifies the identity of the user based on the fingerprint collected by the fingerprint sensor 414, or the fingerprint sensor 414 identifies the identity of the user based on the collected fingerprint. Upon recognizing that the user's identity is a trusted identity, the user is authorized by the processor 401 to perform relevant sensitive operations including unlocking the screen, viewing encrypted information, downloading software, paying for and changing settings, etc. The fingerprint sensor 414 may be provided on the front, back or side of the terminal 400. When a physical key or vendor Logo is provided on the terminal 400, the fingerprint sensor 414 may be integrated with the physical key or vendor Logo.
The optical sensor 415 is used to collect the ambient light intensity. In one embodiment, the processor 401 may control the display brightness of the touch display screen 405 according to the ambient light intensity collected by the optical sensor 415. Specifically, when the intensity of the ambient light is high, the display brightness of the touch display screen 405 is turned up; when the ambient light intensity is low, the display brightness of the touch display screen 405 is turned down. In another embodiment, the processor 401 may also dynamically adjust the shooting parameters of the camera assembly 406 according to the ambient light intensity collected by the optical sensor 415.
A proximity sensor 416, also referred to as a distance sensor, is typically provided on the front panel of the terminal 400. The proximity sensor 416 is used to collect the distance between the user and the front of the terminal 400. In one embodiment, when the proximity sensor 416 detects a gradual decrease in the distance between the user and the front face of the terminal 400, the processor 401 controls the touch display 405 to switch from the bright screen state to the off screen state; when the proximity sensor 416 detects that the distance between the user and the front surface of the terminal 400 gradually increases, the processor 401 controls the touch display screen 405 to switch from the off-screen state to the on-screen state.
Those skilled in the art will appreciate that the structure shown in fig. 4 is not limiting of the terminal 400 and may include more or fewer components than shown, or may combine certain components, or may employ a different arrangement of components.
In an exemplary embodiment, a computer readable storage medium, such as a memory comprising instructions executable by a processor in a terminal to perform the method of applying a machine learning model in the above embodiment is also provided. The computer readable storage medium may be non-transitory. For example, the computer readable storage medium may be a ROM (Read-Only Memory), a RAM (Random Access Memory ), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the application is not intended to limit the application to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the application are intended to be included within the scope of the application.
Claims (12)
1. A method of applying a machine learning model, the method comprising:
Obtaining a model packaging file of a target machine learning model, and performing unpacking processing on the model packaging file to obtain an execution code, a processing type and parameter configuration information of the target machine learning model, wherein the parameter configuration information comprises configuration values of preset parameters;
acquiring an output layer identifier of the target machine learning model in the model packaging file, and acquiring an output processing function which is not subjected to parameter configuration of the target machine learning model in a preset function library based on the processing type, the output layer identifier of the target machine learning model and the corresponding relation between the processing type, the output layer identifier and the output processing function which is not subjected to parameter configuration;
based on the parameter configuration information, performing parameter configuration on the output processing function which is not subjected to parameter configuration to obtain the output processing function of the target machine learning model;
acquiring target input data to be input into the target machine learning model, obtaining target output data based on the execution code of the target machine learning model and the target input data, and processing the target output data based on the output processing function to obtain a processing result corresponding to the target input data.
2. The method according to claim 1, further comprising:
acquiring, based on the processing type, an input processing function of the target machine learning model that has not undergone parameter configuration; and
performing, based on the parameter configuration information, parameter configuration on the input processing function that has not undergone parameter configuration, to obtain the input processing function of the target machine learning model;
wherein the obtaining target output data based on the execution code of the target machine learning model and the target input data comprises:
processing the target input data based on the input processing function to obtain processed target input data, and inputting the processed target input data into the target machine learning model to obtain corresponding target output data.
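Claim 2 adds a symmetric pre-processing stage in front of the model. A minimal sketch, again with invented function and parameter names: the configured input processing function transforms the data before it reaches the execution code.

```python
from functools import partial

def scale_inputs(data, factor):
    # Unconfigured input processing function; `factor` is a preset
    # parameter whose value comes from the parameter configuration.
    return [x * factor for x in data]

# Parameter configuration turns it into the model's input processing function.
input_fn = partial(scale_inputs, factor=0.5)

model = lambda xs: sum(xs)                   # stand-in execution code
target_output = model(input_fn([2.0, 4.0]))  # pre-process, then run the model
```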
3. The method according to claim 2, wherein the acquiring, based on the processing type, an input processing function of the target machine learning model that has not undergone parameter configuration comprises:
acquiring, from a preset function library, the input processing function of the target machine learning model that has not undergone parameter configuration, based on the processing type and a stored correspondence between processing types and input processing functions that have not undergone parameter configuration.
4. The method according to claim 2, wherein the preset parameters comprise input parameters and output parameters;
the performing, based on the parameter configuration information, parameter configuration on the input processing function that has not undergone parameter configuration, to obtain the input processing function of the target machine learning model, comprises:
performing, based on the configuration values of the input parameters, parameter configuration on the input processing function that has not undergone parameter configuration, to obtain the input processing function of the target machine learning model; and
the performing, based on the parameter configuration information, parameter configuration on the output processing function that has not undergone parameter configuration, to obtain the output processing function of the target machine learning model, comprises:
performing, based on the configuration values of the output parameters, parameter configuration on the output processing function that has not undergone parameter configuration, to obtain the output processing function of the target machine learning model.
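Claim 4 splits the preset parameters into input parameters and output parameters, each set configuring its own processing function. A hypothetical sketch of that split; `crop`, `top_k`, and the parameter names are illustrative only, not functions named by the patent.

```python
from functools import partial

def crop(data, length):                # unconfigured input processing function
    return data[:length]

def top_k(scores, k):                  # unconfigured output processing function
    return sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]

# Parameter configuration information, split into input and output parts.
param_config = {"input": {"length": 3}, "output": {"k": 2}}
input_fn = partial(crop, **param_config["input"])
output_fn = partial(top_k, **param_config["output"])

model = lambda xs: [x * x for x in xs]  # stand-in execution code
result = output_fn(model(input_fn([1, 3, 2, 9])))
# crop -> [1, 3, 2]; squared -> [1, 9, 4]; top-2 indices -> [1, 2]
```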
5. The method according to claim 4, wherein the performing, based on the configuration values of the input parameters, parameter configuration on the input processing function that has not undergone parameter configuration comprises:
inserting the configuration value of each input parameter into a corresponding insertion position, based on prestored insertion positions of the input parameters in the input processing function that has not undergone parameter configuration, to obtain the input processing function of the target machine learning model; and
the performing, based on the configuration values of the output parameters, parameter configuration on the output processing function that has not undergone parameter configuration comprises:
inserting the configuration value of each output parameter into a corresponding insertion position, based on prestored insertion positions of the output parameters in the output processing function that has not undergone parameter configuration, to obtain the output processing function of the target machine learning model.
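One way to read claim 5's prestored "insertion positions" is as placeholders in a function template: each configuration value is spliced into its marked position to produce the configured function. The template-and-`eval` mechanism below is only one hypothetical realization of that idea.

```python
# Unconfigured processing function written as a template; {scale} and
# {offset} mark the prestored insertion positions of two preset parameters.
template = "lambda x: x * {scale} + {offset}"

# Insert each parameter's configuration value at its insertion position.
configured_source = template.format(scale=2, offset=1)

# The result is the configured processing function.
processing_fn = eval(configured_source)   # processing_fn(3) evaluates 3*2 + 1
```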
6. The method according to claim 2, wherein the configuration values of the preset parameters comprise a resolution of the input data and a resolution of the output data.
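With resolutions as the configuration values (claim 6), the configured input processing function might resample data to the resolution the model expects. A toy nearest-neighbour resize on a 2-D list, purely illustrative:

```python
def resize_nearest(img, out_w, out_h):
    # Nearest-neighbour resampling of a 2-D list to out_w x out_h;
    # out_w and out_h would come from the configured input resolution.
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

img = [[1, 2],
       [3, 4]]
upsampled = resize_nearest(img, 4, 4)   # e.g. the model expects 4x4 input
```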
7. An apparatus for applying a machine learning model, the apparatus comprising:
an acquisition module, configured to obtain a model package file of a target machine learning model, and decapsulate the model package file to obtain execution code, a processing type, and parameter configuration information of the target machine learning model, wherein the parameter configuration information comprises configuration values of preset parameters; and to acquire an output layer identifier of the target machine learning model from the model package file, and acquire, from a preset function library, an output processing function of the target machine learning model that has not undergone parameter configuration, based on the processing type, the output layer identifier, and a stored correspondence among processing types, output layer identifiers, and output processing functions that have not undergone parameter configuration;
a configuration module, configured to perform, based on the parameter configuration information, parameter configuration on the output processing function that has not undergone parameter configuration, to obtain the output processing function of the target machine learning model; and
a processing module, configured to acquire target input data to be input into the target machine learning model, obtain target output data based on the execution code of the target machine learning model and the target input data, and process the target output data based on the output processing function to obtain a processing result corresponding to the target input data.
8. The apparatus according to claim 7, wherein the acquisition module is further configured to acquire, based on the processing type, an input processing function of the target machine learning model that has not undergone parameter configuration;
the configuration module is further configured to perform, based on the parameter configuration information, parameter configuration on the input processing function that has not undergone parameter configuration, to obtain the input processing function of the target machine learning model; and
the processing module is further configured to process the target input data based on the input processing function to obtain processed target input data, and input the processed target input data into the target machine learning model to obtain corresponding target output data.
9. The apparatus according to claim 8, wherein the acquisition module is configured to:
acquire, from a preset function library, the input processing function of the target machine learning model that has not undergone parameter configuration, based on the processing type and a stored correspondence between processing types and input processing functions that have not undergone parameter configuration.
10. The apparatus according to claim 8, wherein the preset parameters comprise input parameters and output parameters; and
the configuration module is configured to: perform, based on the configuration values of the input parameters, parameter configuration on the input processing function that has not undergone parameter configuration, to obtain the input processing function of the target machine learning model; and perform, based on the configuration values of the output parameters, parameter configuration on the output processing function that has not undergone parameter configuration, to obtain the output processing function of the target machine learning model.
11. The apparatus according to claim 10, wherein the configuration module is configured to:
insert the configuration value of each input parameter into a corresponding insertion position, based on prestored insertion positions of the input parameters in the input processing function that has not undergone parameter configuration, to obtain the input processing function of the target machine learning model; and
insert the configuration value of each output parameter into a corresponding insertion position, based on prestored insertion positions of the output parameters in the output processing function that has not undergone parameter configuration, to obtain the output processing function of the target machine learning model.
12. A computer device, comprising a processor and a memory, wherein the memory stores at least one instruction that is loaded and executed by the processor to implement the operations performed by the method of applying a machine learning model according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011096940.7A CN112163677B (en) | 2020-10-14 | 2020-10-14 | Method, device and equipment for applying machine learning model |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112163677A CN112163677A (en) | 2021-01-01 |
CN112163677B true CN112163677B (en) | 2023-09-19 |
Family
ID=73868220
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011096940.7A Active CN112163677B (en) | 2020-10-14 | 2020-10-14 | Method, device and equipment for applying machine learning model |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112163677B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113570030B (en) * | 2021-01-18 | 2024-05-10 | 腾讯科技(深圳)有限公司 | Data processing method, device, equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105518647A (en) * | 2013-07-05 | 2016-04-20 | 里索非特德夫公司 | Systems and methods for creating and implementing artificially intelligent agent or system |
CN109325541A (en) * | 2018-09-30 | 2019-02-12 | 北京字节跳动网络技术有限公司 | Method and apparatus for training pattern |
CN110147251A (en) * | 2019-01-28 | 2019-08-20 | 腾讯科技(深圳)有限公司 | For calculating the framework, chip and calculation method of neural network model |
CN110163345A (en) * | 2019-05-09 | 2019-08-23 | 腾讯科技(深圳)有限公司 | A kind of Processing with Neural Network method, apparatus, equipment and medium |
CN110363291A (en) * | 2018-03-26 | 2019-10-22 | 上海寒武纪信息科技有限公司 | Operation method, device, computer equipment and the storage medium of neural network |
CN110580527A (en) * | 2018-06-08 | 2019-12-17 | 上海寒武纪信息科技有限公司 | method and device for generating universal machine learning model and storage medium |
CN110839128A (en) * | 2018-08-16 | 2020-02-25 | 杭州海康威视数字技术股份有限公司 | Photographing behavior detection method and device and storage medium |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10157045B2 (en) * | 2016-11-17 | 2018-12-18 | The Mathworks, Inc. | Systems and methods for automatically generating code for deep learning systems |
US11062459B2 (en) * | 2019-02-07 | 2021-07-13 | Vysioneer INC. | Method and apparatus for automated target and tissue segmentation using multi-modal imaging and ensemble machine learning models |
US20200320428A1 (en) * | 2019-04-08 | 2020-10-08 | International Business Machines Corporation | Fairness improvement through reinforcement learning |
Non-Patent Citations (2)
Title |
---|
Image processing with neural networks - a review; M. Egmont-Petersen et al.; Pattern Recognition; full text *
Research on change detection methods for urban high-resolution remote sensing images based on deep learning; Chen Lu et al.; Application Research of Computers; full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111753784B (en) | Video special effect processing method, device, terminal and storage medium | |
CN108717365B (en) | Method and device for executing function in application program | |
CN112256320B (en) | Version number generation method, device, terminal and storage medium | |
CN111027490A (en) | Face attribute recognition method and device and storage medium | |
CN115766490A (en) | Calibration data acquisition method, calibration data storage method, device and equipment | |
CN109117466B (en) | Table format conversion method, device, equipment and storage medium | |
CN111241499A (en) | Application program login method, device, terminal and storage medium | |
CN108734662B (en) | Method and device for displaying icons | |
CN111459466B (en) | Code generation method, device, equipment and storage medium | |
CN111128115B (en) | Information verification method and device, electronic equipment and storage medium | |
CN110677713B (en) | Video image processing method and device and storage medium | |
CN110290191B (en) | Resource transfer result processing method, device, server, terminal and storage medium | |
CN111354378A (en) | Voice endpoint detection method, device, equipment and computer storage medium | |
CN111753606A (en) | Intelligent model upgrading method and device | |
CN112163677B (en) | Method, device and equipment for applying machine learning model | |
CN110992954A (en) | Method, device, equipment and storage medium for voice recognition | |
CN110968549B (en) | File storage method, device, electronic equipment and medium | |
CN110045999B (en) | Method, device, terminal and storage medium for drawing assembly | |
CN111294320B (en) | Data conversion method and device | |
CN113076452A (en) | Application classification method, device, equipment and computer readable storage medium | |
CN112487162A (en) | Method, device and equipment for determining text semantic information and storage medium | |
CN112214115A (en) | Input mode identification method and device, electronic equipment and storage medium | |
CN113722827B (en) | CAD data creation method, device and computer storage medium | |
CN112990424A (en) | Method and device for training neural network model | |
CN111541742B (en) | Data transmission method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||