
CN108665065A - Method, apparatus, device, and storage medium for processing task data - Google Patents

Method, apparatus, device, and storage medium for processing task data

Info

Publication number
CN108665065A
CN108665065A (application CN201810378952.5A)
Authority
CN
China
Prior art keywords: task, training, learning network, network, data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810378952.5A
Other languages
Chinese (zh)
Other versions
CN108665065B (en)
Inventor
Jing Pei (裴京)
Luping Shi (施路平)
Peng Jiao (焦鹏)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Original Assignee
Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University filed Critical Tsinghua University
Priority to CN201810378952.5A priority Critical patent/CN108665065B/en
Publication of CN108665065A publication Critical patent/CN108665065A/en
Application granted granted Critical
Publication of CN108665065B publication Critical patent/CN108665065B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides a method, apparatus, device, and storage medium for processing task data. The method comprises: acquiring task data to be processed; and inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data. The multi-task learning network is a neural network trained on the basis of a plurality of trained target single-task learning networks and a first training data set. The multi-task learning network of this method can output the data to be processed more accurately, thereby reducing the probability of overfitting when the task data to be processed are handled. Further, when the task data to be processed are input into the multi-task learning network of this embodiment, both the processing accuracy and the processing efficiency of the task data are improved.

Description

Method, apparatus, device, and storage medium for processing task data
Technical Field
The present invention relates to the field of neural networks, and in particular, to a method, an apparatus, a device, and a storage medium for processing task data.
Background
With the development of neural network technology, neural networks are being applied in more and more fields, such as spam filtering, web page retrieval, image recognition, and speech recognition. Neural networks can be divided into single-task learning networks and multi-task learning networks.
For a multi-task learning network, the conventional technology typically builds the network by sharing hidden layers, using either hard parameter sharing or soft parameter sharing. Taking hard parameter sharing as an example, the multi-task learning network is constructed by adjusting the weights and biases of the corresponding neurons in an initial multi-task neural network on the basis of corresponding training data, and the resulting network is then used for tasks such as multi-task image recognition.
However, a multi-task learning network obtained by the conventional technology often overfits when processing multiple kinds of task data, which results in low task data processing precision and low processing efficiency.
Disclosure of Invention
In view of the above, it is necessary to provide a method, an apparatus, a device and a storage medium for processing task data, which can improve the accuracy and efficiency of processing task data.
In a first aspect, an embodiment of the present invention provides a method for processing task data, where the method includes:
acquiring task data to be processed;
inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data;
the multi-task learning network is a neural network obtained by training based on a plurality of trained target single-task learning networks and a first training data set, and is provided with a plurality of task processing layers, wherein one task processing layer comprises at least one neuron function; the target single-task learning network is a neural network obtained by training based on an initial single-task learning network and a second training data set;
the first training data set includes first training input data and first training output labels for the multi-task learning network, and the second training data set includes second training input data and second training output labels for each initial single-task learning network.
In a second aspect, an embodiment of the present invention provides an apparatus for processing task data, where the apparatus includes:
the acquisition module is used for acquiring task data to be processed;
the determining module is used for inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data;
the multi-task learning network is a neural network obtained by training based on a plurality of trained target single-task learning networks and a first training data set, and is provided with a plurality of task processing layers, wherein one task processing layer comprises at least one neuron function; the target single-task learning network is a neural network obtained by training based on an initial single-task learning network and a second training data set;
the first training data set includes first training input data and first training output labels for the multi-task learning network, and the second training data set includes second training input data and second training output labels for each initial single-task learning network.
In a third aspect, an embodiment of the present invention provides a computer device, including a memory and a processor, where the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring task data to be processed;
inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data;
the multi-task learning network is a neural network obtained by training based on a plurality of trained target single-task learning networks and a first training data set, and is provided with a plurality of task processing layers, wherein one task processing layer comprises at least one neuron function; the target single-task learning network is a neural network obtained by training based on an initial single-task learning network and a second training data set;
the first training data set includes first training input data and first training output labels for the multi-task learning network, and the second training data set includes second training input data and second training output labels for each initial single-task learning network.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring task data to be processed;
inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data;
the multi-task learning network is a neural network obtained by training based on a plurality of trained target single-task learning networks and a first training data set, and is provided with a plurality of task processing layers, wherein one task processing layer comprises at least one neuron function; the target single-task learning network is a neural network obtained by training based on an initial single-task learning network and a second training data set;
the first training data set includes first training input data and first training output labels for the multi-task learning network, and the second training data set includes second training input data and second training output labels for each initial single-task learning network.
After the computer device acquires the task data to be processed, it inputs the task data into a preset multi-task learning network for processing to obtain a plurality of target output data. The multi-task learning network is a neural network trained on the basis of a plurality of trained target single-task learning networks and a first training data set. In other words, the multi-task learning network of this embodiment is, on the one hand, constructed from target single-task learning networks that already produce accurate outputs on the second training data set and, on the other hand, trained again, and thus optimized, on the first training data set. As a result, the multi-task learning network of this embodiment can output the data to be processed more accurately, which reduces the probability of overfitting when the data to be processed are handled. Further, when the task data to be processed are input into the multi-task learning network of this embodiment, both the processing accuracy and the processing efficiency of the task data to be processed are improved.
Drawings
FIG. 1 is a schematic diagram of a neural network according to an embodiment;
FIG. 2 is a flowchart illustrating a method for processing task data according to an embodiment;
FIG. 3 is a flowchart illustrating a task data processing method according to another embodiment;
FIG. 3a is a flowchart illustrating a task data processing method according to another embodiment;
FIG. 3b is a flowchart illustrating a task data processing method according to another embodiment;
FIG. 4 is a flowchart illustrating a task data processing method according to another embodiment;
FIG. 5 is a block diagram of an initial multi-task learning network, according to an embodiment;
FIG. 6 is a flowchart illustrating a task data processing method according to another embodiment;
FIG. 7 is a flowchart illustrating a task data processing method according to another embodiment;
FIG. 8 is a schematic structural diagram of a first learning network according to an embodiment;
FIG. 9 is a flowchart illustrating a task data processing method according to yet another embodiment;
FIG. 10 is a flowchart illustrating a task data processing method according to yet another embodiment;
FIG. 11 is a block diagram of a task data processing apparatus according to an embodiment;
FIG. 12 is a schematic structural diagram of a task data processing device according to another embodiment;
FIG. 13 is a schematic structural diagram of a task data processing device according to yet another embodiment;
FIG. 14 is a schematic structural diagram of a task data processing device according to yet another embodiment;
FIG. 15 is a schematic structural diagram of a task data processing device according to yet another embodiment;
FIG. 16 is a schematic structural diagram of a task data processing device according to yet another embodiment;
FIG. 17 is a schematic structural diagram of a task data processing device according to yet another embodiment.
Detailed Description
With the development of neural network technology, neural networks are being applied in more and more fields, such as spam filtering, web page retrieval, image recognition, and speech recognition. A neural network is a computational model. As shown in the schematic structural diagram of fig. 1, a neural network is formed by connecting several layers of neurons. Each neuron carries a neuron function, and the neuron functions within one layer are all of the same type. The connection between every two neurons carries a weight and a bias (together equivalent to the memory of an artificial neural network), and the input of a neuron in the next layer is determined by the outputs of the neurons in the previous layer. A layer composed of neurons at the same depth is called a task processing layer. FIG. 1 shows a two-layer neural network with one input and four outputs.
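As an illustration outside the patent text, a minimal NumPy sketch of the structure just described might look as follows; the sigmoid activation and the random initialization are assumptions made for the example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# A two-layer network with one input and four outputs, as in FIG. 1.
# Every connection carries a weight and every neuron a bias (together
# the "memory" of the artificial neural network); the output of each
# layer determines the input of the next.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 1)), np.zeros(4)   # first task processing layer
W2, b2 = rng.normal(size=(4, 4)), np.zeros(4)   # second task processing layer

def forward(x):
    h = sigmoid(W1 @ x + b1)
    return sigmoid(W2 @ h + b2)

print(forward(np.array([0.5])))   # four output values for one input
```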
According to the correspondence between inputs and outputs, neural networks are divided into multi-task learning networks and single-task learning networks. Specifically, a neural network that produces only one output for one input is called a single-task learning network, and a neural network that produces multiple outputs for one input is called a multi-task learning network. In practice the multi-task learning network is usually preferred, because a single-task learning network ignores the connections between its task and other tasks, while learning tasks in real life are often related in many ways. For example, when a goalkeeper learns how to save a shot, he does not simply learn the save itself; many related skills are involved, such as how to anticipate, how to move his feet, how to jump, and how to land safely. In the construction of a multi-task learning network, the conventional technology usually relies on hard parameter sharing or soft parameter sharing of hidden layers. However, a multi-task learning network obtained by the conventional technology often overfits when processing multiple kinds of task data, which results in low task data processing precision and low processing efficiency. The present application provides a method, apparatus, device, and storage medium for processing task data, and aims to solve these technical problems of the conventional technology.
The execution subject of the following method embodiments may be any of various computer devices, such as a personal computer, a laptop, or a tablet computer; the present application is not limited in this respect. The following method embodiments are described taking a computer device as the execution subject.
In order to make the objects, technical solutions and advantages of the present invention more apparent, the technical solutions in the embodiments of the present invention are further described in detail by the following embodiments in conjunction with the accompanying drawings. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Fig. 2 is a schematic flowchart of a task data processing method according to an embodiment. This embodiment relates to the process by which the computer device inputs the task data to be processed into a preset multi-task learning network to obtain a plurality of target output data. The method includes:
S101, acquiring task data to be processed.
Specifically, a task here refers to an operation that can be performed by a neural network, such as license plate recognition, speech recognition, or mail classification. For an application of the neural network, taking license plate recognition as an example, the data to be processed are the information of a picture containing the license plate. Optionally, for the training of the neural network, the task data to be processed are the test data used to evaluate the trained neural network.
S102, inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data.
The multi-task learning network is a neural network obtained by training based on a plurality of trained target single-task learning networks and a first training data set, and is provided with a plurality of task processing layers, wherein one task processing layer comprises at least one neuron function; the target single-task learning network is a neural network obtained by training based on an initial single-task learning network and a second training data set; the first training data set includes first training input data and first training output labels for the multi-task learning network, and the second training data set includes second training input data and second training output labels for each initial single-task learning network.
Specifically, the multi-task learning network is a neural network obtained by training based on a plurality of trained target single-task learning networks and a first training data set, and each target single-task learning network is a neural network obtained by training based on an initial single-task learning network and a second training data set.
The initial single-task learning network refers to a single-task learning network in which the number of task processing layers, the types of the task processing layers, the numbers of neuron functions on the task processing layers, and the network parameters corresponding to each neuron function are all in a default state. It should be noted that the network parameters corresponding to each neuron function usually default to 0. The task processing layer type may be a convolutional layer, a fully connected layer, a pooling layer, and so on, and the neuron functions a layer contains are determined by its type: when the task processing layer is a convolutional layer, the neuron functions on that layer are convolution functions; when it is a fully connected layer, they are fully connected functions; and when it is a pooling layer, they are pooling functions.
Based on the initial single-task learning network, the computer device inputs the second training input data from a second training data set into the initial single-task learning network and trains it into a target single-task learning network using a neural network model training algorithm such as the Newton method, the conjugate gradient method, or a quasi-Newton method. The target single-task learning network refers to a neural network that produces only one output (usually represented by a vector) for one input, with the error between that output and the theoretical output corresponding to the input within an acceptable range; in other words, it is the trained single-task learning network. The second training data set is the data used to train the single-task learning network, i.e., the initial single-task learning network; it contains at least one pair consisting of second training input data and the corresponding second training output label.
It should be noted that the Newton method, the conjugate gradient method, the quasi-Newton method, and the like are traditional neural network model training algorithms and are not described in detail in this embodiment.
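Purely for illustration (not part of the claimed method), the sketch below trains one initial single-task learning network on (second training input data, second training output label) pairs; plain gradient descent on a single linear task processing layer stands in for the Newton-type algorithms named above:

```python
import numpy as np

def train_single_task(W, b, second_training_set, lr=0.01, epochs=100):
    """Fit one initial single-task learning network (reduced here to one
    linear task processing layer) on its second training data set.
    Gradient descent stands in for the Newton / conjugate-gradient /
    quasi-Newton methods named in the text."""
    for _ in range(epochs):
        for x, yp in second_training_set:   # yp: second training output label
            y = W @ x + b                   # one output vector per input
            grad = 2.0 * (y - yp)           # derivative of the squared error
            W -= lr * np.outer(grad, x)     # adjust weights...
            b -= lr * grad                  # ...and biases
    return W, b                             # the target single-task network

# Usage: one (input, label) pair, 2 inputs -> 3 outputs.
W0, b0 = np.zeros((3, 2)), np.zeros(3)
data = [(np.array([1.0, 0.5]), np.array([0.2, 0.4, 0.6]))]
W, b = train_single_task(W0, b0, data)
```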
After the target single-task learning networks have been trained, the computer device can train a multi-task learning network from the first training data set and the plurality of target single-task learning networks. For example, the computer device inputs one first training data set into the target single-task learning networks and retrains them with the Newton method to obtain one group of final target single-task learning networks; it then inputs another first training data set into the target single-task learning networks and retrains them with the conjugate gradient method to obtain another group of final target single-task learning networks; finally, one of the two groups is selected and used as the multi-task learning network. The first training data set is the data used to train the multi-task learning network; that is, it contains training data for the several tasks in the multi-task learning network, and includes at least one piece of first training input data together with the corresponding first training output label. The multi-task learning network refers to a neural network that produces multiple outputs for one input, with the errors between those outputs and the corresponding theoretical outputs within an acceptable range; in other words, it is the trained multi-task learning network. In addition, the multi-task learning network obtained in this embodiment has a plurality of task processing layers, and one task processing layer comprises at least one neuron function.
It should be noted that this embodiment does not limit the specific manner in which the multi-task learning network is trained from the first training data set and the plurality of target single-task learning networks; any multi-task learning network obtained by training on the first training data set and the plurality of target single-task learning networks falls within the protection scope of this embodiment.
Based on the above, the task data to be processed are input into the multi-task learning network to obtain a plurality of target output data, where the number of target output data equals the number of tasks included in the multi-task learning network. In addition, each target output datum of the multi-task learning network carries a channel identifier; that is, when the multi-task learning network outputs a plurality of target output data, the task to which each target output datum belongs can be identified from its channel identifier.
In practical applications, taking object attribute recognition as an example, suppose the multi-task learning network recognizes animals, plants, and traffic signs. When the task data to be processed are data characterizing an animal, the target output data are, respectively: the probability that the input data characterize an animal, the probability that the input data characterize a plant, and the probability that the input data characterize a traffic sign. If the probability that the input data characterize an animal is the highest, the multi-task learning network considers the input data to be data characterizing an animal.
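As an illustrative sketch outside the patent text (the field names and probabilities are invented for the example), the channel identifiers could be used on the output side like this:

```python
# Each target output datum carries a channel identifier naming the task
# (animal / plant / traffic sign recognition) whose output it is.
target_output_data = [
    {"channel": "animal",       "probability": 0.81},
    {"channel": "plant",        "probability": 0.12},
    {"channel": "traffic_sign", "probability": 0.07},
]
best = max(target_output_data, key=lambda d: d["probability"])
# Highest probability wins: the input is taken to characterize an animal.
print(best["channel"])
```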
In summary, the multi-task learning network is trained on the basis of a plurality of trained target single-task learning networks and the first training data set. That is, the multi-task learning network of this embodiment is, on the one hand, constructed from target single-task learning networks that already produce accurate outputs on the second training data set and, on the other hand, trained again, and thus optimized, on the first training data set, so that it can output the data to be processed more accurately, thereby reducing the probability of overfitting when the data to be processed are handled.
In the method for processing task data provided by this embodiment, after acquiring the task data to be processed, the computer device inputs them into a preset multi-task learning network for processing to obtain a plurality of target output data. The multi-task learning network is a neural network trained on the basis of a plurality of trained target single-task learning networks and a first training data set: it is, on the one hand, constructed from target single-task learning networks that already produce accurate outputs on the second training data set and, on the other hand, trained again, and thus optimized, on the first training data set, so that it can output the data to be processed more accurately, thereby reducing the probability of overfitting when the data to be processed are handled. Further, when the task data to be processed are input into the multi-task learning network of this embodiment, both the processing accuracy and the processing efficiency of the task data to be processed are improved.
Fig. 3 is a flowchart illustrating a task data processing method according to another embodiment, where the embodiment relates to a process of building a multitask learning network before a computer device inputs task data to be processed into a preset multitask learning network for processing. On the basis of the above embodiment, the method includes:
S201, training network parameters in a plurality of initial single-task learning networks according to second training input data and second training output labels in the second training data set to obtain a plurality of target single-task learning networks.
S202, training to obtain the multi-task learning network according to the first training data set and the plurality of target single-task learning networks.
Specifically, the specific explanation of the second training data set, the second training input data, the second training output label, the initial single-task learning network, and the target single-task learning network is the same as the explanation of the embodiment shown in fig. 2, and is not repeated here. The network parameters in the initial single-task learning networks refer to the bias and weight corresponding to each neuron function in each task processing layer contained in the initial single-task learning networks. In addition, the specific explanation of the first training data set, the target single-task learning network and the multi-task learning network is the same as the explanation of the embodiment shown in fig. 2, and the detailed explanation thereof is omitted here.
As a possible implementation manner of training to obtain the target single-task learning network, the above-mentioned training process of training each initial single-task learning network to obtain the target single-task learning network may be shown in fig. 3a, that is, the above-mentioned S201 may include the following steps:
S301, inputting the second training input data in the second training data set into the initial single-task learning network to obtain the actual output data of the initial single-task learning network.
S302, calculating the error between the actual output data of the initial single-task learning network and the second training output label according to the error loss function corresponding to the initial single-task learning network.
S303, if the error is within the acceptable range, taking the initial single-task learning network as the target single-task learning network.
S304, if the error is not within the acceptable range, distributing the error back onto the weight and bias corresponding to each neuron function in the initial single-task learning network according to the error loss function corresponding to the single-task learning network, taking the initial single-task learning network with the adjusted weights and biases as a new initial single-task learning network, and repeating step S301 until the error between the actual output data of the new initial single-task learning network and the second training output label is within the acceptable range. At that point, the new initial single-task learning network whose error is within the acceptable range is taken as the target single-task learning network.
As another possible implementation manner of training to obtain the target single-task learning network, the above-mentioned training process of training each initial single-task learning network to obtain the target single-task learning network may also be shown in fig. 3b, that is, the above-mentioned S201 may include the following steps:
S401, inputting the second training input data in the second training data set into the initial single-task learning network to obtain the actual output data of the initial single-task learning network; calculating the error between the actual output data of the initial single-task learning network and the second training output label according to the error loss function corresponding to the single-task learning network; and distributing the error back onto the weight and bias corresponding to each neuron function in the initial single-task learning network according to that error loss function.
S402, taking the initial single-task learning network with the adjusted weights and biases as a new initial single-task learning network and repeating step S401 until a preset number of training iterations is reached; the new initial single-task learning network that has reached the preset number of training iterations is taken as the target single-task learning network. In both implementations, the error loss function corresponding to the single-task learning network may be defined as Loss = Σ_i (Yp_i - Y_i)^2, i = 1, 2, 3, ..., where i indexes the output data, Yp denotes the second training output label, Y denotes the actual output data of the initial single-task learning network, and Loss is the error between the two. Of course, the error loss function corresponding to the single-task learning network may also be defined in other forms; this embodiment is not limited in this respect.
Based on the error loss function Loss = Σ_i (Yp_i - Y_i)^2 above, in both implementations the error is distributed back onto the weight and bias corresponding to each neuron function in the initial single-task learning network according to the error loss function corresponding to the single-task learning network. Concretely, Y in the formula is replaced by an expression containing the bias and weight of each neuron function in the initial single-task learning network; on that basis the derivative of the formula is computed, and the weight and bias of each neuron function are adjusted according to the derivative. In addition, the error loss functions corresponding to the individual single-task learning networks may be the same or different; this embodiment is not limited in this respect.
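In code, a minimal sketch (for illustration only) of the loss defined above and of its derivative, the quantity that is distributed back onto the weights and biases, is:

```python
import numpy as np

def squared_error_loss(Yp, Y):
    """Loss = sum_i (Yp_i - Y_i)^2, with Yp the second training output
    label and Y the actual output data."""
    return float(np.sum((Yp - Y) ** 2))

def loss_derivative(Yp, Y):
    """dLoss/dY_i = -2 * (Yp_i - Y_i); this derivative is distributed
    back onto the weight and bias of each neuron function."""
    return -2.0 * (Yp - Y)
```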
It should be noted that, when the plurality of initial single-task learning networks are trained, the types of the task processing layers in their first several layers and the numbers of neuron functions on those layers may be set to be the same or to be different; this embodiment is not limited in this respect.
In addition, the multi-task learning network is trained on the basis of the first training data set and the plurality of target single-task learning networks. This embodiment does not limit the specific manner in which the multi-task learning network is trained from the first training data set and the plurality of target single-task learning networks.
In the task data processing method provided by this embodiment, when the computer device constructs the multi-task learning network, it first trains the network parameters of a plurality of initial single-task learning networks according to the second training input data and second training output labels in the second training data set to obtain a plurality of target single-task learning networks, and then trains the multi-task learning network according to the first training data set and the plurality of target single-task learning networks. The multi-task learning network is thus, on the one hand, constructed from target single-task learning networks that already produce accurate outputs on the second training data set and, on the other hand, trained again, and thus optimized, on the first training data set, so that it can output the data to be processed more accurately, thereby reducing the probability of overfitting when the data to be processed are handled. Further, when the task data to be processed are input into the multi-task learning network of this embodiment, both the processing accuracy and the processing efficiency of the task data to be processed are improved.
Based on any one of the above embodiments, as another possible implementation of training the target single-task learning network, the training process by which each initial single-task learning network is trained into a target single-task learning network may further include the following step; that is, S201 may include:
S501, training the network parameters of each initial single-task learning network according to the second training input data and second training output labels in the second training data set, the preset number of task processing layers, the preset types of task processing layers, and the preset numbers of neuron functions on the task processing layers, to obtain a plurality of target single-task learning networks. Within the first M layers of each target single-task learning network, task processing layers at the same depth are of the same type and carry the same number of neuron functions.
Specifically, M is an integer greater than or equal to 1; usually, M is 3. Before the initial single-task learning networks are trained, the types of the first M task processing layers and the numbers of neuron functions on them are set to be identical across the initial single-task learning networks. For example, when the multi-task learning network includes three tasks, the first three task processing layers of each of the three corresponding initial single-task learning networks are set as follows: the first layer is a convolutional layer containing 512 neurons; the second layer is a fully connected layer containing 256 neurons; the third layer is a pooling layer containing 100 neurons; the remaining task processing layers are set according to the actual situation. The initial single-task learning networks with these task processing layers are then trained by inputting the second training input data in the second training data set. For the training process of the initial single-task learning network, reference may be made to the embodiment shown in fig. 3a or 3b, and details are not repeated here.
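As a sketch outside the patent text (the dictionary layout is an assumption of this example), the shared front-M-layer specification could be written as:

```python
# The first M = 3 task processing layers, set identically for every
# initial single-task learning network before training begins.
SHARED_FRONT_LAYERS = [
    {"type": "convolutional",   "neurons": 512},  # layer 1
    {"type": "fully_connected", "neurons": 256},  # layer 2
    {"type": "pooling",         "neurons": 100},  # layer 3
]

def make_initial_single_task_network(task_specific_layers):
    """Front M layers identical across networks; the remaining task
    processing layers are set according to the actual situation."""
    return [dict(layer) for layer in SHARED_FRONT_LAYERS] + task_specific_layers
```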
It should be noted that this embodiment does not limit the specific manner in which the multi-task learning network is trained on the basis of the target single-task learning networks obtained above and the first training data set.
Optionally, in an embodiment, based on the target single-task learning networks obtained by the training mode of S501 and the first training data set, the training process for obtaining the multi-task learning network may, as shown in fig. 4, specifically include the following steps; that is, S202 may specifically include:
S502, determining an M-layer initial pre-training layer according to the type of each of the first M task processing layers in the target single-task learning networks and the number of neuron functions on each of those layers, where the network parameters of every neuron function on the initial pre-training layer are equal to 0.
S503, determining the remaining task processing layers in each target single-task learning network as an initial single-task output layer; all the remaining task processing layers of one target single-task learning network correspond to one initial single-task output layer, and the network parameters of each neuron function on the initial single-task output layer are the same as those of the neuron functions on the remaining task processing layers of the corresponding target single-task learning network.
S504, determining the multi-task learning network according to the first target network parameters, the second target network parameters, the initial pre-training layer and each initial single-task output layer.
Specifically, as shown in fig. 5, the network architecture of the untrained multi-task learning network (i.e., the initial multi-task learning network) consists of an initial pre-training layer and initial single-task output layers. The initial pre-training layer has M layers; the types of those M layers are the preset task processing layer types, the numbers of neuron functions on them are the preset numbers of neuron functions, and the network parameters of all neuron functions in the first M layers are 0. For example, when the multi-task learning network includes three tasks TaskA, TaskB, and TaskC, and the first three layers of each corresponding target single-task learning network are a convolutional layer of 100 neurons, a fully connected layer of 256 neurons, and a pooling layer of 512 neurons, then the first three layers of the multi-task learning network are likewise: a first, convolutional layer containing 100 neurons; a second, fully connected layer containing 256 neurons; and a third, pooling layer containing 512 neurons, with the weights and biases of all neuron functions in these three layers set to 0. Each initial single-task output layer consists of the task processing layers, other than the first M layers, of the corresponding target single-task learning network, with their network parameters kept unchanged, i.e., with the weight and bias of each neuron in those layers kept unchanged.
Based on the above, the first training input data in the first training data set are input into the initial multi-task learning network (as shown in fig. 5) composed of the initial pre-training layer from S502 and the initial single-task output layers from S503, yielding actual output data for each of the several tasks. Taking TaskA as an example: the error between the actual output data corresponding to TaskA and the corresponding first training output label in the first training data set is calculated according to the error loss function corresponding to TaskA, and this error is distributed onto each neuron function in the initial multi-task learning network, thereby adjusting the weight and bias of each neuron function. The adjusted network is taken as a new initial multi-task learning network, the error for TaskA is calculated again in the same way and distributed onto the neuron functions, and the network is adjusted again; this is repeated for a preset number of iterations, which determines the network parameters of each layer of neuron functions corresponding to TaskA. The other tasks are processed in the same way as TaskA. This yields the network parameters of each layer of neuron functions for all tasks contained in the multi-task learning network, and thus determines the multi-task learning network. The weights and biases of the neuron functions in the first M layers of the multi-task learning network are the first target network parameters of S504, and the weights and biases of the neuron functions outside those M layers are the second target network parameters of S504.
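For illustration only, under the same assumed layer-dictionary layout as the earlier sketch, the assembly of the initial multi-task learning network of fig. 5 could look like this: the trunk structure is copied from the shared front layers with zeroed parameters, while each output branch keeps its trained parameters:

```python
def build_initial_multitask_network(target_single_task_nets, M=3):
    """Assemble the initial multi-task learning network of fig. 5: an M-layer
    initial pre-training layer whose network parameters are all 0, plus one
    initial single-task output layer per target network, whose task processing
    layers beyond the first M keep their trained parameters unchanged."""
    trunk = [{"type": layer["type"], "neurons": layer["neurons"],
              "weights": 0.0, "bias": 0.0}                    # parameters zeroed
             for layer in target_single_task_nets[0][:M]]
    branches = [net[M:] for net in target_single_task_nets]   # kept as trained
    return {"initial_pre_training_layer": trunk,
            "initial_single_task_output_layers": branches}
```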
Optionally, the first training input data in the first training data set may also be input into the initial multi-task learning network composed of the initial pre-training layer from S502 and the initial single-task output layers from S503 to obtain the actual output data of the several tasks; a plurality of errors between the actual output data of the tasks and the corresponding training output labels in the first training data set are calculated; and the final multi-task learning network is determined according to those errors. How the final multi-task learning network is determined according to the magnitudes of the errors is described below with reference to the embodiment shown in fig. 6.
In this embodiment, because the types of the first M task processing layers in the initial single-task learning networks and the numbers of neurons on them are the same, a large degree of correlation exists among the plurality of target single-task learning networks obtained by training them. The multi-task learning network obtained by training on these target single-task learning networks can therefore produce more accurate outputs for each of its tasks. In addition, the multi-task learning network is optimized again through training on the first training data set, so that it can output the data to be processed more accurately, thereby reducing the probability of overfitting when the data to be processed are handled. Further, when the task data to be processed are input into the multi-task learning network of this embodiment, both the processing accuracy and the processing efficiency of the task data to be processed are improved.
Fig. 6 is a flowchart of a task data processing method according to yet another embodiment. This embodiment relates to the process by which the computer device trains the network parameters of the neuron functions on the initial pre-training layer and on each initial single-task output layer according to the first training data set, obtaining the first target network parameters corresponding to the initial pre-training layer and the second target network parameters corresponding to each initial single-task output layer; that is, it is another specific process for implementing S504. On the basis of the embodiment shown in fig. 4, the process includes:
S601, determining an initial multi-task learning network according to the initial pre-training layer and each initial single-task output layer.
Specifically, referring to fig. 5, the initial pre-training layer serves as the front network of the initial multi-task learning network (that is, the several tasks included in the multi-task learning network share the initial pre-training layer), and the initial single-task output layers are connected in parallel after the initial pre-training layer, thereby determining the initial multi-task learning network.
S602, executing training processing operation, wherein the training processing operation comprises: inputting first training input data in the first training data set into the initial multi-task learning network for processing, determining actual output data of each initial single-task output layer, and calculating a first error between the actual output data of each initial single-task output layer and a first training output label in the first training data set according to a preset error loss function.
Specifically, the preset error loss functions are a plurality of error loss functions corresponding to a plurality of tasks included in the multi-task learning network, where the plurality of preset error loss functions may be the same or different, and this embodiment is not limited thereto.
The specific procedure of the training processing operation is as follows: the first training input data in the first training data set are input into the initial multi-task learning network shown in fig. 5, yielding the actual output data of each initial single-task output layer; the actual output data of one initial single-task output layer and the corresponding first training output label in the first training data set are fed into the corresponding preset error loss function, which gives the first error between that layer's actual output data and the first training output label; and in the same way a first error is calculated for every initial single-task output layer.
S603, judging whether the first error is smaller than a first preset threshold value.
Specifically, the first preset threshold is the maximum error allowed between the output corresponding to each single task in the multi-task learning network and the theoretical output value corresponding to the input; the first preset thresholds corresponding to the individual single tasks may be the same or different, and this embodiment is not limited in this respect.
S603a, if yes, determining the initial multi-task learning network as the multi-task learning network.
When the first errors corresponding to all tasks in the multiple tasks included in the multi-task learning network are smaller than a first preset threshold value, the initial multi-task learning network can accurately output all the tasks, and the initial multi-task learning network is determined as the multi-task learning network.
S603b, if not, adjusting the network parameters of each neuron function in the initial multi-task learning network according to the first error and the error loss function to obtain an adjusted learning network; and taking the adjusted learning network as a new initial multi-task learning network, and returning to execute the training processing operation until the adjustment times reach a first preset time.
Specifically, when the first error corresponding to one of the tasks included in the initial multi-task learning network is smaller than the first preset threshold, the initial multi-task learning network can already output that single task accurately, and the network parameters corresponding to that task are left unchanged. Otherwise, the initial multi-task learning network cannot output the single task accurately, and the learning network corresponding to that task needs to be adjusted, specifically as follows:
The first error is distributed back onto the network parameters of each neuron function in the learning network corresponding to that error (hereinafter, the first learning network) according to the preset error loss function, and the multi-task learning network whose neuron-function parameters have been adjusted once is called a new initial multi-task learning network. The first training input data in the first training data set are then input into the new initial multi-task learning network for processing, and the first error between the actual output data of the first learning network and the first training output label in the first training data set is determined according to the preset error loss function corresponding to the first learning network. Finally, the network parameters in the new initial multi-task learning network are adjusted according to the first error and the preset error loss function corresponding to the first learning network, until the number of adjustments reaches a first preset number.
S604a, if a first error between the actual output data of the learning network after being adjusted for the first preset number of times and the first training output label is smaller than the first preset threshold, determining that the learning network after being adjusted for the first preset number of times is the multi-task learning network.
S604b, if the first error between the actual output data of the learning network adjusted the first preset number of times and the first training output label is not smaller than the first preset threshold, adding at least one single-task output layer to the learning network adjusted the first preset number of times to obtain a learning network with an increased number of layers; taking the learning network with the increased number of layers as a new initial multi-task learning network, returning to the training processing operation until the number of adjustments reaches a second preset number, and determining the learning network adjusted the second preset number of times as the multi-task learning network.
Specifically, in practical applications it may happen that, after the network parameters of the initial multi-task learning network have been adjusted the first preset number of times, the first error corresponding to at least one task in the new initial multi-task learning network is still not smaller than the first preset threshold, which means the new initial multi-task learning network cannot output that task accurately. The network structure of the learning network corresponding to that first error (i.e., the first learning network) therefore needs to be adjusted; usually, at least one single-task output layer is added to it. For example, if the first error corresponding to TaskA is still not smaller than the first preset threshold, at least one single-task output layer is added after the initial single-task output layer corresponding to TaskA. Because the network parameters of the neuron functions in the newly added single-task output layer are all 0 or in a default state, the enlarged initial multi-task learning network still needs to be trained: it is taken as a new initial multi-task learning network, the first training input data in the first training data set are input into it for processing, and the first error between the actual output data of the enlarged single-task output branch and the first training output label in the first training data set is determined according to the preset error loss function corresponding to the task whose first error was not smaller than the first preset threshold. Finally, the network parameters in the new initial multi-task learning network are adjusted according to that first error and that preset error loss function until the number of adjustments reaches a second preset number, and the learning network whose parameters have been adjusted the second preset number of times is taken as the multi-task learning network.
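A runnable toy of the threshold logic in S602 through S604b, offered purely as illustration: each task head is reduced to a linear map on a shared input, and the names, thresholds, and simplifications are assumptions of this sketch:

```python
import numpy as np

def training_processing_operation(heads, samples, threshold=1e-4,
                                  first_preset=500, lr=0.05):
    """Toy version of S602-S603b: adjust each task head until its first
    error drops below the first preset threshold or the first preset
    number of adjustments is reached.  Returns the heads and the tasks
    that would, per S604b, receive an extra single-task output layer."""
    errors = []
    for _ in range(first_preset):
        errors = []
        for (W, b), (x, yp) in zip(heads, samples):
            y = W @ x + b
            g = 2.0 * (y - yp)                  # distribute the first error
            W -= lr * np.outer(g, x); b -= lr * g
            errors.append(float(np.sum((W @ x + b - yp) ** 2)))
        if all(e < threshold for e in errors):  # S603a: accurate on every task
            return heads, []
    grow = [i for i, e in enumerate(errors) if e >= threshold]
    return heads, grow                          # S604b: grow these branches

# Usage: two tasks sharing a 2-dimensional input.
heads = [(np.zeros((3, 2)), np.zeros(3)), (np.zeros((2, 2)), np.zeros(2))]
samples = [(np.array([1.0, 0.5]), np.array([0.2, 0.4, 0.6])),
           (np.array([1.0, 0.5]), np.array([0.1, 0.9]))]
heads, grow = training_processing_operation(heads, samples)
print(grow)   # indices of tasks still above the threshold, if any
```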
It should be noted that, in this embodiment, the first preset number and the second preset number may be the same or different, which is not limited in this embodiment, and specific values of the first preset number and the second preset number are also not limited in this embodiment.
Fig. 7 is a flowchart of a task data processing method according to yet another embodiment. On the basis of the embodiment shown in fig. 3, as another possible implementation of training the target single-task learning network, the training process by which each initial single-task learning network is trained may further include the following steps; that is, S201 may include:
S701, selecting N groups of third training data sets from the training samples using a random sampling algorithm with replacement, where each group of third training data sets includes third training input data and third training output labels.
Specifically, the random sampling algorithm with replacement is usually the bagging algorithm, although other sampling-with-replacement algorithms may also be used; this embodiment is not limited in this respect. The training samples include a plurality of training data (training input data together with training output labels) for training the multi-task learning network; that is, the training data in the training samples are a plurality of the first training data sets described above, and a third training data set is a first training data set selected from the training samples by sampling with replacement.
The specific implementation of S701 is as follows: through the sampling-with-replacement algorithm, K training data are drawn with replacement from the training samples, and this draw is repeated N times, yielding N groups of third training data sets, where each group contains K training data.
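A minimal sketch of this draw, outside the patent text (bagging-style sampling with replacement; the function name is illustrative):

```python
import random

def select_third_training_sets(training_samples, N, K, seed=0):
    """Draw, with replacement, K training data from the training samples,
    and repeat the draw N times: N groups of third training data sets,
    each containing K (input, label) training data, duplicates allowed."""
    rng = random.Random(seed)
    return [[rng.choice(training_samples) for _ in range(K)]
            for _ in range(N)]
```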
S702, inputting third training input data in each group of third training data sets to each target single-task learning network respectively for processing, and obtaining actual output data of each target single-task learning network corresponding to each group of third training data.
Specifically, after the third training input data in each group of third training data sets are respectively input into each target single-task learning network, N groups of actual output data are obtained, wherein each group of actual output data comprises actual output data of a plurality of tasks.
S703, for the actual output data of each target single-task learning network corresponding to each group of third training data sets, calculating second errors between the actual output data and the third training output labels according to the error loss function, and performing weighted summation on the second errors to obtain a weighted error corresponding to each group of third training data sets.
Specifically, taking one of the N groups as an example: after the third training input data in that group of third training data sets is input into the P target single-task learning networks respectively (assuming the multi-task learning network covers P tasks), each target single-task learning network produces one actual output data, giving P actual output data in total. The P actual output data and the third training output labels in the third training data set are input into the error loss function (namely, the error loss function corresponding to each target single-task learning network) to obtain P second errors, and the P second errors are weighted and summed to obtain the weighted error. Repeating this calculation for every group yields the N weighted errors corresponding to the N groups of third training data sets.
When performing the weighted summation, the weight coefficients may be set to default values. Optionally, they may instead be set according to the relative magnitudes of the second errors in the group; for example, the smallest of the second errors may be given the largest weight, as in the sketch below.
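A minimal sketch of the weighted-error computation for one group, assuming a generic per-task loss function; the inverse-error weighting is just one concrete reading of the option described above:

```python
import numpy as np

def weighted_group_error(actual_outputs, labels, loss_fn, weights=None):
    """Compute the P second errors for one group of third training data and
    combine them into a single weighted error. With no weights given, the
    smallest error receives the largest weight, as one option above."""
    errors = np.array([loss_fn(o, y) for o, y in zip(actual_outputs, labels)])
    if weights is None:
        inv = 1.0 / (errors + 1e-8)   # smaller error -> larger weight
        weights = inv / inv.sum()
    return float(np.dot(weights, errors))

mse = lambda o, y: float(np.mean((np.asarray(o) - np.asarray(y)) ** 2))
# usage: P = 3 tasks in one group
err = weighted_group_error([[0.9], [0.4], [0.1]], [[1.0], [0.5], [0.0]], mse)
```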
S704, judging whether the weighted error corresponding to each group of third training data sets is smaller than a second preset threshold value.
S704a, if yes, determining a first learning network composed of the plurality of target single-task learning networks as the multi-task learning network.
Specifically, the second preset threshold is the maximum error allowed between the input and the output of the multi-task learning network. If the weighted errors of all N groups are smaller than the second preset threshold, the multi-task learning network can output all tasks accurately. In this case, since the network parameters in each of the N groups of target single-task learning networks are the same, the first learning network formed by any one group of target single-task learning networks is selected as the multi-task learning network, as shown in fig. 8.
S704b, otherwise, adjusting the network parameters in the plurality of target single-task learning networks according to the error loss function and the first weighted error until, when the third training data corresponding to the first weighted error are input into the adjusted target single-task learning networks, the newly calculated first weighted error is smaller than the second preset threshold, and determining the second learning network formed by the adjusted target single-task learning networks as the first to-be-selected multi-task learning network.
Specifically, if some of the N weighted errors are not smaller than the second preset threshold, that is, the corresponding groups of target single-task learning networks cannot yet output the multi-task data accurately, those groups of target single-task learning networks need to be trained further; the weighted errors corresponding to these groups are referred to as first weighted errors.
The process of further training any one of these groups of target single-task learning networks comprises the following steps (see the sketch after this paragraph): the first weighted error is distributed back to each target single-task learning network according to the first weighted error and the error loss function (namely, the error loss function corresponding to each target single-task learning network), so as to adjust the network parameters of each neuron function in each target single-task learning network; the third training input data in the third training data set are then input into the adjusted target single-task learning networks again and the first weighted error is recomputed; this process is repeated until the newly calculated first weighted error is smaller than the second preset threshold, at which point the learning network formed by the adjusted target single-task learning networks is called a second learning network and is determined as the first to-be-selected multi-task learning network.
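A minimal sketch of this retraining loop, under the illustrative assumptions that each target single-task learning network is a callable and that `update_fn` stands in for one back-propagation step (both names are assumptions, not the patent's API); uniform weights are used for brevity:

```python
def retrain_group(networks, third_inputs, third_labels, loss_fn, update_fn,
                  second_threshold, max_rounds=10_000):
    """Repeatedly distribute each network's error back into it, then recompute
    the weighted error, stopping once it falls below the second threshold."""
    for _ in range(max_rounds):
        outputs = [net(third_inputs) for net in networks]
        errors = [loss_fn(o, y) for o, y in zip(outputs, third_labels)]
        weighted = sum(errors) / len(errors)   # uniform weights for the sketch
        if weighted < second_threshold:
            break                              # group becomes a first candidate
        for net, err in zip(networks, errors):
            update_fn(net, err)                # hypothetical one-step adjustment
    return networks
```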
S704c, determining a plurality of target single-task learning networks corresponding to a second weighted error smaller than the second preset threshold among the N weighted errors as a second multi-task learning network to be selected;
Specifically, besides the case described in S704b, another part of the N weighted errors is smaller than the second preset threshold; that is, the corresponding groups of target single-task learning networks can already output the third training data sets accurately. These groups of target single-task learning networks are used as the second to-be-selected multi-task learning network, and the weighted errors corresponding to them are referred to as second weighted errors.
S704d, determining the learning network formed by the first to-be-selected multi-task learning network and the second to-be-selected multi-task learning network as the multi-task learning network.
Optionally, the third training input data in the N groups of third training data sets may instead be input into the plurality of target single-task learning networks, the N weighted errors determined respectively, and the group of target single-task learning networks corresponding to the smallest of the N weighted errors used as the multi-task learning network. Optionally, for how to use the multi-task learning network formed by the first to-be-selected multi-task learning network and the second to-be-selected multi-task learning network (that is, taking the N groups of target single-task learning networks together as the multi-task learning network), reference may be made to the embodiment shown in fig. 9 below.
In the task data processing method provided by this embodiment, because a put-back random sampling algorithm is adopted to select N groups of third training data sets from the training samples, the amount of training data is effectively increased; that is, more training data are used to retrain the plurality of trained target single-task learning networks, so that the target single-task learning networks produce more accurate outputs. Furthermore, the multi-task learning network constructed from the multiple target single-task learning networks in this embodiment can output the data to be processed more accurately. Further, when the task data to be processed are input into the multi-task learning network of this embodiment, the processing accuracy and processing efficiency of the task data to be processed can be improved.
Fig. 9 is a flowchart illustrating a task data processing method according to yet another embodiment. This embodiment relates to a process of inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data, that is, one way of implementing S103. On the basis of the embodiment shown in fig. 7, the method comprises the following steps:
S801, respectively inputting the task data to be processed into the multi-task learning networks to obtain an output data set, wherein the output data set comprises the output data of each first to-be-selected multi-task learning network and the output data of each second to-be-selected multi-task learning network.
S802, at least one learning network group is determined from each first multi-task learning network to be selected and each second multi-task learning network to be selected, wherein the output data of each multi-task learning network to be selected in the same learning network group are the same.
S803, determining the output data of the learning network group with the largest number of identical output data among all the learning network groups as the plurality of target output data.
Specifically, based on the embodiment shown in fig. 7, the multi-task learning network is composed of the first to-be-selected multi-task learning network and the second to-be-selected multi-task learning network, which together correspond to N groups of multiple target single-task learning networks. The task data to be processed are therefore input into each of these networks, giving N groups of output data for the multiple tasks. The groups of single-task learning networks producing identical output data are treated as one learning network group, and the output data of the learning network group with the most identical output data among all the learning network groups are determined as the plurality of target output data. For example, if N is 7 and the output data corresponding to the 7 groups of target single-task learning networks are X1, X2, X2, X3, X3, X3, and X3, respectively, then the group corresponding to X1 forms one learning network group, the two groups corresponding to X2 form one learning network group, and the four groups corresponding to X3 form one learning network group. Since the learning network group formed by the four groups corresponding to X3 contains the most identical output data, X3 is taken as the target output data.
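A minimal sketch of this majority-vote step, assuming each group's output is represented as a tuple of per-task outputs:

```python
from collections import Counter

def majority_output(group_outputs):
    """Group identical output tuples together (S802) and return the output
    shared by the largest learning network group (S803)."""
    counts = Counter(tuple(o) for o in group_outputs)
    winner, _ = counts.most_common(1)[0]
    return list(winner)

# the worked example above, N = 7
outputs = [("X1",), ("X2",), ("X2",), ("X3",), ("X3",), ("X3",), ("X3",)]
assert majority_output(outputs) == ["X3"]
```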
In the task data processing method provided by this embodiment, because a put-back random sampling algorithm is adopted to select N groups of third training data sets from the training samples, the amount of training data is effectively increased; that is, more training data are used to retrain the plurality of trained target single-task learning networks, so that the target single-task learning networks produce more accurate outputs. Furthermore, the multi-task learning network constructed from the multiple target single-task learning networks in this embodiment can output the data to be processed more accurately. Further, when the task data to be processed are input into the multi-task learning network of this embodiment, the processing accuracy and processing efficiency of the task data to be processed can be improved.
Fig. 10 is a flowchart illustrating a task data processing method according to another embodiment. This embodiment relates to another process of inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data, that is, another way of implementing S103. On the basis of the embodiment shown in fig. 7, the method comprises the following steps:
S901, respectively inputting the task data to be processed into the multi-task learning networks to obtain an output data set, wherein the output data set comprises the output data of each first to-be-selected multi-task learning network and the output data of each second to-be-selected multi-task learning network.
S902, performing weighted summation operation on the output data of each first to-be-selected multi-task learning network and the output data of each second to-be-selected multi-task learning network to obtain a plurality of target output data.
Specifically, based on the embodiment shown in fig. 7, the multi-task learning network is composed of the first to-be-selected multi-task learning network and the second to-be-selected multi-task learning network, which together correspond to N groups of multiple target single-task learning networks. The task data to be processed are therefore input into each of these networks, giving the output data corresponding to the N groups of multiple tasks, that is, the output data set. A weighted summation is then performed on the output data of each first to-be-selected multi-task learning network and each second to-be-selected multi-task learning network to obtain the plurality of target output data. For example, if N is 7 and the output data corresponding to the 7 groups of target single-task learning networks are X1, X2, X2, X3, X3, X3, and X3, respectively, the target output data is a1·X1 + 2·a2·X2 + 4·a3·X3, where a1, a2, and a3 are the weight coefficients, and X1, X2, and X3 each comprise the output data of the multiple tasks. The weight coefficients of the weighted summation may be default values, or may be set according to the error magnitudes of the N groups of target single-task learning networks measured in advance on test data.
Optionally, the output data of each first to-be-selected multi-task learning network and each second to-be-selected multi-task learning network may instead simply be averaged, with the average value used as the target output data; both options are sketched below.
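A minimal sketch covering both options (weighted summation and the plain average), assuming each group's output is a numeric vector with one entry per task:

```python
import numpy as np

def combine_outputs(group_outputs, weights=None):
    """Weighted summation over the N groups' output vectors (S902); with no
    weights given this reduces to the plain average mentioned above."""
    outputs = np.asarray(group_outputs, dtype=float)   # shape (N, num_tasks)
    if weights is None:
        weights = np.full(len(outputs), 1.0 / len(outputs))
    return np.average(outputs, axis=0, weights=weights)

# usage: N = 3 groups, 2 tasks each
combined = combine_outputs([[0.9, 1.1], [1.0, 0.9], [1.1, 1.0]])
```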
In the task data processing method provided by this embodiment, because a put-back random sampling algorithm is adopted to select N groups of third training data sets from the training samples, the amount of training data is effectively increased; that is, more training data are used to retrain the plurality of trained target single-task learning networks, so that the target single-task learning networks produce more accurate outputs. Furthermore, the multi-task learning network constructed from the multiple target single-task learning networks in this embodiment can output the data to be processed more accurately. Further, when the task data to be processed are input into the multi-task learning network of this embodiment, the processing accuracy and processing efficiency of the task data to be processed can be improved.
It should be understood that, although the steps in the flowcharts of figs. 2, 3, 4, 6, 7, 9, and 10 are shown in the order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 2, 3, 4, 6, 7, 9, and 10 may include multiple sub-steps or stages that are not necessarily completed at the same moment but may be executed at different moments, and their execution order is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in fig. 11, a schematic structural diagram of a processing apparatus for task data is provided, including: the device comprises an acquisition module 10 and a determination module 11, wherein:
the acquiring module 10 is used for acquiring task data to be processed;
the determining module 11 is configured to input the task data to be processed to a preset multitask learning network for processing, so as to obtain a plurality of target output data;
the multi-task learning network is a neural network obtained by training based on a plurality of trained target single-task learning networks and a first training data set, and is provided with a plurality of task processing layers, wherein one task processing layer comprises at least one neuron function; the target single-task learning network is a neural network obtained by training based on an initial single-task learning network and a second training data set;
the first training data set includes first training input data and first training output labels for the multi-tasking learning network, and the second training data set includes second training input data and second training output labels for each initial single-tasking learning network.
The processing apparatus for task data provided in this embodiment may execute the above embodiment of the method shown in fig. 2, and the implementation principle and the technical effect are similar, which are not described herein again.
In an embodiment, on the basis of the embodiment shown in fig. 11, as shown in fig. 12, the processing device for task data further includes: a first training module 12 and a second training module 13. Wherein:

the first training module 12 is configured to train network parameters in a plurality of initial single-task learning networks according to second training input data and second training output labels in the second training data set, so as to obtain a plurality of target single-task learning networks;
and the second training module 13 is configured to train to obtain the multi-task learning network according to the first training data set and the plurality of target single-task learning networks.
The processing apparatus for task data provided in this embodiment may execute the above-mentioned embodiment of the method shown in fig. 3, and the implementation principle and the technical effect are similar, which are not described herein again.
In an embodiment, on the basis of the embodiment shown in fig. 12, as shown in fig. 13, optionally, the second training module 13 further includes a first determining unit 131, a second determining unit 132, a training unit 133, and a third determining unit 134. Wherein:
the first training module 12 is further configured to train network parameters in each initial single-task learning network according to second training input data in the second training data set, the second training output labels, the number of preset task processing layers, the type of the preset task processing layers, and the number of neuron functions on the preset task processing layers, so as to obtain a plurality of target single-task learning networks;
the types of the task processing layers in the same layer number in the front M layers in each target single-task learning network are the same, and the number of the neuron functions on the task processing layers in the same layer number in the front M layers is the same.
A first determining unit 131, configured to determine an initial pre-training layer whose number of layers is M according to the type of each task processing layer in the first M layers of the multiple target single-task learning networks and the number of neuron functions on each task processing layer in the first M layers; wherein the network parameter of each neuron function on the initial pre-training layer is equal to 0;
a second determining unit 132, configured to determine a remaining task processing layer in each of the target single-task learning networks as an initial single-task output layer; all the rest task processing layers in a target single-task learning network correspond to an initial single-task output layer, and the network parameters of each neuron function on the initial single-task output layer are the same as the network parameters of the neuron functions on the rest task processing layers in the corresponding target single-task learning network;
a training unit 133, configured to train, according to the first training data set, a network parameter of a neuron function on the initial pre-training layer and a network parameter of each neuron function on each initial single task output layer, so as to obtain a first target network parameter corresponding to the initial pre-training layer and a second target network parameter corresponding to each initial single task output layer;
a third determining unit 134, configured to determine the multi-task learning network according to the first target network parameter, the second target network parameter, the initial pre-training layer, and each initial single-task output layer.
The processing apparatus for task data provided in this embodiment may execute the above-mentioned embodiment of the method shown in fig. 4, and the implementation principle and the technical effect are similar, which are not described herein again.
In an embodiment, on the basis of the embodiment shown in fig. 13, as shown in fig. 14, the training unit 133 includes: a training subunit 31, an execution subunit 32, and a judgment subunit 33. Wherein,
a training subunit 31, configured to determine an initial multi-task learning network according to the initial pre-training layer and each of the initial single-task output layers;
an execution subunit 32, configured to execute a training processing operation, where the training processing operation includes: inputting first training input data in the first training data set into the initial multi-task learning network for processing, determining actual output data of each initial single-task output layer, and calculating a first error between the actual output data of each initial single-task output layer and a first training output label in the first training data set according to a preset error loss function;
a determining subunit 33, configured to determine whether the first error is smaller than a first preset threshold;
and when the first error is smaller than a first preset threshold value, the initial multitask learning network is determined as the multitask learning network.
If the first error is not smaller than the first preset threshold, adjusting the network parameters of each neuron function in the initial multi-task learning network according to the first error and the error loss function to obtain an adjusted learning network;
taking the adjusted learning network as a new initial multi-task learning network, and returning to execute the training processing operation until the adjustment times reach a first preset time;
if a first error between the actual output data of the learning network after the adjustment of the first preset times and the first training output label is smaller than a first preset threshold, determining the learning network after the adjustment of the first preset times as the multi-task learning network;
if the first error between the actual output data of the learning network after the first preset number of adjustments and the first training output label is not smaller than the first preset threshold, adding at least one single-task output layer to the learning network after the first preset number of adjustments to obtain the learning network with an increased number of layers;
and taking the learning network with the increased number of layers as a new initial multi-task learning network, returning to execute the training processing operation until the number of adjustments reaches a second preset number, and determining the learning network adjusted the second preset number of times as the multi-task learning network.
The processing apparatus for task data provided in this embodiment may execute the above-mentioned embodiment of the method shown in fig. 6, and the implementation principle and the technical effect are similar, and are not described herein again.
In an embodiment, on the basis of the embodiment shown in fig. 11, as shown in fig. 15, the processing device for task data further includes: a selection module 14, an input module 15, a summation module 16 and a decision module 17. Wherein:
a selecting module 14, configured to select N sets of third training data sets from the training samples by using a put-back random sampling algorithm; wherein each set of third training data sets comprises third training input data and third training output labels;
the input module 15 is configured to input third training input data in each set of third training data into each target single-task learning network respectively for processing, so as to obtain actual output data of each target single-task learning network corresponding to each set of third training data;
a summation module 16, configured to, for the actual output data of each target single-task learning network corresponding to each group of third training data sets, calculate second errors between the actual output data and the third training output labels according to the error loss function, and perform weighted summation on the second errors to obtain a weighted error corresponding to each group of third training data sets;
a judging module 17, configured to judge whether a weighted error corresponding to each group of third training data sets is smaller than a second preset threshold;
and when the N weighted errors corresponding to the N groups of third training data sets are all smaller than the second preset threshold, determining a first learning network formed by the target single-task learning networks as the multi-task learning network.
When at least one first weighted error in the N weighted errors corresponding to the N groups of third training data sets is not smaller than a second preset threshold, the method is used for adjusting network parameters in the multiple target single-task learning networks according to an error loss function and the first weighted error until a new calculated first weighted error is smaller than the second preset threshold when the third training data set corresponding to the first weighted error is input into the adjusted multiple target single-task learning networks, and determining a second learning network formed by the adjusted multiple target single-task learning networks as a first multi-task learning network to be selected;
determining a plurality of target single-task learning networks corresponding to a second weighted error smaller than the second preset threshold among the N weighted errors as the second multi-task learning network to be selected;
and determining a learning network formed by the first to-be-selected multi-task learning network and the second to-be-selected multi-task learning network as the multi-task learning network.
The processing apparatus for task data provided in this embodiment may execute the above-mentioned embodiment of the method shown in fig. 7, and the implementation principle and the technical effect are similar, which are not described herein again.
In an embodiment, based on the embodiment shown in fig. 15, as shown in fig. 16, the determining module 11 includes: a first input unit 111, a first determination unit 112, and a second determination unit 113, wherein:
the first input unit 111 is configured to input task data to be processed into the multitask learning networks respectively to obtain an output data set, where the output data set includes output data of each first multitask learning network to be selected and output data of each second multitask learning network to be selected;
a first determining unit 112, configured to determine at least one learning network group from each first to-be-selected multi-task learning network and each second to-be-selected multi-task learning network, where output data of each to-be-selected learning network in the same learning network group is the same;
a second determining unit 113, configured to determine, as the target output data, output data of a learning network group having the largest number of output data among all the learning network groups.
The processing apparatus for task data provided in this embodiment may execute the above-mentioned embodiment of the method shown in fig. 9, and the implementation principle and the technical effect are similar, and are not described herein again.
In an embodiment, based on the embodiment shown in fig. 15, as shown in fig. 17, the determining module 11 includes: a second input unit 114 and a summing unit 115, wherein:
a second input unit 114, configured to input task data to be processed into the multitask learning networks respectively, so as to obtain an output data set, where the output data set includes output data of each first multitask learning network to be selected and output data of each second multitask learning network to be selected;
and the summing unit 115 is configured to perform weighted summing operation on the output data of each first to-be-selected multi-task learning network and the output data of each second to-be-selected multi-task learning network to obtain the plurality of target output data.
The processing apparatus for task data provided in this embodiment may execute the above-mentioned embodiment of the method shown in fig. 10, and the implementation principle and the technical effect are similar, and are not described herein again.
For specific limitations of the processing device of the task data, reference may be made to the above limitations of the processing method of the task data, which are not repeated here. Each module in the processing device of the task data may be implemented wholly or partially by software, hardware, or a combination thereof. The modules may be embedded in hardware form in, or independent of, a processor in the computer device, or stored in software form in a memory in the computer device, so that the processor can invoke and execute the operations corresponding to the modules.
In one embodiment, a computer device is further provided, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the following steps when executing the computer program:
acquiring task data to be processed;
inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data;
the multi-task learning network is a neural network obtained by training based on a plurality of trained target single-task learning networks and a first training data set, and is provided with a plurality of task processing layers, wherein one task processing layer comprises at least one neuron function; the target single-task learning network is a neural network obtained by training based on an initial single-task learning network and a second training data set;
the first training data set includes first training input data and first training output labels for the multi-tasking learning network, and the second training data set includes second training input data and second training output labels for each initial single-tasking learning network.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring task data to be processed;
inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data;
the multi-task learning network is a neural network obtained by training based on a plurality of trained target single-task learning networks and a first training data set, and is provided with a plurality of task processing layers, wherein one task processing layer comprises at least one neuron function; the target single-task learning network is a neural network obtained by training based on an initial single-task learning network and a second training data set;
the first training data set includes first training input data and first training output labels for the multi-tasking learning network, and the second training data set includes second training input data and second training output labels for each initial single-tasking learning network.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing the relevant hardware; the program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments can be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction in a combination of these technical features, it should be considered within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the invention. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (14)

1. A method for processing task data is characterized by comprising the following steps:
acquiring task data to be processed;
inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data;
the multi-task learning network is a neural network obtained by training based on a plurality of trained target single-task learning networks and a first training data set, and is provided with a plurality of task processing layers, wherein one task processing layer comprises at least one neuron function; the target single-task learning network is a neural network obtained by training based on an initial single-task learning network and a second training data set;
the first training data set includes first training input data and first training output labels for the multi-tasking learning network, and the second training data set includes second training input data and second training output labels for each initial single-tasking learning network.
2. The method of claim 1, wherein the multitask learning network is constructed by:
training network parameters in a plurality of initial single-task learning networks according to second training input data and second training output labels in the second training data set to obtain a plurality of target single-task learning networks;
and training to obtain the multi-task learning network according to the first training data set and the plurality of target single-task learning networks.
3. The method of claim 2, wherein training network parameters in a plurality of initial single-task learning networks according to second training input data and second training output labels in the second training data set to obtain a plurality of target single-task learning networks comprises:
training network parameters in each initial single-task learning network according to second training input data in the second training data set, the second training output labels, the number of preset task processing layers, the type of the preset task processing layers and the number of neuron functions on the preset task processing layers to obtain a plurality of target single-task learning networks;
the types of the task processing layers at the same layer position in the first M layers of each target single-task learning network are the same, and the numbers of neuron functions on the task processing layers at the same layer position in the first M layers are the same.
4. The method of claim 3, wherein training the multi-task learning network based on the first training data set and a plurality of the target single-task learning networks comprises:
determining an initial pre-training layer with M layers according to the type of each task processing layer in the first M layers of the target single-task learning networks and the number of neuron functions on each task processing layer in the first M layers; wherein the network parameter of each neuron function on the initial pre-training layer is equal to 0;
determining the residual task processing layer in each target single task learning network as an initial single task output layer; all the rest task processing layers in a target single-task learning network correspond to an initial single-task output layer, and the network parameters of each neuron function on the initial single-task output layer are the same as the network parameters of the neuron functions on the rest task processing layers in the corresponding target single-task learning network;
training the network parameters of the neuron functions on the initial pre-training layer and the network parameters of each neuron function on each initial single task output layer according to the first training data set to obtain first target network parameters corresponding to the initial pre-training layer and second target network parameters corresponding to each initial single task output layer;
and determining the multi-task learning network according to the first target network parameter, the second target network parameter, the initial pre-training layer and each initial single-task output layer.
5. The method of claim 4, wherein the training the network parameters of the neuron functions on the initial pre-training layer and the network parameters of each neuron function on each initial single task output layer according to the first training data set to obtain a first target network parameter corresponding to the initial pre-training layer and a second target network parameter corresponding to each initial single task output layer comprises:
determining an initial multi-task learning network according to the initial pre-training layer and each initial single-task output layer;
performing a training processing operation, wherein the training processing operation comprises: inputting first training input data in the first training data set into the initial multi-task learning network for processing, determining actual output data of each initial single-task output layer, and calculating a first error between the actual output data of each initial single-task output layer and a first training output label in the first training data set according to a preset error loss function;
judging whether the first error is smaller than a first preset threshold value or not;
and if so, determining the initial multi-task learning network as the multi-task learning network.
6. The method of claim 5, further comprising:
if the first error is not smaller than the first preset threshold, adjusting the network parameters of each neuron function in the initial multi-task learning network according to the first error and the error loss function to obtain an adjusted learning network;
taking the adjusted learning network as a new initial multi-task learning network, and returning to execute the training processing operation until the adjustment times reach a first preset time;
and if the first error between the actual output data of the learning network after the adjustment of the first preset times and the first training output label is smaller than the first preset threshold, determining the learning network after the adjustment of the first preset times as the multi-task learning network.
7. The method of claim 6, further comprising:
if the first error between the actual output data of the learning network after the first preset number of adjustments and the first training output label is not smaller than the first preset threshold, adding at least one single-task output layer to the learning network after the first preset number of adjustments to obtain the learning network with an increased number of layers;
and taking the learning network with the increased number of layers as a new initial multi-task learning network, returning to execute the training processing operation until the number of adjustments reaches a second preset number, and determining the learning network adjusted the second preset number of times as the multi-task learning network.
8. The method of claim 2, wherein training the multi-task learning network based on the first training dataset and a plurality of the target single-task learning networks comprises:
selecting N groups of third training data sets from the training samples by adopting a put-back random sampling algorithm; wherein each set of third training data sets comprises third training input data and third training output labels;
respectively inputting third training input data in each group of third training data sets into each target single-task learning network for processing to obtain actual output data of each target single-task learning network corresponding to each group of third training data;
calculating second errors of the actual output data of each target single-task learning network and the third training output labels according to an error loss function aiming at the actual output data of each target single-task learning network corresponding to each group of third training data sets, and performing weighted summation on each second error to obtain a weighted error corresponding to each group of third training data;
judging whether the weighted error corresponding to each group of third training data sets is smaller than a second preset threshold value or not;
and if so, determining a first learning network formed by the plurality of target single-task learning networks as the multi-task learning network.
9. The method of claim 8, further comprising:
if at least one first weighted error in N weighted errors corresponding to the N groups of third training data sets is not smaller than a second preset threshold, network parameters in the multiple target single-task learning networks are adjusted according to an error loss function and the first weighted error until a new calculated first weighted error is smaller than the second preset threshold when the third training data set corresponding to the first weighted error is input into the adjusted multiple target single-task learning networks, and a second learning network formed by the adjusted multiple target single-task learning networks is determined as a first multi-task learning network to be selected;
determining a plurality of target single-task learning networks corresponding to a second weighted error smaller than the second preset threshold among the N weighted errors as second multi-task learning networks to be selected;
and determining a learning network formed by the first to-be-selected multi-task learning network and the second to-be-selected multi-task learning network as the multi-task learning network.
10. The method according to claim 9, wherein the inputting the task data to be processed into a preset multitask learning network for processing to obtain a plurality of target output data comprises:
respectively inputting task data to be processed into the multi-task learning networks to obtain output data sets, wherein the output data sets comprise output data of each first multi-task learning network to be selected and output data of each second multi-task learning network to be selected;
determining at least one learning network group from each first to-be-selected multi-task learning network and each second to-be-selected multi-task learning network, wherein the output data of each to-be-selected learning network in the same learning network group is the same;
and determining the output data of the learning network group with the largest number of output data in all the learning network groups as the target output data.
11. The method according to claim 9, wherein the inputting the task data to be processed into a preset multitask learning network for processing to obtain a plurality of target output data comprises:
respectively inputting task data to be processed into the multi-task learning networks to obtain output data sets, wherein the output data sets comprise output data of each first multi-task learning network to be selected and output data of each second multi-task learning network to be selected;
and performing weighted summation operation on the output data of each first to-be-selected multi-task learning network and the output data of each second to-be-selected multi-task learning network to obtain a plurality of target output data.
12. A device for processing task data, comprising:
the acquisition module is used for acquiring task data to be processed;
the determining module is used for inputting the task data to be processed into a preset multi-task learning network for processing to obtain a plurality of target output data;
the multi-task learning network is a neural network obtained by training based on a plurality of trained target single-task learning networks and a first training data set, and is provided with a plurality of task processing layers, wherein one task processing layer comprises at least one neuron function; the target single-task learning network is a neural network obtained by training based on an initial single-task learning network and a second training data set;
the first training data set includes first training input data and first training output labels for the multi-tasking learning network, and the second training data set includes second training input data and second training output labels for each initial single-tasking learning network.
13. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 11 when executing the computer program.
14. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 11.
CN201810378952.5A 2018-04-25 2018-04-25 Method, device and equipment for processing task data and storage medium Active CN108665065B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810378952.5A CN108665065B (en) 2018-04-25 2018-04-25 Method, device and equipment for processing task data and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810378952.5A CN108665065B (en) 2018-04-25 2018-04-25 Method, device and equipment for processing task data and storage medium

Publications (2)

Publication Number Publication Date
CN108665065A true CN108665065A (en) 2018-10-16
CN108665065B CN108665065B (en) 2020-08-04

Family

ID=63780813

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810378952.5A Active CN108665065B (en) 2018-04-25 2018-04-25 Method, device and equipment for processing task data and storage medium

Country Status (1)

Country Link
CN (1) CN108665065B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3218890A1 (en) * 2014-11-13 2017-09-20 NEC Laboratories America, Inc. Hyper-class augmented and regularized deep learning for fine-grained image classification
CN106529402A (en) * 2016-09-27 2017-03-22 中国科学院自动化研究所 Multi-task learning convolutional neural network-based face attribute analysis method
CN106503669A (en) * 2016-11-02 2017-03-15 重庆中科云丛科技有限公司 A kind of based on the training of multitask deep learning network, recognition methods and system
CN106845549A (en) * 2017-01-22 2017-06-13 珠海习悦信息技术有限公司 A kind of method and device of the scene based on multi-task learning and target identification
CN107145867A (en) * 2017-05-09 2017-09-08 电子科技大学 Face and face occluder detection method based on multitask deep learning
CN107480730A (en) * 2017-09-05 2017-12-15 广州供电局有限公司 Power equipment identification model construction method and system, the recognition methods of power equipment

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110136828A (en) * 2019-05-16 2019-08-16 杭州健培科技有限公司 A method of medical image multitask auxiliary diagnosis is realized based on deep learning
CN110766231A (en) * 2019-10-30 2020-02-07 上海天壤智能科技有限公司 Crime prediction method and system based on multi-head neural network
CN111881968A (en) * 2020-07-22 2020-11-03 平安科技(深圳)有限公司 Multi-task classification method and device and related equipment
CN111881968B (en) * 2020-07-22 2024-04-09 平安科技(深圳)有限公司 Multi-task classification method and device and related equipment
CN112288075A (en) * 2020-09-29 2021-01-29 华为技术有限公司 Data processing method and related equipment
WO2022068627A1 (en) * 2020-09-29 2022-04-07 华为技术有限公司 Data processing method and related device
CN112288075B (en) * 2020-09-29 2024-02-02 华为技术有限公司 Data processing method and related equipment
CN112488098A (en) * 2020-11-16 2021-03-12 浙江新再灵科技股份有限公司 Training method of target detection model
CN113191201A (en) * 2021-04-06 2021-07-30 上海夏数网络科技有限公司 Vision-based intelligent identification method and system for male and female chicken
CN114519381A (en) * 2021-12-31 2022-05-20 上海仙途智能科技有限公司 Sensing method and device based on multitask learning network, storage medium and terminal
CN114519381B (en) * 2021-12-31 2024-09-17 上海仙途智能科技有限公司 Sensing method and device based on multi-task learning network, storage medium and terminal

Also Published As

Publication number Publication date
CN108665065B (en) 2020-08-04

Similar Documents

Publication Publication Date Title
CN108665065B (en) Method, device and equipment for processing task data and storage medium
CN110866190B (en) Method and device for training neural network model for representing knowledge graph
CN110930417B (en) Training method and device for image segmentation model, and image segmentation method and device
CN109063742B (en) Butterfly identification network construction method and device, computer equipment and storage medium
CN109146076A (en) model generating method and device, data processing method and device
CN109919183B (en) Image identification method, device and equipment based on small samples and storage medium
US11585918B2 (en) Generative adversarial network-based target identification
CN111598213B (en) Network training method, data identification method, device, equipment and medium
KR20210032140A (en) Method and apparatus for performing pruning of neural network
CN115618941A (en) Training refined machine learning models
CN111738269B (en) Model training method, image processing device, model training apparatus, and storage medium
CN112183295A (en) Pedestrian re-identification method and device, computer equipment and storage medium
CN110473592A (en) The multi-angle of view mankind for having supervision based on figure convolutional network cooperate with lethal gene prediction technique
CN112037862B (en) Cell screening method and device based on convolutional neural network
CN114093422B (en) Prediction method and system for interaction between miRNA and gene based on multiple relationship graph rolling network
CN112132278A (en) Model compression method and device, computer equipment and storage medium
CN113642652A (en) Method, device and equipment for generating fusion model
CN117153260A (en) Spatial transcriptome data clustering method, device and medium based on contrast learning
CN113626610A (en) Knowledge graph embedding method and device, computer equipment and storage medium
CN115496144A (en) Power distribution network operation scene determining method and device, computer equipment and storage medium
KR20230103206A (en) Method for performing continual learning using representation learning and apparatus thereof
CN113283388A (en) Training method, device and equipment of living human face detection model and storage medium
CN115496227A (en) Method for training member reasoning attack model based on federal learning and application
US11467728B2 (en) Storage device using neural network and operating method for automatic redistribution of information and variable storage capacity based on accuracy-storage capacity tradeoff thereof
CN114780407A (en) Test data generation method, test data generation device, server, storage medium, and program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant