
CN109389207A - A kind of adaptive neural network learning method and nerve network system - Google Patents

A kind of adaptive neural network learning method and nerve network system

Info

Publication number
CN109389207A
CN109389207A CN201811173901.5A CN201811173901A
Authority
CN
China
Prior art keywords
output
layer
neural network
knowledge
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811173901.5A
Other languages
Chinese (zh)
Inventor
孙兴波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University of Science and Engineering
Original Assignee
Sichuan University of Science and Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University of Science and Engineering filed Critical Sichuan University of Science and Engineering
Priority to CN201811173901.5A priority Critical patent/CN109389207A/en
Publication of CN109389207A publication Critical patent/CN109389207A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an adaptive neural network learning method and a neural network system. The neural network system comprises an input layer, a hidden layer, an output layer and a knowledge base. The method comprises: (1) creating and initializing the knowledge base, adjusting the weights of the hidden layer and the output layer so that the output-result similarity meets the convergence condition; (2) on-line adaptive learning: for a certain input data, taking a group of connection weights as initial values and obtaining an output result using the learning algorithm; (3) judging whether the similarity between the output result and the corresponding expected output meets the requirement: if so, outputting the result; otherwise, repeating step (2), and if all the knowledge in the knowledge base has been gone through without a match, regarding the data as a sample of a new knowledge; (4) adjusting the connection weights as in step (1) and adding the new knowledge to the knowledge base. The method of the invention can process and distinguish internal and external data and new and old data, realizing online adaptive learning and recognition.

Description

Self-adaptive neural network learning method and neural network system
Technical Field
The invention belongs to the technical field of artificial intelligence, and particularly relates to a self-adaptive neural network learning method and a neural network system.
Background
Within the discipline of artificial intelligence, research on neural networks has been successfully applied in a considerable number of fields, such as decision support, face recognition, knowledge-base systems, expert systems and emotional robots. In traditional research, most models work in already-interpreted domains: for an interpreted context, the system designer typically imposes implicit a priori conventions, under which it is difficult to transform the context, objectives or representation as the problem-solving process progresses.
At present, the main means of simulating image thinking is to simulate the connection mechanism of the artificial neural network. In terms of computation, the connectionist approach opens up a new path, namely parallel processing and distributed representation. Specifically, information is represented by a network of many nodes that can be connected to each other. In earlier semantic networks used to express knowledge, one node corresponds to one concept; in an artificial neural network, a concept corresponds to a distribution pattern of nodes and the magnitudes of their weights, so even if the information attributes on individual nodes are distorted, the concept expressed by the network changes little. In addition, information on common elements can also be used to express similar concepts. Nevertheless, simulating visual thinking with these methods, including the neural network connection method described above, has not been completely successful, nor has the symbolic representation of logical thinking.
A neural network system can recognize its input; the knowledge and grammar rules related to the input are expressed by the network structure and the neuron connection weights, which even gives the system a certain degree of fault tolerance. Neural network systems are data driven, yet they cannot tell whether their data sources are internal or external, new or old. After training, if data fed to the system is not contained in the training set, namely a new data sample, the network cannot judge whether the input is new information relative to its existing knowledge and cannot actively learn the new data; instead, it misjudges the new sample using the knowledge obtained during training. Likewise, during online training, the network does not distinguish between the knowledge states it already possesses and externally input data, always processing them without any difference.
Disclosure of Invention
The invention aims to provide a self-adaptive neural network learning method and a neural network system that solve the problem that existing neural network systems can neither distinguish new data from old data nor learn from new data. The method and system can process and distinguish internal and external data and new and old data, and realize the functions of on-line self-adaptive learning and recognition.
In order to achieve the above object, the present invention provides an adaptive neural network learning method, wherein the neural network system comprises: an input layer, a hidden layer, an output layer and a knowledge base K = (S1, S2, …, Sm), where Sb = (V | W | Y), b = 1, 2, …, m, is a neural network connection-weight record; Y is the desired output, and V and W are respectively the connection weights of the network hidden layer and of the output layer.
The method comprises the following steps:
(1) establishing and initializing a knowledge base: acquiring knowledge of a training set, wherein each knowledge corresponds to a group of connection weights V of a hidden layer and a group of connection weights W of an output layer, adjusting the weights of the hidden layer and the output layer in the training process, enabling the similarity E of output results to meet a convergence condition, and determining the final output connection weight;
(2) on-line adaptive learning: searching for any knowledge in the knowledge base to obtain the corresponding hidden-layer connection weights V and output-layer connection weights W; taking this group of connection weights as initial values for certain input data; adjusting the connection weights using the weight adjustment of step (1); and computing continuously for the specified number of learning iterations N to obtain the output result of the actual operation;
(3) judging whether the similarity between the output result and the corresponding expected output meets the requirement: if it does, outputting the result; if it does not, searching for and selecting a new knowledge from the knowledge base in sequence, obtaining a new set of hidden-layer weights, output-layer weights and expected output, and repeating step (2); if all the knowledge in the knowledge base has been gone through and no corresponding knowledge is found, regarding the data as a sample of a new knowledge;
(4) adjusting the connection weights according to the weight adjustment of step (1) until the output-result similarity E meets the convergence condition, combining the corresponding output-layer connection weights, hidden-layer connection weights and expected output into a new knowledge, and adding the new knowledge to the knowledge base K.
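The knowledge base described above, in which each knowledge Sb bundles a hidden-layer weight set V, an output-layer weight set W and the expected output Y, can be sketched as a simple searchable store. This is an illustrative reading of the text, not the applicant's code; all class and method names are our own.

```python
import numpy as np

class Knowledge:
    """One knowledge record Sb = (V | W | Y)."""
    def __init__(self, V, W, Y):
        self.V = np.asarray(V, dtype=float)  # hidden-layer connection weights
        self.W = np.asarray(W, dtype=float)  # output-layer connection weights
        self.Y = np.asarray(Y, dtype=float)  # expected output trained against

class KnowledgeBase:
    """K = (S1, S2, ..., Sm): a searchable, extensible store of knowledge."""
    def __init__(self):
        self._records = []

    def add(self, knowledge):          # step (4): add a new knowledge
        self._records.append(knowledge)

    def __iter__(self):                # steps (2)-(3): go through all knowledge
        return iter(self._records)

    def __len__(self):
        return len(self._records)
```

Steps (2) and (3) then reduce to iterating over the store and trying each record's (V, W) as initial weights, and step (4) to calling `add` with a newly trained record.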
Preferably, the connection weights of the network hidden layer and of the output layer are both matrices.
preferably, in step (1), the algorithm for adjusting the weight includes:
δj = (yj − oj) f′(netj)    (3)
In formulae (1) to (3), wij and vij are the connection weights of the output layer and of the hidden layer at position (i, j) of the matrix, α is the scaling factor, δj is the learning rate of column j in the matrix, f′(netj) is the derivative of the neuron excitation function, yj and oj are respectively the desired output and the actual output of column j in the matrix, and oi is the actual output of row i in the matrix.
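Formula (3) is the delta term of standard back-propagation for the output layer. A minimal sketch, assuming a sigmoid excitation function and an update of the form wij ← wij + α · δj · oi (formulae (1) and (2) are not reproduced in this text, so that update form is our assumption):

```python
import numpy as np

def sigmoid(net):
    return 1.0 / (1.0 + np.exp(-net))

def output_layer_step(W, o_hidden, y, alpha=0.5):
    """One weight adjustment of the output layer.
    W: (p, q) weights; o_hidden: (p,) hidden outputs; y: (q,) desired output.
    Returns the updated weights and the actual output before the update."""
    net = o_hidden @ W                # net_j: net input of output neuron j
    o = sigmoid(net)                  # actual output o_j
    delta = (y - o) * o * (1.0 - o)   # formula (3), with f'(net) = o(1 - o)
    W = W + alpha * np.outer(o_hidden, delta)  # assumed form of formula (1)
    return W, o
```

Repeated application of this step drives the actual output o toward the desired output y.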
Preferably, in step (1), the equation of the convergence condition of the output result similarity E is:
E = ΣEp    (5)
In formulae (4) and (5), Ep denotes the result similarity of the p-th output neuron, ypj and opj are respectively the expected output and the actual output at position (p, j) of the matrix, and E denotes the overall result similarity, used to judge whether the network meets the convergence requirement.
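Formula (4) is not reproduced in this text; assuming the usual squared-error form Ep = ½ Σj (ypj − opj)², the overall result similarity E of formula (5) can be sketched as:

```python
import numpy as np

def result_similarity(Y, O):
    """E = sum over p of Ep, with Ep = 0.5 * sum_j (y_pj - o_pj)^2
    (assumed form of formula (4)); Y and O are (samples, outputs)
    matrices of expected and actual outputs. E == 0 means an exact match."""
    Y = np.asarray(Y, dtype=float)
    O = np.asarray(O, dtype=float)
    E_p = 0.5 * np.sum((Y - O) ** 2, axis=1)  # per-neuron-row similarity Ep
    return float(np.sum(E_p))                  # formula (5): E = sum of Ep
```

The convergence check of step (1) and the 20% and 5% checks of steps (3) and (4) then amount to comparing E against a threshold.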
Preferably, in the step (1), the similarity requirement is satisfied when the result similarity E is lower than 20% of the expected output result; in step (4), the similarity requirement is satisfied when the result similarity E is lower than 5% of the expected output result.
The present invention also provides an adaptive neural network system, comprising: an input layer, a hidden layer, an output layer and a knowledge base K = (S1, S2, …, Sm), where Sb = (V | W | Y), b = 1, 2, …, m; Y is the desired output, and V and W are respectively the connection weights of the network hidden layer and of the output layer. The neural network system is a feed-forward neural network system; the input layer, the hidden layer and the output layer are sequentially connected for transmission, and the knowledge base is connected to the hidden layer.
Preferably, the connection weights of the network hidden layer and of the output layer are both matrices.
the self-adaptive neural network learning method and the neural network system solve the problem that the existing neural network system cannot distinguish new data from old data and learn, and have the following advantages that:
the self-adaptive neural network learning method and the neural network system store the neural network connection weight in the knowledge base in the form of knowledge, thereby realizing the processing of searching, adding and the like of the knowledge, leading the neural network system to have the functions of processing and distinguishing internal and external data and new and old data, and further realizing the functions of on-line self-adaptive learning and identification. Moreover, the method of the present invention is flexible in transforming context, goals or representations for the context of the interpretation as the problem-solving process progresses.
Drawings
FIG. 1 is a schematic diagram of a neural network system of the present invention.
FIG. 2 is a flow chart of the adaptive neural network learning method of the present invention.
Detailed Description
The technical solution of the present invention is further described below with reference to the accompanying drawings and examples.
An adaptive neural network learning method is described with reference to fig. 1, which is a schematic diagram of the neural network system of the present invention. The neural network system comprises: an input layer LA = (a1, …, ah, …, an), a hidden layer LB = (b1, …, bi, …, bp), an output layer LC = (c1, …, cj, …, cq) and a knowledge base K = (S1, S2, …, Sm), where Sb = (V | W | Y), b = 1, 2, …, m (natural numbers); Y is the desired output, and V and W are respectively the connection weights of the network hidden layer and of the output layer.
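The three-layer structure LA → LB → LC of fig. 1 corresponds to a plain feed-forward pass. A sketch assuming a sigmoid excitation function (the text does not fix the excitation function), with our own variable names:

```python
import numpy as np

def forward(a, V, W):
    """Forward pass through the network of fig. 1.
    a: (n,) input layer LA; V: (n, p) hidden weights; W: (p, q) output weights.
    Returns hidden activations b (layer LB) and outputs c (layer LC)."""
    sigmoid = lambda net: 1.0 / (1.0 + np.exp(-net))
    b = sigmoid(a @ V)   # hidden layer LB = (b1 ... bp)
    c = sigmoid(b @ W)   # output layer LC = (c1 ... cq)
    return b, c
```

Each knowledge in the base supplies one (V, W) pair for this pass; swapping knowledge swaps the whole behavior of the network.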
as shown in fig. 2, a flow chart of the adaptive neural network learning method of the present invention is shown, the method includes:
(1) Establishing and initializing the knowledge base: for each data set (training set) fed to the input layer, one knowledge is obtained, namely a corresponding group of hidden-layer connection weights V and output-layer connection weights W. The weight-adjustment algorithm for the hidden layer and the output layer during learning is:
δj = (yj − oj) f′(netj)    (3)
In formulae (1) to (3), wij and vij are the connection weights of the output layer and of the hidden layer at position (i, j) of the matrix, α is the scaling factor, δj is the learning rate of column j in the matrix, f′(netj) is the derivative of the neuron excitation function, yj and oj are respectively the desired output and the actual output of column j in the matrix, and oi is the actual output of row i in the matrix.
Determining the final output connection weight value according to whether the output result similarity E meets the convergence condition, namely determining one knowledge in a knowledge base:
E = ΣEp    (5)
In formulae (4) and (5), Ep denotes the result similarity of the p-th output neuron, ypj and opj are respectively the expected output and the actual output at position (p, j) of the matrix, and E denotes the overall result similarity, used to judge whether the network meets the convergence requirement.
(2) On-line adaptive learning: set the number of learning iterations N and the output-result similarity (error) threshold E; search for any knowledge in the knowledge base and separate out the corresponding hidden-layer connection weights V and output-layer connection weights W; apply this group of connection weights as initial values to certain input data; adjust the connection weights using the weight adjustment of step (1); and compute continuously for the specified N learning iterations to obtain the output result of the actual operation.
(3) Judging whether the similarity between the output result and the corresponding expected output meets the requirement: if it does, the result is output; the similarity requirement is met when the result similarity E is lower than 20% of the expected output result. If it does not, a new knowledge is searched for and selected from the knowledge base in sequence, namely a new set of hidden-layer weights, output-layer weights and expected output, and step (2) is repeated. If all the knowledge in the knowledge base has been gone through and no corresponding knowledge is found (that is, the similarity never meets the requirement), the data is regarded as a sample of a new knowledge.
(4) The connection weights are adjusted according to formulae (1) to (3) in step (1) until the output-result similarity E meets the convergence condition, the similarity requirement here being that the error is lower than 5% of the expected output result. The corresponding output-layer connection weights, hidden-layer connection weights and expected output are then combined into a new knowledge, which is added to the knowledge base K.
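Pulling steps (2) to (4) together, the online phase can be sketched as the following loop. This is an illustrative reconstruction: the 20% recognition threshold and 5% learning threshold come from the text, but the toy network, the training rule and the desired-output label for new knowledge are our own stand-ins.

```python
import numpy as np

def online_adapt(x, kb, N=50, recognize_tol=0.20, learn_tol=0.05):
    """Steps (2)-(4) in miniature. Each knowledge in kb is a pair (W, Y):
    one weight matrix standing in for (V, W), plus the expected output Y.
    E is the squared output error, standing in for the result similarity."""
    def output(W):                       # trivial stand-in network
        return x @ W
    def E(W, Y):                         # result similarity of the output
        return float(np.sum((Y - output(W)) ** 2))
    def train(W, Y, steps, tol):         # stand-in for the formula (1)-(3)
        for _ in range(steps):           # weight adjustment of step (1)
            if E(W, Y) < tol:
                break
            W = W + 0.1 * np.outer(x, Y - output(W))
        return W
    for i, (W, Y) in enumerate(kb):      # step (2): search the knowledge base
        W = train(W, Y, N, recognize_tol)
        if E(W, Y) < recognize_tol:      # step (3): recognized, output result
            kb[i] = (W, Y)
            return Y, False
    # step (4): nothing matched; treat x as the sample of a new knowledge
    W = np.zeros((x.size, 1))
    Y = np.ones(1)                       # hypothetical desired-output label
    W = train(W, Y, 10_000, learn_tol)   # train to the 5% threshold
    kb.append((W, Y))                    # add the new knowledge to K
    return Y, True
```

The first call with an unseen input trains and stores a new knowledge; a second call with the same input is recognized from the stored weights.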
An adaptive neural network system is shown in fig. 1, which is a schematic diagram of the neural network system of the present invention. The neural network system is a feed-forward neural network system comprising: an input layer LA = (a1, …, ah, …, an), a hidden layer LB = (b1, …, bi, …, bp), an output layer LC = (c1, …, cj, …, cq) and a knowledge base K = (S1, S2, …, Sm), where Sb = (V | W | Y), b = 1, 2, …, m; Y is the desired output, and V and W are respectively the connection weights of the hidden layer and of the output layer. The input layer, the hidden layer and the output layer are sequentially connected for transmission, and the knowledge base is connected to the hidden layer.
in summary, the adaptive neural network learning method and the neural network system of the present invention store the neural network connection weight in the knowledge base in the form of knowledge, thereby implementing processes such as searching and adding knowledge, and enabling the neural network system to have functions of processing and distinguishing internal and external data and new and old data, thereby implementing functions of online adaptive learning and identification.
While the present invention has been described in detail with reference to the preferred embodiments, it should be understood that the above description should not be taken as limiting the invention. Various modifications and alterations to this invention will become apparent to those skilled in the art upon reading the foregoing description. Accordingly, the scope of the invention should be determined from the following claims.

Claims (7)

1. An adaptive neural network learning method, wherein the neural network system comprises: an input layer, a hidden layer, an output layer and a knowledge base K = (S1, S2, …, Sm), where Sb = (V | W | Y), b = 1, 2, …, m; Y is the desired output, and V and W are respectively the connection weights of the network hidden layer and of the output layer;
the method comprises the following steps:
(1) establishing and initializing a knowledge base: acquiring knowledge of a training set, wherein each knowledge corresponds to a group of connection weights V of a hidden layer and a group of connection weights W of an output layer, adjusting the weights of the hidden layer and the output layer in the training process, enabling the similarity E of output results to meet a convergence condition, and determining the final output connection weight;
(2) on-line adaptive learning: searching for any knowledge in the knowledge base to obtain the corresponding hidden-layer connection weights V and output-layer connection weights W; taking this group of connection weights as initial values for certain input data; adjusting the connection weights using the weight adjustment of step (1); and computing continuously for the specified number of learning iterations N to obtain the output result of the actual operation;
(3) judging whether the similarity between the output result and the corresponding expected output meets the requirement: if it does, outputting the result; if it does not, searching for and selecting a new knowledge from the knowledge base in sequence, obtaining a new set of hidden-layer weights, output-layer weights and expected output, and repeating step (2); if all the knowledge in the knowledge base has been gone through and no corresponding knowledge is found, regarding the data as a sample of a new knowledge;
(4) adjusting the connection weights according to the weight adjustment of step (1) until the output-result similarity E meets the convergence condition, combining the corresponding output-layer connection weights, hidden-layer connection weights and expected output into a new knowledge, and adding the new knowledge to the knowledge base K.
2. The adaptive neural network learning method of claim 1, wherein the connection weights of the hidden layer and of the output layer are matrices.
3. the adaptive neural network learning method of claim 2, wherein in step (1), the weight value adjustment algorithm comprises:
δj = (yj − oj) f′(netj)    (3)
In formulae (1) to (3), wij and vij are the connection weights of the output layer and of the hidden layer at position (i, j) of the matrix, α is the scaling factor, δj is the learning rate of column j in the matrix, f′(netj) is the derivative of the neuron excitation function, yj and oj are respectively the desired output and the actual output of column j in the matrix, and oi is the actual output of row i in the matrix.
4. The adaptive neural network learning method according to any one of claims 1 to 3, wherein in step (1), the equation of the convergence condition of the output result similarity E is:
E = ΣEp    (5)
In formulae (4) and (5), Ep denotes the result similarity of the p-th output neuron, ypj and opj are respectively the expected output and the actual output at position (p, j) of the matrix, and E denotes the overall result similarity, used to judge whether the network meets the convergence requirement.
5. The adaptive neural network learning method of claim 1, wherein in step (1), the similarity requirement is satisfied when the result similarity E is lower than 20% of the expected output result; in step (4), the similarity requirement is satisfied when the result similarity E is lower than 5% of the expected output result.
6. An adaptive neural network system, comprising: an input layer, a hidden layer, an output layer and a knowledge base K = (S1, S2, …, Sm), where Sb = (V | W | Y), b = 1, 2, …, m; Y is the desired output, and V and W are respectively the connection weights of the network hidden layer and of the output layer; the neural network system is a feed-forward neural network system, the input layer, the hidden layer and the output layer are sequentially connected for transmission, and the knowledge base is connected to the hidden layer.
7. The adaptive neural network system of claim 6, wherein the connection weights of the hidden layer and of the output layer are matrices.
CN201811173901.5A 2018-10-09 2018-10-09 A kind of adaptive neural network learning method and nerve network system Pending CN109389207A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811173901.5A CN109389207A (en) 2018-10-09 2018-10-09 A kind of adaptive neural network learning method and nerve network system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811173901.5A CN109389207A (en) 2018-10-09 2018-10-09 A kind of adaptive neural network learning method and nerve network system

Publications (1)

Publication Number Publication Date
CN109389207A true CN109389207A (en) 2019-02-26

Family

ID=65426789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811173901.5A Pending CN109389207A (en) 2018-10-09 2018-10-09 A kind of adaptive neural network learning method and nerve network system

Country Status (1)

Country Link
CN (1) CN109389207A (en)

Cited By (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11487288B2 (en) 2017-03-23 2022-11-01 Tesla, Inc. Data synthesis for autonomous control systems
US12020476B2 (en) 2017-03-23 2024-06-25 Tesla, Inc. Data synthesis for autonomous control systems
US12086097B2 (en) 2017-07-24 2024-09-10 Tesla, Inc. Vector computational unit
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US11403069B2 (en) 2017-07-24 2022-08-02 Tesla, Inc. Accelerated mathematical engine
US11681649B2 (en) 2017-07-24 2023-06-20 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11797304B2 (en) 2018-02-01 2023-10-24 Tesla, Inc. Instruction set architecture for a vector computational unit
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11734562B2 (en) 2018-06-20 2023-08-22 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11841434B2 (en) 2018-07-20 2023-12-12 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US12079723B2 (en) 2018-07-26 2024-09-03 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11983630B2 (en) 2018-09-03 2024-05-14 Tesla, Inc. Neural networks for embedded devices
US11893774B2 (en) 2018-10-11 2024-02-06 Tesla, Inc. Systems and methods for training machine models with augmented data
US11665108B2 (en) 2018-10-25 2023-05-30 Tesla, Inc. QoS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US11908171B2 (en) 2018-12-04 2024-02-20 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US12136030B2 (en) 2018-12-27 2024-11-05 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US11748620B2 (en) 2019-02-01 2023-09-05 Tesla, Inc. Generating ground truth for machine learning from time series elements
US12014553B2 (en) 2019-02-01 2024-06-18 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US11790664B2 (en) 2019-02-19 2023-10-17 Tesla, Inc. Estimating object properties using visual image data

Similar Documents

Publication Publication Date Title
CN109389207A (en) A kind of adaptive neural network learning method and nerve network system
CN108491765B (en) Vegetable image classification and identification method and system
US9619749B2 (en) Neural network and method of neural network training
CN107729999A (en) Consider the deep neural network compression method of matrix correlation
CN113469356A (en) Improved VGG16 network pig identity recognition method based on transfer learning
CN107798349B (en) Transfer learning method based on depth sparse self-coding machine
CN107689224A (en) The deep neural network compression method of reasonable employment mask
CN110222163A (en) A kind of intelligent answer method and system merging CNN and two-way LSTM
CN107506590A (en) A kind of angiocardiopathy forecast model based on improvement depth belief network
WO2015134900A1 (en) Neural network and method of neural network training
CN109492750B (en) Zero sample image classification method based on convolutional neural network and factor space
CN106777402A (en) A kind of image retrieval text method based on sparse neural network
CN111079837B (en) Method for detecting, identifying and classifying two-dimensional gray level images
CN110598737B (en) Online learning method, device, equipment and medium of deep learning model
CN115511069A (en) Neural network training method, data processing method, device and storage medium
CN113157919A (en) Sentence text aspect level emotion classification method and system
CN117174163A (en) Virus evolution trend prediction method and system
CN107528824A (en) A kind of depth belief network intrusion detection method based on two-dimensionses rarefaction
CN103559510B (en) Method for recognizing social group behaviors through related topic model
CN111461229A (en) Deep neural network optimization and image classification method based on target transfer and line search
CN110288002A (en) A kind of image classification method based on sparse Orthogonal Neural Network
Zhu et al. Emotion Recognition in Learning Scenes Supported by Smart Classroom and Its Application.
Hu et al. Tree species identification based on the fusion of multiple deep learning models transfer learning
CN109118483A (en) A kind of label quality detection method and device
CN115331045A (en) Neural network adaptive expansion pruning-based visual object classification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190226)