
CN109389207A - A kind of adaptive neural network learning method and nerve network system - Google Patents

A kind of adaptive neural network learning method and nerve network system

Info

Publication number
CN109389207A
CN109389207A (application number CN201811173901.5A)
Authority
CN
China
Prior art keywords
output
layer
neural network
knowledge
similarity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811173901.5A
Other languages
Chinese (zh)
Inventor
孙兴波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University of Science and Engineering
Original Assignee
Sichuan University of Science and Engineering
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University of Science and Engineering filed Critical Sichuan University of Science and Engineering
Priority to CN201811173901.5A
Publication of CN109389207A
Legal status: Pending (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The invention discloses an adaptive neural network learning method and a neural network system. The neural network system comprises an input layer, a hidden layer, an output layer and a knowledge base. The method comprises: (1) creating and initializing the knowledge base, adjusting the weights of the hidden layer and of the output layer so that the similarity of the output result satisfies a convergence condition; (2) online adaptive learning: for a given input datum, using a stored set of connection weights as initial values and applying the learning algorithm to obtain an output result; (3) judging whether the similarity between the output result and the corresponding expected output meets the requirement: if so, outputting the result; otherwise repeating step (2), and if no match is found after traversing all the knowledge in the knowledge base, regarding the datum as a sample of new knowledge; (4) adjusting the connection weights as in step (1) and adding the result to the knowledge base. The method can process and discriminate between internal and external data and between new and old data, realizing online adaptive learning and recognition.

Description

An adaptive neural network learning method and neural network system

Technical Field

The present invention belongs to the technical field of artificial intelligence, and in particular relates to an adaptive neural network learning method and a neural network system.

Background Art

Within the discipline of artificial intelligence, the research results of neural networks have been successfully transferred to a considerable number of fields, such as decision support, face recognition, knowledge-base systems, expert systems and emotional robots. In traditional research on representation, most models generally work in domains that have already been interpreted: that is, for the context of interpretation the system designer usually gives some implicit prior conventions, and under such conventions it is difficult to transform the context, the goal or the representation as the problem-solving process progresses.

At present, the main means of simulating image thinking is the connectionist mechanism, centered on artificial neural network connections, which simulates and replicates the "image intelligence" associated with image thinking. In terms of computational processing, the connectionist approach opens a new path, namely parallel processing and distributed representation. Specifically, this approach represents information with "a network of several nodes in which any two nodes can be connected". In the semantic networks formerly used to represent knowledge, one node corresponds to one concept, whereas in an artificial neural network a concept corresponds to a distribution pattern of nodes together with the magnitudes of the weights, so that even if the information attributes on individual nodes are distorted, the attributes of the concepts expressed by the network do not change significantly. In addition, information on some shared units can be used to express similar concepts. However, simulating image thinking with these various methods, including the neural-network connection method described above, has, like the symbolic representation of logical thinking, not been completely successful.

A neural network system can recognize its input; it represents input-related knowledge and grammatical rules by means of the network structure and the neuron connection weights, and it even has a certain degree of fault tolerance. However, a neural network system is data-driven and cannot tell whether the data come from inside or outside the system, or whether they are new or old. After a neural network has been trained, suppose the input to the system is data not contained in the training set, that is, a new data sample: the network can neither judge whether this input is new information relative to its knowledge nor actively learn this new data; instead it misjudges the new sample using the knowledge obtained during training. Likewise, during online training the network does not distinguish between its own existing knowledge states and external input data, and always processes them indiscriminately.

Summary of the Invention

The purpose of the present invention is to provide an adaptive neural network learning method and neural network system. The method solves the problem that existing neural network systems cannot distinguish new data from old data and cannot learn accordingly; it can process and discriminate between internal and external data and between new and old data, realizing online adaptive learning and recognition.

In order to achieve the above object, the present invention provides an adaptive neural network learning method. The neural network system comprises an input layer, a hidden layer, an output layer and a knowledge base K = (S_1, S_2, …, S_m), where S_b is a set of neural network connection weights, S_b = (V | W | Y), b = 1, 2, …, m, Y is the expected output, and V and W are the connection weights of the hidden layer and of the output layer, respectively.
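Purely as an illustration (not part of the patent text), the knowledge base K = (S_1, S_2, …, S_m) with S_b = (V | W | Y) could be held in memory roughly as follows; the class and field names are assumptions made for this sketch.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class KnowledgeItem:
    """One item of knowledge S_b = (V | W | Y)."""
    V: np.ndarray  # hidden-layer connection weights
    W: np.ndarray  # output-layer connection weights
    Y: np.ndarray  # expected output associated with this item

@dataclass
class KnowledgeBase:
    """Knowledge base K = (S_1, S_2, ..., S_m)."""
    items: list = field(default_factory=list)

    def add(self, item: KnowledgeItem) -> None:
        self.items.append(item)
```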

The method comprises:

(1) Create and initialize the knowledge base: obtain the knowledge of the training sets, each item of knowledge corresponding to one set of hidden-layer connection weights V and output-layer connection weights W; adjust the weights of the hidden layer and of the output layer during training so that the output-result similarity E satisfies the convergence condition, and determine the finally output connection weights.

(2) Online adaptive learning: search any item of knowledge in the knowledge base and obtain its corresponding hidden-layer connection weights V and output-layer connection weights W; for a given input datum, use this set of connection weights as the initial values, adjust the connection weights using the weight adjustment of step (1), run the prescribed N learning iterations consecutively, and obtain the actual output result.

(3) Judge whether the similarity between the output result and the corresponding expected output meets the requirement: if it does, output the result; if it does not, search the knowledge base in order and select a new item of knowledge to obtain a new set of hidden-layer weights, output-layer weights and expected output, and repeat step (2); if, after traversing all the knowledge in the knowledge base, no corresponding knowledge is found, regard the datum as a sample of new knowledge.

(4) Adjust the connection weights according to the weight adjustment of step (1) until the output-result similarity E satisfies the convergence condition, then merge the corresponding output-layer connection weights, hidden-layer connection weights and expected output into one new item of knowledge and add it to the knowledge base K.

Preferably, the connection weights of the hidden layer and the connection weights of the output layer are both matrices.

Preferably, in step (1), the weight adjustment algorithm comprises:

δ_j = (y_j − o_j) f′(net_j)    (3)

In formulas (1)-(3), w_ij and v_ij are the connection weights at position (i, j) of the output-layer and hidden-layer weight matrices, respectively; α is a proportional coefficient; δ_j is the learning rate for column j of the matrix; f′(net_j) is the derivative of the neuron activation function; y_j and o_j are the expected output and actual output for column j of the matrix, respectively; and o_i is the actual output of row i of the matrix.
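Formulas (1) and (2) are not reproduced in this text, so the following is only a hedged sketch of a conventional delta-rule update that is consistent with formula (3) and the symbol definitions above (α as the proportional coefficient, δ_j as the output-layer error term of formula (3)); the exact updates prescribed by the patent may differ.

```python
import numpy as np

def sigmoid(z: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-z))

def adjust_weights(x, y, V, W, alpha=0.1):
    """One weight-adjustment step, assuming sigmoid neurons.

    x: input vector (length n), y: expected output (length q),
    V: hidden-layer weights (n x p), W: output-layer weights (p x q),
    alpha: proportional coefficient from formulas (1)-(2).
    """
    h = sigmoid(x @ V)                        # hidden-layer output
    o = sigmoid(h @ W)                        # actual output
    delta = (y - o) * o * (1.0 - o)           # formula (3): (y_j - o_j) f'(net_j) for sigmoid f
    delta_h = (W @ delta) * h * (1.0 - h)     # assumed back-propagated hidden error
    W = W + alpha * np.outer(h, delta)        # assumed form of formula (1): delta-rule update of W
    V = V + alpha * np.outer(x, delta_h)      # assumed form of formula (2): delta-rule update of V
    return V, W
```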

Preferably, in step (1), the convergence condition on the output-result similarity E is given by:

E = Σ_p E_p    (5)

In formulas (4) and (5), E_p is the result similarity of the p-th output neuron, y_pj and o_pj are the expected output and actual output at (p, j) of the matrix, respectively, and E is the result similarity used to judge whether the network meets the convergence requirement.
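Formula (4), which defines E_p, is not reproduced in this text. Purely as a hedged illustration, if E_p is taken to be a squared-error term for the p-th output neuron, formula (5) could be evaluated as below; the actual definition in the patent may differ.

```python
import numpy as np

def result_similarity(Y_expected: np.ndarray, O_actual: np.ndarray) -> float:
    """E = sum over p of E_p (formula (5)); E_p is assumed here to be 0.5 * sum_j (y_pj - o_pj)^2."""
    E_p = 0.5 * np.sum((np.asarray(Y_expected) - np.asarray(O_actual)) ** 2, axis=-1)
    return float(np.sum(E_p))
```

The 20% and 5% criteria mentioned in the next paragraph could then be checked against this E, for example as E < 0.20 * float(np.sum(np.abs(Y_expected))), although the exact form of the comparison is not spelled out in the text.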

Preferably, in step (1), the similarity requirement is met when the result similarity E is below 20% of the expected output; in step (4), the similarity requirement is met when the result similarity E is below 5% of the expected output.

The present invention also provides an adaptive neural network system comprising an input layer, a hidden layer, an output layer and a knowledge base K = (S_1, S_2, …, S_m), where S_b is a set of neural network connection weights, S_b = (V | W | Y), b = 1, 2, …, m, Y is the expected output, and V and W are the connection weights of the hidden layer and of the output layer, respectively. The neural network system is a feed-forward neural network system; the input layer, the hidden layer and the output layer are connected in sequence for transmission, and the knowledge base is connected to the hidden layer.
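A minimal sketch of the feed-forward system described here, reusing the KnowledgeBase sketch above; the sigmoid activation and the way the knowledge base connected to the hidden layer supplies the weight sets (V, W) are assumptions of this sketch, not details stated in the patent.

```python
import numpy as np

class AdaptiveNeuralNetworkSystem:
    """Feed-forward system: input layer -> hidden layer -> output layer,
    with an attached knowledge base that supplies weight sets (V, W)."""

    def __init__(self, knowledge_base):
        self.kb = knowledge_base

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def forward(self, x, V, W):
        """One forward pass using the weights (V, W) of a single knowledge item."""
        hidden = self._sigmoid(np.asarray(x) @ V)   # hidden layer L_B (assumed sigmoid activation)
        return self._sigmoid(hidden @ W)            # output layer L_C
```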

Preferably, the connection weights of the hidden layer and the connection weights of the output layer are both matrices.

The adaptive neural network learning method and neural network system of the present invention solve the problem that existing neural network systems cannot distinguish new data from old data or learn accordingly, and have the following advantages:

The adaptive neural network learning method and neural network system of the present invention store the neural network connection weights in a knowledge base in the form of knowledge, enabling operations such as knowledge lookup and addition. The neural network system can thus process and discriminate between internal and external data and between new and old data, realizing online adaptive learning and recognition. Moreover, with respect to the context of interpretation, the method can flexibly transform the context, the goal or the representation as the problem-solving process progresses.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of the neural network system of the present invention.

FIG. 2 is a flowchart of the adaptive neural network learning method of the present invention.

Detailed Description of the Embodiments

The technical solutions of the present invention are further described below with reference to the accompanying drawings and embodiments.

An adaptive neural network learning method. FIG. 1 is a schematic diagram of the neural network system of the present invention. The neural network system comprises an input layer L_A = (a_1, …, a_h, …, a_n), a hidden layer L_B = (b_1, …, b_i, …, b_p), an output layer L_C = (c_1, …, c_j, …, c_q) and a knowledge base K = (S_1, S_2, …, S_m), where S_b is a set of neural network connection weights, S_b = (V | W | Y), b = 1, 2, …, m (a natural number), Y is the expected output, and V and W are the connection weights of the hidden layer and of the output layer, respectively.
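For concreteness, with n input nodes, p hidden nodes and q output nodes as above, V and W would plausibly be matrices of shapes (n, p) and (p, q); a small initialization sketch follows (the layer sizes and the uniform random initialization are assumptions, not values from the patent).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 4, 8, 3                         # example sizes for L_A, L_B, L_C
V = rng.uniform(-0.5, 0.5, size=(n, p))   # hidden-layer connection weights
W = rng.uniform(-0.5, 0.5, size=(p, q))   # output-layer connection weights
Y = np.zeros(q)                           # expected output for one knowledge item
```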

FIG. 2 is a flowchart of the adaptive neural network learning method of the present invention. The method comprises:

(1) Create and initialize the knowledge base: for every data set fed to the input layer, i.e. every training set, one item of knowledge is obtained, namely a corresponding set of hidden-layer connection weights V and output-layer connection weights W. During learning, the weights of the hidden layer and of the output layer are adjusted by:

δ_j = (y_j − o_j) f′(net_j)    (3)

In formulas (1)-(3), w_ij and v_ij are the connection weights at position (i, j) of the output-layer and hidden-layer weight matrices, respectively; α is a proportional coefficient; δ_j is the learning rate for column j of the matrix; f′(net_j) is the derivative of the neuron activation function; y_j and o_j are the expected output and actual output for column j of the matrix, respectively; and o_i is the actual output of row i of the matrix.

According to whether the output-result similarity E satisfies the convergence condition, the finally output connection weights are determined, i.e. one item of knowledge in the knowledge base is determined:

E = Σ_p E_p    (5)

In formulas (4) and (5), E_p is the result similarity of the p-th output neuron, y_pj and o_pj are the expected output and actual output at (p, j) of the matrix, respectively, and E is the result similarity used to judge whether the network meets the convergence requirement.

(2) Online adaptive learning: set the number of learning iterations N and the required output-result similarity, i.e. the error value E. Search any item of knowledge in the knowledge base and separate out its corresponding hidden-layer connection weights V and output-layer connection weights W. For a given input datum, apply this set of connection weights as the initial values, adjust the connection weights using the weight adjustment of step (1), run the prescribed N learning iterations consecutively, and obtain the actual output result.

(3) Judge whether the similarity between the output result and the corresponding expected output meets the requirement. If it does, output the result; the similarity requirement is met when the result similarity E is below 20% of the expected output. If it does not, search the knowledge base in order and select a new item of knowledge, i.e. a new set of hidden-layer weights, output-layer weights and expected output, and repeat step (2). If, after traversing all the knowledge in the knowledge base, no corresponding knowledge is found (that is, no item makes the similarity between the output result and its expected output meet the requirement), the datum is regarded as a sample of new knowledge.

(4) Adjust the connection weights according to formulas (1)-(3) of step (1) until the output-result similarity E satisfies the convergence condition; the similarity requirement is met when the error is below 5% of the expected output. Merge the corresponding output-layer connection weights, hidden-layer connection weights and expected output into one new item of knowledge and add it to the knowledge base K.
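Tying steps (2) to (4) together, the following is a hedged sketch of the online loop over the knowledge base with the 20% recognition criterion and the 5% learning criterion. The helper callables are the illustrative adjust_weights, forward(x, V, W) and result_similarity sketched earlier; the form of the threshold comparison and the way the expected output for a new sample is supplied are assumptions, since the patent does not spell them out.

```python
import numpy as np

def recognize_or_learn(x, y_expected, kb, *, adjust_weights, forward, result_similarity,
                       n_steps=50, match_tol=0.20, learn_tol=0.05, max_epochs=10_000):
    """Steps (2)-(4). Returns (output, new_knowledge): new_knowledge is None if x was
    recognized from the knowledge base, else the (V, W, Y) triple to add to K."""
    # Steps (2)-(3): traverse the knowledge base in order, seeding the weights from each item.
    for item in kb.items:
        V, W = item.V.copy(), item.W.copy()
        for _ in range(n_steps):                              # N consecutive learning iterations
            V, W = adjust_weights(x, item.Y, V, W)
        o = forward(x, V, W)
        if result_similarity(item.Y, o) < match_tol * float(np.sum(np.abs(item.Y))):
            return o, None                                    # recognized: output the result

    # Step (4): no match found, so x is a sample of new knowledge; train to the 5% criterion.
    V, W = kb.items[-1].V.copy(), kb.items[-1].W.copy()       # assumed starting weights for the new item
    for _ in range(max_epochs):                               # safety cap, not part of the patent
        if result_similarity(y_expected, forward(x, V, W)) < learn_tol * float(np.sum(np.abs(y_expected))):
            break
        V, W = adjust_weights(x, y_expected, V, W)
    return forward(x, V, W), (V, W, np.asarray(y_expected))
```

The caller would then wrap the returned (V, W, Y) triple in a knowledge item and add it to K, completing step (4).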

An adaptive neural network system. FIG. 1 is a schematic diagram of the neural network system of the present invention. The neural network system is a feed-forward neural network system comprising an input layer L_A = (a_1, …, a_h, …, a_n), a hidden layer L_B = (b_1, …, b_i, …, b_p), an output layer L_C = (c_1, …, c_j, …, c_q) and a knowledge base K = (S_1, S_2, …, S_m), where S_b is a set of neural network connection weights, S_b = (V | W | Y), b = 1, 2, …, m, Y is the expected output, and V and W are the connection weights of the hidden layer and of the output layer, respectively. The input layer, the hidden layer and the output layer are connected in sequence for transmission, and the knowledge base is connected to the hidden layer.

In summary, the adaptive neural network learning method and neural network system of the present invention store the neural network connection weights in a knowledge base in the form of knowledge, enabling operations such as knowledge lookup and addition, so that the neural network system can process and discriminate between internal and external data and between new and old data, thereby realizing online adaptive learning and recognition.

Although the content of the present invention has been described in detail through the preferred embodiments above, it should be recognized that the above description should not be regarded as limiting the present invention. Various modifications and alternatives to the present invention will be apparent to those skilled in the art upon reading the above. Therefore, the protection scope of the present invention should be defined by the appended claims.

Claims (7)

1. An adaptive neural network learning method, characterized in that the neural network system comprises an input layer, a hidden layer, an output layer and a knowledge base K = (S_1, S_2, …, S_m), where S_b is a set of neural network connection weights, S_b = (V | W | Y), b = 1, 2, …, m, Y is the expected output, and V and W are the connection weights of the hidden layer and of the output layer, respectively; the method comprises:

(1) creating and initializing the knowledge base: obtaining the knowledge of the training sets, each item of knowledge corresponding to one set of hidden-layer connection weights V and output-layer connection weights W; adjusting the weights of the hidden layer and of the output layer during training so that the output-result similarity E satisfies the convergence condition, and determining the finally output connection weights;

(2) online adaptive learning: searching any item of knowledge in the knowledge base and obtaining its corresponding hidden-layer connection weights V and output-layer connection weights W; for a given input datum, using this set of connection weights as the initial values, adjusting the connection weights using the weight adjustment of step (1), running the prescribed N learning iterations consecutively, and obtaining the actual output result;

(3) judging whether the similarity between the output result and the corresponding expected output meets the requirement: when it does, outputting the result; when it does not, searching the knowledge base in order and selecting a new item of knowledge to obtain a new set of hidden-layer weights, output-layer weights and expected output, and repeating step (2); if, after traversing all the knowledge in the knowledge base, no corresponding knowledge is found, regarding the datum as a sample of new knowledge;

(4) adjusting the connection weights according to the weight adjustment of step (1) until the output-result similarity E satisfies the convergence condition, and merging the corresponding output-layer connection weights, hidden-layer connection weights and expected output into one new item of knowledge that is added to the knowledge base K.

2. The adaptive neural network learning method according to claim 1, characterized in that the connection weights of the hidden layer and the connection weights of the output layer are both matrices.

3. The adaptive neural network learning method according to claim 2, characterized in that, in step (1), the weight adjustment algorithm comprises:

δ_j = (y_j − o_j) f′(net_j)    (3)

where, in formulas (1)-(3), w_ij and v_ij are the connection weights at position (i, j) of the output-layer and hidden-layer weight matrices, respectively, α is a proportional coefficient, δ_j is the learning rate for column j of the matrix, f′(net_j) is the derivative of the neuron activation function, y_j and o_j are the expected output and actual output for column j of the matrix, respectively, and o_i is the actual output of row i of the matrix.

4. The adaptive neural network learning method according to any one of claims 1-3, characterized in that, in step (1), the convergence condition on the output-result similarity E is:

E = Σ_p E_p    (5)

where, in formulas (4) and (5), E_p is the result similarity of the p-th output neuron, y_pj and o_pj are the expected output and actual output at (p, j) of the matrix, respectively, and E is the result similarity used to judge whether the network meets the convergence requirement.

5. The adaptive neural network learning method according to claim 1, characterized in that, in step (1), the similarity requirement is met when the result similarity E is below 20% of the expected output, and, in step (4), the similarity requirement is met when the result similarity E is below 5% of the expected output.

6. An adaptive neural network system, characterized in that the neural network system comprises an input layer, a hidden layer, an output layer and a knowledge base K = (S_1, S_2, …, S_m), where S_b is a set of neural network connection weights, S_b = (V | W | Y), b = 1, 2, …, m, Y is the expected output, and V and W are the connection weights of the hidden layer and of the output layer, respectively; the neural network system is a feed-forward neural network system, the input layer, the hidden layer and the output layer are connected in sequence for transmission, and the knowledge base is connected to the hidden layer.

7. The adaptive neural network system according to claim 6, characterized in that the connection weights of the hidden layer and the connection weights of the output layer are both matrices.
CN201811173901.5A 2018-10-09 2018-10-09 A kind of adaptive neural network learning method and nerve network system Pending CN109389207A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811173901.5A CN109389207A (en) 2018-10-09 2018-10-09 A kind of adaptive neural network learning method and nerve network system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811173901.5A CN109389207A (en) 2018-10-09 2018-10-09 A kind of adaptive neural network learning method and nerve network system

Publications (1)

Publication Number Publication Date
CN109389207A 2019-02-26

Family

ID=65426789

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811173901.5A Pending CN109389207A (en) 2018-10-09 2018-10-09 A kind of adaptive neural network learning method and nerve network system

Country Status (1)

Country Link
CN (1) CN109389207A (en)

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11487288B2 (en) 2017-03-23 2022-11-01 Tesla, Inc. Data synthesis for autonomous control systems
US12020476B2 (en) 2017-03-23 2024-06-25 Tesla, Inc. Data synthesis for autonomous control systems
US11893393B2 (en) 2017-07-24 2024-02-06 Tesla, Inc. Computational array microprocessor system with hardware arbiter managing memory requests
US11409692B2 (en) 2017-07-24 2022-08-09 Tesla, Inc. Vector computational unit
US12216610B2 (en) 2017-07-24 2025-02-04 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US11403069B2 (en) 2017-07-24 2022-08-02 Tesla, Inc. Accelerated mathematical engine
US11681649B2 (en) 2017-07-24 2023-06-20 Tesla, Inc. Computational array microprocessor system using non-consecutive data formatting
US12086097B2 (en) 2017-07-24 2024-09-10 Tesla, Inc. Vector computational unit
US11561791B2 (en) 2018-02-01 2023-01-24 Tesla, Inc. Vector computational unit receiving data elements in parallel from a last row of a computational array
US11797304B2 (en) 2018-02-01 2023-10-24 Tesla, Inc. Instruction set architecture for a vector computational unit
US11734562B2 (en) 2018-06-20 2023-08-22 Tesla, Inc. Data pipeline and deep learning system for autonomous driving
US11841434B2 (en) 2018-07-20 2023-12-12 Tesla, Inc. Annotation cross-labeling for autonomous control systems
US12079723B2 (en) 2018-07-26 2024-09-03 Tesla, Inc. Optimizing neural network structures for embedded systems
US11636333B2 (en) 2018-07-26 2023-04-25 Tesla, Inc. Optimizing neural network structures for embedded systems
US11562231B2 (en) 2018-09-03 2023-01-24 Tesla, Inc. Neural networks for embedded devices
US11983630B2 (en) 2018-09-03 2024-05-14 Tesla, Inc. Neural networks for embedded devices
US11893774B2 (en) 2018-10-11 2024-02-06 Tesla, Inc. Systems and methods for training machine models with augmented data
US11665108B2 (en) 2018-10-25 2023-05-30 Tesla, Inc. QoS manager for system on a chip communications
US11816585B2 (en) 2018-12-03 2023-11-14 Tesla, Inc. Machine learning models operating at different frequencies for autonomous vehicles
US12198396B2 (en) 2018-12-04 2025-01-14 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11908171B2 (en) 2018-12-04 2024-02-20 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11537811B2 (en) 2018-12-04 2022-12-27 Tesla, Inc. Enhanced object detection for autonomous vehicles based on field view
US11610117B2 (en) 2018-12-27 2023-03-21 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US12136030B2 (en) 2018-12-27 2024-11-05 Tesla, Inc. System and method for adapting a neural network model on a hardware platform
US12014553B2 (en) 2019-02-01 2024-06-18 Tesla, Inc. Predicting three-dimensional features for autonomous driving
US11748620B2 (en) 2019-02-01 2023-09-05 Tesla, Inc. Generating ground truth for machine learning from time series elements
US12223428B2 (en) 2019-02-01 2025-02-11 Tesla, Inc. Generating ground truth for machine learning from time series elements
US12164310B2 (en) 2019-02-11 2024-12-10 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US11567514B2 (en) 2019-02-11 2023-01-31 Tesla, Inc. Autonomous and user controlled vehicle summon to a target
US11790664B2 (en) 2019-02-19 2023-10-17 Tesla, Inc. Estimating object properties using visual image data
US12236689B2 (en) 2023-09-22 2025-02-25 Tesla, Inc. Estimating object properties using visual image data

Similar Documents

Publication Publication Date Title
CN109389207A (en) A kind of adaptive neural network learning method and nerve network system
CN108549658B (en) Deep learning video question-answering method and system based on attention mechanism on syntax analysis tree
CN107025284A (en) The recognition methods of network comment text emotion tendency and convolutional neural networks model
CN110866113B (en) Text classification method based on sparse self-attention mechanism fine-tuning burt model
CN110188351A (en) The training method and device of sentence smoothness degree and syntactic score model
CN107729999A (en) Consider the deep neural network compression method of matrix correlation
CN112686056B (en) Emotion classification method
CN108520298A (en) A Semantic Consistency Verification Method for Land and Air Conversation Based on Improved LSTM-RNN
CN108491515B (en) A Sentence Pair Matching Degree Prediction Method for Campus Psychological Counseling
CN109271516B (en) Method and system for classifying entity types in knowledge graph
CN111125333A (en) Generation type knowledge question-answering method based on expression learning and multi-layer covering mechanism
CN108171319A (en) The construction method of the adaptive depth convolution model of network connection
CN115510814B (en) Chapter-level complex problem generation method based on dual planning
CN115511069A (en) Neural network training method, data processing method, device and storage medium
CN116415177A (en) A Classifier Parameter Identification Method Based on Extreme Learning Machine
CN114491039A (en) Gradient Improvement Based Meta-Learning Few-Shot Text Classification Method
CN114282592A (en) A deep learning-based industry text matching model method and device
CN110232118A (en) A kind of novel answer preference pattern based on GRU attention mechanism
CN112100439B (en) Recommendation method based on dependency embedding and neural attention network
Chien et al. Self-supervised adversarial training for contrastive sentence embedding
CN113094504A (en) Self-adaptive text classification method and device based on automatic machine learning
CN111461229B (en) Deep neural network optimization and image classification method based on target transfer and line search
CN117033961A (en) Multi-mode image-text classification method for context awareness
CN117435737B (en) Lifelong emotion classification method based on self-adaptive parameter updating
CN111368524A (en) Microblog viewpoint sentence recognition method based on self-attention bidirectional GRU and SVM

Legal Events

Code Title / Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20190226)