CN112560997B - Fault identification model training method, fault identification method and related device - Google Patents
- Publication number
- CN112560997B (Application No. CN202011590807.7A)
- Authority
- CN
- China
- Prior art keywords
- data
- layer
- fault
- training
- neuron processing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- F—MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
- F24—HEATING; RANGES; VENTILATING
- F24F—AIR-CONDITIONING; AIR-HUMIDIFICATION; VENTILATION; USE OF AIR CURRENTS FOR SCREENING
- F24F11/00—Control or safety arrangements
- F24F11/30—Control or safety arrangements for purposes related to the operation of the system, e.g. for safety or monitoring
- F24F11/32—Responding to malfunctions or emergencies
- F24F11/38—Failure diagnosis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06Q—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
- G06Q10/00—Administration; Management
- G06Q10/20—Administration of product repair or maintenance
Abstract
Embodiments of the invention provide a fault identification model training method, a fault identification method, and a related device in the technical field of air conditioning. The fault identification model training method comprises: acquiring training sample data, where the training sample data comprises control sampling data and corresponding fault classification labels, the control sampling data being the control output of a compressor control element; and training a preselected neural network model layer by layer using the training sample data to obtain the fault identification model. The trained fault identification model enables timely and accurate identification of compressor faults, extending the service life of the air conditioner and avoiding property loss.
Description
Technical Field
The invention relates to the technical field of air conditioning, and in particular to a fault identification model training method, a fault identification method, and a related device.
Background
With the development of science and technology, intelligent air conditioners have become multifunctional and comfortable, making them the first-choice temperature-regulation equipment for digital buildings. As air conditioner sales rise, post-installation detection and safe maintenance of operating air conditioners become very important.
The compressor is the core component of an air conditioner. Because of its special installation position, the compressor usually cannot be overhauled in time during use, which can lead to serious accidents. This not only affects the service life of the air conditioner but can also cause property loss.
Disclosure of Invention
The invention addresses the problem that faults of an air conditioner compressor are difficult to discover in a timely and accurate manner.
In order to solve the above problems, the embodiments of the present invention provide a fault recognition model training method, a fault recognition method, and related devices.
In a first aspect, the present invention provides a fault recognition model training method, comprising: acquiring training sample data, where the training sample data comprises control sampling data and corresponding fault classification labels, the control sampling data being the control output of a compressor control element; and training a preselected neural network model layer by layer using the training sample data to obtain the fault identification model.
In this embodiment, the control outputs of the compressor control element under different fault types are used as training samples, and the neural network model is trained layer by layer to obtain a model that can identify whether the compressor is faulty from the control outputs generated while the air conditioner runs. Compressor faults can thus be identified in a timely and accurate manner, extending the service life of the air conditioner and avoiding property loss.
In an alternative embodiment, the neural network model includes multiple neuron processing layers, and the step of training the preselected neural network model layer by layer using the training sample data includes: when the neuron processing layer to be trained is the first of the multiple neuron processing layers, inputting the control sampling data in the training sample data into that layer to obtain first output data, decoding the first output data to obtain first decoded data, and iterating the layer using the difference between the control sampling data and the first decoded data; when the neuron processing layer to be trained is not the first layer, inputting the second output data of the immediately preceding neuron processing layer to obtain third output data, decoding the third output data to obtain second decoded data, and iterating the layer using the difference between the second output data and the second decoded data.
In the above embodiment, training the neuron processing layers one at a time ensures the feature extraction accuracy of each layer while reducing training complexity and workload compared with iterating the whole model, as in the prior art.
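The layer-by-layer procedure above matches greedy layer-wise pretraining of stacked autoencoders: each neuron processing layer is trained to reconstruct its own input, and its output then feeds the next layer. A minimal NumPy sketch under assumed details (sigmoid activations, layer sizes 8→6→4, squared-error loss, plain gradient descent; none of these specifics are fixed by the patent):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_layer(X, hidden, epochs=200, lr=0.5):
    """Train one neuron processing layer as an autoencoder on inputs X
    (n_samples x n_in); return encoder weights and the layer's output."""
    n_in = X.shape[1]
    W = rng.normal(0.0, 0.1, (n_in, hidden))   # encoder weights
    b = np.zeros(hidden)
    V = rng.normal(0.0, 0.1, (hidden, n_in))   # decoder weights
    c = np.zeros(n_in)
    for _ in range(epochs):
        H = sigmoid(X @ W + b)                 # layer output (first/third output data)
        Xhat = sigmoid(H @ V + c)              # decoded data
        # iterate the layer using the difference between input and decoding
        dO = (Xhat - X) * Xhat * (1.0 - Xhat)
        dH = (dO @ V.T) * H * (1.0 - H)
        V -= lr * H.T @ dO / len(X)
        c -= lr * dO.mean(axis=0)
        W -= lr * X.T @ dH / len(X)
        b -= lr * dH.mean(axis=0)
    return W, b, sigmoid(X @ W + b)

# The first layer sees the (normalized) control sampling data; each later
# layer sees the previous layer's output.
X = rng.random((64, 8))                        # toy control sampling data
inp, layers = X, []
for hidden in (6, 4):                          # two neuron processing layers
    W, b, inp = train_layer(inp, hidden)
    layers.append((W, b))
print(inp.shape)
```

Because each layer is fitted against its own reconstruction error, no end-to-end backpropagation through the whole stack is needed during this phase, which is the reduced training workload the embodiment refers to.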
In an alternative embodiment, the neural network model includes a classification layer, and the step of training the preselected neural network model layer by layer using the training sample data further includes: inputting the fourth output data, output by the last of the multiple neuron processing layers, into the classification layer to obtain a classification result; and iterating the classification layer according to the classification result and the fault classification label of the corresponding training sample data.
In the above embodiment, training the classifier specifically ensures the classification accuracy of the model while reducing training complexity and workload compared with iterating the whole model, as in the prior art.
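The classification layer can, for instance, be a softmax classifier trained on the last neuron processing layer's output against the fault classification labels. A hedged sketch (the patent does not fix the classifier form; the clustered toy data, sizes, and learning rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy "fourth output data" from the last neuron processing layer, clustered
# by fault class: 0 = no fault, 1 = minor fault, 2 = typical fault.
y = rng.integers(0, 3, 90)
centers = rng.random((3, 4))
H = centers[y] + 0.05 * rng.normal(size=(90, 4))

W = np.zeros((4, 3))
b = np.zeros(3)
onehot = np.eye(3)[y]
for _ in range(500):                   # iterate the classification layer
    P = softmax(H @ W + b)             # classification result
    G = P - onehot                     # cross-entropy gradient vs. the labels
    W -= 0.5 * H.T @ G / len(H)
    b -= 0.5 * G.mean(axis=0)

pred = softmax(H @ W + b).argmax(axis=1)
print((pred == y).mean())
```

Only the classification layer's parameters are iterated here; the pretrained neuron processing layers stay fixed, which is what makes this step cheaper than whole-model iteration.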
In an alternative embodiment, before the control sampling data in the training sample data are input into the neuron processing layer to be trained, the step of training the preselected neural network model layer by layer further comprises: normalizing the control sampling data in each training sample, so that the normalized control sampling data are input into the neural network model.
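The patent does not specify the normalization; a common choice is per-feature min–max scaling, with the training-set ranges reused when normalizing inference data (a sketch; the function name and toy values are illustrative):

```python
import numpy as np

def minmax_normalize(X, lo=None, hi=None):
    """Scale each control output point to [0, 1]; pass the training-set
    lo/hi back in when normalizing inference-time data."""
    lo = X.min(axis=0) if lo is None else lo
    hi = X.max(axis=0) if hi is None else hi
    span = np.where(hi > lo, hi - lo, 1.0)   # guard constant columns
    return (X - lo) / span, lo, hi

# toy control sampling data: two control output points (e.g. voltages)
X = np.array([[220.0, 3.1],
              [235.0, 3.4],
              [228.0, 3.0]])
Xn, lo, hi = minmax_normalize(X)
print(Xn)
```

Normalization keeps control output points with different physical ranges (e.g. bus voltages vs. gate signals) from dominating the layer-wise reconstruction error.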
In an alternative embodiment, the compressor control element includes one or more of a compressor main control chip, a compressor electrolytic capacitor, an intelligent power module, and a compressor insulated gate bipolar transistor.
In this embodiment, whether the compressor is faulty is evaluated from the perspective of multiple compressor control elements, improving fault identification accuracy.
In an alternative embodiment, the training sample data further includes operation data of the compressor and corresponding fault classification labels.
In a second aspect, the present invention provides a fault identification method, applied to an air conditioner, the fault identification method comprising:
collecting the control output corresponding to the compressor control element during operation of the air conditioner as test data;
inputting the test data into a fault recognition model to obtain a fault recognition result; the fault recognition model is a model trained by the fault recognition model training method of any of the preceding embodiments.
In an alternative embodiment, if the type of the fault cannot be determined from the control output, the fault identification method further includes:
collecting real-time operation data of the compressor;
and inputting the real-time operation data into a fault recognition model to determine the fault recognition result.
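The fallback logic above can be sketched as a two-stage check: try the control-output model first, and collect real-time operation data only when the fault type cannot be determined. All function names, thresholds, and the None-means-undetermined convention here are hypothetical:

```python
def identify_fault(control_model, control_output, collect_runtime, runtime_model):
    """Stage 1: classify from the control output; None means the fault type
    could not be determined. Stage 2: fall back to real-time operation data."""
    result = control_model(control_output)
    if result is not None:
        return result
    return runtime_model(collect_runtime())

# hypothetical stand-ins for the two recognition models
control_model = lambda x: "minor fault" if x > 0.7 else None
runtime_model = lambda x: "no fault" if x < 0.5 else "typical fault"

print(identify_fault(control_model, 0.9, lambda: 0.1, runtime_model))  # minor fault
```

Deferring the runtime-data collection to the second stage keeps the common case (fault type determined from control outputs alone) cheap.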
In an alternative embodiment, in the case of determining that a new fault occurs, the fault identification method further includes:
acquiring the target control output collected in a preset time period, where the preset time period includes the occurrence time of the new fault;
and associating the target control output with the fault classification label corresponding to the new fault to generate new training sample data, so that the fault recognition model is update-trained using the fault recognition model training method.
In a third aspect, the present invention provides a failure recognition model training apparatus, comprising:
the acquisition module is used for acquiring training sample data; the training sample data comprises control sampling data and corresponding fault classification labels; the control sampling data is the control output quantity of a compressor control element;
and the training module, used to train the preselected neural network model layer by layer using the training sample data to obtain the fault identification model.
In a fourth aspect, the present invention provides a fault recognition apparatus, for use in an air conditioner, comprising:
the acquisition module, used to collect the control output corresponding to the compressor control element during operation of the air conditioner as test data;
and the identification module, used to input the test data into a fault identification model to obtain a fault identification result; the fault recognition model is a model trained by the fault recognition model training method of any of the preceding embodiments.
In a fifth aspect, the present invention provides an air conditioner including a processor and a memory, the memory storing machine-executable instructions that the processor can execute to implement the fault identification model training method of any of the preceding embodiments, or to implement the fault identification method of any of the preceding embodiments.
In a sixth aspect, the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the fault recognition model training method of any of the preceding embodiments, or implements the fault identification method of any of the preceding embodiments.
Drawings
FIG. 1 is a diagram showing an example of the structure of a neural network model according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an electronic device according to an embodiment of the present invention;
FIG. 3 is a flowchart illustrating steps of a method for training a failure recognition model according to an embodiment of the present invention;
FIG. 4 is one of the sub-step flowcharts of step S102 provided in the embodiment of the present invention;
FIG. 5 is a second flowchart of the substep of step S102 according to an embodiment of the present invention;
FIG. 6 is a flowchart illustrating steps of a fault identification method according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a training device for a failure recognition model according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of a fault recognition device according to an embodiment of the present invention.
Reference numerals illustrate:
100-air conditioner; 110-memory; 120-processor; 130-communication module; 300-fault recognition model training device; 301-acquisition module; 302-training module; 400-fault recognition device; 401-acquisition module; 402-identification module.
Detailed Description
With the rapid increase in installed air conditioners, post-installation detection and safe maintenance of the equipment are very important. The compressor and its controller are core components of the air conditioner; because of their special installation positions, they usually cannot be overhauled in time during use, so early minor faults are hard to discover. Moreover, in some air conditioners the critical devices of the compressor and controller have run past their specified service life; with continued long-term operation, faults occur. For example, ageing of the compressor's internal devices causes abnormal operation, and critical devices on the controller, such as the main control chip, electrolytic capacitor, IPM module, and IGBT, age as their working time accumulates, so that their output control data exhibit steps, offsets, abnormal amplitudes, and other problems caused by functional failure, resulting in abnormal operation of the air conditioner. If such faults are not found and handled in time, continued operation of the faulty air conditioner can cause accidents, ultimately affecting the service life of the air conditioner and causing property loss.
In recent years, the failure rate of air conditioners has gradually increased. If the compressor and controller of every air conditioner were maintained only manually, a great amount of labor and material would be consumed, so purely manual safety maintenance can no longer meet the demand.
In order to solve the above problems, the embodiments of the present invention provide a fault recognition model training method, a fault recognition method, and related devices.
In order that the above objects, features and advantages of the invention will be readily understood, a more particular description of the invention will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings.
The training sample data in the embodiments of the invention consist of data bodies and data labels in one-to-one correspondence. A data body may be the control output of a compressor control element or the operation data of the compressor. A data label is a fault classification label, such as no fault, minor fault, or typical fault. For convenience, "data label" and "fault classification label" are used interchangeably below. In some embodiments, the training sample data may be generated from control outputs or compressor operation data collected from air conditioners under different fault classifications.
The compressor control element in the embodiment of the invention is a control element capable of affecting the operation of the compressor, such as one or more of a compressor main control chip, a compressor electrolytic capacitor, an Intelligent Power Module (IPM) and a compressor Insulated Gate Bipolar Transistor (IGBT).
Different sample training sets may be constructed for the compressor control elements and for the compressor itself, each storing the corresponding training sample data; likewise, different sample training sets may be constructed for different compressor control elements. Combining the different sample training sets yields the offline training data set.
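The combination of per-element sample training sets into an offline training data set can be sketched as follows, using the label-type serial numbers from the text (1 = no fault, 2 = minor fault, 3 = typical fault); the concrete control-output values are toy data:

```python
def make_set(groups_by_label):
    """Flatten {label-type serial number: [control-output groups]} into
    (group, label) training samples."""
    return [(group, label)
            for label, groups in groups_by_label.items()
            for group in groups]

# toy control-output groups for the main control chip (A) and the
# electrolytic capacitor (C)
A = make_set({1: [(1.0, 1.1)], 2: [(1.4, 1.5)], 3: [(2.0, 2.2)]})
C = make_set({1: [(0.9, 1.0)], 3: [(1.8, 1.9)]})

offline_dataset = A + C
print(len(offline_dataset))
```

Each element-specific set keeps its own group length (p, q, e, or r in the text), so concatenating sets only requires that each sample carry its own label.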
Details of constructing different sample training sets are presented below:
(1) Construct the sample training set A = (A1, A2, A3) corresponding to the main control chip.
Optionally, A1 is a set of training sample data generated from the control output of the compressor main control chip in the fault-free state, A1 = {(a_1, a_2, ..., a_p)_(I,J)}_n. A group (a_1, a_2, ..., a_p) represents the control outputs (i.e. control sampling data, for example voltage values) collected at the same time at p control output points of the main control chip. n is the number of groups of control outputs; during fault-free operation, multiple groups can be acquired at different points in time. (I, J) is the index of a group's data label, through which the corresponding label can be found: I ∈ {1, 2, 3, ...} is the serial number of the data label, its maximum being the total number of data labels; J ∈ {1, 2, 3, ...} is the serial number of the label type, its maximum being the total number of label types. For example, if the label type "no fault" has serial number 1, the index of the data label for each group of control outputs in A1 is (i, 1). p is an integer giving the training data segment length; over-long training sample data can be cut to this length.
Optionally, A2 is a set of training sample data generated from the control output of the compressor main control chip in the minor fault state, A2 = {(a′_1, a′_2, ..., a′_p)_(I,J)}_n. A group (a′_1, a′_2, ..., a′_p) represents the control outputs collected at the same time at the p control output points; n is the number of groups, acquired at different points in time during minor-fault operation. (I, J) is the data label index as above. For example, if the label type "minor fault" has serial number 2, the index of the data label for each group of control outputs in A2 is (i, 2).
Optionally, A3 is a set of training sample data generated from the control output of the compressor main control chip in the typical fault state, A3 = {(a″_1, a″_2, ..., a″_p)_(I,J)}_n. A group (a″_1, a″_2, ..., a″_p) represents the control outputs collected at the same time at the p control output points; n is the number of groups, acquired at different points in time during typical-fault operation. (I, J) is the data label index as above. For example, if the label type "typical fault" has serial number 3, the index of the data label for each group of control outputs in A3 is (i, 3).
(2) Construct the sample training set C = (C1, C2, C3) corresponding to the electrolytic capacitor.
Optionally, C1 is a set of training sample data generated from the control output of the compressor electrolytic capacitor in the fault-free state, C1 = {(c_1, c_2, ..., c_q)_(I,J)}_n. A group (c_1, c_2, ..., c_q) represents the control outputs (i.e. control sampling data, for example voltage values) collected at the same time at q control output points of the electrolytic capacitor. n is the number of groups; during fault-free operation, multiple groups can be acquired at different points in time. (I, J) is the index of a group's data label, through which the corresponding label can be found: I ∈ {1, 2, 3, ...} is the serial number of the data label and J ∈ {1, 2, 3, ...} is the serial number of the label type. For example, if the label type "no fault" has serial number 1, the index of the data label for each group of control outputs in C1 is (i, 1). q is an integer giving the training data segment length; over-long training sample data can be cut to this length.
Optionally, C2 is a set of training sample data generated from the control output of the compressor electrolytic capacitor in the minor fault state, C2 = {(c′_1, c′_2, ..., c′_q)_(I,J)}_n, defined in the same way, with the groups acquired at different points in time during minor-fault operation. For example, if the label type "minor fault" has serial number 2, the index of the data label for each group of control outputs in C2 is (i, 2).
Optionally, C3 is a set of training sample data generated from the control output of the compressor electrolytic capacitor in the typical fault state, C3 = {(c″_1, c″_2, ..., c″_q)_(I,J)}_n, defined in the same way, with the groups acquired at different points in time during typical-fault operation. For example, if the label type "typical fault" has serial number 3, the index of the data label for each group of control outputs in C3 is (i, 3).
(3) And constructing a sample training set D, D= (D1, D2 and D3) corresponding to the intelligent power module.
Optionally, D1 is a set of training sample data generated based on the control output of the intelligent power module in the fault-free state, and D 1={(d1,d2...de)i,j n above, and a set of "D 1,d2...de" represents the control output (i.e., control sampling data) collected at the e control output points of the intelligent power module at the same time. n represents the number of sets of control outputs, it being understood that under fault-free operation, multiple sets of d 1,d2...de can be acquired at different points in time. The above (I, J) refers to an index of the data tag of "d 1,d2...de", through which a corresponding data tag can be found, specifically, I e (1, 2, 3.) represents a serial number of the data tag, J e (1, 2, 3.) represents a serial number of a data tag type. The above-mentioned I takes an integer value representing the total number of data tags. J is also an integer representing the total number of data tag categories. For example, if the serial number of the label type corresponding to "no fault" is 1, the index of the data label corresponding to each group of control output in D1 is (i, 1), respectively. e is also an integer value, representing the length of the training data segment. If the training sample data is too long, the length of the training data segment can be cut.
Optionally, D2 is a set of training sample data generated based on the control output of the intelligent power module in the slight fault state, D 2={(d′1,d′2...d′e)i,j n, and a set of "D' 1,d′2...d′e" represents the control output (i.e. control sampling data) collected at e control output points of the intelligent power module at the same time. n represents the number of sets of control outputs, it being understood that multiple sets d' 1,d′2...d′e may be acquired at different points in time during operation in a light fault condition. (I, J) refers to the index of the data tag of "d' 1,d′2...d′e", I represents the serial number of the data tag, J represents the serial number of the data tag type. The above-mentioned I takes an integer value representing the total number of data tags. J is also an integer representing the total number of data tag categories. For example, if the number of the tag type corresponding to the "slight failure" is 2, the index of the data tag corresponding to each group of control output in D2 is (i, 2), respectively.
Optionally, D3 is a set of training sample data generated based on the control output of the intelligent power module of the compressor in the typical fault state, D3 = {(d″1, d″2, ..., d″e)i,j}n. A group "d″1, d″2, ..., d″e" represents the control outputs (i.e., control sampling data) collected at the e control output points of the intelligent power module at the same time. n represents the number of groups of control outputs; multiple groups of d″1, d″2, ..., d″e may be acquired at different points in time during operation in the typical fault state. (i, j) is the index of the data tag of "d″1, d″2, ..., d″e", where i ∈ {1, 2, 3, ...} is the serial number of the data tag and j ∈ {1, 2, 3, ...} is the serial number of the data tag type; I and J are integers representing the total number of data tags and of data tag types, respectively. For example, if the serial number of the tag type corresponding to "typical fault" is 3, the index of the data tag corresponding to each group of control outputs in D3 is (i, 3).
(4) A sample training set U = (U1, U2, U3) corresponding to the insulated gate bipolar transistor of the compressor is constructed.
Optionally, U1 is a set of training sample data generated based on the control output of the insulated gate bipolar transistor in the fault-free state, U1 = {(u1, u2, ..., ur)i,j}n. A group "u1, u2, ..., ur" represents the control outputs (i.e., control sampling data) collected at the r control output points of the insulated gate bipolar transistor at the same time. n represents the number of groups of control outputs; under fault-free operation, multiple groups of u1, u2, ..., ur may be acquired at different points in time. (i, j) is the index of the data tag of "u1, u2, ..., ur", through which the corresponding data tag can be found; specifically, i ∈ {1, 2, 3, ...} is the serial number of the data tag and j ∈ {1, 2, 3, ...} is the serial number of the data tag type, while I and J are integers representing the total number of data tags and of data tag types, respectively. For example, if the serial number of the tag type corresponding to "no fault" is 1, the index of the data tag corresponding to each group of control outputs in U1 is (i, 1). r is also an integer, representing the length of the training data segment; if the training sample data is too long, it can be cropped to the length of the training data segment.
Optionally, U2 is a set of training sample data generated based on the control output of the insulated gate bipolar transistor in the slight fault state, U2 = {(u′1, u′2, ..., u′r)i,j}n. A group "u′1, u′2, ..., u′r" represents the control outputs (i.e., control sampling data) collected at the r control output points of the insulated gate bipolar transistor at the same time. n represents the number of groups of control outputs; multiple groups of u′1, u′2, ..., u′r may be acquired at different points in time during operation in the slight fault state. (i, j) is the index of the data tag of "u′1, u′2, ..., u′r", where i is the serial number of the data tag and j is the serial number of the data tag type; I and J are integers representing the total number of data tags and of data tag types, respectively. For example, if the serial number of the tag type corresponding to "slight fault" is 2, the index of the data tag corresponding to each group of control outputs in U2 is (i, 2).
Optionally, U3 is a set of training sample data generated based on the control output of the insulated gate bipolar transistor of the compressor in the typical fault state, U3 = {(u″1, u″2, ..., u″r)i,j}n. A group "u″1, u″2, ..., u″r" represents the control outputs (i.e., control sampling data) collected at the r control output points of the insulated gate bipolar transistor at the same time. n represents the number of groups of control outputs; multiple groups of u″1, u″2, ..., u″r may be acquired at different points in time during operation in the typical fault state. (i, j) is the index of the data tag of "u″1, u″2, ..., u″r", where i ∈ {1, 2, 3, ...} is the serial number of the data tag and j ∈ {1, 2, 3, ...} is the serial number of the data tag type; I and J are integers representing the total number of data tags and of data tag types, respectively. For example, if the serial number of the tag type corresponding to "typical fault" is 3, the index of the data tag corresponding to each group of control outputs in U3 is (i, 3).
(5) A sample training set V = (V1, V2, V3) corresponding to the compressor operation data is constructed.
Optionally, V1 is a set of training sample data generated based on the compressor operation data in the fault-free state, V1 = {(v1, v2, ..., vt)i,j}n. A group "v1, v2, ..., vt" represents the data collected at the t operation data sampling points of the compressor at the same time. n represents the number of groups of operation data; under fault-free operation, multiple groups of v1, v2, ..., vt may be acquired at different points in time. (i, j) is the index of the data tag of "v1, v2, ..., vt", through which the corresponding data tag can be found; specifically, i ∈ {1, 2, 3, ...} is the serial number of the data tag and j ∈ {1, 2, 3, ...} is the serial number of the data tag type, while I and J are integers representing the total number of data tags and of data tag types, respectively. For example, if the serial number of the tag type corresponding to "no fault" is 1, the index of the data tag corresponding to each group of operation data in V1 is (i, 1). t is also an integer, representing the length of the training data segment; if the training sample data is too long, it can be cropped to the length of the training data segment.
Optionally, V2 is a set of training sample data generated based on the operation data of the compressor in the slight fault state, V2 = {(v′1, v′2, ..., v′t)i,j}n. A group "v′1, v′2, ..., v′t" represents a group of operation data collected at the t operation data sampling points of the compressor at the same time. n represents the number of groups of operation data; multiple groups of v′1, v′2, ..., v′t may be acquired at different points in time during operation in the slight fault state. (i, j) is the index of the data tag of "v′1, v′2, ..., v′t", where i is the serial number of the data tag and j is the serial number of the data tag type; I and J are integers representing the total number of data tags and of data tag types, respectively. For example, if the serial number of the tag type corresponding to "slight fault" is 2, the index of the data tag corresponding to each group of operation data in V2 is (i, 2).
Optionally, V3 is a set of training sample data generated based on the operation data of the compressor in the typical fault state, V3 = {(v″1, v″2, ..., v″t)i,j}n. A group "v″1, v″2, ..., v″t" represents a group of operation data collected at the t operation data sampling points of the compressor at the same time. n represents the number of groups of operation data; multiple groups of v″1, v″2, ..., v″t may be acquired at different points in time during operation in the typical fault state. (i, j) is the index of the data tag of "v″1, v″2, ..., v″t", where i ∈ {1, 2, 3, ...} is the serial number of the data tag and j ∈ {1, 2, 3, ...} is the serial number of the data tag type; I and J are integers representing the total number of data tags and of data tag types, respectively. For example, if the serial number of the tag type corresponding to "typical fault" is 3, the index of the data tag corresponding to each group of operation data in V3 is (i, 3).
From the sample training sets A, C, D, U, V described above, an offline training data set M is created. In some embodiments, one or more of the sample training sets A, C, D, U, V may be combined to obtain the offline training data set M. For example, if the sample training sets A, C, D, U, V are combined together, the result is M = {A, C, D, U, V}. In this way, a plurality of pieces of training sample data can be obtained directly from the offline training data set M for training the neural network model. Each piece of training sample data may be represented as X = [x1, x2, ..., xy], where, if the training sample data comes from the sample training set A, x corresponds to a and y is equal to p. Similarly, if the training sample data comes from the sample training set C, x corresponds to c and y is equal to q, and so on.
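As a concrete illustration of the combination M = {A, C, D, U, V}, the following minimal Python sketch models each sample training set as a list of (data body, tag index) pairs; all names and values here are toy assumptions for illustration, not data from the embodiment.

```python
# Hypothetical sketch: each sample training set is a list of pairs
# (data_body, (i, j)), where j is the serial number of the tag type
# (1 = no fault, 2 = slight fault, 3 = typical fault).

def build_offline_dataset(*sample_sets):
    """Combine one or more sample training sets into the offline set M."""
    m = []
    for sample_set in sample_sets:
        m.extend(sample_set)
    return m

# Toy sets: A with p = 3 control sampling points, V with t = 2 operation points.
A = [([0.1, 0.2, 0.3], (1, 1)), ([0.2, 0.1, 0.4], (2, 1))]
V = [([5.0, 7.5], (1, 3))]
M = build_offline_dataset(A, V)
```

Each piece of training sample data can then be drawn directly from M, with its data body and fault classification tag kept together.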
The neural network model referred to in the embodiments of the present invention may be a deep neural network (DNN), also referred to as a multi-layer neural network, which may be understood as a neural network having multiple hidden layers. As shown in fig. 1, according to the positions of the different layers, the network inside a DNN may be divided into an input layer, a plurality of hidden layers, an output layer, and a classification layer. The input layer, the plurality of hidden layers, and the output layer are all essentially neuron processing layers made up of neurons. The input layer is the first of the neuron processing layers in the neural network model, and the output layer is the last of the neuron processing layers in the neural network model.
The neural network model described above can combine low-level features of the data through multiple layers of nonlinear transformation to form a model of more abstract high-level features, and can thus effectively extract potential features of the data. This training mechanism and feature extraction method can effectively analyze the potential features of the compressor operation data and the control outputs. The neural network model is constructed by stacking a plurality of auto-encoders. Each auto-encoder layer is equivalent to a simple artificial neural network and is trained by making its output signal reconstruct its input signal; the weight and bias parameters in each layer of the neural network are adjusted through training, and the output of the trained neural network model can be regarded as a representation of the input data.
In some embodiments, the neural network model described above is generated as follows: Net = Function(tr).
Wherein Net represents the generated neural network model, tr represents the parameter configuration of the neural network model, and the function Function(·) is used for generating the neural network model. θ = {θ1, θ2, ..., θN} represents the parameter sets of the neuron processing layers in the neural network model, with θ1 = (w1, b1), where w1 and b1 are respectively the weight matrix and the bias matrix of the first of the neuron processing layers; On represents the number of neurons in the n-th neuron processing layer, n ∈ {1, 2, ..., N}, where N is a positive integer representing the number of neuron processing layers in the neural network model; X = [x1, x2, ..., xy] represents the training sample data used for training the neural network model, and the number of neurons in the first neuron processing layer is equal to the length of the input training data segment, denoted O1 = y. When the parameters θn = (wn, bn) of the neural network model are initialized, each weight matrix of the neural network model is initialized to wn = rand(On, p), where the function rand(·) randomly generates values between 0 and 1, and all neuron bias matrices in the network are initialized to zero matrices, denoted bn = zeros(On, 1).
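The initialisation wn = rand(On, p), bn = zeros(On, 1) can be sketched in a few lines; taking the second weight dimension as the previous layer's size, and the concrete layer sizes, are assumptions for illustration.

```python
import random

def init_parameters(layer_sizes, seed=0):
    """Initialise theta_n = (w_n, b_n) for each neuron processing layer:
    weights random in [0, 1) (cf. w_n = rand(O_n, .)), biases all zero
    (cf. b_n = zeros(O_n, 1)). layer_sizes[0] is the input length y."""
    rng = random.Random(seed)
    theta = []
    for n in range(1, len(layer_sizes)):
        rows, cols = layer_sizes[n], layer_sizes[n - 1]
        w = [[rng.random() for _ in range(cols)] for _ in range(rows)]
        b = [0.0] * rows
        theta.append((w, b))
    return theta

# Toy configuration: y = 8 inputs (O1 = y), then layers of 4 and 2 neurons.
theta = init_parameters([8, 4, 2])
```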
In some embodiments, the fault recognition model training method, the fault recognition method and the related devices provided by the embodiments of the present invention may be applied to the electronic device shown in fig. 2.
In some embodiments, the electronic device may be an air conditioner. In other embodiments, the electronic device may be a computer device communicatively coupled to the air conditioner, where the computer device may obtain the required data from the air conditioner to train or perform the fault identification process on the fault identification model.
Taking an electronic device as an example of the air conditioner 100, as shown in fig. 2, a block diagram of the air conditioner 100 is shown. The air conditioner 100 includes a memory 110, a processor 120, and a communication module 130. The memory 110, the processor 120, and the communication module 130 are electrically connected directly or indirectly to each other to realize data transmission or interaction. For example, the components may be electrically connected to each other via one or more communication buses or signal lines.
Wherein the memory 110 is used for storing programs or data. The memory 110 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), etc.
The processor 120 is used to read/write data or programs stored in the memory 110 and perform corresponding functions.
The communication module 130 is used for establishing a communication connection between the air conditioner 100 and other communication terminals through a network, and for receiving and transmitting data through the network.
It should be understood that the structure shown in fig. 2 is merely schematic; the air conditioner 100 may also include more or fewer components than those shown in fig. 2, or have a different configuration from that shown in fig. 2. The components shown in fig. 2 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 3, fig. 3 illustrates a fault recognition model training method provided by an embodiment of the present invention. The fault identification model training method is applied to the electronic equipment. As shown in fig. 3, the fault recognition model training method may include the following steps:
Step S101, training sample data is acquired.
In some embodiments, training sample data may be obtained from an offline training data set. For example, each piece of data in the offline training data set may be used as training sample data.
It will be appreciated that if a piece of training sample data is extracted from any one of the sample training sets A, C, D, U corresponding to the offline training data set, the piece of training sample data is composed of the control sample data and the corresponding fault classification label. The control sampling data is the control output quantity of a compressor control element (namely one of a compressor main control chip, a compressor electrolytic capacitor, an intelligent power module and a compressor insulated gate bipolar transistor). If one piece of training sample data is extracted from the sample training set V corresponding to the offline training data set, the piece of training sample data consists of the operation data of the compressor and the corresponding fault classification label.
In some embodiments, when the length of the training sample data exceeds the data length receivable by the input layer of the neural network model, on the one hand, the data extracted from the offline training data set may be cropped so that the length of the training sample data obtained after cropping does not exceed the data length receivable by the input layer. Alternatively, when selecting a neural network model for training, a model with a sufficient number of input-layer neurons may be selected to increase the data length that the input layer can receive.
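The cropping option can be sketched as follows (the helper name and values are illustrative assumptions):

```python
def crop_to_input_length(data_body, input_length):
    """Crop a data body so its length does not exceed the number of
    neurons in the input layer."""
    return data_body[:input_length]

# Toy data body of length 5 cropped to an input layer of 3 neurons.
cropped = crop_to_input_length([0.3, 0.5, 0.1, 0.9, 0.7], 3)
```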
Step S102, training the preselected neural network model layer by utilizing training sample data to obtain a fault identification model.
In some embodiments, training may be performed for each layer in the neural network model in turn. That is, each layer in the neural network model may be traversed (it is understood that the traversed layers may include the neuron processing layers and may also include the classification layer), with each traversed neuron processing layer taken in turn as the neuron processing layer to be trained. The neuron processing layer to be trained is then trained.
When the to-be-trained neuron processing layer is the first layer of the multiple neuron processing layers, as shown in fig. 4, the step S102 may include:
step S102-1, inputting the data body in the training sample data into the neuron processing layer to be trained to obtain first output data.
The first output data is a feature matrix extracted by a first layer neuron processing layer.
The data body may refer to control sampling data or compressor operation data. In the actual running process, the data type pointed by the data body is different according to the different training sample data. For example, the training sample data is extracted from the sample training set a, and the corresponding data body is control sampling data (for example, "a1, a2 … ap") corresponding to the main control chip of the compressor.
In some embodiments, based on the data body in the training sample data, the first output data may be extracted using the formula: h1 = δ(w1·X + b1).
Wherein h1 represents the first output data; δ(·) is the activation function of each neuron, expressed as the sigmoid function δ(z) = 1/(1 + e^(−z)), with parameter z = w1·X + b1; w1 and b1 above are the network parameters θ1 = (w1, b1) of the first neuron processing layer before the iteration of the neural network model, which are known quantities.
Step S102-2, decoding the first output data to obtain first decoded data.
In some embodiments, the first output data may be decoded using a decoding network to obtain the first decoded data. For example, from the first output data, the first decoded data is calculated using the formula: Y1 = δ′(w′1·h1 + d1).
Wherein δ(·) is the sigmoid activation function of the first neuron processing layer, δ′(·) is the activation function of the decoding network, θ′1 = (w′1, d1) represents the parameters of the decoding network, w′1 is the weight matrix of the decoding network, and d1 is the bias matrix of the decoding network.
And step S102-3, iterating the neuron processing layer to be trained by utilizing the difference between the data body and the first decoding data.
In some embodiments, based on the difference between the data body and the first decoded data, the network parameters of the neuron processing layer to be trained, i.e. θ1 = (w1, b1), and of the corresponding decoding network, i.e. θ′1 = (w′1, d1), may be updated by error back-propagation.
In some embodiments, the network parameters θ1 = (w1, b1) of the first neuron processing layer are optimized by minimizing the reconstruction error f(X, Y1, w1, b1), so that the output of the network approaches its input; the reconstruction error may be calculated as f(X, Y1, w1, b1) = (1/2)·‖X − Y1‖², where Y1 is the first decoded data and X is the data body in the training sample data. Further, the network parameters are updated through a gradient descent strategy, the parameter updates being calculated as: w1,s+1 = w1,s − α·∂f/∂w1 and b1,s+1 = b1,s − α·∂f/∂b1.
Where α is the learning rate set at initialization, and w1,s+1 and b1,s+1 represent the model parameters updated by back-propagation. s is the index of the layer currently being updated during error back-propagation, and S represents the maximum number of error back-propagation layers; for example, if the first neuron processing layer and the corresponding decoding network are updated through error back-propagation, then S takes the value 2, with the decoding network corresponding to s = 1 and the first neuron processing layer corresponding to s = 2. ∂f/∂w1 represents the derivative of the error function with respect to the network weights w1, and ∂f/∂b1 is the derivative of the error function with respect to the network bias b1; these serve as the direction of the neural network gradient descent.
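As an illustration of steps S102-1 to S102-3, the following pure-Python sketch runs one encode, decode, and update cycle of a first-layer auto-encoder with a single hidden neuron. The sizes, initial weights, and learning rate α = 0.1 are toy assumptions, not values from the embodiment; the update follows the gradient descent rule above.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def first_layer_step(x, w1, b1, w1p, d1, alpha):
    """One encode/decode/update cycle: h1 = sigmoid(w1·x + b1),
    Y1 = sigmoid(w1'·h1 + d1), squared reconstruction error, then one
    gradient descent step on (w1, b1) and the decoding network (w1', d1)."""
    h = sigmoid(sum(wj * xj for wj, xj in zip(w1, x)) + b1)   # encode
    y = [sigmoid(wk * h + dk) for wk, dk in zip(w1p, d1)]     # decode
    loss = 0.5 * sum((xk - yk) ** 2 for xk, yk in zip(x, y))
    # back-propagate the reconstruction error through decoder and encoder
    dz2 = [(yk - xk) * yk * (1 - yk) for xk, yk in zip(x, y)]
    dz1 = sum(g * wk for g, wk in zip(dz2, w1p)) * h * (1 - h)
    w1 = [wj - alpha * dz1 * xj for wj, xj in zip(w1, x)]
    b1 = b1 - alpha * dz1
    w1p = [wk - alpha * g * h for wk, g in zip(w1p, dz2)]
    d1 = [dk - alpha * g for dk, g in zip(d1, dz2)]
    return (w1, b1, w1p, d1), loss

x = [0.2, 0.8]                                    # toy data body, y = 2
params = ([0.5, -0.3], 0.1, [0.4, 0.7], [0.0, 0.0])
_, loss_before = first_layer_step(x, *params, alpha=0.0)   # evaluate only
new_params, _ = first_layer_step(x, *params, alpha=0.1)    # one update
_, loss_after = first_layer_step(x, *new_params, alpha=0.0)
```

A single gradient step lowers the reconstruction error, mirroring how repeated updates make the decoded output approach the input.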
When the to-be-trained neuron processing layer is not the first layer of the plurality of neuron processing layers, as shown in fig. 5, the step S102 includes the following sub-steps:
Step S102-4, inputting the second output data of the adjacent upper neuron processing layer into the neuron processing layer to be trained to obtain the third output data.
In some embodiments, based on the second output data, the third output data may be extracted using the formula: hi = δ(wi·hi−1 + bi).
Wherein hi−1 represents the second output data; i represents the number of the layer in the model where the neuron processing layer to be trained is located and is a positive integer greater than 1; in particular, when i = 2, the adjacent upper neuron processing layer is the first neuron processing layer in the model, and the second output data is then the first output data. δ(·) is the activation function of each neuron, expressed as the sigmoid function δ(z) = 1/(1 + e^(−z)), with parameter z = wi·hi−1 + bi; wi and bi above are the network parameters θi = (wi, bi) of the neuron processing layer to be trained before the iteration of the neural network model, which are known quantities.
And step S102-5, decoding the third output data to obtain second decoded data.
In some embodiments, the third output data may be decoded using the decoding network corresponding to the neuron processing layer to be trained, so as to obtain the second decoded data. For example, from the third output data, the second decoded data is calculated using the formula: Yi = δ′(w′i·hi + di).
Wherein δ(·) is the sigmoid activation function of the neuron processing layer to be trained, δ′(·) is the activation function of the corresponding decoding network, θ′i = (w′i, di) represents the parameters of the corresponding decoding network, w′i is the weight matrix of the decoding network corresponding to the neuron processing layer to be trained, and di is the bias matrix of that decoding network. i represents the number of the layer in the model where the neuron processing layer to be trained is located, and is a positive integer greater than 1.
In some embodiments, the decoding networks corresponding to the different neuron processing layers may be different.
And S102-6, iterating the neuron processing layer to be trained by utilizing the difference between the second output data and the second decoding data.
In some embodiments, based on the difference between the second output data and the second decoded data, the network parameters of the neuron processing layer to be trained, i.e. θi = (wi, bi), and of the corresponding decoding network, i.e. θ′i = (w′i, di), may be updated by error back-propagation.
In some embodiments, the network parameters θi = (wi, bi) of the neuron processing layer to be trained are optimized by minimizing the reconstruction error f(hi−1, Yi, wi, bi), so that the output of the network approaches its input; the reconstruction error may be calculated as f(hi−1, Yi, wi, bi) = (1/2)·‖hi−1 − Yi‖², where Yi is the second decoded data and hi−1 is the second output data. Further, the network parameters are updated through a gradient descent strategy, the parameter updates being calculated as: wi,s+1 = wi,s − α·∂f/∂wi and bi,s+1 = bi,s − α·∂f/∂bi.
Where α is the learning rate set at initialization, i represents the number of the layer in the model where the neuron processing layer to be trained is located (a positive integer greater than 1), and wi,s+1 and bi,s+1 represent the model parameters updated by back-propagation. s is the index of the layer currently being updated during error back-propagation, and S represents the maximum number of error back-propagation layers. ∂f/∂wi represents the derivative of the error function with respect to the network weights wi, and ∂f/∂bi is the derivative of the error function with respect to the network bias bi; these serve as the direction of the neural network gradient descent.
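Under this greedy layer-wise scheme, once a layer is trained its output becomes the input of the next layer to be trained. A minimal sketch of the forward propagation hi = δ(wi·hi−1 + bi) through a trained stack follows; the toy weights are assumptions for illustration.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer_forward(h_prev, w, b):
    """h_i = sigmoid(w_i · h_{i-1} + b_i) for one neuron processing layer."""
    return [sigmoid(sum(wkj * hj for wkj, hj in zip(row, h_prev)) + bk)
            for row, bk in zip(w, b)]

def forward_all(x, theta):
    """Propagate the data body through the stack: each layer's output
    serves as the second output data fed to the next layer."""
    h, outputs = x, []
    for w, b in theta:
        h = layer_forward(h, w, b)
        outputs.append(h)
    return outputs

theta = [([[0.5, -0.3]], [0.1]),            # layer 1: 2 inputs -> 1 neuron
         ([[1.0], [-1.0]], [0.0, 0.0])]     # layer 2: 1 input  -> 2 neurons
outs = forward_all([0.2, 0.8], theta)
```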
In addition, the neural network model includes a classification layer, and if the classification layer in the model is not trained, the model does not have classification capability, so in some embodiments, the step S102 further includes:
and inputting the fourth output data output by the last layer in the multiple layers of neuron processing layers into a classification layer to obtain a classification result. And iterating the classification layer according to the classification result and the corresponding fault classification label of the training sample data.
Take a Softmax classifier as the classification layer. The purpose of the Softmax classifier is to classify the fault type corresponding to the compressor operation data or the control outputs of the various compressor control elements, i.e., to distinguish normal data, the slight fault type, and the typical fault type. The data tag of training sample data X = [x1, x2, ..., xy] is denoted l(i), i ∈ {1, 2, ..., I}. The sample data X is taken as the input of the neural network model, and the output feature hN of the last neuron processing layer is taken as the input of the Softmax classifier; each output value is a probability value calculated using the likelihood function hφ(X). The goal of training the Softmax classifier is to optimize the parameter φ so that the cost function reaches a minimum.
The classification result is then obtained using the trained Softmax classifier. The training process and the classification process are expressed as follows:
Where l(i) represents the data tag, f(label = i | X; φ) is the probability given by the likelihood function hφ(X), and φ = [φ1, φ2, ..., φI] is the model parameter of the Softmax classifier. The probability value calculated by the function for each category is expressed as f(l = i | X; φ), and the classification result is finally determined as the category with the highest probability value.
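The Softmax classification can be sketched as follows; the parameters φ and feature values are toy assumptions, and the category serial numbers follow the labelling scheme above (1 = no fault, 2 = slight fault, 3 = typical fault).

```python
import math

def softmax_classify(h_n, phi):
    """Compute a probability f(l = i | X; phi) for each tag type from the
    last neuron processing layer output h_N, and return the most probable
    category (serial numbers start at 1)."""
    scores = [sum(pk * hk for pk, hk in zip(phi_i, h_n)) for phi_i in phi]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    probs = [e / total for e in exps]
    return probs, probs.index(max(probs)) + 1

phi = [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0]]      # toy parameters, 3 categories
probs, label = softmax_classify([0.9, 0.1], phi)
```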
In some embodiments, before inputting the data body in the training sample data into the neuron processing layer to be trained, the step S102 further includes:
The data body in each piece of training sample data is normalized, so that normalized control sampling data is input into the neural network model. For example, if the obtained training sample data is X = [x1, x2, ..., xy], normalization is performed using the formula: x̃m = (xm − xmin) / (xmax − xmin). Where m takes each value 1, 2, ..., y in turn, xmin represents the minimum value among x1, x2, ..., xy, and xmax represents the maximum value among x1, x2, ..., xy. After normalization, X̃ = [x̃1, x̃2, ..., x̃y] is obtained. Correspondingly, the data body of the training sample data mentioned in steps S102-1 to S102-6 is X̃.
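The min–max normalization above can be sketched as:

```python
def normalize(x):
    """Min-max normalisation: x_m -> (x_m - x_min) / (x_max - x_min)."""
    x_min, x_max = min(x), max(x)
    return [(v - x_min) / (x_max - x_min) for v in x]

# Toy data body; after normalisation the values lie in [0, 1].
xn = normalize([2.0, 4.0, 6.0, 10.0])
```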
Referring to fig. 6, the embodiment of the invention further provides a fault identification method. As shown in fig. 6, the fault identification method may include the steps of:
in step S201, during operation of the air conditioner 100, a control output corresponding to the compressor control element is collected as the inspection data.
In some embodiments, the compressor control element may be one or more of a compressor main control chip, a compressor electrolytic capacitor, an intelligent power module, and a compressor insulated gate bipolar transistor. The corresponding control output is collected by the collection device arranged at the output port of the compressor control element.
Step S202, inputting the test data into the fault recognition model to obtain a fault recognition result.
In some embodiments, the fault identification model is a model trained by the fault identification model training method.
In some embodiments, if the type of the fault is not determined by using the control output, the fault identification method further includes: collecting real-time operation data of a compressor; and inputting the real-time operation data into a fault recognition model to determine a fault recognition result.
In addition, in the case of determining that a new fault occurs, the fault identification method further includes:
And acquiring the target control output quantity and the target compressor operation data acquired in a preset time period. It is understood that the above-described preset period is a period including an occurrence time point corresponding to a new failure. And associating the target control output quantity with a fault classification label corresponding to the new fault to generate new training sample data. And adding new training sample data into the offline training data set, and repeatedly using the fault identification model training method to update and train the fault identification model.
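This update step can be sketched as follows; the list-of-pairs layout and label indices are illustrative assumptions, not the patent's data format.

```python
def add_new_fault_samples(offline_set, data_bodies, fault_label_index):
    """Append data captured in the preset time period around a new fault,
    each associated with the new fault's classification label, to the
    offline training data set before retraining."""
    for body in data_bodies:
        offline_set.append((body, fault_label_index))
    return offline_set

M = [([0.1, 0.2], (1, 1))]                          # existing labelled sample
M = add_new_fault_samples(M, [[0.9, 0.8]], (2, 4))  # new fault -> tag type 4
```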
In order to perform the corresponding steps in the foregoing embodiments and the various possible manners, an implementation manner of the failure recognition model training apparatus 300 is given below, and alternatively, the failure recognition model training apparatus 300 may employ the device structure of the air conditioner 100 shown in fig. 2. Further, referring to fig. 7, fig. 7 is a functional block diagram of a fault identification model training apparatus 300 according to an embodiment of the present invention. It should be noted that, the basic principle and the technical effects of the failure recognition model training apparatus 300 provided in this embodiment are the same as those of the foregoing embodiments, and for brevity, reference may be made to the corresponding contents of the foregoing embodiments. The failure recognition model training apparatus 300 includes: the acquisition module 301 and the training module 302.
An acquisition module 301, configured to acquire training sample data; the training sample data comprises control sampling data and corresponding fault classification labels; the control sampling data is the control output quantity of the control element of the compressor.
And the training module 302 is configured to train the pre-selected neural network model layer by using the training sample data to obtain the fault identification model.
In order to perform the corresponding steps in the foregoing embodiments and the various possible manners, an implementation manner of the fault identifying apparatus 400 is given below, and alternatively, the fault identifying apparatus 400 may employ the device structure of the air conditioner 100 shown in fig. 2. Further, referring to fig. 8, fig. 8 is a functional block diagram of a fault recognition device 400 according to an embodiment of the present invention. It should be noted that, the basic principle and the technical effects of the fault identifying apparatus 400 provided in this embodiment are the same as those of the foregoing embodiments, and for brevity, reference may be made to the corresponding contents of the foregoing embodiments. The fault recognition device 400 includes: the acquisition module 401 and the identification module 402.
The collection module 401 is configured to collect, as the inspection data, a control output corresponding to the compressor control element during the operation of the air conditioner 100.
The identification module 402 is configured to input the inspection data into a fault identification model to obtain a fault identification result.
Alternatively, the above modules may be stored in the memory 110 shown in fig. 2 in the form of software or Firmware (Firmware) or be solidified in an Operating System (OS) of the air conditioner 100, and may be executed by the processor 120 in fig. 2. Meanwhile, data, codes of programs, and the like, which are required to execute the above-described modules, may be stored in the memory 110.
In summary, embodiments of the invention provide a fault identification model training method, a fault identification method, and related devices. The fault identification model training method comprises: acquiring training sample data, where the training sample data comprises control sampling data and corresponding fault classification labels, and the control sampling data is the control output quantity of a compressor control element; and training a preselected neural network model layer by layer using the training sample data to obtain the fault identification model. By taking the control output quantity of the compressor control element under different fault types as training samples and training the neural network model layer by layer, a model is obtained that can identify whether the compressor has a fault from the control output quantities generated while the air conditioner runs. Compressor faults are thus identified promptly and accurately, the service life of the air conditioner is extended, and property loss is avoided.
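The layer-by-layer scheme summarized here is, in effect, greedy autoencoder pretraining: each neuron processing layer encodes its input, a decoder reconstructs that input, and the layer is iterated on the reconstruction difference; each later layer consumes the output of the layer before it. A minimal NumPy sketch under that reading (dimensions, learning rate, and epoch count are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def pretrain_layer(inputs, hidden_dim, lr=0.1, epochs=200):
    """Train one neuron processing layer as an autoencoder: encode the input,
    decode it back, and iterate the weights on the reconstruction difference."""
    n, d = inputs.shape
    w_enc = rng.normal(scale=0.1, size=(d, hidden_dim))
    w_dec = rng.normal(scale=0.1, size=(hidden_dim, d))
    for _ in range(epochs):
        h = sigmoid(inputs @ w_enc)   # the layer's output data
        recon = h @ w_dec             # decoded data
        err = recon - inputs          # difference driving the iteration
        w_dec -= lr * h.T @ err / n
        w_enc -= lr * inputs.T @ (err @ w_dec.T * h * (1 - h)) / n
    return w_enc, sigmoid(inputs @ w_enc)

# Layer 1 sees the control sampling data; each later layer sees the output
# of the layer immediately before it, as in the layer-by-layer scheme.
control_samples = rng.normal(size=(64, 8))
w1, h1 = pretrain_layer(control_samples, 16)
w2, h2 = pretrain_layer(h1, 8)
```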
Although the present invention is disclosed above, it is not limited thereto. Various changes and modifications may be made by those skilled in the art without departing from the spirit and scope of the invention, and the scope of the invention should therefore be determined by the appended claims.
Claims (12)
1. A method of training a fault recognition model, the method comprising:
acquiring training sample data, wherein the training sample data comprises control sampling data and corresponding fault classification labels, and the control sampling data is the control output quantity of a compressor control element;
training the preselected neural network model layer by layer using the training sample data to obtain the fault identification model; during operation of the air conditioner, collecting the control output quantity corresponding to the compressor control element as test data, and inputting the test data into the fault identification model to obtain a fault identification result; wherein the neural network model comprises a plurality of neuron processing layers, and the step of training the preselected neural network model layer by layer using the training sample data comprises:
when the neuron processing layer to be trained is the first of the plurality of neuron processing layers, inputting the control sampling data in the training sample data into the neuron processing layer to be trained to obtain first output data;
decoding the first output data to obtain first decoded data;
iterating the neuron processing layer to be trained by using the difference between the control sampling data and the first decoded data;
when the neuron processing layer to be trained is not the first of the plurality of neuron processing layers, inputting second output data, output by the neuron processing layer immediately preceding the neuron processing layer to be trained, into the neuron processing layer to be trained to obtain third output data;
decoding the third output data to obtain second decoded data;
and iterating the neuron processing layer to be trained by using the difference between the second output data and the second decoded data.
2. The method of claim 1, wherein the neural network model includes a classification layer, and wherein the step of training the preselected neural network model layer-by-layer using the training sample data further comprises:
inputting fourth output data output by the last of the plurality of neuron processing layers into the classification layer to obtain a classification result;
and iterating the classification layer according to the classification result and the fault classification label corresponding to the training sample data.
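Claim 2's classification layer can be read as a softmax layer fitted on the features produced by the last pretrained neuron processing layer and iterated on the gap between its predictions and the fault classification labels. A hedged sketch of that reading (the toy features and labels below stand in for encoded control samples; nothing here is fixed by the patent):

```python
import numpy as np

rng = np.random.default_rng(2)

def train_classifier(features, labels, n_classes, lr=0.5, epochs=300):
    """Fit a softmax classification layer on the features produced by the
    last pretrained layer, iterating on prediction/label mismatch."""
    n, d = features.shape
    w = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    onehot = np.eye(n_classes)[labels]
    for _ in range(epochs):
        logits = features @ w + b
        exp = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs = exp / exp.sum(axis=1, keepdims=True)
        grad = (probs - onehot) / n    # cross-entropy gradient
        w -= lr * features.T @ grad
        b -= lr * grad.sum(axis=0)
    return w, b

# Toy features/labels standing in for encoded control samples and fault labels.
centers = rng.normal(size=(3, 8)) * 2
feats = rng.normal(size=(60, 8)) + np.eye(3).repeat(20, axis=0) @ centers
labels = np.repeat(np.arange(3), 20)
w, b = train_classifier(feats, labels, 3)
preds = np.argmax(feats @ w + b, axis=1)
```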
3. The method of claim 1, wherein, before inputting the control sampling data in the training sample data into the neuron processing layer to be trained, the step of training the preselected neural network model layer by layer using the training sample data further comprises:
And carrying out normalization processing on control sampling data in each training sample data so as to input the control sampling data after normalization processing into the neural network model.
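The patent does not fix a normalization formula; per-channel min-max scaling is one common reading, keeping channels with large raw magnitudes (e.g. a bus voltage) from dominating channels with small ones (e.g. a current). An illustrative sketch under that assumption:

```python
import numpy as np

def minmax_normalize(samples, eps=1e-8):
    """Scale each control-output channel to [0, 1] across the sample set
    before feeding the samples to the neural network model."""
    samples = np.asarray(samples, dtype=float)
    lo = samples.min(axis=0)
    hi = samples.max(axis=0)
    return (samples - lo) / (hi - lo + eps)

# Hypothetical raw control outputs: e.g. bus voltage and phase current.
raw = np.array([[220.0, 3.1],
                [235.0, 2.8],
                [210.0, 3.5]])
norm = minmax_normalize(raw)
```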
4. The method of claim 1, wherein the compressor control element comprises one or more of a compressor master control chip, a compressor electrolytic capacitor, an intelligent power module, and a compressor insulated gate bipolar transistor.
5. The method of claim 1, wherein the training sample data further comprises compressor operation data and corresponding fault classification labels.
6. A fault identification method, applied to an air conditioner, the fault identification method comprising:
collecting control output quantity corresponding to a compressor control element as test data in the running process of the air conditioner;
Inputting the test data into a fault recognition model to obtain a fault recognition result; wherein the fault recognition model is a model trained by the fault recognition model training method according to any one of claims 1 to 5;
wherein the neural network model comprises a plurality of neuron processing layers; when the neuron processing layer to be trained is the first of the plurality of neuron processing layers, the control sampling data in the training sample data is input into the neuron processing layer to be trained to obtain first output data; the first output data is decoded to obtain first decoded data; and the neuron processing layer to be trained is iterated using the difference between the control sampling data and the first decoded data; when the neuron processing layer to be trained is not the first of the plurality of neuron processing layers, second output data output by the immediately preceding neuron processing layer is input into the neuron processing layer to be trained to obtain third output data; the third output data is decoded to obtain second decoded data; and the neuron processing layer to be trained is iterated using the difference between the second output data and the second decoded data.
7. The fault identification method according to claim 6, wherein, when the fault type cannot be determined from the control output quantity, the fault identification method further comprises:
Collecting real-time operation data of the compressor;
and inputting the real-time operation data into a fault recognition model to determine a fault recognition result.
8. The fault identification method according to claim 6, wherein in case it is determined that a new fault occurs, the fault identification method further comprises:
Acquiring a target control output quantity acquired in a preset time period; wherein the preset time period comprises an occurrence time point corresponding to the new fault;
And associating the target control output quantity with a fault classification label corresponding to the new fault, and generating new training sample data so as to update and train the fault recognition model by using the fault recognition model training method.
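Claim 8's update step can be sketched as pulling buffered control outputs from a preset window that contains the time the new fault occurred, then attaching the new fault classification label to form a training sample (the buffer layout, window length, and label name below are assumptions for illustration):

```python
from collections import deque

def collect_new_sample(buffer, fault_time, fault_label, window=5.0):
    """From a rolling buffer of (timestamp, control_output) pairs, keep the
    target control outputs inside a window containing the fault time and
    associate them with the new fault classification label."""
    lo, hi = fault_time - window, fault_time + window
    outputs = [v for t, v in buffer if lo <= t <= hi]
    return {"control_outputs": outputs, "label": fault_label}

# Simulated rolling buffer of control outputs sampled once per time unit.
buf = deque(maxlen=100)
for t in range(20):
    buf.append((float(t), 0.1 * t))
sample = collect_new_sample(buf, fault_time=10.0, fault_label="igbt_overheat")
```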
9. A fault identification model training apparatus, the apparatus comprising:
the acquisition module is used for acquiring training sample data; the training sample data comprises control sampling data and corresponding fault classification labels; the control sampling data is the control output quantity of a compressor control element;
the training module is used for training the pre-selected neural network model layer by utilizing the training sample data so as to obtain the fault identification model;
the neural network model comprises a plurality of neuron processing layers, and the training module is specifically used for:
when the neuron processing layer to be trained is the first of the plurality of neuron processing layers, inputting the control sampling data in the training sample data into the neuron processing layer to be trained to obtain first output data;
decoding the first output data to obtain first decoded data;
iterating the neuron processing layer to be trained by using the difference between the control sampling data and the first decoded data;
when the neuron processing layer to be trained is not the first of the plurality of neuron processing layers, inputting second output data, output by the neuron processing layer immediately preceding the neuron processing layer to be trained, into the neuron processing layer to be trained to obtain third output data;
decoding the third output data to obtain second decoded data;
and iterating the neuron processing layer to be trained by using the difference between the second output data and the second decoded data.
10. A fault identification device, which is applied to an air conditioner, the fault identification device comprising:
the acquisition module is used for collecting, during operation of the air conditioner, the control output quantity corresponding to the compressor control element as the inspection data;
The identification module is used for inputting the inspection data into a fault identification model to obtain a fault identification result; wherein the fault recognition model is a model trained by the fault recognition model training method according to any one of claims 1 to 5;
wherein the neural network model comprises a plurality of neuron processing layers; when the neuron processing layer to be trained is the first of the plurality of neuron processing layers, the control sampling data in the training sample data is input into the neuron processing layer to be trained to obtain first output data; the first output data is decoded to obtain first decoded data; and the neuron processing layer to be trained is iterated using the difference between the control sampling data and the first decoded data; when the neuron processing layer to be trained is not the first of the plurality of neuron processing layers, second output data output by the immediately preceding neuron processing layer is input into the neuron processing layer to be trained to obtain third output data; the third output data is decoded to obtain second decoded data; and the neuron processing layer to be trained is iterated using the difference between the second output data and the second decoded data.
11. An air conditioner comprising a processor and a memory, the memory storing machine executable instructions executable by the processor to implement the fault identification model training method of any one of claims 1-5 or the processor to implement the fault identification method of any one of claims 6-8.
12. A computer readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the fault identification model training method of any one of claims 1 to 5, or implements the fault identification method of any one of claims 6 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011590807.7A CN112560997B (en) | 2020-12-29 | 2020-12-29 | Fault identification model training method, fault identification method and related device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112560997A CN112560997A (en) | 2021-03-26 |
CN112560997B true CN112560997B (en) | 2024-10-18 |
Family
ID=75032748
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011590807.7A Active CN112560997B (en) | 2020-12-29 | 2020-12-29 | Fault identification model training method, fault identification method and related device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112560997B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113689111B (en) * | 2021-08-20 | 2022-11-11 | 北京百度网讯科技有限公司 | Fault recognition model training method, fault recognition device and electronic equipment |
CN114738938B (en) * | 2022-03-04 | 2024-09-13 | 青岛海尔空调电子有限公司 | Multi-split air conditioning unit fault monitoring method, device and storage medium |
CN117961976B (en) * | 2024-03-29 | 2024-06-21 | 湖南大学 | Assembly robot online detection method and device based on generation diffusion migration |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111489539A (en) * | 2019-01-29 | 2020-08-04 | 珠海格力电器股份有限公司 | Household appliance system fault early warning method, system and device |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107506825A (en) * | 2017-09-05 | 2017-12-22 | 河海大学 | A kind of pumping plant fault recognition method |
CN110751261B (en) * | 2018-07-23 | 2024-05-28 | 第四范式(北京)技术有限公司 | Training method and system and prediction method and system for neural network model |
CN109931678B (en) * | 2019-03-13 | 2020-09-25 | 中国计量大学 | Air conditioner fault diagnosis method based on deep learning LSTM |
CN111140986A (en) * | 2019-12-23 | 2020-05-12 | 珠海格力电器股份有限公司 | Operating state detection method and device of air conditioning system, storage medium and air conditioner |
CN111860667A (en) * | 2020-07-27 | 2020-10-30 | 海尔优家智能科技(北京)有限公司 | Method and device for determining equipment fault, storage medium and electronic device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
TA01 | Transfer of patent application right | ||
Effective date of registration: 20230506 Address after: 315000 No.1166 Mingguang North Road, Jiangshan Town, Yinzhou District, Ningbo City, Zhejiang Province Applicant after: NINGBO AUX ELECTRIC Co.,Ltd. Address before: 519080 202, 2nd floor, building B, headquarters base, No.2 Qianwan 2nd Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province Applicant before: ZHUHAI TUOXIN TECHNOLOGY Co.,Ltd. Applicant before: NINGBO AUX ELECTRIC Co.,Ltd. |
GR01 | Patent grant | ||
GR01 | Patent grant |