CN109063761B - Diffuser falling detection method and device and electronic equipment - Google Patents
- Publication number: CN109063761B (application CN201810810454.3A)
- Authority
- CN
- China
- Prior art keywords
- extraction network
- sample
- feature extraction
- samples
- features
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2413—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01D—MEASURING NOT SPECIALLY ADAPTED FOR A SPECIFIC VARIABLE; ARRANGEMENTS FOR MEASURING TWO OR MORE VARIABLES NOT COVERED IN A SINGLE OTHER SUBCLASS; TARIFF METERING APPARATUS; MEASURING OR TESTING NOT OTHERWISE PROVIDED FOR
- G01D21/00—Measuring or testing not otherwise provided for
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a diffuser drop-off detection method and device and electronic equipment, relating to the technical field of 3D (three-dimensional) imaging. The method comprises: acquiring an image to be detected through a camera device; inputting the image to be detected into a pre-trained feature extraction network so that the network extracts features of the image; wherein the feature extraction network is trained on a new sample set, formed from new sample groups obtained by combining given samples; inputting the features into a pre-trained classifier to obtain a classification result; and determining from the classification result whether the diffuser has fallen off. Embodiments of the invention enable more efficient diffuser drop-off detection for a camera device, with higher detection accuracy.
Description
Technical Field
The invention relates to the technical field of 3D (three-dimensional) imaging, and in particular to a diffuser drop-off detection method and device and electronic equipment.
Background
With the wide application of 3D modules, more and more mobile terminals carry a 3D camera module. A common example is the Time-of-Flight (ToF) technique: the module emits modulated near-infrared light, which is reflected when it meets an object; the sensor then computes the time difference or phase difference between emission and reflection, converts it into the distance of the photographed object, and generates depth information.
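The time-to-distance and phase-to-distance conversions described above can be sketched numerically. The functions below are illustrative only (not taken from the patent), showing how a round-trip time or phase difference maps to object distance:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def distance_from_time(delta_t_s: float) -> float:
    # Round-trip time: light travels to the object and back,
    # so the object distance is half the path length.
    return C * delta_t_s / 2.0

def distance_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    # For continuous-wave modulation, the phase shift phi of the
    # reflected signal gives d = c * phi / (4 * pi * f_mod),
    # valid within one ambiguity interval.
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)
```

For instance, a 2 ns round trip corresponds to roughly 0.3 m, and a half-cycle phase shift at 20 MHz modulation to roughly 3.75 m.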
The camera module usually has a diffuser (also called a diffusion sheet) in front of the infrared light source, so that the infrared light illuminates the whole scene uniformly. Without the diffuser, the infrared light emitted by the source converges into a single beam, and the 3D camera cannot properly sense the depth of objects in the scene. In actual use, collisions or vibrations cause the diffuser to fall off with a certain probability, rendering the 3D camera module ineffective.
No effective solution has yet been proposed for the poor accuracy of existing diffuser drop-off detection.
Disclosure of Invention
In view of the above, the present invention provides a method, an apparatus and an electronic device for detecting a diffuser drop, which can improve the accuracy of detecting the diffuser drop.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
in a first aspect, an embodiment of the present invention provides a method for detecting a diffuser drop, where the diffuser is disposed in an image capturing device, the method including: acquiring an image to be detected through the camera device; inputting the image to be detected into a pre-trained feature extraction network so that the feature extraction network extracts the features of the image to be detected; wherein the feature extraction network is generated by a new sample set training; the new sample set is formed by a new sample group obtained by combining given samples; inputting the features into a pre-trained classifier to obtain a classification result; and determining whether the diffuser falling-off occurs according to the classification result.
Further, the method further comprises: inputting one of the new set of samples into the feature extraction network; the new set of samples comprises at least two of the given samples; respectively extracting the features of each given sample through the feature extraction network; if the two given samples belong to the same class of samples, performing minimization processing on the features of the two given samples to train the feature extraction network; the categories include a non-shedding category and a shedding category; if the two given samples do not belong to the same class of samples, performing maximization processing on the features of the two given samples to train the feature extraction network; and sequentially inputting new sample groups in the new sample set until the feature extraction network converges.
Further, the step of inputting one of the new sample sets into a feature extraction network comprises: inputting a new sample group consisting of a shedding sample, a normal sample and an attack sample into the feature extraction network; said step of minimizing said features of both said given samples comprises: minimizing the features of the attack and normal samples; said step of maximizing said features of both said given samples comprises: maximizing the features of the shed and normal samples; maximizing the features of the shedding sample and the attacking sample.
Further, the step of inputting one of the new sample sets into a feature extraction network comprises: randomly selecting two of the given samples to form the new sample set, and inputting the new sample set into the feature extraction network.
Further, the step of minimizing the features of the two given samples includes: calculating the distance between the vectors of the two features, and minimizing that distance.
Further, the step of maximizing the features of the two given samples includes: calculating the distance between the vectors of the two features, and maximizing that distance.
Further, the method further comprises: extracting the characteristics of the shedding sample, the normal sample and the attack sample through the pre-trained characteristic extraction network; and dividing the characteristics of the shedding sample into shedding categories, dividing the characteristics of the normal sample and the attacking sample into non-shedding categories, and training the classifier.
Further, the method further comprises: and when the diffuser is determined to fall off, alarming and reminding are carried out.
In a second aspect, an embodiment of the present invention further provides a device for detecting dropping of a diffuser, where the diffuser is disposed in an image capturing apparatus, and the device includes: the acquisition module is used for acquiring an image to be detected through the camera device; the characteristic extraction module is used for inputting the image to be detected into a pre-trained characteristic extraction network so that the characteristic extraction network extracts the characteristics of the image to be detected; wherein the feature extraction network is generated by a new sample set training; the new sample set is formed by a new sample group obtained by combining given samples; the classification module is used for inputting the characteristics into a pre-trained classifier to obtain a classification result; and the judging module is used for determining whether the diffuser falls off or not according to the classification result.
In a third aspect, an embodiment of the present invention provides an electronic device, including a memory and a processor, where the memory stores a computer program operable on the processor, and the processor implements the steps of the method according to any one of the first aspect when executing the computer program.
In a fourth aspect, the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, performs the steps of the method according to any one of the above first aspects.
Embodiments of the invention provide a diffuser drop-off detection method and device and electronic equipment. Features of the image to be detected, acquired by the camera device under test, are extracted through a pre-trained feature extraction network; the network is trained on a new sample set obtained by combining given samples. The extracted features are then classified to determine whether the diffuser has fallen off. This enables more efficient diffuser drop-off detection for the camera device, with higher detection accuracy.
Additional features and advantages of the disclosure will be set forth in the description which follows, or in part may be learned by the practice of the above-described techniques of the disclosure, or may be learned by practice of the disclosure.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a schematic structural diagram of a processing apparatus according to an embodiment of the present invention;
FIG. 2 is a flow chart of a diffuser drop-off detection method according to an embodiment of the present invention;
fig. 3 is a flowchart of a feature extraction network training method according to an embodiment of the present invention;
FIG. 4 is a schematic illustration of a given sample provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a training feature extraction network according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of another training feature extraction network provided in an embodiment of the present invention;
fig. 7 is a block diagram of a diffuser drop-off detection device according to an embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
With the application of 3D sensing technology in intelligent terminals, detecting whether the diffuser of a 3D camera module has fallen off becomes a new problem. Intelligent terminal manufacturers and camera module manufacturers require a detection accuracy above 99.9%, and in particular want to minimize false rejections of good units. End users also need to detect during actual use whether the diffuser has fallen off. Existing detection methods have poor accuracy and cannot meet the requirements of manufacturers or users. To address this, embodiments of the present invention provide a diffuser drop-off detection method, device and electronic equipment, described in detail below.
Embodiment one:
first, an exemplary electronic device 100 for implementing the diffuser drop-out detection method, apparatus and electronic device according to the embodiments of the present invention is described with reference to fig. 1.
As shown in fig. 1, an electronic device 100 includes one or more processors 102, one or more memory devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected via a bus system 112 and/or other type of connection mechanism (not shown). It should be noted that the components and structure of the electronic device 100 shown in fig. 1 are exemplary only, and not limiting, and the electronic device may have other components and structures as desired.
The processor 102 may be implemented in at least one hardware form among a Digital Signal Processor (DSP), a Field Programmable Gate Array (FPGA) and a Programmable Logic Array (PLA). It may be a Central Processing Unit (CPU) or another processing unit with data processing and/or instruction execution capability, or a combination of several, and may control other components of the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM) and/or cache memory. The non-volatile memory may include, for example, Read-Only Memory (ROM), a hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the client-side functionality and/or other desired functionality in the embodiments of the invention described below. Various applications and data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may take images (e.g., photographs, videos, etc.) desired by the user and store the taken images in the storage device 104 for use by other components. The image capturing device 110 includes a 3D camera module, and the 3D camera module includes a diffuser.
Exemplary electronic devices for implementing the diffuser drop detection method, apparatus and system according to embodiments of the present invention may be implemented as smart terminals such as smart phones, tablet computers, and the like.
Embodiment two:
referring to a flowchart of a method for detecting the dropping of a diffuser disposed in an image capturing apparatus shown in fig. 2, the method may be executed by the electronic device provided in the foregoing embodiment, and the method specifically includes the following steps:
step S202, an image to be detected is obtained through the camera device.
The camera device may be installed in the intelligent terminal or used independently. For diffuser drop-off detection, the camera device captures an image as the image to be detected; the image may be received and stored by the intelligent terminal, or by other external equipment connected to the camera device.
And step S204, inputting the image to be detected into a pre-trained feature extraction network so that the feature extraction network extracts the features of the image to be detected.
The feature extraction network is trained on a new sample set obtained by combining given samples. As a deep learning approach, the feature extraction network achieves higher feature extraction accuracy and needs no manual parameter tuning, though it runs relatively slowly and the model must be trained in advance.
Since diffuser drop-off is rare, few shedding samples are available even though many cases must be handled. The given samples include shedding samples and non-shedding samples, which this embodiment combines: for example, one shedding sample with one non-shedding sample, one shedding sample with two non-shedding samples, two shedding samples, or two non-shedding samples, each combination yielding a new sample group. The number of new sample groups exceeds the total number of shedding and normal samples, and the new sample set formed by these groups serves as the training sample set of the feature extraction network. This increases the number of training samples, so that an accurate extraction model can be obtained from the original small amount of sample data.
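The combination step can be sketched as follows, assuming simple Python lists of labelled samples (function and label names are illustrative, not from the patent):

```python
from itertools import combinations

def build_new_sample_groups(shed_samples, non_shed_samples):
    # Tag each given sample with its category, then form every
    # two-sample group; the group count grows quadratically with
    # the number of given samples.
    tagged = [(s, "shed") for s in shed_samples] + \
             [(s, "non_shed") for s in non_shed_samples]
    return [((a, b), la == lb)            # True -> same-category pair
            for (a, la), (b, lb) in combinations(tagged, 2)]
```

With 2 shedding and 3 non-shedding samples this yields 10 groups from only 5 given samples, matching the text's point that the new sample set is larger than the original data.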
And step S206, inputting the features into a pre-trained classifier to obtain a classification result.
When training the classifier, the input samples are the features that the feature extraction network extracts from the training samples; each training sample can be represented as a feature vector. The feature vectors of shedding samples are labelled as one class, those of non-shedding samples as another, and the classifier is trained on these two classes. A shedding sample is an image captured by the camera device when the diffuser has fallen off. Non-shedding samples comprise normal samples and attack samples: a normal sample is an image captured while the diffuser works normally; an attack sample approximates a shedding signal, being an image captured with the diffuser normally installed and working, but where shooting is abnormal due to improper user operation or other factors.
After the training of the classifier is completed, the features extracted by the feature extraction network can be classified.
And step S208, determining whether the diffuser falling-off occurs according to the classification result.
After the classifier outputs the classification result for the features of the image to be detected, whether the diffuser of the camera device has fallen off can be determined from that result. When drop-off is determined, an alarm reminder can be issued, so that the user of the intelligent terminal notices the problem and arranges maintenance, or so that the manufacturer can reject or repair the faulty camera device.
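Steps S202 to S208 amount to a two-stage pipeline. A minimal sketch, with `feature_net` and `classifier` as stand-ins for the pre-trained models (names and labels are illustrative):

```python
def detect_diffuser_drop(image, feature_net, classifier) -> bool:
    # Stage 1: the pre-trained network maps the image to a feature vector.
    features = feature_net(image)
    # Stage 2: the pre-trained classifier labels the feature vector.
    label = classifier(features)
    # An alarm/reminder could be raised here when the label is "shed".
    return label == "shed"
```

Any callables with these shapes can be plugged in, which keeps the detection step independent of how the two models were trained.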
The diffuser drop-off detection method provided by the embodiment of the invention extracts features of the image to be detected, acquired by the camera device under test, through a pre-trained feature extraction network trained on new sample groups obtained by combining given samples; the features are then classified to determine whether the diffuser has fallen off. This enables more efficient diffuser drop-off detection for the camera device, with higher detection accuracy.
The feature extraction network used in the method is trained based on a feature learning scheme of a small sample, and referring to a flow chart of a feature extraction network training method shown in fig. 3, the method specifically includes the following steps:
step S302, inputting a new sample group in the new sample set into the feature extraction network. The new set of samples comprises at least two given samples.
The feature extraction network may be any existing neural network capable of image feature extraction. The given samples are combined, for example randomly: any two or three given samples are grouped, and the resulting new sample group is input into the feature extraction network. Forming new sample groups increases the number of training samples.
Step S304, respectively extracting the characteristics of each given sample through a characteristic extraction network.
Before training, the feature extraction network performs feature extraction for a given sample using initial parameters.
And step S306, if the two given samples belong to the samples of the same category, minimizing the characteristics of the two given samples to train the characteristic extraction network.
The categories comprise a non-shedding category and a shedding category: the normal samples and attack samples among the given samples fall into the non-shedding category, and the shedding samples into the shedding category. Referring to the schematic diagram of given samples shown in fig. 4, taking IR (Infrared) images as an example, a normal sample, an attack sample and a shedding sample are arranged from left to right. The normal sample is an image captured while the diffuser works normally. The attack sample approximates a shedding signal: an image captured with the diffuser normally installed and working, but where shooting is abnormal due to improper user operation or other factors; such an image generally contains an obvious light spot, for example when the user's finger is very close to the camera lens. The shedding sample is an image captured when the diffuser has fallen off; since no diffuser sits in front of the light source, the emitted light is a narrow beam, so only a small part of the captured image has content and the rest is empty.
Step S308, if the two given samples do not belong to the samples of the same category, the features of the two given samples are maximized to train the feature extraction network.
The features of two given samples in the new sample group that belong to the same category are minimized, and the features of two given samples belonging to different categories are maximized, thereby optimizing the parameters of the feature extraction network. Maximization and minimization can operate on the distance between the corresponding feature vectors: compute the distance between the two vectors, then maximize or minimize it. For example, if sample 1 yields feature F1 (a multidimensional vector) and sample 2 yields feature F2 (a feature vector of the same dimension), maximization computes max ||F1 - F2||^2 and minimization computes min ||F1 - F2||^2. After the distance is maximized or minimized, the result is propagated back to the feature extraction network, whose parameters are ultimately optimized so that the extracted features classify more accurately. Any existing network optimization method can be used, such as a gradient descent algorithm.
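The max/min objective on ||F1 - F2||^2 can be written as a pairwise loss. The hinge at `margin` for different-class pairs is an assumed formulation borrowed from standard contrastive losses (the text only says the distance is maximized):

```python
import numpy as np

def pair_loss(f1, f2, same_class, margin=1.0):
    # Squared Euclidean distance ||F1 - F2||^2 between the two features.
    d2 = float(np.sum((np.asarray(f1) - np.asarray(f2)) ** 2))
    if same_class:
        return d2                      # minimize distance for same class
    return max(0.0, margin - d2)       # push different classes apart
```

A gradient-descent optimizer would then lower this loss over many new sample groups, which is exactly the parameter optimization the text describes.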
And step S310, sequentially inputting new sample groups in the new sample set until the feature extraction network converges.
In the training process, the new sample group is input for training each time to obtain an optimized feature extraction network, then a second batch of samples are selected, and the optimization is carried out again on the basis of the optimized feature extraction network until the network converges.
A given sample may belong to one of three types: normal, attack and shedding. Training can therefore either input a group of one shedding sample, one normal sample and one attack sample into the feature extraction network, or randomly select two samples to input.
Referring to the schematic diagram of the training feature extraction network shown in fig. 5, a new sample set composed of a shedding sample, a normal sample and an attack sample is input into the feature extraction network as an example for explanation. And the characteristic extraction network carries out characteristic extraction on the samples to respectively obtain the shedding characteristic, the attack characteristic and the normal characteristic. As shown in fig. 5, every two features are subjected to the maximization or minimization process, specifically: minimizing the attack characteristic and the normal characteristic; performing maximization treatment on the shedding characteristic and the normal characteristic; and performing maximization processing on the shedding characteristic and the attack characteristic. In the above manner, the feature extraction network is continuously trained until convergence, with three samples being input each time.
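The three-sample scheme of fig. 5 can be expressed as one combined objective over the three pairwise distances; the hinged "maximization" terms are again an assumed formulation:

```python
import numpy as np

def group_loss(f_shed, f_normal, f_attack, margin=1.0):
    def d2(a, b):
        # Squared Euclidean distance between two feature vectors.
        return float(np.sum((np.asarray(a) - np.asarray(b)) ** 2))
    pull = d2(f_attack, f_normal)                     # minimize attack-normal
    push = max(0.0, margin - d2(f_shed, f_normal))    # maximize shed-normal
    push += max(0.0, margin - d2(f_shed, f_attack))   # maximize shed-attack
    return pull + push
```

The loss is zero exactly when attack and normal features coincide while the shedding feature sits at least `margin` (in squared distance) away from both, which is the geometry the figure depicts.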
Referring to fig. 6, a schematic diagram of a training feature extraction network is shown, and two randomly selected samples are input into the feature extraction network for illustration. Two samples are input each time, in fig. 6, sample 1 and sample 2 are taken as an example, the two samples are randomly selected from all samples, and the feature extraction network performs feature extraction on the samples to obtain features of sample 1 and sample 2, respectively. As shown in fig. 6, two features are maximized or minimized, specifically: similar samples are minimized, e.g., sample 1 and sample 2 both belong to the non-shedding category or the shedding category; the maximization process is performed on samples of different classes, for example sample 1 and sample 2 belonging to the non-shedding class and the shedding class, respectively, as described above.
Before the classifier is used for classification, it needs to be trained, and the classifier may be an existing SVM (Support Vector Machine) or other applicable classifiers, which is not limited in this embodiment. The training process of the classifier can be carried out according to the following steps:
(1) extracting the characteristics of the shedding sample, the normal sample and the attack sample through a pre-trained characteristic extraction network;
(2) dividing the features of the shedding samples into the shedding category and the features of the normal and attack samples into the non-shedding category, and training the classifier. Each sample is represented as a feature vector; the feature vectors of shedding samples form one class, and the feature vectors of normal and attack samples form the other class for training.
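The text leaves the classifier open (an SVM or any other applicable classifier). As a dependency-free stand-in, a nearest-centroid classifier over the two feature classes illustrates the training step; all data below is synthetic:

```python
import numpy as np

class CentroidClassifier:
    # Assigns a feature vector to the class with the nearest mean;
    # a minimal placeholder for the SVM mentioned in the text.
    def fit(self, X, y):
        self.centroids = {c: X[y == c].mean(axis=0) for c in np.unique(y)}
        return self

    def predict(self, x):
        return min(self.centroids,
                   key=lambda c: float(np.sum((x - self.centroids[c]) ** 2)))

# Label 1 = shedding features, 0 = normal + attack (non-shedding) features.
X = np.array([[5.0, 5.0], [4.5, 5.5], [0.0, 0.1], [0.2, 0.0], [0.3, 0.4]])
y = np.array([1, 1, 0, 0, 0])
clf = CentroidClassifier().fit(X, y)
```

Because the feature extraction network is trained to pull non-shedding features together and push shedding features away, even such a simple decision rule can separate the two classes; a margin-based SVM would draw a more robust boundary.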
In the above method, shedding samples and non-shedding samples (including normal and attack samples) are combined in pairs to enlarge the training sample set, and the characteristics of shedding and non-shedding samples are learned for drop-off detection. Network training can thus proceed from only a small number of shedding samples, and efficient drop-off detection can be performed on the basis of the trained network.
Embodiment three:
With respect to the diffuser drop-off detection method provided in the second embodiment, an embodiment of the present invention provides a diffuser drop-off detection device; referring to the structural block diagram of the device shown in fig. 7, the device includes:
an obtaining module 701, configured to obtain an image to be detected through a camera device;
a feature extraction module 702, configured to input the image to be detected into a pre-trained feature extraction network, so that the feature extraction network extracts features of the image to be detected; wherein the feature extraction network is generated by training a new sample set; the new sample set is formed by a new sample group obtained by combining given samples;
a classification module 703, configured to input the features into a pre-trained classifier to obtain a classification result;
a decision block 704, configured to determine whether a diffuser drop occurs according to the classification result.
The diffuser drop-off detection device provided by the embodiment of the invention extracts features of the image to be detected, acquired by the camera device under test, through a pre-trained feature extraction network trained on a new sample set obtained by combining given samples; the features are then classified to determine whether the diffuser has fallen off, enabling more efficient and more accurate diffuser drop-off detection for the camera device.
In an embodiment, the device further includes a feature extraction network training module, configured to: input a new sample group from the new sample set into the feature extraction network, the new sample group comprising at least two given samples; extract the features of each given sample through the feature extraction network; if the two given samples belong to the same category, minimize the features of the two given samples to train the feature extraction network, the categories comprising a non-shedding category and a shedding category; if the two given samples do not belong to the same category, maximize the features of the two given samples to train the feature extraction network; and sequentially input new sample groups from the new sample set until the feature extraction network converges.
The feature extraction network training module is further configured to input a new sample group consisting of a shedding sample, a normal sample, and an attack sample into the feature extraction network. In this case, the minimization step that optimizes the parameters of the feature extraction network comprises minimizing the distance between the features of the attack sample and the normal sample, and the maximization step comprises maximizing the distance between the features of the shedding sample and the normal sample, and maximizing the distance between the features of the shedding sample and the attack sample.
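For these three-sample groups, the pairwise terms combine into one group objective. The sketch below is a hypothetical composition: the hinge and the equal weighting of the three terms are assumptions, not stated in the source.

```python
import numpy as np

def sqdist(a, b):
    # Squared L2 distance between two feature vectors.
    return float(np.sum((np.asarray(a, dtype=float) - np.asarray(b, dtype=float)) ** 2))

def group_loss(f_shed, f_normal, f_attack, margin=1.0):
    # Minimize: attack and normal samples both belong to the non-shedding side,
    # so their features are pulled together.
    pull = sqdist(f_attack, f_normal)
    # Maximize: the shedding sample's features are pushed away from both
    # the normal and the attack sample (hinged at `margin`, an assumption).
    push = max(0.0, margin - sqdist(f_shed, f_normal)) \
         + max(0.0, margin - sqdist(f_shed, f_attack))
    return pull + push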
The feature extraction network training module is further configured to: randomly select two given samples to form a new sample group and input it into the feature extraction network; calculate the distance between the vectors of the two features and minimize that distance to perform the minimization processing; and calculate the distance between the vectors of the two features and maximize that distance to perform the maximization processing. The distance between the two feature vectors is calculated according to the formula ‖F1 − F2‖², where F1 and F2 respectively represent the feature vectors of the two features.
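The random pairing and the ‖F1 − F2‖² distance can be written out directly; the function names below are illustrative, not from the source:

```python
import random
import numpy as np

def squared_distance(f1, f2):
    # ||F1 - F2||^2 — the distance used by both the minimization
    # and the maximization steps.
    diff = np.asarray(f1, dtype=float) - np.asarray(f2, dtype=float)
    return float(np.dot(diff, diff))

def make_sample_group(samples, rng=random):
    # Randomly select two given samples to form one new sample group.
    return rng.sample(samples, 2)
```

Because groups are drawn randomly, each training pass sees a different mix of same-class and cross-class pairs from the same underlying given samples.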
In another embodiment, the apparatus further comprises a classifier training module configured to: extract the features of the shedding samples, normal samples, and attack samples through the pre-trained feature extraction network; then assign the features of the shedding samples to the shedding category, assign the features of the normal samples and attack samples to the non-shedding category, and train the classifier accordingly.
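This classifier training can be sketched with a nearest-centroid classifier standing in for the unspecified classifier; the centroid choice is an assumption here, since any binary classifier over the extracted features would fit the description:

```python
import numpy as np

def train_classifier(shed_feats, normal_feats, attack_feats):
    # Shedding features form the "shed" class; normal and attack features
    # are merged into the single "non-shed" class, as described above.
    shed_centroid = np.mean(np.asarray(shed_feats, dtype=float), axis=0)
    non_shed = np.vstack([np.asarray(normal_feats, dtype=float),
                          np.asarray(attack_feats, dtype=float)])
    non_shed_centroid = np.mean(non_shed, axis=0)
    return shed_centroid, non_shed_centroid

def predict(feats, shed_centroid, non_shed_centroid):
    f = np.asarray(feats, dtype=float)
    d_shed = float(np.sum((f - shed_centroid) ** 2))
    d_non = float(np.sum((f - non_shed_centroid) ** 2))
    return "shed" if d_shed < d_non else "non-shed"
```

Folding the attack samples into the non-shedding class is what lets the detector ignore deliberate occlusions while still flagging true diffuser detachment.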
In another embodiment, the device further comprises a reminding module configured to raise an alarm when it is determined that the diffuser has dropped off.
The device provided by this embodiment has the same implementation principle and technical effect as the foregoing embodiments; for brevity, where this device embodiment is silent, reference may be made to the corresponding content in the foregoing method embodiment.
In addition, the present embodiment further provides an electronic device, which includes a memory and a processor, where the memory stores a computer program that can be executed on the processor, and the processor implements the steps of the diffuser drop detection method provided in the second embodiment when executing the computer program.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the above-described device may refer to the corresponding process in the foregoing embodiment, and is not described herein again.
The present embodiment further provides a computer-readable storage medium, on which a computer program is stored, and when being executed by a processor, the computer program performs the steps of the method provided in the second embodiment.
The computer program product of the diffuser drop detection method, apparatus, and processing device according to the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions in the program code may be used to execute the method described in the foregoing method embodiments, and for specific implementation reference may be made to those embodiments, which are not repeated here. If the functions are implemented in the form of software functional units and sold or used as a stand-alone product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied as a software product stored in a storage medium, including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes any medium capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope of the present disclosure, modify or readily conceive changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions for some of their technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present invention and shall be construed as falling within it. Therefore, the protection scope of the present invention shall be subject to the appended claims.
Claims (10)
1. A method for detecting the dropping of a diffuser, wherein the diffuser is provided in an image pickup device, the method comprising:
acquiring an image to be detected through the camera device;
inputting the image to be detected into a pre-trained feature extraction network so that the feature extraction network extracts the features of the image to be detected; wherein the feature extraction network is generated by training on a new sample set; the new sample set is formed by new sample groups obtained by combining given samples;
inputting the features into a pre-trained classifier to obtain a classification result;
and determining whether the diffuser falling-off occurs according to the classification result.
2. The method of claim 1, further comprising:
inputting one new sample group of the new sample set into a feature extraction network; the new sample group comprises at least two of the given samples;
respectively extracting the features of each given sample through the feature extraction network;
if the two given samples belong to the same class of samples, performing minimization processing on the features of the two given samples to train the feature extraction network; the categories include a non-shedding category and a shedding category;
if the two given samples do not belong to the same class of samples, performing maximization processing on the features of the two given samples to train the feature extraction network;
and sequentially inputting new sample groups in the new sample set until the feature extraction network converges.
3. The method of claim 2, wherein said step of inputting one new sample group of said new sample set into a feature extraction network comprises:
inputting a new sample group consisting of a shedding sample, a normal sample and an attack sample into the feature extraction network;
said step of minimizing said features of both said given samples comprises: minimizing the features of the attack and normal samples;
said step of maximizing said features of both said given samples comprises: maximizing the features of the shed and normal samples; maximizing the features of the shedding sample and the attacking sample.
4. The method of claim 2, wherein said step of inputting one new sample group of said new sample set into a feature extraction network comprises:
randomly selecting two of the given samples to form the new sample group, and inputting the new sample group into the feature extraction network.
5. The method of claim 2, wherein said step of minimizing said features of said two given samples comprises:
and calculating the distance between the vectors of the two features, and minimizing the distance to perform minimization processing.
6. The method of claim 2, wherein said step of maximizing said features of said two given samples comprises:
and calculating the distance between the vectors of the two features, and maximizing the distance to perform maximization processing.
7. The method of claim 1, further comprising:
extracting the features of the shedding sample, the normal sample and the attack sample through the pre-trained feature extraction network;
and dividing the features of the shedding sample into shedding categories, dividing the features of the normal sample and the attack sample into non-shedding categories, and training the classifier.
8. A device for detecting the dropping of a diffuser, wherein the diffuser is provided in an image pickup device, the device comprising:
the acquisition module is used for acquiring an image to be detected through the camera device;
the feature extraction module, configured to input the image to be detected into a pre-trained feature extraction network so that the feature extraction network extracts the features of the image to be detected; wherein the feature extraction network is generated by training on a new sample set; the new sample set is formed by new sample groups obtained by combining given samples;
the classification module is used for inputting the characteristics into a pre-trained classifier to obtain a classification result;
and the judging module, configured to determine whether the diffuser has dropped off according to the classification result.
9. An electronic device comprising a memory and a processor, the memory having stored therein a computer program operable on the processor, wherein the processor implements the steps of the method of any of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, having stored thereon a computer program, characterized in that the computer program, when being executed by a processor, is adapted to carry out the steps of the method of any of the claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810810454.3A CN109063761B (en) | 2018-07-20 | 2018-07-20 | Diffuser falling detection method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810810454.3A CN109063761B (en) | 2018-07-20 | 2018-07-20 | Diffuser falling detection method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109063761A CN109063761A (en) | 2018-12-21 |
CN109063761B true CN109063761B (en) | 2020-11-03 |
Family
ID=64834877
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810810454.3A Active CN109063761B (en) | 2018-07-20 | 2018-07-20 | Diffuser falling detection method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109063761B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111310578B (en) * | 2020-01-17 | 2023-05-02 | 上海乐普云智科技股份有限公司 | Method and device for generating heart beat data sample classification network |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107396094A (en) * | 2017-08-17 | 2017-11-24 | 上海大学 | The automatic testing method of single camera damage towards in multi-cam monitoring system |
CN108198175A (en) * | 2017-12-28 | 2018-06-22 | 广东欧珀移动通信有限公司 | Detection method, detection device, computer equipment and computer readable storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160241733A1 (en) * | 2015-02-18 | 2016-08-18 | Xerox Corporation | Led illumination uniformity |
- 2018-07-20: Application CN201810810454.3A filed in China; granted as patent CN109063761B (status: Active)
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107396094A (en) * | 2017-08-17 | 2017-11-24 | 上海大学 | The automatic testing method of single camera damage towards in multi-cam monitoring system |
CN108198175A (en) * | 2017-12-28 | 2018-06-22 | 广东欧珀移动通信有限公司 | Detection method, detection device, computer equipment and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN109063761A (en) | 2018-12-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108304829B (en) | Face recognition method, device and system | |
US9478039B1 (en) | Background modeling and foreground extraction method based on depth image | |
JP5570629B2 (en) | Classifier learning method and apparatus, and processing apparatus | |
Ko et al. | Spatiotemporal bag-of-features for early wildfire smoke detection | |
CN108491794B (en) | Face recognition method and device | |
KR102399025B1 (en) | Improved data comparison method | |
CN109840413B (en) | Phishing website detection method and device | |
JP2016191975A (en) | Machine learning apparatus | |
CN111814690B (en) | Target re-identification method, device and computer readable storage medium | |
CN111291887A (en) | Neural network training method, image recognition method, device and electronic equipment | |
CN108596079B (en) | Gesture recognition method and device and electronic equipment | |
JP5214679B2 (en) | Learning apparatus, method and program | |
CN109783680B (en) | Image pushing method, image acquisition device and image processing system | |
JPWO2015146113A1 (en) | Identification dictionary learning system, identification dictionary learning method, and identification dictionary learning program | |
CN109063761B (en) | Diffuser falling detection method and device and electronic equipment | |
Khan et al. | Dimension invariant model for human head detection | |
US20160155068A1 (en) | Information processing apparatus, information processing method, and recording medium for classifying input data | |
KR101521136B1 (en) | Method of recognizing face and face recognition apparatus | |
CN113947209A (en) | Integrated learning method, system and storage medium based on cloud edge cooperation | |
CN111382791B (en) | Deep learning task processing method, image recognition task processing method and device | |
KR20190056873A (en) | Apparatus for detecting object using multi neural network and method thereof, and method for learning detection of object | |
JP5389723B2 (en) | Object detection device and learning device thereof | |
US20190311184A1 (en) | High Accuracy and Volume Facial Recognition on Mobile Platforms | |
JP2016099716A (en) | System, identifier unit, identification model generator, information processing method and program | |
CN112418244A (en) | Target detection method, device and electronic system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||