CN108280474A - Food recognition method based on a neural network - Google Patents

Food recognition method based on a neural network

Info

Publication number
CN108280474A
Authority
CN
China
Prior art keywords
neural network
image
food
recognition methods
obtains
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810054620.1A
Other languages
Chinese (zh)
Inventor
杨德顺
陈晓鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Send Cheng Ke Food Information Technology Co Ltd
Original Assignee
Guangzhou Send Cheng Ke Food Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Send Cheng Ke Food Information Technology Co Ltd
Priority to CN201810054620.1A
Publication of CN108280474A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/213Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • G06F18/253Fusion techniques of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/46Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/467Encoded features or binary features, e.g. local binary patterns [LBP]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/60Type of objects
    • G06V20/68Food, e.g. fruit or vegetables

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a food recognition method based on a neural network, comprising the following steps: S1, acquire a food image and input it into the neural network; S2, perform feature extraction and dimensionality reduction on the input image to obtain key features; S3, compute LBP feature values for the input image to obtain an LBP feature map; S4, fuse the LBP feature map with the key features to obtain detail features, and input the detail features to the next layer; S5, classify the food image using the features finally extracted by the neural network. The method not only extracts key features from the acquired food image but also computes its LBP features and fuses the resulting LBP feature map with the key features, so that detail features are not discarded and higher network layers can still learn them, solving the problem that current food recognition methods identify fine details inaccurately.

Description

Food recognition method based on a neural network
Technical field
The present invention relates to food recognition methods, and more particularly to a food recognition method based on a neural network.
Background art
With the development of intelligent technology, methods based on deep neural networks, which use convolutional computation to extract features from an input food image and classify the food according to the extracted features, have gradually come into use. Current practice applies a neural network model to a given input image: convolutional computation extracts features, pooling operations reduce the dimensionality of the convolution results, features are extracted from the local level to the global level, and the food image is finally classified from the extracted features. In most cases this accurately identifies and classifies food pictures. However, during the pooling operations the network loses some of the detail information obtained by the low-level convolutions, so accuracy drops in scenes where such details are important for classification.
More specifically, current food recognition methods work well for identifying raw food ingredients. The reason is that raw ingredients are unprocessed and the food items to be identified are large, so discarding a large number of features does not affect the recognition result. For cooked food, however, and especially for finely prepared Chinese cuisine, the ingredients are carefully processed and the food in a dish is often cut into very fine shreds or grains. In this case, a recognition method that discards a large portion of the image features loses important detail information and cannot identify Chinese dishes. A method that can better recognize food details is therefore needed.
Summary of the invention
The present invention provides a food recognition method based on a neural network, intended to solve the problem that current food recognition methods identify fine details inaccurately.
The food recognition method based on a neural network of the present invention comprises the following steps:
S1: acquire a food image and input it into the neural network;
S2: perform feature extraction and dimensionality reduction on the input image to obtain key features;
S3: compute LBP feature values for the input image to obtain an LBP feature map;
S4: fuse the LBP feature map with the key features to obtain detail features, and input the detail features to the next layer;
S5: classify the food image using the features finally extracted by the neural network.
The food recognition method based on a neural network of the present invention not only extracts key features from the acquired food image but also computes its LBP features, and the resulting LBP feature map is fused with the key features. In other words, the LBP feature map is fed through a skip connection into a higher layer of the network and merged with the features extracted there, serving as the input of the next layer and forming a residual module. The network can therefore learn, in its higher layers, local detail information that would otherwise be lost or altered, and can identify details accurately. This solves the problem that current food recognition methods lose some of the detail information obtained by low-level convolutions during pooling, which reduces accuracy in scenes where those details are important for classification. In particular, when current methods identify Chinese dishes, the food items in a dish are tiny and mixed together, so the discarded detail information includes the fine ingredients of the dish and the food cannot be identified correctly. The present invention computes the LBP feature map of the image and fuses it with the key features, so detail features are not discarded and higher network layers can still learn them. Thanks to this property, the recognition method of the present invention can identify food precisely and is particularly suitable for recognizing Chinese dishes, solving the problem that current food recognition methods identify details inaccurately. Moreover, because it retains detail features, the claimed recognition method is suitable not only for food recognition: other tasks that require detail recognition and share characteristics similar to Chinese dishes, such as the separation of plastic granules, can also use this kind of method. The present invention thus provides a food recognition method based on a neural network that performs accurate recognition and solves the problem that current food recognition methods identify details inaccurately.
Description of the drawings
Fig. 1 is a first flowchart of the food recognition method based on a neural network;
Fig. 2 is a second flowchart of the food recognition method based on a neural network;
Fig. 3 is a flowchart of step S0 of the food recognition method based on a neural network;
Fig. 4 is a flowchart of step S01 of the food recognition method based on a neural network;
Fig. 5 is a flowchart of step S02 of the food recognition method based on a neural network;
Fig. 6 is a flowchart of step S03 of the food recognition method based on a neural network;
Fig. 7 is a flowchart of step S2 of the food recognition method based on a neural network;
Fig. 8 is a flowchart of step S3 of the food recognition method based on a neural network.
Detailed description of the embodiments
As shown in Fig. 1, a food recognition method based on a neural network comprises the following steps: S1, acquire a food image and input it into the neural network; S2, perform feature extraction and dimensionality reduction on the input image to obtain key features; S3, compute LBP feature values for the input image to obtain an LBP feature map; S4, fuse the LBP feature map with the key features to obtain detail features, and input the detail features to the next layer; S5, classify the food image using the features finally extracted by the neural network. Not only are key features extracted from the acquired food image, its LBP features are also computed, and the resulting LBP feature map is fused with the key features, so detail features are not discarded, higher network layers can still learn them, and the problem that current food recognition methods identify details inaccurately is solved.
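The forward pass described in steps S1 to S5 can be sketched in code. The following PyTorch snippet is a minimal illustration only: the layer sizes, the class name FoodNet, the fusion point, the use of concatenation for the fusion, and the global average pooling before the classifier are assumptions made for the example, not details taken from the patent.

```python
# Minimal sketch of steps S1-S5, under the assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FoodNet(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        # S2: convolution + max pooling + nonlinear activation -> key features
        self.conv1 = nn.Conv2d(3, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        # S4: batch-normalize the fused (key + LBP) feature maps
        self.fuse_bn = nn.BatchNorm2d(64 + 1)
        self.conv3 = nn.Conv2d(64 + 1, 128, kernel_size=3, padding=1)
        self.fc = nn.Linear(128, num_classes)

    def forward(self, image: torch.Tensor, lbp_map: torch.Tensor) -> torch.Tensor:
        # S2: extract key features and reduce dimensionality (pool, then activation)
        x = F.relu(self.pool(self.conv1(image)))
        x = F.relu(self.pool(self.conv2(x)))
        # S3/S4: bring the LBP feature map to the key-feature resolution, fuse it
        # through a skip connection into this higher layer, normalize the result,
        # and feed it to the next convolution as the detail features
        lbp = F.interpolate(lbp_map, size=x.shape[-2:], mode="nearest")
        x = self.fuse_bn(torch.cat([x, lbp], dim=1))
        x = F.relu(self.pool(self.conv3(x)))
        # S5: classify from the finally extracted features
        x = F.adaptive_avg_pool2d(x, 1).flatten(1)
        return self.fc(x)

# Example forward pass on a dummy 224x224 image and its LBP map
model = FoodNet(num_classes=10)
logits = model(torch.randn(1, 3, 224, 224), torch.rand(1, 1, 224, 224))
```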
As shown in Fig. 2 and Fig. 3, this food recognition method based on a neural network further includes a neural network training step S0, which comprises the following steps: S01, prepare and collect samples; S02, select training samples, extract their key features and LBP feature maps and fuse them, obtain detail features and input them to the next layer; S03, compute the loss of the features finally extracted by the neural network; if the loss exceeds the threshold, return to step S02, otherwise proceed to step S04; S04, test with the images of the test samples; if the test loss is below the threshold, end the training step and proceed to step S1, otherwise return to step S02. This is the neural network training step: a neural network has the ability to learn, so before it is used for food recognition it can be trained to learn the objects to be identified; after learning from a large number of training samples, it can identify those objects faster and more accurately.
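The training step S0 can be sketched as a loop that trains until the loss falls below a threshold and then checks the test loss, as in the following hedged example. FoodNet refers to the class sketched above, compute_lbp_map is assumed to return a [B, 1, H, W] tensor (see the LBP sketch further below), and the thresholds, optimizer and learning rate are illustrative choices, not values given in the patent.

```python
# Hedged sketch of training step S0 (S02-S04).
import torch
import torch.nn as nn

def train(model, compute_lbp_map, train_loader, test_loader,
          loss_threshold=0.05, lr=1e-3, max_epochs=100):
    criterion = nn.CrossEntropyLoss()                         # S033: loss function
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)   # S034: optimization function (assumed Adam)

    for epoch in range(max_epochs):
        model.train()
        running = 0.0
        for images, labels in train_loader:                   # S02: training samples
            lbp = compute_lbp_map(images)                      # S023: LBP feature maps
            loss = criterion(model(images, lbp), labels)
            optimizer.zero_grad()
            loss.backward()                                    # S034: gradients
            optimizer.step()                                   #        update parameters
            running += loss.item()
        train_loss = running / max(len(train_loader), 1)
        if train_loss > loss_threshold:                        # S03: loss above threshold -> keep training
            continue

        model.eval()                                           # S04: test with the test-sample images
        with torch.no_grad():
            test_loss = sum(criterion(model(x, compute_lbp_map(x)), y).item()
                            for x, y in test_loader) / max(len(test_loader), 1)
        if test_loss < loss_threshold:                         # training finished
            return model
    return model
```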
As shown in Fig. 4, step S01 comprises the following steps: S011, collect images of all dishes and resize them to a fixed size; S012, assign each dish image a label with the corresponding dish name; S013, divide all images and their labels into training samples and test samples. The present invention uses a neural network to identify food, so training the network requires collecting food samples: part of them serve as training samples for training the neural network, and part of them serve as test samples for testing the trained network before it is put to use.
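Step S01 can be illustrated with torchvision, assuming the dish images are arranged in one folder per dish name; the folder path, image size and split ratio below are assumptions made for the example.

```python
# Sketch of S011-S013: resize to a fixed size, label by dish name, split into train/test.
import torch
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),   # S011: adjust images to a fixed size
    transforms.ToTensor(),
])
# S012: ImageFolder assigns each image the label of its dish-name folder
dataset = datasets.ImageFolder("data/dishes", transform=transform)
# S013: divide all images and labels into training samples and test samples
n_train = int(0.8 * len(dataset))
train_set, test_set = torch.utils.data.random_split(
    dataset, [n_train, len(dataset) - n_train])
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=32)
```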
As shown in Fig. 5, step S02 comprises the following steps: S021, select training samples, input them into the network, and extract feature maps of the input images by convolutional computation; S022, process the feature maps from step S021 with a max pooling operation and apply a nonlinear activation function to the result to obtain key features; S023, compute the LBP feature map of the image and fuse it with the key features obtained in step S022; S024, apply a Batch Normalization operation to the mean and variance of the fusion result of step S023, normalize it to the range [0, 1], and input it to the next convolutional layer as the detail features. The LBP feature map is fed through a skip connection into a higher layer of the network and merged with the features extracted there, serving as the next layer's input and forming a residual module, so that the network can learn, in its higher layers, local detail information that would otherwise be lost or altered.
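The fusion of steps S023 and S024 can be sketched as follows; the channel counts, the resizing of the LBP map to the key-feature resolution, and the use of concatenation for the merge are illustrative assumptions.

```python
# Sketch of S023/S024: merge the LBP feature map with the key features, then batch-normalize.
import torch
import torch.nn.functional as F
from torch import nn

key_features = torch.randn(8, 64, 56, 56)    # output of convolution + max pooling + activation (S022)
lbp_map = torch.rand(8, 1, 224, 224)          # LBP feature map of the input image (S023)

lbp_resized = F.interpolate(lbp_map, size=key_features.shape[-2:], mode="nearest")
fused = torch.cat([key_features, lbp_resized], dim=1)    # S023: fuse via a skip connection
bn = nn.BatchNorm2d(fused.shape[1])
detail_features = bn(fused)                               # S024: Batch Normalization, input to next convolution
```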
As shown in Fig. 6, step S03 comprises the following steps: S031, expand the activation values of the last activation function to a specified size with a fully connected layer; S032, output the probability that the image belongs to each dish using softmax to obtain a predicted label; S033, compute the loss from the predicted label and the actual label with a loss function; if the loss is below the threshold, end the step, otherwise proceed to step S034; S034, compute the gradient from the loss of step S033 with an optimization function, update the network parameters, and return to step S02. Computing the loss allows the training of the neural network to be further adjusted, ensuring that it retains the appropriate amount of detail information: details are not missing because low-level information was discarded, and processing efficiency does not suffer because large amounts of useless information are retained.
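Step S03 can be sketched as a classification head with a loss and a gradient update; the tensor sizes and the choice of Adam as the optimization function are assumptions made for the example.

```python
# Sketch of S031-S034: fully connected layer, softmax probabilities, loss, gradient update.
import torch
from torch import nn

activations = torch.randn(8, 128, 7, 7, requires_grad=True)   # last activation maps
labels = torch.randint(0, 10, (8,))                            # actual dish labels

fc = nn.Linear(128 * 7 * 7, 10)                   # S031: expand activations with a fully connected layer
logits = fc(activations.flatten(1))
probs = torch.softmax(logits, dim=1)              # S032: probability of each dish
pred_labels = probs.argmax(dim=1)                 #        predicted labels

criterion = nn.CrossEntropyLoss()                 # S033: loss from predicted vs. actual labels
loss = criterion(logits, labels)
optimizer = torch.optim.Adam(fc.parameters(), lr=1e-3)
optimizer.zero_grad()
loss.backward()                                   # S034: compute the gradient
optimizer.step()                                  #        update network parameters
```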
As shown in Fig. 7, step S2 comprises the following steps: S21, extract feature maps of the input image by convolutional computation; S22, process the feature maps from step S21 with a max pooling operation; S23, process the output of step S22 with a nonlinear activation function to obtain key features. In this step the food image is input into the network, which iteratively extracts features and reduces dimensionality layer by layer. Specifically, for the input food image, different convolution kernels perform convolutional computations to extract different image features; for the features obtained by convolution, a pooling operation removes redundant information to achieve dimensionality reduction; after pooling, a nonlinear activation function maps the features into a nonlinear space; and each activation value is stabilized by batch-normalizing the activations with their mean and variance. The network iterates in this way to extract image features.
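The dimensionality reduction performed by pooling, and the reason pooling can also discard fine detail, can be seen in a tiny illustrative example: a 2x2 max pooling keeps only the largest value in each window.

```python
# A 2x2 max pooling over a 4x4 feature map keeps one value per window,
# reducing dimensionality but dropping three of every four values.
import torch
import torch.nn.functional as F

feature_map = torch.tensor([[1., 2., 0., 0.],
                            [3., 4., 0., 1.],
                            [0., 0., 5., 0.],
                            [0., 2., 0., 6.]]).reshape(1, 1, 4, 4)
pooled = F.max_pool2d(feature_map, kernel_size=2)
print(pooled.reshape(2, 2))
# tensor([[4., 1.],
#         [2., 6.]])
```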
As shown in Fig. 8, step S3 comprises the following steps: S31, use the LBP operator to delimit a neighborhood of size k around each pixel; S32, take the pixel at the center of the neighborhood as a threshold and compare it with the grey values of the adjacent (k*k-1) pixels; S33, if a surrounding pixel is greater than the center pixel value, mark its position as 1, otherwise mark it as 0; S34, arrange the marks in the neighborhood clockwise from the outside inwards into a binary number, which is the LBP value of the center pixel; S35, repeat steps S31 to S34 to obtain the LBP feature map of the whole image. This step obtains the LBP feature map so that it can be fed through a skip connection into a higher layer of the network and merged with the activation values there, serving as the next layer's input and forming a residual module, so that the network can learn, in its higher layers, local detail information that would otherwise be lost or altered.
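Steps S31 to S35 can be sketched in plain NumPy for a 3x3 neighborhood (k = 3); the border handling used here (edge padding) is an assumption, as the patent does not specify it.

```python
# Sketch of S31-S35: LBP value per pixel from comparisons with its 8 neighbors.
import numpy as np

def lbp_feature_map(gray: np.ndarray) -> np.ndarray:
    padded = np.pad(gray, 1, mode="edge")          # border handling: assumed edge padding
    h, w = gray.shape
    # S31/S32: the k*k-1 = 8 neighbors, listed clockwise starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    lbp = np.zeros((h, w), dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
        # S33: mark 1 where the neighbor exceeds the center pixel, else 0
        # S34: place the marks clockwise into one binary number per pixel
        lbp |= ((neighbour > gray).astype(np.uint8) << (7 - bit))
    return lbp

gray_image = (np.random.rand(224, 224) * 255).astype(np.uint8)
lbp_map = lbp_feature_map(gray_image)   # S35: LBP feature map of the whole image
```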
Those skilled in the art may make various other corresponding changes and modifications according to the technical solution and concept described above, and all such changes and modifications shall fall within the scope of protection of the claims of the present invention.

Claims (7)

1. A food recognition method based on a neural network, characterized in that it comprises the following steps:
S1: acquire a food image and input it into the neural network;
S2: perform feature extraction and dimensionality reduction on the input image to obtain key features;
S3: compute LBP feature values for the input image to obtain an LBP feature map;
S4: fuse the LBP feature map with the key features to obtain detail features, and input the detail features to the next layer;
S5: classify the food image using the features finally extracted by the neural network.
2. The food recognition method based on a neural network according to claim 1, characterized in that it further comprises a neural network training step S0, which comprises the following steps:
S01: prepare and collect samples;
S02: select training samples, extract their key features and LBP feature maps and fuse them, obtain detail features and input them to the next layer;
S03: compute the loss of the features finally extracted by the neural network; if the loss exceeds the threshold, return to step S02; if it is below the threshold, proceed to step S04;
S04: test with the images of the test samples; if the test loss is below the threshold, end the training step and proceed to step S1; if the test loss exceeds the threshold, return to step S02.
3. The food recognition method based on a neural network according to claim 2, characterized in that step S01 comprises the following steps:
S011: collect images of all dishes and resize them to a fixed size;
S012: assign each dish image a label with the corresponding dish name;
S013: divide all images and their labels into training samples and test samples.
4. The food recognition method based on a neural network according to claim 2, characterized in that step S02 comprises the following steps:
S021: select training samples, input them into the network, and extract feature maps of the input images by convolutional computation;
S022: process the feature maps from step S021 with a max pooling operation and apply a nonlinear activation function to the result to obtain key features;
S023: compute the LBP feature map of the image and fuse it with the key features obtained in step S022;
S024: apply a Batch Normalization operation to the mean and variance of the fusion result of step S023, normalize it to the range [0, 1], and input it to the next convolutional layer as the detail features.
5. The food recognition method based on a neural network according to claim 2, characterized in that step S03 comprises the following steps:
S031: expand the activation values of the last activation function to a specified size with a fully connected layer;
S032: output the probability that the image belongs to each dish using softmax to obtain a predicted label;
S033: compute the loss from the predicted label and the actual label with a loss function; if the loss is below the threshold, end the step; if the loss exceeds the threshold, proceed to step S034;
S034: compute the gradient from the loss of step S033 with an optimization function, update the network parameters, and return to step S02.
6. The food recognition method based on a neural network according to any one of claims 1-6, characterized in that step S2 comprises the following steps:
S21: extract feature maps of the input image by convolutional computation;
S22: process the feature maps from step S21 with a max pooling operation;
S23: process the output of step S22 with a nonlinear activation function to obtain key features.
7. The food recognition method based on a neural network according to any one of claims 1-6, characterized in that step S3 comprises the following steps:
S31: use the LBP operator to delimit a neighborhood of size k around each pixel;
S32: take the pixel at the center of the neighborhood as a threshold and compare it with the grey values of the adjacent (k*k-1) pixels;
S33: if a surrounding pixel is greater than the center pixel value, mark its position as 1, otherwise mark it as 0;
S34: arrange the marks in the neighborhood clockwise from the outside inwards into a binary number, which is the LBP value of the center pixel;
S35: repeat steps S31 to S34 to obtain the LBP feature map of the whole image.
CN201810054620.1A 2018-01-19 2018-01-19 Food recognition method based on a neural network Pending CN108280474A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810054620.1A CN108280474A (en) 2018-01-19 2018-01-19 Food recognition method based on a neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810054620.1A CN108280474A (en) 2018-01-19 2018-01-19 Food recognition method based on a neural network

Publications (1)

Publication Number Publication Date
CN108280474A true CN108280474A (en) 2018-07-13

Family

ID=62804299

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810054620.1A Pending CN108280474A (en) 2018-01-19 2018-01-19 Food recognition method based on a neural network

Country Status (1)

Country Link
CN (1) CN108280474A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214272A (en) * 2018-07-17 2019-01-15 北京陌上花科技有限公司 Image recognition method and device
CN111008973A (en) * 2018-10-05 2020-04-14 罗伯特·博世有限公司 Method, artificial neural network and device for semantic segmentation of image data
CN112070077A (en) * 2020-11-16 2020-12-11 北京健康有益科技有限公司 Deep learning-based food identification method and device
CN113723498A (en) * 2021-08-26 2021-11-30 广东美的厨房电器制造有限公司 Food maturity identification method, device, system, electric appliance, server and medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512676A (en) * 2015-11-30 2016-04-20 华南理工大学 Food recognition method at intelligent terminal
CN106203493A (en) * 2016-07-04 2016-12-07 何广森 Food identification device and recognition method
CN106372624A (en) * 2016-10-15 2017-02-01 杭州艾米机器人有限公司 Human face recognition method and human face recognition system
CN106529447A (en) * 2016-11-03 2017-03-22 河北工业大学 Small-sample face recognition method
CN106529564A (en) * 2016-09-26 2017-03-22 浙江工业大学 Food image automatic classification method based on convolutional neural networks
CN106650568A (en) * 2016-08-31 2017-05-10 浙江大华技术股份有限公司 Human face identifying method and apparatus
CN106845527A (en) * 2016-12-29 2017-06-13 南京江南博睿高新技术研究院有限公司 Vegetable recognition method
CN107122737A (en) * 2017-04-26 2017-09-01 聊城大学 Automatic road sign detection and recognition method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105512676A (en) * 2015-11-30 2016-04-20 华南理工大学 Food recognition method at intelligent terminal
CN106203493A (en) * 2016-07-04 2016-12-07 何广森 Food identification device and recognition method
CN106650568A (en) * 2016-08-31 2017-05-10 浙江大华技术股份有限公司 Human face identifying method and apparatus
CN106529564A (en) * 2016-09-26 2017-03-22 浙江工业大学 Food image automatic classification method based on convolutional neural networks
CN106372624A (en) * 2016-10-15 2017-02-01 杭州艾米机器人有限公司 Human face recognition method and human face recognition system
CN106529447A (en) * 2016-11-03 2017-03-22 河北工业大学 Small-sample face recognition method
CN106845527A (en) * 2016-12-29 2017-06-13 南京江南博睿高新技术研究院有限公司 Vegetable recognition method
CN107122737A (en) * 2017-04-26 2017-09-01 聊城大学 Automatic road sign detection and recognition method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
QI JIA ET AL.: "Multi-Layer Sparse Representation for Weighted LBP-Patches Based Facial Expression Recognition", 《SENSORS》 *
刘爽: "Implementing a face recognition algorithm on an embedded system using LBP features and a neural network", 《现代计算机》 *
李艳玮 等: "Facial expression recognition method fusing AAM, CNN and LBP features", 《计算机工程与设计》 *
王大伟,陈章玲: "Face recognition based on LBP and convolutional neural networks", 《天津理工大学学报》 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109214272A (en) * 2018-07-17 2019-01-15 北京陌上花科技有限公司 Image recognition method and device
CN109214272B (en) * 2018-07-17 2024-07-05 北京陌上花科技有限公司 Image recognition method and device
CN111008973A (en) * 2018-10-05 2020-04-14 罗伯特·博世有限公司 Method, artificial neural network and device for semantic segmentation of image data
CN112070077A (en) * 2020-11-16 2020-12-11 北京健康有益科技有限公司 Deep learning-based food identification method and device
CN112070077B (en) * 2020-11-16 2021-02-26 北京健康有益科技有限公司 Deep learning-based food identification method and device
CN113723498A (en) * 2021-08-26 2021-11-30 广东美的厨房电器制造有限公司 Food maturity identification method, device, system, electric appliance, server and medium

Similar Documents

Publication Publication Date Title
CN110443143B (en) Multi-branch convolutional neural network fused remote sensing image scene classification method
CN110348319B (en) Face anti-counterfeiting method based on face depth information and edge image fusion
CN106023220B (en) Vehicle appearance component image segmentation method based on deep learning
CN109271895B (en) Pedestrian re-identification method based on multi-scale feature learning and feature segmentation
CN106682696B (en) The more example detection networks and its training method refined based on online example classification device
CN110717554B (en) Image recognition method, electronic device, and storage medium
CN109325395A (en) Image recognition method, convolutional neural network model training method and device
CN108280474A (en) Food recognition method based on a neural network
CN110059750A (en) House type shape recognition process, device and equipment
CN109284779A (en) Object detection method based on deep full convolution network
CN104268552B (en) Fine-grained classification method based on part polygons
CN109919252A (en) Method for generating a classifier using a small number of labelled images
CN105069478A (en) Hyperspectral remote sensing surface feature classification method based on superpixel-tensor sparse coding
CN111784665B (en) OCT image quality evaluation method, system and device based on Fourier transform
CN111753873A (en) Image detection method and device
CN108932712A (en) Rotor winding quality detection system and method
CN108171119B (en) SAR image change detection method based on residual error network
CN110457677A (en) Entity-relationship recognition method and device, storage medium, computer equipment
CN110008912B (en) Social platform matching method and system based on plant identification
CN110751072A (en) Double-person interactive identification method based on knowledge embedded graph convolution network
CN111401343B (en) Method for identifying attributes of people in image and training method and device for identification model
CN109145685A (en) Fruits and vegetables EO-1 hyperion quality detecting method based on integrated study
CN113869098A (en) Plant disease identification method and device, electronic equipment and storage medium
CN111291818A (en) Non-uniform class sample equalization method for cloud mask
Heryanto et al. Classification of Coffee Beans Defect Using Mask Region-based Convolutional Neural Network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (Application publication date: 20180713)