CN114841974A - Nondestructive testing method and system for internal structure of fruit, electronic equipment and medium - Google Patents
- Publication number
- CN114841974A (application CN202210514268.1A)
- Authority
- CN
- China
- Prior art keywords
- fruit
- image
- information
- labeling
- loss function
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T 7/0004 — Industrial image inspection
- G06N 3/045 — Neural networks; combinations of networks
- G06N 3/08 — Neural networks; learning methods
- G06T 5/70 — Image enhancement or restoration; denoising, smoothing
- G06T 7/136 — Segmentation; edge detection involving thresholding
- G06T 7/187 — Segmentation involving region growing, region merging or connected component labelling
- G06T 7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06T 2207/20081 — Training; learning
- G06T 2207/20084 — Artificial neural networks [ANN]
- G06T 2207/30128 — Industrial image inspection; food products
- G06T 2207/30204 — Marker
- Y02P 90/30 — Computing systems specially adapted for manufacturing
- Y02T 10/40 — Engine management systems
Abstract
The invention relates to the technical field of nondestructive testing of agricultural products, and aims to provide a nondestructive testing method and system for the internal structure of fruit, an electronic device and a medium. The method comprises the following steps: constructing a primary fruit element identification model based on a neural network; acquiring a first fruit image and first label information corresponding to the first fruit image, and optimizing the primary fruit element identification model through the first fruit image and the first label information to obtain an optimized fruit element identification model; acquiring a fruit image to be detected, and inputting the fruit image to be detected into the optimized fruit element identification model for processing to obtain final labeling information corresponding to the fruit image to be detected; and obtaining a redrawn image corresponding to the fruit image to be detected according to the final labeling information. The invention can realize nondestructive detection of the internal structure of fruit and provides an important basis for evaluating fruit quality.
Description
Technical Field
The invention relates to the technical field of nondestructive testing of agricultural products, in particular to a nondestructive testing method and system for an internal structure of a fruit, electronic equipment and a medium.
Background
Existing techniques for judging the pulp of durian and similar fruits mainly rely on human visual observation of the external appearance and contour, which involves considerable uncertainty and makes it difficult to judge pulp size, pit size, the presence of empty chambers, the presence of insects, shell thickness, and the like.
At present, artificial intelligence and machine learning technologies in the field of computer vision are developing rapidly and are widely applied in fields such as medicine and industry, but there is no large-scale machine learning application in the field of fruit imaging. In addition, because of the particularities of fruit, existing medical image segmentation techniques are not suitable for fruit imaging, so the existing technology cannot be applied to identifying the internal structure of fruit. Therefore, it is highly desirable to develop a method for automatically identifying the internal structure of fruit.
Disclosure of Invention
The invention aims to solve the above technical problems at least to a certain extent, and provides a nondestructive testing method and system for the internal structure of fruit, an electronic device, and a medium.
The technical scheme adopted by the invention is as follows:
in a first aspect, the invention provides a nondestructive testing method for internal structure of fruit, comprising the following steps:
constructing a primary fruit element identification model based on a neural network;
acquiring a first fruit image and first label information corresponding to the first fruit image, and optimizing the primary fruit element identification model through the first fruit image and the first label information to obtain an optimized fruit element identification model;
acquiring a fruit image to be detected, and inputting the fruit image to be detected into the optimized fruit element identification model for processing to obtain final labeling information corresponding to the fruit image to be detected;
and obtaining a redrawn image corresponding to the fruit image to be detected according to the final marking information.
The invention addresses the problem that consumers in the existing market cannot know the internal structure of a fruit and evaluate its quality; it can realize nondestructive detection of the internal structure of fruit and provide an important basis for fruit quality requirements. Specifically, in the implementation of the invention, a primary fruit element recognition model is constructed based on a neural network; the primary fruit element recognition model is then optimized based on the obtained first fruit image and the first labeling information corresponding to the first fruit image to obtain an optimized fruit element recognition model; the fruit image to be detected is input into the optimized fruit element recognition model for processing to obtain the final labeling information corresponding to the fruit image to be detected; and finally a redrawn image corresponding to the fruit image to be detected is obtained based on the final labeling information. In this process, the final labeling information corresponding to the fruit image to be detected is the nondestructive testing result, so that a user can conveniently obtain information on each element inside the fruit to be detected and can visually understand the internal structure of the fruit to be detected from the finally obtained redrawn image.
In one possible design, optimizing the primary fruit element recognition model through the first fruit image and the first label information to obtain an optimized fruit element recognition model, including:
inputting the first fruit image into the primary fruit element identification model for processing to obtain a plurality of second labeling blocks corresponding to the first fruit image;
constructing and obtaining a plurality of loss functions corresponding to the plurality of second labeling blocks based on the designated pixel area of the first labeling information and the plurality of second labeling blocks, and then solving loss values of the plurality of loss functions;
comparing a plurality of loss values corresponding to the designated pixel areas of the first labeling information to obtain a minimum loss value of the plurality of loss values, and then obtaining second labeling blocks corresponding to the minimum loss values respectively matched with all the pixel areas in the first labeling information;
and taking second labeling blocks corresponding to minimum loss values respectively matched with all pixel regions in the first labeling information as output results of the primary fruit element recognition model on the first fruit image, and then optimizing the primary fruit element recognition model based on the output results to obtain an optimized fruit element recognition model.
In one possible design, the first labeling information includes a first contour image corresponding to a specified fruit element in the first fruit image and first label information corresponding to the first contour image, where the first label information is used to indicate a type of the fruit element corresponding to the first contour image;
the second labeling block comprises a second pixel block corresponding to a specified fruit element in the first fruit image and second label information corresponding to the second pixel block, and the second label information is used for representing the type of the fruit element corresponding to the second pixel block;
the loss function comprises a first loss function, a second loss function, and/or a third loss function; wherein the first loss function is:
wherein m is the number of samples, i.e. the number of all fruit elements in the first fruit image, y'(i) is the real classification of the i-th sample corresponding to the first labeling information, x'(i) is the solved classification of the i-th sample, and h_θ is the probability of x'(i);
the second loss function includes:
t_x = (x - x_a) / w_a,  t_y = (y - y_a) / h_a,
t_w = log(w / w_a),  t_h = log(h / h_a),
wherein x is the abscissa of the center of the prediction output result, x_a is the abscissa of the second labeled block, x* is the abscissa of the center of the first contour image; y is the ordinate of the center of the prediction output result, y_a is the ordinate of the second labeled block, y* is the ordinate of the center of the first contour image; w is the width of the prediction output result, w_a is the width of the second labeled block, w* is the width of the designated pixel region of the first labeling information; h is the height of the prediction output result, h_a is the height of the second labeled block, h* is the height of the designated pixel region of the first labeling information; t_x is the translation-scaling parameter of the abscissa of the second labeled block relative to the abscissa of the center of the prediction output result, t_w is the translation-scaling parameter of the width of the second labeled block relative to the width of the prediction output result, t_y is the translation-scaling parameter of the ordinate of the second labeled block relative to the ordinate of the center of the prediction output result, t_h is the translation-scaling parameter of the height of the second labeled block relative to the height of the prediction output result; t_x* is the translation-scaling parameter of the abscissa of the second labeled block relative to the abscissa of the center of the first contour image, t_w* is the translation-scaling parameter of the width of the second labeled block relative to the width of the designated pixel region of the first labeling information, t_y* is the translation-scaling parameter of the ordinate of the second labeled block relative to the ordinate of the center of the first contour image, and t_h* is the translation-scaling parameter of the height of the second labeled block relative to the height of the designated pixel region of the first labeling information;
the third loss function is:
FIU(S1,S2)=SI/(S1+S2-SI);
where S1 is the area of the designated pixel region of the first annotation information, S2 is the area of the second pixel region, FIU (S1, S2) is the intersection ratio when the designated pixel region of the first annotation information and the second pixel region are superimposed on the same reference image, and SI is the area of the overlapping region when the designated pixel region of the first annotation information and the second pixel region are superimposed on the same reference image.
In one possible design, the loss function includes a first loss function, a second loss function and a third loss function, and the total loss function is TOTAL_LOSS = A × L1 + B × L2 + C × L3, where L1 represents the first loss function, L2 represents the second loss function, L3 represents the third loss function, A is a weight coefficient of the first loss function, B is a weight coefficient of the second loss function, and C is a weight coefficient of the third loss function.
In one possible design, after obtaining the first image of fruit, the method further includes:
denoising the first fruit image to obtain a denoised fruit image, wherein the first labeling information and the second labeling block are labeling information corresponding to the denoised fruit image; wherein denoising the first fruit image comprises:
acquiring fixed noise of the first fruit image, wherein the fixed noise is pre-stored pixels of a background image when fruits are not placed;
and removing fixed noise from the first fruit image to obtain a preliminary de-noised fruit image, wherein the preliminary de-noised fruit image is the pixel difference between the first fruit image and the background image.
In one possible design, denoising the first fruit image comprises:
acquiring random noise of the first fruit image;
carrying out self-adaptive binarization on the preliminarily denoised fruit image to obtain a binarized image, wherein the pixel value of any point (x, y) in the binarized image is as follows:
wherein f1(x, y) is the pixel value of the point (x, y) in the binarized image, f(x, y) is the pixel value of the point (x, y) in the preliminarily denoised fruit image, and t1 is a first pixel threshold;
acquiring a connected region except a background region in the binary image, and calculating a secondary connected region with the area smaller than an area threshold value in the connected region;
and acquiring the position information of the secondary connected region, and then deleting the image corresponding to the position information of the secondary connected region in the primarily denoised fruit image to obtain the denoised fruit image.
In a possible design, after the final annotation information corresponding to the fruit image to be detected is obtained, the method further includes:
and obtaining the pulp area corresponding to the fruit image to be detected according to the final labeling information, and then grading the fruit corresponding to the fruit image to be detected according to the pulp area.
In a second aspect, the invention provides a fruit internal structure nondestructive testing system for implementing the fruit internal structure nondestructive testing method according to any one of the above items; the fruit inner structure nondestructive test system includes:
the model construction module is used for constructing a primary fruit element identification model based on a neural network;
the model optimization module is used for acquiring a first fruit image and first label information corresponding to the first fruit image, and optimizing the primary fruit element identification model through the first fruit image and the first label information to obtain an optimized fruit element identification model;
the image identification module is used for acquiring a fruit image to be detected, inputting the fruit image to be detected into the optimized fruit element identification model for processing, and obtaining final marking information corresponding to the fruit image to be detected;
and the image redrawing module is used for obtaining a redrawing image corresponding to the fruit image to be detected according to the final marking information.
In a third aspect, the present invention provides an electronic device, comprising:
a memory for storing computer program instructions; and,
a processor for executing the computer program instructions to perform the operations of the method for non-destructive testing of the internal structure of fruit as described in any one of the above.
In a fourth aspect, the present invention provides a computer readable storage medium for storing computer readable computer program instructions configured to perform the operations of the method for non-destructive testing of internal structure of fruit as described in any one of the above when run.
Drawings
FIG. 1 is a flow chart of a method for non-destructive testing of the internal structure of fruit according to the present invention;
FIG. 2 is a binarized image corresponding to a first fruit image, as exemplified in the present invention;
FIG. 3 is an exemplary binarized image corresponding to a first fruit image after deletion of secondary connected regions in accordance with the present invention;
FIG. 4 is an exemplary denoised fruit image corresponding to a first fruit image in accordance with the present invention;
FIG. 5 is an image corresponding to a first fruit image including final annotation information in accordance with the present invention by way of example;
FIG. 6 is a redrawn image corresponding to an exemplary fruit image to be detected in the present invention;
FIG. 7 is a block diagram of a nondestructive inspection system for internal fruit structure in accordance with the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
It should be understood that the term "and/or", as it may appear herein, merely describes an association between related objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, B exists alone, or A and B exist at the same time.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Example 1:
The present embodiment provides a nondestructive testing method for the internal structure of fruit, which can be, but is not limited to being, executed by a computer device with certain computing resources, for example an electronic device such as a personal computer, a smart phone, a personal digital assistant or a wearable device, or by a virtual machine, so as to implement nondestructive testing of the internal structure of fruit.
As shown in fig. 1, the nondestructive testing method for the internal structure of fruit may include, but is not limited to, the following steps:
s1, constructing a primary fruit element identification model based on a neural network;
s2, obtaining a first fruit image and first label information corresponding to the first fruit image, and optimizing the primary fruit element identification model through the first fruit image and the first label information to obtain an optimized fruit element identification model; specifically, the first annotation information includes a first outline image corresponding to a specified fruit element in the first fruit image and first label information corresponding to the first outline image, where the first label information is used to indicate the type of the fruit element corresponding to the first outline image; the first outline image is used to represent the position, outline and so on of a specified fruit element within the first fruit image, where the fruit elements include the kernel, the pulp, empty chambers, insects, thorns, the shell, and the like. In this embodiment, the first annotation information can be obtained through, but is not limited to, fully manual labeling and/or semi-automatic labeling, where fully manual labeling means manually annotating the outline image corresponding to each fruit element in the first fruit image, and semi-automatic labeling means annotating the outline image corresponding to each fruit element in the first fruit image with a predetermined labeling model and then manually adjusting the obtained outline images; compared with fully manual labeling, semi-automatic labeling is faster and also helps reduce labor costs.
In this embodiment, the first labeling information adopts a pattern such as a rectangle, a polygon, a circle, or the like, or a text label, and is used for labeling an abnormal value of the first fruit image.
After obtaining the first fruit image and the first labeling information corresponding to the first fruit image, the method further includes: constructing an initial information set, where the initial information set contains images with conditions such as the fruit being toppled, a missing edge, a missing center, or overlapping; if the initial information set is A0, the information set of images in which the fruit is toppled in the initial information set A0 is A01, the information set of images in which the fruit is not toppled is A02, and the information set A0 comprises A01 and A02; the information set of edge-missing images in the initial information set A0 is B01, the information set of images without a missing edge is B02, and the information set A0 comprises B01 and B02; the information set of center-missing images in the initial information set A0 is C01, the information set of images without a missing center is C02, and the information set A0 comprises C01 and C02; other abnormal conditions are handled analogously. The subsequent steps, such as optimizing the primary fruit element identification model, refer to processing on the same sub-information set or on the intersection of several sub-information sets.
As an example, if the information sets of images in which the fruit is not toppled and the center is not missing are to be processed, the intersection of A02 and C02 is extracted as the model data set S, as illustrated in the sketch below; if only the information set of images in which the fruit is not toppled is to be processed, A02 is taken as the model data set S.
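For illustration only, the following is a minimal Python sketch of how such sub-information sets might be combined into the model data set S; the FruitSample structure and its attribute flags (toppled, edge_missing, center_missing) are hypothetical names introduced for this example, not part of the invention.

```python
from dataclasses import dataclass

@dataclass
class FruitSample:
    image_path: str        # path to the first fruit image
    toppled: bool          # fruit is toppled over in the image
    edge_missing: bool     # part of the fruit edge is cut off
    center_missing: bool   # center of the fruit is missing

def build_model_dataset(a0, require_not_toppled=True, require_center_present=True):
    """Select the sub-information sets (e.g. A02 intersected with C02) as the model data set S."""
    s = []
    for sample in a0:
        if require_not_toppled and sample.toppled:
            continue
        if require_center_present and sample.center_missing:
            continue
        s.append(sample)
    return s
```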
Specifically, through the first fruit image and the first labeling information, the primary fruit element recognition model is optimized to obtain an optimized fruit element recognition model, including:
s201, inputting the first fruit image into the primary fruit element recognition model for processing to obtain a plurality of second labeling blocks corresponding to the first fruit image. Specifically, the second labeled block includes a second pixel block corresponding to a specified fruit element in the first fruit image and second label information corresponding to the second pixel block, where the second label information is the same as the first label information corresponding to the specified fruit element, and the second label information is used to indicate a type of the fruit element corresponding to the second pixel block. In this embodiment, when the first fruit image is input into the primary fruit element identification model for processing, a CNN is used to extract features of target elements in the first fruit image, and when the features are extracted, feature sequences of different scales are obtained through sampling of different scales, and the feature sequences are used as data to be processed subsequently; after the primary fruit element recognition model carries out subsequent processing on the feature sequence, a plurality of second feature images corresponding to the first fruit image can be obtained, and a plurality of second labeling blocks corresponding to the first fruit image are arranged on the second feature images. Specifically, sampling of different scales, that is, the size of the second labeling block, can be obtained according to user-defined settings, for example, the size is set to 2 × 2 pixels, which is not limited here, and in the implementation process, the first fruit image is processed by the convolution layer, the pooling layer and the full-link layer and then input to the primary fruit element identification model for processing, so as to obtain the second pixel block and second label information corresponding to the second pixel block, so as to improve the image processing speed.
S202, constructing and obtaining a plurality of loss functions corresponding to a plurality of second labeled blocks based on the designated pixel area and the plurality of second labeled blocks of the first labeled information, and then solving loss values of the plurality of loss functions;
in this embodiment, the loss function includes a first loss function, a second loss function, and/or a third loss function; wherein the first loss function is:
wherein m is the number of samples, that is, the number of all fruit elements in the first fruit image; specifically, when there is only one first fruit image, m is the number of all fruit elements in that first fruit image, and when there are n1 first fruit images with an average of n2 fruit elements per image, m is the number of all fruit elements across the first fruit images, i.e. m = n1 × n2; y'(i) is the real classification of the i-th sample corresponding to the first labeling information, x'(i) is the solved classification of the i-th sample, and h_θ is the probability of x'(i), i.e. the probability that x'(i) corresponds to the real classification. Specifically, y'(i) is obtained based on the second label information corresponding to the designated pixel region of the first labeling information, and x'(i) is solved and automatically generated from the first fruit image by the primary fruit element identification model; the real classification and the solved classification can be set to correspond to fruit elements, for example 1 for pulp, 2 for insect, and 0 for non-pulp or non-insect, which is not limited here. In this embodiment, the first loss function is used to measure the degree of deviation of the target class, that is, the deviation between the solved class of a second labeled block and the corresponding real class.
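Since the formula itself is not reproduced here, the following sketch shows one common form of a multi-class cross-entropy loss consistent with the variables described above (m samples, real classes y', predicted probabilities h_θ); the averaging convention and the small epsilon are assumptions of this example rather than details stated in the invention.

```python
import numpy as np

def cross_entropy_loss(y_true, probs, eps=1e-12):
    """y_true: (m,) integer class ids (e.g. 1 = pulp, 2 = insect, 0 = background).
    probs: (m, num_classes) predicted class probabilities h_theta for each sample."""
    m = y_true.shape[0]
    p_correct = probs[np.arange(m), y_true]      # probability assigned to the real class
    return -np.mean(np.log(p_correct + eps))     # average negative log-likelihood

# toy example with m = 3 fruit elements
y = np.array([1, 2, 1])
p = np.array([[0.1, 0.8, 0.1], [0.2, 0.1, 0.7], [0.3, 0.6, 0.1]])
loss1 = cross_entropy_loss(y, p)
```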
The second loss function includes:
t_x = (x - x_a) / w_a,  t_y = (y - y_a) / h_a,
t_w = log(w / w_a),  t_h = log(h / h_a),
wherein x is the abscissa of the center of the prediction output result, x_a is the abscissa of the second labeled block, x* is the abscissa of the center of the first contour image; y is the ordinate of the center of the prediction output result, y_a is the ordinate of the second labeled block, y* is the ordinate of the center of the first contour image; w is the width of the prediction output result, w_a is the width of the second labeled block, w* is the width of the designated pixel region of the first labeling information; h is the height of the prediction output result, h_a is the height of the second labeled block, h* is the height of the designated pixel region of the first labeling information; t_x is the translation-scaling parameter of the abscissa of the second labeled block relative to the abscissa of the center of the prediction output result, t_w is the translation-scaling parameter of the width of the second labeled block relative to the width of the prediction output result, t_y is the translation-scaling parameter of the ordinate of the second labeled block relative to the ordinate of the center of the prediction output result, t_h is the translation-scaling parameter of the height of the second labeled block relative to the height of the prediction output result; t_x* is the translation-scaling parameter of the abscissa of the second labeled block relative to the abscissa of the center of the first contour image, t_w* is the translation-scaling parameter of the width of the second labeled block relative to the width of the designated pixel region of the first labeling information, t_y* is the translation-scaling parameter of the ordinate of the second labeled block relative to the ordinate of the center of the first contour image, and t_h* is the translation-scaling parameter of the height of the second labeled block relative to the height of the designated pixel region of the first labeling information; it should be understood that t_x, t_w, t_y, t_h, t_x*, t_w*, t_y* and t_h* indicate how closely the related data approach the first labeling information: the closer the related data are to the first labeling information, the smaller the values of the corresponding translation-scaling parameters.
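The translation-scaling parameters above follow an anchor-box style parameterization. The sketch below computes t and t* from box coordinates, assuming (cx, cy, w, h) center-format boxes; combining the deltas into a scalar regression error with a squared difference is an assumption of this example, not the specific quantization used by the invention.

```python
import math

def box_deltas(box, anchor):
    """Translation-scaling parameters of `box` relative to `anchor`.
    Boxes are (cx, cy, w, h): center coordinates, width, height."""
    cx, cy, w, h = box
    ax, ay, aw, ah = anchor
    return ((cx - ax) / aw,      # t_x
            (cy - ay) / ah,      # t_y
            math.log(w / aw),    # t_w
            math.log(h / ah))    # t_h

def position_regression_error(pred_box, gt_box, anchor):
    """Compare the deltas of the prediction and of the ground truth w.r.t. the same anchor."""
    t = box_deltas(pred_box, anchor)       # t_x, t_y, t_w, t_h
    t_star = box_deltas(gt_box, anchor)    # t_x*, t_y*, t_w*, t_h*
    return sum((a - b) ** 2 for a, b in zip(t, t_star))
```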
The third loss function is:
FIU(S1,S2)=SI/(S1+S2-SI);
where S1 is the area of the designated pixel region of the first annotation information, S2 is the area of the second pixel region, FIU (S1, S2) is the intersection ratio when the designated pixel region of the first annotation information and the second pixel region are superimposed on the same reference image, and SI is the area of the overlapping region when the designated pixel region of the first annotation information and the second pixel region are superimposed on the same reference image.
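A minimal sketch of the intersection quantity FIU(S1, S2) for two regions represented as axis-aligned boxes; the (x1, y1, x2, y2) box format and the small guard against division by zero are assumptions of this example.

```python
def fiu(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    si = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)       # overlap area SI
    s1 = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])   # area S1
    s2 = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])   # area S2
    return si / max(s1 + s2 - si, 1e-9)
```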
Specifically, in this embodiment, the first loss function is a multi-class cross-entropy loss function obtained by cross-entropy quantization, the second loss function is an initial position regression error function obtained by polynomial regression quantization, and the third loss function is a final position regression error function obtained by Intersection over Union (IoU) quantization.
Specifically, in this embodiment, the loss function includes a first loss function, a second loss function and a third loss function, and the total loss is TOTAL_LOSS = A × L1 + B × L2 + C × L3, where L1 represents the first loss function, L2 represents the second loss function, L3 represents the third loss function, A is the weight coefficient of the first loss function, B is the weight coefficient of the second loss function, and C is the weight coefficient of the third loss function; A, B and C are determined according to the user's requirements on the recognition results of the optimized fruit element recognition model. In this embodiment, the gradient descent method may be used to solve for the neural network parameters, with repeated iterations, so that the loss function TOTAL_LOSS is reduced to a certain level, which facilitates optimizing the primary fruit element recognition model to obtain the optimized fruit element recognition model.
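For illustration, a minimal PyTorch-style sketch of weighting the three losses and iterating with gradient descent; the compute_losses helper, the weight values and the optimizer settings are assumptions of this example, not the training procedure of the invention.

```python
import torch

def total_loss(l1, l2, l3, a=1.0, b=1.0, c=1.0):
    """TOTAL_LOSS = A * L1 + B * L2 + C * L3."""
    return a * l1 + b * l2 + c * l3

def train(model, loader, lr=1e-3, epochs=10):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)   # plain gradient descent
    for _ in range(epochs):
        for images, targets in loader:
            # hypothetical helper returning the three loss terms as tensors
            l1, l2, l3 = model.compute_losses(images, targets)
            loss = total_loss(l1, l2, l3, a=1.0, b=1.0, c=2.0)
            optimizer.zero_grad()
            loss.backward()      # back-propagate and update the network parameters
            optimizer.step()
```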
In this embodiment, adding to or removing from the information set used for iterating the loss function includes:
A1. establishing a feedback-mechanism information set S, where the feedback includes after-sale feedback and instant feedback; if fruits with certain numbers are found to have been labeled incorrectly after sale, those numbers are placed into an information set X, and at the next model regression S = S - X is used as the new information set for computing and iterating the loss function;
A2. for label information generated by the machine, whether it meets the standard of the first labeling information can be judged manually; if it does, the photo and the corresponding label information are added to the data set S, and the loss function is recalculated.
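A small sketch of the feedback mechanism in A1/A2 using plain Python sets; the sample identifiers and the passes_standard check are hypothetical and stand in for whatever bookkeeping an implementation might use.

```python
def apply_after_sale_feedback(s, mislabeled_ids):
    """A1: remove samples reported as mislabeled after sale (S = S - X)."""
    return {sample for sample in s if sample not in mislabeled_ids}

def apply_instant_feedback(s, machine_labeled, passes_standard):
    """A2: add machine-labeled samples that a human confirms meet the labeling standard."""
    return s | {sample for sample in machine_labeled if passes_standard(sample)}
```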
In this embodiment, after obtaining the first fruit image, the method further includes:
denoising the first fruit image to obtain a denoised fruit image, wherein the first labeling information and the second labeling block are labeling information corresponding to the denoised fruit image.
Specifically, denoising the first fruit image comprises:
acquiring fixed noise of the first fruit image, wherein the fixed noise is pre-stored pixels of a background image when fruits are not placed;
and removing fixed noise from the first fruit image to obtain a preliminary de-noised fruit image, wherein the preliminary de-noised fruit image is the pixel difference between the first fruit image and the background image.
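A minimal OpenCV/NumPy sketch of removing the fixed noise by subtracting the stored background image; the file names are placeholders for this example.

```python
import cv2

background = cv2.imread("background.png", cv2.IMREAD_GRAYSCALE)   # image with no fruit placed
first_image = cv2.imread("first_fruit.png", cv2.IMREAD_GRAYSCALE)

# preliminary de-noised image = pixel difference between the fruit image and the background
prelim_denoised = cv2.absdiff(first_image, background)
```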
Denoising the first fruit image, further comprising:
acquiring random noise of the first fruit image;
carrying out self-adaptive binarization on the preliminarily denoised fruit image to obtain a binarized image, wherein the pixel value of any point (x, y) in the binarized image is as follows:
wherein f1(x, y) is the pixel value of the point (x, y) in the binarized image, f(x, y) is the pixel value of the point (x, y) in the preliminarily denoised fruit image, and t1 is the first pixel threshold, obtained in an adaptive manner; positions in the binarized image whose pixel value is below the first pixel threshold t1 are shown in black, and positions above the first pixel threshold t1 are shown in white; in the present embodiment, the binarized image is as shown in fig. 2;
acquiring a connected region in the binarized image except for a background region, and calculating a secondary connected region with the area smaller than an area threshold value in the connected region; after the secondary connected region is obtained, the method further comprises the following steps: deleting a secondary connected region in the binarized image, wherein the pixel value of any point (x, y) in the binarized image at the connected region is as follows:
wherein f2(x, y) is the pixel value of the point (x, y) of the binarized image within the connected region, areas[i] is the area of the connected region i containing the point (x, y), and t2 is a second pixel threshold; that is, positions within connected regions whose area areas[i] is below the second threshold t2 are shown in white, and positions within connected regions whose area is above the second threshold t2 are shown in black; in this embodiment, the binarized image after the secondary connected regions are deleted is shown in fig. 3;
obtaining the position information of the secondary connected region, and then deleting the image corresponding to the position information of the secondary connected region from the preliminary denoised fruit image to obtain a denoised fruit image, wherein in the embodiment, the denoised fruit image is as shown in fig. 4.
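A sketch of the random-noise removal step using OpenCV: Otsu thresholding here stands in for the adaptive choice of t1 (an assumption of this example), and connected regions below an area threshold are erased from the preliminarily de-noised image.

```python
import cv2

def remove_small_regions(prelim_denoised, area_threshold=50):
    # adaptive binarization; Otsu's method is one way of picking t1 automatically
    _, binary = cv2.threshold(prelim_denoised, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    denoised = prelim_denoised.copy()
    for i in range(1, num):                                   # label 0 is the background region
        if stats[i, cv2.CC_STAT_AREA] < area_threshold:       # secondary connected region
            denoised[labels == i] = 0                         # delete its pixels
    return denoised
```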
In this embodiment, the first fruit image and the fruit image to be detected are both radiographic images with a wavelength of 0.00775-10 nm.
S203, comparing a plurality of loss values corresponding to the designated pixel areas of the first labeling information to obtain a minimum loss value of the loss values, and then obtaining second labeling blocks corresponding to the minimum loss values respectively matched with all the pixel areas in the first labeling information;
and S204, taking the second labeling blocks corresponding to the minimum loss values respectively matched with all the pixel regions in the first labeling information as the output result of the primary fruit element recognition model on the first fruit image, and then optimizing the primary fruit element recognition model based on the output result to obtain the optimized fruit element recognition model.
S3, obtaining a fruit image to be detected, inputting the fruit image to be detected into the optimized fruit element identification model for processing, and obtaining final marking information corresponding to the fruit image to be detected; as shown in fig. 5, the kernel positions in the two left images include kernel labeling information, all labeling information corresponding to each fruit element constitutes final labeling information, and the two right images are unprocessed images that do not include labeling information.
Specifically, in this embodiment, the step of processing the to-be-detected fruit image by the optimized fruit element recognition model includes:
normalizing the fruit image to be detected to obtain a normalized image so as to further eliminate image noise;
performing feature extraction on the normalized image by adopting a CNN (Convolutional Neural Networks) to obtain a feature vector, and performing feature learning to obtain a training feature sample;
processing the training feature samples through convolutional layers, pooling layers and fully connected layers to obtain processed data; processing through the convolutional, pooling and fully connected layers helps improve the speed at which the optimized fruit element recognition model processes the fruit image to be detected;
and identifying the processed data to obtain the final marking information corresponding to the fruit image to be detected so as to realize the prediction of different fruit elements in the fruit image.
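Schematically, the inference path for an image to be detected might look like the sketch below; the min-max normalization scheme and the model's predict interface are assumptions introduced for this example only.

```python
import numpy as np

def normalize(image):
    """Scale pixel values to [0, 1] to further suppress image noise (assumed scheme)."""
    image = image.astype(np.float32)
    return (image - image.min()) / max(float(image.max() - image.min()), 1e-6)

def detect_fruit_elements(model, image):
    """Run the optimized model on a fruit image and return its final labeling information."""
    x = normalize(image)
    return model.predict(x)   # hypothetical: list of (element_type, contour/box) labels
```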
And S4, verifying the final labeling information according to preset rules, and deleting data that fails verification, so that the fruit image is redrawn based on the verified final labeling information, which improves the accuracy of the redrawn image. Specifically, the verification rules include:
rule 1) kernel element a must be contained in pulp element b; if kernel element a is not inside pulp element b, the final labeling information corresponding to kernel element a is deleted; when the intersection ratio of kernel element a and pulp element b is greater than 0.9, kernel element a is considered to be inside pulp element b;
rule 2) empty-chamber element c must not intersect kernel element a or pulp element b; if empty-chamber element c intersects kernel element a or pulp element b, the final labeling information corresponding to empty-chamber element c is deleted;
rule 3) if the manual labeling information carries a "toppled" label, the fruit image is not labeled and is not input into the fruit element identification model.
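A sketch of the post-processing verification in S4, assuming each label is a dict with a "type" and a "box" and reusing an IoU helper such as the fiu sketch shown earlier; interpreting "intersection ratio" as IoU and the 0.9 containment threshold follow rule 1, while the dict layout is an assumption of this example.

```python
def verify_labels(labels, iou):
    """Drop kernel labels not inside any pulp label, and empty-chamber labels that
    intersect a kernel or pulp label (rules 1 and 2)."""
    pulps = [l for l in labels if l["type"] == "pulp"]
    kept = []
    for lab in labels:
        if lab["type"] == "kernel":
            if not any(iou(lab["box"], p["box"]) > 0.9 for p in pulps):
                continue                      # rule 1: kernel not inside any pulp
        if lab["type"] == "empty_chamber":
            others = [o for o in labels if o["type"] in ("kernel", "pulp")]
            if any(iou(lab["box"], o["box"]) > 0 for o in others):
                continue                      # rule 2: empty chamber intersects kernel/pulp
        kept.append(lab)
    return kept
```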
And S5, obtaining a redrawn image corresponding to the fruit image to be detected according to the final marking information. Fig. 6 shows an exemplary redrawn image corresponding to the fruit image to be detected. After the redrawn image is obtained, the redrawn image can be output in the form of a label, the label comprises a paper label and/or an electronic label, wherein the paper label can be adhered to the fruit for sale, and the electronic label can be displayed on an e-commerce platform, an H5 page or a small program, which is not described herein again.
S6, obtaining the pulp area corresponding to the fruit image to be detected according to the final labeling information, and then grading the fruit corresponding to the fruit image to be detected according to the pulp area. It should be noted that the pulp area reflects, to a certain extent, the pulp content of the fruit to be detected; for fruits such as durian, whose pulp condition cannot be judged from the external appearance, this facilitates standardized grading and pricing, and addresses the user pain point of grading and pricing durian and similar fruits. Of course, grading and pricing can also be performed based on the ratio of pulp to the total fruit elements, which also falls within the protection scope of the present application and is not described here again.
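A minimal sketch of grading by pulp area, assuming the final labeling information provides one pixel area per pulp label; the grade names and boundary values are placeholders, not values from the invention.

```python
def grade_by_pulp_area(labels, boundaries=(5000, 15000)):
    """Sum the pixel areas of pulp labels and map the total to a grade (C/B/A)."""
    pulp_area = sum(l["area"] for l in labels if l["type"] == "pulp")
    if pulp_area < boundaries[0]:
        return "C", pulp_area
    if pulp_area < boundaries[1]:
        return "B", pulp_area
    return "A", pulp_area
```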
It should be understood that steps S5 and S6 may be arranged in parallel or in tandem, all within the scope of the present application, and are not limited herein.
This embodiment addresses the problem that consumers in the existing market cannot know the internal structure of a fruit and evaluate its quality; it can realize nondestructive detection of the internal structure of fruit and provide an important basis for fruit quality requirements. Specifically, in the implementation, a primary fruit element recognition model is constructed based on a neural network; the primary fruit element recognition model is then optimized based on the obtained first fruit image and the first labeling information corresponding to the first fruit image to obtain an optimized fruit element recognition model; the fruit image to be detected is input into the optimized fruit element recognition model for processing to obtain the final labeling information corresponding to the fruit image to be detected; and finally a redrawn image corresponding to the fruit image to be detected is obtained based on the final labeling information. In this process, the final labeling information corresponding to the fruit image to be detected is the nondestructive testing result, so that a user can conveniently obtain information on each element inside the fruit to be detected and can visually understand the internal structure of the fruit to be detected from the finally obtained redrawn image.
Example 2:
the embodiment provides a nondestructive testing system for internal structure of fruit, which is used for implementing the nondestructive testing method for internal structure of fruit in embodiment 1; as shown in fig. 7, the fruit internal structure nondestructive testing system comprises:
the model construction module is used for constructing a primary fruit element identification model based on a neural network;
the model optimization module is used for acquiring a first fruit image and first label information corresponding to the first fruit image, and optimizing the primary fruit element identification model through the first fruit image and the first label information to obtain an optimized fruit element identification model;
the image identification module is used for acquiring a fruit image to be detected, inputting the fruit image to be detected into the optimized fruit element identification model for processing, and obtaining final marking information corresponding to the fruit image to be detected;
and the image redrawing module is used for obtaining a redrawing image corresponding to the fruit image to be detected according to the final marking information.
Example 3:
on the basis of embodiment 1 or 2, this embodiment discloses an electronic device, and this device may be a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like. The electronic device may be referred to as a terminal, a portable terminal, a desktop terminal, or the like, and includes:
a memory for storing computer program instructions; and,
a processor for executing the computer program instructions to perform the operations of the method for non-destructive inspection of the internal structure of fruit according to any of embodiment 1.
Example 4:
on the basis of any one of embodiments 1 to 3, the present embodiment discloses a computer-readable storage medium for storing computer-readable computer program instructions configured to perform the operations of the fruit internal structure nondestructive testing method according to embodiment 1 when the computer program instructions are executed.
It should be noted that the functions described herein, if implemented in software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It will be apparent to those skilled in the art that the modules or steps of the present invention described above may be implemented by a general purpose computing device, they may be centralized on a single computing device or distributed across a network of multiple computing devices, and they may alternatively be implemented by program code executable by a computing device, such that they may be stored in a storage device and executed by a computing device, or fabricated separately as individual integrated circuit modules, or fabricated as a single integrated circuit module from multiple modules or steps. Thus, the present invention is not limited to any specific combination of hardware and software.
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: modifications of the technical solutions described in the embodiments or equivalent replacements of some technical features may still be made. And such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Finally, it should be noted that the present invention is not limited to the above alternative embodiments, and that various other forms of products can be obtained by anyone in light of the present invention. The above detailed description should not be taken as limiting the scope of the invention, which is defined by the appended claims, which are intended to be interpreted according to the breadth to which the description is entitled.
Claims (10)
1. A nondestructive testing method for internal structure of fruit is characterized in that: the method comprises the following steps:
constructing a primary fruit element identification model based on a neural network;
acquiring a first fruit image and first label information corresponding to the first fruit image, and optimizing the primary fruit element identification model through the first fruit image and the first label information to obtain an optimized fruit element identification model;
acquiring a fruit image to be detected, and inputting the fruit image to be detected into the optimized fruit element identification model for processing to obtain final labeling information corresponding to the fruit image to be detected;
and obtaining a redrawn image corresponding to the fruit image to be detected according to the final marking information.
2. The nondestructive testing method for the internal structure of the fruit according to claim 1, wherein: optimizing the primary fruit element recognition model through the first fruit image and the first label information to obtain an optimized fruit element recognition model comprises:
inputting the first fruit image into the primary fruit element identification model for processing to obtain a plurality of second labeling blocks corresponding to the first fruit image;
constructing and obtaining a plurality of loss functions corresponding to the plurality of second labeling blocks based on the designated pixel area of the first labeling information and the plurality of second labeling blocks, and then solving the loss values of the plurality of loss functions;
comparing a plurality of loss values corresponding to the designated pixel areas of the first labeling information to obtain a minimum loss value of the plurality of loss values, and then obtaining second labeling blocks corresponding to the minimum loss values respectively matched with all the pixel areas in the first labeling information;
and taking second labeling blocks corresponding to minimum loss values respectively matched with all pixel regions in the first labeling information as output results of the primary fruit element recognition model on the first fruit image, and then optimizing the primary fruit element recognition model based on the output results to obtain an optimized fruit element recognition model.
3. The nondestructive testing method for the internal structure of the fruit according to claim 2, wherein: the first labeling information comprises a first contour image corresponding to a specified fruit element in the first fruit image and first label information corresponding to the first contour image, wherein the first label information is used for indicating the type of the fruit element corresponding to the first contour image;
the second labeling block comprises a second pixel block corresponding to a specified fruit element in the first fruit image and second label information corresponding to the second pixel block, and the second label information is used for representing the type of the fruit element corresponding to the second pixel block;
the loss function comprises a first loss function, a second loss function, and/or a third loss function; wherein the first loss function is:
wherein m is the number of samples, i.e. the number of all fruit elements in the first fruit image, y'(i) is the real classification of the i-th sample corresponding to the first labeling information, x'(i) is the solved classification of the i-th sample, and h_θ is the probability of x'(i);
the second loss function includes:
t_x = (x - x_a) / w_a,  t_y = (y - y_a) / h_a,
t_w = log(w / w_a),  t_h = log(h / h_a),
wherein x is the abscissa of the center of the prediction output result, x_a is the abscissa of the second labeled block, x* is the abscissa of the center of the first contour image; y is the ordinate of the center of the prediction output result, y_a is the ordinate of the second labeled block, y* is the ordinate of the center of the first contour image; w is the width of the prediction output result, w_a is the width of the second labeled block, w* is the width of the designated pixel region of the first labeling information; h is the height of the prediction output result, h_a is the height of the second labeled block, h* is the height of the designated pixel region of the first labeling information; t_x is the translation-scaling parameter of the abscissa of the second labeled block relative to the abscissa of the center of the prediction output result, t_w is the translation-scaling parameter of the width of the second labeled block relative to the width of the prediction output result, t_y is the translation-scaling parameter of the ordinate of the second labeled block relative to the ordinate of the center of the prediction output result, t_h is the translation-scaling parameter of the height of the second labeled block relative to the height of the prediction output result; t_x* is the translation-scaling parameter of the abscissa of the second labeled block relative to the abscissa of the center of the first contour image, t_w* is the translation-scaling parameter of the width of the second labeled block relative to the width of the designated pixel region of the first labeling information, t_y* is the translation-scaling parameter of the ordinate of the second labeled block relative to the ordinate of the center of the first contour image, and t_h* is the translation-scaling parameter of the height of the second labeled block relative to the height of the designated pixel region of the first labeling information;
the third loss function is:
FIU(S1,S2)=SI/(S1+S2-SI);
wherein S1 is the area of the designated pixel region of the first labeling information, S2 is the area of the second pixel block, SI is the area of the overlapping region when the designated pixel region of the first labeling information and the second pixel block are superimposed on the same reference image, and FIU(S1, S2) is the intersection-over-union ratio of the two regions under that superimposition.
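The following Python sketch illustrates plausible implementations of the three loss terms of claim 3. Only the box parameterization t_x, t_y, t_w, t_h and the intersection-over-union ratio FIU are taken from the claim text; the cross-entropy form assumed for the first loss and the smooth-L1 comparison of the t parameters assumed for the second loss are standard choices, not formulas quoted from the patent.

```python
import math


def l1_cross_entropy(y_true, p_pred):
    """Assumed first loss: mean binary cross-entropy over the m fruit elements."""
    m = len(y_true)
    eps = 1e-12
    return -sum(
        y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
        for y, p in zip(y_true, p_pred)
    ) / m


def box_params(box, anchor):
    """t_x, t_y, t_w, t_h of `box` relative to the second labeling block (anchor)."""
    x, y, w, h = box
    xa, ya, wa, ha = anchor
    return ((x - xa) / wa, (y - ya) / ha, math.log(w / wa), math.log(h / ha))


def l2_box_regression(pred_box, gt_box, anchor):
    """Assumed second loss: smooth-L1 distance between the t and t* parameters."""
    def smooth_l1(d):
        return 0.5 * d * d if abs(d) < 1.0 else abs(d) - 0.5
    t = box_params(pred_box, anchor)
    t_star = box_params(gt_box, anchor)
    return sum(smooth_l1(a - b) for a, b in zip(t, t_star))


def l3_iou(s1, s2, si):
    """Third loss term from the claim: FIU(S1, S2) = SI / (S1 + S2 - SI)."""
    return si / (s1 + s2 - si)
```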
4. The nondestructive testing method for the internal structure of the fruit according to claim 3, wherein: the loss function comprises the first loss function, the second loss function and the third loss function, and the total loss is TOTAL_LOSS = A·L1 + B·L2 + C·L3, wherein L1 represents the first loss function, L2 represents the second loss function, L3 represents the third loss function, A is the weight coefficient of the first loss function, B is the weight coefficient of the second loss function, and C is the weight coefficient of the third loss function.
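As an illustration of the weighted combination in claim 4, the minimal Python sketch below sums the three loss terms; the weight values A, B and C shown are arbitrary example values, not values taken from the patent.

```python
A, B, C = 1.0, 1.0, 0.5  # assumed example weight coefficients


def total_loss(l1: float, l2: float, l3: float) -> float:
    """TOTAL_LOSS = A*L1 + B*L2 + C*L3 as stated in claim 4."""
    return A * l1 + B * l2 + C * l3
```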
5. The nondestructive testing method for the internal structure of the fruit according to claim 2, wherein: after obtaining the first fruit image, the method further comprises:
denoising the first fruit image to obtain a denoised fruit image, wherein the first labeling information and the second labeling block are labeling information corresponding to the denoised fruit image; wherein,
denoising the first fruit image, comprising:
acquiring the fixed noise of the first fruit image, wherein the fixed noise is the pre-stored pixels of a background image captured when no fruit is placed;
and removing the fixed noise from the first fruit image to obtain a preliminarily denoised fruit image, wherein the preliminarily denoised fruit image is the pixel difference between the first fruit image and the background image.
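A minimal NumPy sketch of the fixed-noise removal in claim 5, assuming 8-bit grayscale images; clipping negative differences to zero is an assumption, since the claim only specifies the pixel difference between the first fruit image and the background image.

```python
import numpy as np


def remove_fixed_noise(fruit_image: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Return the preliminarily denoised image as the pixel difference."""
    diff = fruit_image.astype(np.int32) - background.astype(np.int32)
    return np.clip(diff, 0, 255).astype(np.uint8)
```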
6. The nondestructive testing method for the internal structure of the fruit according to claim 5, wherein: the denoising of the first fruit image further comprises:
acquiring random noise of the first fruit image;
carrying out self-adaptive binarization on the preliminarily denoised fruit image to obtain a binarized image, wherein the pixel value of any point (x, y) in the binarized image is as follows:
wherein f is 1 (x, y) is the pixel value of the point (x, y) in the binarized image, f (x, y) is the pixel value of the point (x, y) in the preliminarily denoised fruit image, t 1 Is a first pixel threshold;
acquiring the connected regions in the binarized image other than the background region, and determining, among the connected regions, the secondary connected regions whose area is smaller than an area threshold;
and acquiring the position information of the secondary connected region, and then deleting the image corresponding to the position information of the secondary connected region in the primarily denoised fruit image to obtain the denoised fruit image.
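A possible OpenCV rendering of the random-noise removal in claim 6. The exact binarization rule and the values of the pixel threshold t_1 and of the area threshold are not given in this excerpt, so a simple global threshold and arbitrary example constants are assumed.

```python
import cv2
import numpy as np

T1 = 30              # assumed first pixel threshold t_1
AREA_THRESHOLD = 50  # assumed minimum area for a connected region to be kept


def remove_random_noise(denoised: np.ndarray) -> np.ndarray:
    """Blank out small connected regions of the preliminarily denoised grayscale image."""
    _, binary = cv2.threshold(denoised, T1, 255, cv2.THRESH_BINARY)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    cleaned = denoised.copy()
    for label in range(1, num):  # label 0 is the background region
        if stats[label, cv2.CC_STAT_AREA] < AREA_THRESHOLD:
            cleaned[labels == label] = 0  # delete the secondary connected region
    return cleaned
```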
7. The nondestructive testing method for the internal structure of the fruit according to claim 1, wherein: after the final labeling information corresponding to the fruit image to be detected is obtained, the method further comprises:
and obtaining the pulp area corresponding to the fruit image to be detected according to the final labeling information, and then grading the fruit corresponding to the fruit image to be detected according to the pulp area.
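As an illustration of the grading step in claim 7, the sketch below counts pulp pixels in an assumed binary pulp mask derived from the final labeling information and maps the area to a grade; the grade boundaries are invented example values, not values from the patent.

```python
import numpy as np


def grade_by_pulp_area(pulp_mask: np.ndarray) -> str:
    """Grade the fruit by the number of pixels labeled as pulp."""
    pulp_area = int(np.count_nonzero(pulp_mask))
    if pulp_area > 200_000:
        return "grade A"
    if pulp_area > 100_000:
        return "grade B"
    return "grade C"
```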
8. A nondestructive testing system for the internal structure of a fruit, characterized in that: the system is used for carrying out the nondestructive testing method for the internal structure of the fruit according to any one of claims 1 to 7; the nondestructive testing system for the internal structure of the fruit comprises:
the model construction module is used for constructing a primary fruit element identification model based on a neural network;
the model optimization module is used for acquiring a first fruit image and first labeling information corresponding to the first fruit image, and optimizing the primary fruit element identification model through the first fruit image and the first labeling information to obtain an optimized fruit element identification model;
the image identification module is used for acquiring a fruit image to be detected, inputting the fruit image to be detected into the optimized fruit element identification model for processing, and obtaining final labeling information corresponding to the fruit image to be detected;
and the image redrawing module is used for obtaining a redrawn image corresponding to the fruit image to be detected according to the final labeling information.
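A schematic of how the four modules of claim 8 could be wired together in code; every class, method and argument name here is an illustrative assumption rather than part of the claimed system.

```python
class FruitInspectionSystem:
    """Assumed wiring of the four modules named in claim 8."""

    def __init__(self, model_builder, model_optimizer, recognizer, redrawer):
        self.model_builder = model_builder      # model construction module
        self.model_optimizer = model_optimizer  # model optimization module
        self.recognizer = recognizer            # image identification module
        self.redrawer = redrawer                # image redrawing module

    def run(self, first_images, first_labeling, image_to_detect):
        primary_model = self.model_builder.build()
        optimized = self.model_optimizer.optimize(primary_model, first_images, first_labeling)
        final_labeling = self.recognizer.recognize(optimized, image_to_detect)
        return self.redrawer.redraw(image_to_detect, final_labeling)
```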
9. An electronic device, characterized in that: the electronic device comprises:
a memory for storing computer program instructions; and
a processor for executing the computer program instructions to carry out the operations of the method of non-destructive inspection of the internal structure of fruit according to any one of claims 1 to 7.
10. A computer-readable storage medium storing computer program instructions, characterized in that: the computer program instructions, when executed, perform the operations of the nondestructive inspection method for the internal structure of the fruit according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210514268.1A CN114841974B (en) | 2022-05-11 | 2022-05-11 | Nondestructive testing method, nondestructive testing system, nondestructive testing electronic equipment and nondestructive testing medium for internal structure of fruit |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114841974A true CN114841974A (en) | 2022-08-02 |
CN114841974B CN114841974B (en) | 2024-08-09 |
Family
ID=82570647
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210514268.1A Active CN114841974B (en) | 2022-05-11 | 2022-05-11 | Nondestructive testing method, nondestructive testing system, nondestructive testing electronic equipment and nondestructive testing medium for internal structure of fruit |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114841974B (en) |
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011041924A1 (en) * | 2009-10-09 | 2011-04-14 | 江苏大学 | Device and method for identifying ripe oranges in nature scene by filter spectral image technology |
US10546216B1 (en) * | 2019-04-11 | 2020-01-28 | Seetree Systems Ltd. | Recurrent pattern image classification and registration |
CN110969090A (en) * | 2019-11-04 | 2020-04-07 | 口碑(上海)信息技术有限公司 | Fruit quality identification method and device based on deep neural network |
Non-Patent Citations (1)
Title |
---|
欧阳爱国 (OUYANG Aiguo); 吴建 (WU Jian); 刘燕德 (LIU Yande): "Application of hyperspectral imaging in nondestructive testing of agricultural products" (高光谱成像在农产品无损检测中的应用), Guangdong Agricultural Sciences (广东农业科学), no. 23, 10 December 2015 (2015-12-10) *
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117765528A (en) * | 2023-12-04 | 2024-03-26 | 北京霍里思特科技有限公司 | Method, device and storage medium for classifying objects |
CN117765528B (en) * | 2023-12-04 | 2024-07-26 | 北京霍里思特科技有限公司 | Method, device and storage medium for classifying objects |
CN117783287A (en) * | 2024-02-26 | 2024-03-29 | 中国热带农业科学院南亚热带作物研究所 | Device and method for carrying out nondestructive testing on pineapple fruit during transmission |
CN117783287B (en) * | 2024-02-26 | 2024-05-24 | 中国热带农业科学院南亚热带作物研究所 | Device and method for carrying out nondestructive testing on pineapple fruit during transmission |
Also Published As
Publication number | Publication date |
---|---|
CN114841974B (en) | 2024-08-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110163080B (en) | Face key point detection method and device, storage medium and electronic equipment | |
CN110956225B (en) | Contraband detection method and system, computing device and storage medium | |
CN111680690B (en) | Character recognition method and device | |
CN110675339A (en) | Image restoration method and system based on edge restoration and content restoration | |
CN112464809A (en) | Face key point detection method and device, electronic equipment and storage medium | |
CN110503103B (en) | Character segmentation method in text line based on full convolution neural network | |
CN109948533B (en) | Text detection method, device and equipment and readable storage medium | |
CN111310826B (en) | Method and device for detecting labeling abnormality of sample set and electronic equipment | |
CN111723841A (en) | Text detection method and device, electronic equipment and storage medium | |
CN112800955A (en) | Remote sensing image rotating target detection method and system based on weighted bidirectional feature pyramid | |
CN112906794A (en) | Target detection method, device, storage medium and terminal | |
CN114841974A (en) | Nondestructive testing method and system for internal structure of fruit, electronic equipment and medium | |
CN111191584B (en) | Face recognition method and device | |
CN114444565B (en) | Image tampering detection method, terminal equipment and storage medium | |
CN113284122B (en) | Roll paper packaging defect detection method and device based on deep learning and storage medium | |
CN116206227B (en) | Picture examination system and method for 5G rich media information, electronic equipment and medium | |
CN113971644A (en) | Image identification method and device based on data enhancement strategy selection | |
CN110969602B (en) | Image definition detection method and device | |
Wicht et al. | Camera-based sudoku recognition with deep belief network | |
WO2024174726A1 (en) | Handwritten and printed text detection method and device based on deep learning | |
CN114581928A (en) | Form identification method and system | |
CN111414922A (en) | Feature extraction method, image processing method, model training method and device | |
CN112200789A (en) | Image identification method and device, electronic equipment and storage medium | |
CN112580624A (en) | Method and device for detecting multidirectional text area based on boundary prediction | |
CN115294405B (en) | Method, device, equipment and medium for constructing crop disease classification model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||