CN116612178A - Method for extracting grain shape of integrated rice spike grains - Google Patents
- Publication number: CN116612178A (application CN202310900787.6A)
- Authority: CN (China)
- Prior art keywords: grain, image, rice, network, training
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/60 — Image analysis; analysis of geometric attributes
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06N3/0464 — Neural networks; convolutional networks [CNN, ConvNet]
- G06N3/048 — Neural networks; activation functions
- G06N3/08 — Neural networks; learning methods
- G06T2207/20016 — Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; pyramid transform
- G06T2207/20081 — Training; learning
- G06T2207/20084 — Artificial neural networks [ANN]
Abstract
The invention relates to the technical field of image processing, and in particular provides an integrated method for extracting grain-shape traits of rice spike grains, comprising the following steps. S1: spike grain detection; S11: collect original rice spike images and construct a rice spike image dataset; S12: train, validate and test on the rice spike image dataset. S2: occluded grain repair; S21: acquire occluded grain images and obtain a training set of paired grain images; S22: train and test on the paired grain image training set. S3: grain-trait extraction, obtaining grain length, width, perimeter and projected grain area. S4: obtain the integrated spike grain detection-and-repair model and the integrated spike grain trait extraction model. The scheme can repair rice spike grains occluded in their natural form, thereby solving the problem of mutual occlusion among spike grains and improving the efficiency and accuracy of grain-trait extraction.
Description
Technical Field
The invention relates to the technical field of image processing, and in particular to an integrated method for extracting grain-shape traits of rice spike grains.
Background
Rice is an important food crop that is widely planted around the world. The grain-shape traits of the rice spike are an important expression of rice genes and an important avenue for studying them. Among these traits, an increase in grain length raises the grain aspect ratio and shifts the grain shape from short-round toward slender, while grain width and thickness decrease; conversely, a coordinated increase in grain width and thickness lowers the aspect ratio and shifts the grain shape from slender toward short-round. Grain shape is therefore highly correlated with the grain aspect ratio.
In the current process of rice genetic breeding, a large amount of quantitative data is needed to support selection on spike-grain genes. Traditional phenotype measurement relies mainly on manual labor: the workload is large, the efficiency is low, and the results are strongly affected by human subjectivity, so it can hardly meet current breeding requirements; meanwhile, labor costs rise year by year, making the detection of spike grain-shape traits ever more expensive. With the development of imaging and computing technology, high-throughput automated phenotyping methods are becoming increasingly popular, but extensive mutual occlusion among rice spike grains greatly limits the efficiency of trait extraction. At present, high-throughput extraction of rice grain-shape traits falls mainly into two categories:
1. Thresh the spike grains and spread them on a flat plate, using a vibration device to avoid mutual occlusion among grains; then image them with a visible-light camera and extract the grain-shape traits with digital image processing. The threshing step is time-consuming, and grains may break during threshing, which affects the accuracy of trait extraction to a certain extent.
2. Manually separate the rice branches and stems, spread them on a flat plate, image them with a visible-light camera, and extract the grain-shape traits of the unoccluded grains with digital image processing. This method destroys the natural form of the rice spike, and after manual separation many spike grains still occlude each other, so their grain-shape traits cannot be extracted by digital image processing, which reduces extraction accuracy.
Neither method can therefore extract rice grain-shape traits both rapidly and accurately; occlusion among spike grains has become the bottleneck of spike grain-shape trait extraction.
Meanwhile, phenotype measurement of rice spike grains currently relies mainly on deep-learning object detection. Because grains on a rice spike in its natural form occlude each other severely, even though manually separating the rice branches can largely reduce the influence of occlusion, the separation damages the spike (it is a lossy process), and the traits of some spike grains still cannot be extracted directly afterwards; hence current detection network structures struggle to detect and count spike grains accurately in their natural form.
In summary, how to design an integrated spike grain-shape trait extraction method that solves mutual occlusion among grains without damaging the rice spike, while improving detection efficiency and accuracy, is a problem that currently needs to be solved.
Disclosure of Invention
The invention aims to solve the above problems and provides an integrated method for extracting grain-shape traits of rice spike grains, which can accurately detect and count spike grains in their natural form, solve the problem of mutual occlusion among spike grains, realize integrated spike grain trait extraction, greatly improve the efficiency and precision of trait extraction, and avoid the subjective errors of traditional manual measurement.
In order to achieve the above purpose, the present invention proposes the following technical scheme: a method for extracting grain-shape traits of integrated rice spike grains, comprising the following steps:
S1: spike grain detection;
S11: collecting original rice spike images in their natural form and constructing a rice spike image dataset;
S12: training, validating and testing on the rice spike image dataset with the grain detection network model;
the grain detection network model comprises a feature extraction network for extracting features of the rice spike image, a feature pyramid network for fusing features extracted by the feature extraction network at different depths, a region proposal network for generating prediction boxes at possible target locations, and a classification regression network for classifying the detected targets and refining the positions of the detected bounding boxes;
S2: occluded grain repair;
S21: acquiring occluded grain images and building a paired grain image training set comprising unoccluded grain images and the corresponding occluded grain images;
S22: training and testing on the paired grain image training set with a grain repair network model;
the grain repair network model comprises a generator for restoring an input occluded image to an unoccluded image and a discriminator for judging the authenticity of the unoccluded image produced by the generator;
S3: grain-trait extraction, obtaining grain length, width, perimeter and projected grain area;
S4: obtaining the integrated spike grain trait extraction model;
S41: obtaining the optimal grain detection network model from S1;
S42: obtaining the optimal grain repair network model from S2;
S43: integrating the grain repair network model into the grain detection network model, placed after the region proposal network and parallel to the classification regression network, to obtain the integrated spike grain detection-and-repair model;
S44: integrating the trait-extraction pipeline into the integrated spike grain detection-and-repair model to obtain the integrated spike grain trait extraction model.
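As a structural sketch only (the stage callables below are hypothetical stand-ins, not the patented networks), the integration described in S41–S44 amounts to composing three stages: detect spike grains, repair the occluded ones, then extract traits from each grain crop:

```python
# Sketch of the integrated spike-grain pipeline (S4): detection -> repair -> traits.
# The three stage callables are hypothetical placeholders for the trained networks.

def integrated_trait_extraction(image, detect, repair, extract_traits):
    """Run detection, repair occluded grain crops, then extract grain traits."""
    results = []
    for crop, occluded in detect(image):      # detector yields (grain crop, occlusion flag)
        if occluded:
            crop = repair(crop)               # grain repair network restores occluded grains
        results.append(extract_traits(crop))  # grain length/width/perimeter/area
    return results

# Toy stand-ins that only demonstrate the data flow
detect = lambda img: [("grain1", False), ("grain2", True)]
repair = lambda crop: crop + "_repaired"
extract = lambda crop: {"id": crop}

print(integrated_trait_extraction(None, detect, repair, extract))
```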
Preferably, the objective function formula of the grain detection network model in S12 is as follows:
$L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}}\sum_i L_{cls}(p_i, p_i^{*}) + \lambda\,\frac{1}{N_{reg}}\sum_i p_i^{*}\,L_{reg}(t_i, t_i^{*})$

where $p_i$ represents the probability that the $i$-th anchor is predicted as a target; $p_i^{*}$ is the corresponding ground-truth label, equal to 1 if an object exists in the anchor and 0 otherwise; $t_i$ is the coordinate parameter vector representing the offset between the predicted and ground-truth bounding-box coordinates; $t_i^{*}$ represents the coordinates of the ground-truth bounding box for an anchor containing an object; $N_{cls}$ denotes the batch size during training; and $N_{reg}$ denotes the number of anchors generated by the region proposal network.

$L_{cls}$ is a cross-entropy loss used to classify whether an anchor contains an object, with the specific calculation formula:

$L_{cls}(p_i, p_i^{*}) = -\log\bigl[p_i^{*} p_i + (1 - p_i^{*})(1 - p_i)\bigr]$

$L_{reg}$ is a regression loss that allows the network to obtain more accurate bounding boxes, with the specific calculation formula:

$L_{reg}(t_i, t_i^{*}) = \operatorname{smooth}_{L1}(t_i - t_i^{*}),\qquad \operatorname{smooth}_{L1}(x)=\begin{cases}0.5x^{2}, & |x|<1\\ |x|-0.5, & \text{otherwise}\end{cases}$
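Assuming the standard Faster R-CNN form of these loss terms, a minimal numeric sketch (the anchor values and λ below are illustrative, not from the patent):

```python
import math

def l_cls(p, p_star):
    """Cross-entropy for anchor objectness: -log(p) for positives, -log(1-p) otherwise."""
    return -math.log(p_star * p + (1 - p_star) * (1 - p))

def smooth_l1(x):
    """Smooth L1, used element-wise by the bounding-box regression loss."""
    return 0.5 * x * x if abs(x) < 1 else abs(x) - 0.5

def rpn_loss(preds, lam=1.0):
    """preds: list of (p, p_star, t, t_star) per anchor; t, t_star are 4-vectors."""
    n_cls = len(preds)  # stands in for the N_cls normalizer
    n_reg = len(preds)  # stands in for the N_reg normalizer (illustrative)
    cls = sum(l_cls(p, ps) for p, ps, _, _ in preds) / n_cls
    reg = sum(ps * sum(smooth_l1(ti - si) for ti, si in zip(t, ts))
              for _, ps, t, ts in preds) / n_reg
    return cls + lam * reg

anchors = [(0.9, 1, [0.1, 0.0, 0.2, 0.0], [0.0, 0.0, 0.0, 0.0]),
           (0.2, 0, [0.0] * 4, [0.0] * 4)]
print(round(rpn_loss(anchors), 4))
```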
preferably, the objective function formula of the grain repair network model in S22 is as follows:
$G^{*} = \arg\min_{G}\max_{D}\ \mathcal{L}_{cGAN}(G, D) + \lambda\,\mathcal{L}_{L1}(G)$

where $D$ is the discriminator, $G$ is the generator, $z$ is noise, $x$ is the real image, and $y$ is the constraint condition.

The objective function of the conditional generative adversarial network is:

$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}\bigl[\log D(x, y)\bigr] + \mathbb{E}_{y,z}\bigl[\log\bigl(1 - D(G(y, z), y)\bigr)\bigr]$

The L1 loss function of the network is:

$\mathcal{L}_{L1}(G) = \mathbb{E}_{x,y,z}\bigl[\lVert x - G(y, z)\rVert_{1}\bigr]$
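A minimal numeric sketch of the two terms, assuming the standard pix2pix-style combination of adversarial and L1 losses (the discriminator outputs and pixel values here are illustrative):

```python
import math

def cgan_losses(d_real, d_fake, real, fake, lam=100.0):
    """Conditional-GAN adversarial term plus weighted L1 reconstruction term.
    d_real/d_fake: discriminator outputs in (0,1); real/fake: flat pixel lists."""
    # log D(x, y) + log(1 - D(G(y, z), y)) -- the quantity D maximizes
    adv = math.log(d_real) + math.log(1.0 - d_fake)
    # L1 term pulling the generated (repaired) grain toward the real one
    l1 = sum(abs(r - f) for r, f in zip(real, fake)) / len(real)
    return adv, lam * l1

adv, l1 = cgan_losses(0.9, 0.2, [1.0, 0.5], [0.8, 0.5])
print(round(adv, 4), round(l1, 1))
```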
preferably, S21 comprises the following sub-steps:
s211: and (3) shielding grain image acquisition: obtaining whole rice ears, naturally placing the rice ears on a cover plate of a scanner, selecting one shielded grain on the rice ears, fixing the shielded grain, and imaging the fixed shielded grain and the rice ears together to obtain a shielded grain image;
s212: image acquisition of the unobstructed grain: taking down the rice ears in S211, only keeping grains fixed on a cover plate of the scanner, and scanning again to obtain corresponding non-shielded grain images;
s213: synthesizing paired occluded grain images: for the unoccluded grain image, take the red channel, binarize it, extract the image contour and compute the bounding rectangle of the grain contour; then cut the corresponding occluded sub-image out of the occlusion image using the bounding-rectangle coordinates, and splice the two sub-images horizontally to obtain the synthesized occluded-grain repair dataset;
s214: data enhancement: randomly horizontally or vertically overturning the synthesized occlusion grain image obtained in the step S213 to amplify a data set, wherein the training set is constructed completely and each training set image comprises paired grain images, namely an unoccluded grain image and a corresponding occluded grain image;
s215: dividing the data set: the paired grain images obtained in S214 are taken as a dataset, and the images in the dataset are divided into a training set and a test set according to the number ratio of 4:1.
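The synthesis in S213 can be sketched with NumPy as follows; the threshold value and the toy images are illustrative assumptions, not values from the patent:

```python
import numpy as np

def synthesize_pair(unoccluded_rgb, occluded_rgb, thresh=128):
    """S213 sketch: locate the grain via the red channel of the unoccluded image,
    crop the same window from both images, and splice them side by side."""
    red = unoccluded_rgb[..., 0]
    mask = red > thresh                       # binarized red channel
    ys, xs = np.nonzero(mask)
    y0, y1, x0, x1 = ys.min(), ys.max() + 1, xs.min(), xs.max() + 1  # bounding rect
    clean = unoccluded_rgb[y0:y1, x0:x1]
    occ = occluded_rgb[y0:y1, x0:x1]          # same window in the occluded image
    return np.concatenate([occ, clean], axis=1)  # horizontal splice -> training pair

img_clean = np.zeros((6, 6, 3), dtype=np.uint8)
img_clean[2:4, 1:5, 0] = 200                  # a bright-red toy "grain"
img_occ = img_clean.copy()
img_occ[2, 2, 0] = 0                          # one occluded pixel
pair = synthesize_pair(img_clean, img_occ)
print(pair.shape)
```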
Preferably, S22 comprises the following sub-steps:
s221: inputting the paired grain image training set obtained in the step S215 into a grain repair network model for training, updating parameters in a neural network until the specified iteration times or the specified accuracy are reached, and stopping training;
s222: inputting the synthesized occlusion grain images in the test set obtained in the step S215 into a grain restoration network model to obtain restored complete grain images, and comparing the complete grain images with the actual complete grain images in the test set;
and 4 grain parameters (grain length, grain width, area and perimeter) are selected for the comparison test, calculating the mean absolute percentage error and the correlation coefficient R between the repaired values and the true values of the 4 parameters.
Preferably, S3 comprises the following sub-steps:
s31: graying the single grain image by extracting the gray map of the red (R) channel, which has the maximum contrast between grain and background, for subsequent shape extraction;
s32: binarizing the grain gray level image by using a threshold segmentation algorithm to obtain a grain binary image;
s33: extracting the outline of the grain based on the grain binary image, wherein the outline comprises coordinates of each point on the outline;
s34: and calculating the length, width, perimeter and projection area of the grain based on the contour coordinates obtained in the step S33.
Preferably, in S34, the calculation method of the grain length, width, perimeter and projected area of the grain is as follows:
grain length: based on the points on the grain contour, calculate the distance between every pair of points and take the maximum distance as the grain length;
grain width: from the two points defining the grain length, compute the equation of the line through them and the slope of its normal; then, for normals with different intercepts, compute the distance between the two intersection points of each normal with the contour, and take the maximum value as the grain width;
grain perimeter: based on the points on the grain contour, if two points are adjacent vertically or horizontally, the distance between them is defined as 1 pixel; if two points are adjacent in the upper-left, lower-left, upper-right or lower-right direction, the distance between them is defined as √2 pixels; summing the distances between all adjacent pixel points on the contour gives the grain perimeter;
grain area: based on the points on the grain contour, the grain area is the number of pixels within the grain contour.
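The length, perimeter and area definitions above can be sketched directly (the width computation via contour normals is omitted for brevity; the contour here is a toy 2×3 pixel rectangle):

```python
import math
from itertools import combinations

def grain_traits(contour, interior_pixels):
    """S34 sketch: traits from contour points [(x, y), ...] and a filled-pixel count."""
    # Grain length: maximum pairwise distance between contour points
    length = max(math.dist(a, b) for a, b in combinations(contour, 2))
    # Grain perimeter: adjacent contour points are 1 px apart (4-connected)
    # or sqrt(2) px apart (diagonal); summing consecutive distances covers both
    perim = sum(math.dist(contour[i], contour[(i + 1) % len(contour)])
                for i in range(len(contour)))
    # Grain projected area: number of pixels inside the contour
    return {"length": length, "perimeter": perim, "area": interior_pixels}

# Contour of a 2x3-pixel block, traversed in order
contour = [(0, 0), (1, 0), (2, 0), (2, 1), (1, 1), (0, 1)]
t = grain_traits(contour, 6)
print(round(t["length"], 3), t["perimeter"], t["area"])
```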
Preferably, S11 comprises the following sub-steps:
s111: collecting rice ear images: harvesting mature rice to obtain main rice ears, randomly tiling the main rice ears on visible light scanning imaging equipment, and imaging and storing a rice ear sample;
s112: original image cropping: removing redundant parts in the rice spike image based on the rice spike image acquired in the step S111, and cutting the rice spike image into an image with uniform pixel size;
s113: ear grain image labeling: based on the cropped image obtained in S112, label all grains on the ear with software using bounding-box annotation; after each grain is labeled, record the upper-left corner coordinates, lower-right corner coordinates, bounding-box area and target class of the box, and save once all grains on the whole ear are labeled;
s114: data enhancement: carrying out random horizontal overturning or vertical overturning on the rice ear images and the labeling files obtained in the S112 and the S113 according to a certain probability, wherein each sample in the data set comprises the cut rice ear images and the corresponding labeling files;
s115: data set partitioning: after the data enhancement in S114, dividing the samples in the data set into a training set, a verification set and a test set in a ratio of 2:1:1;
s116: data set format conversion: the data set samples divided by S115 are respectively converted into formats prescribed for training of the deep learning object detection method.
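Steps S114–S115 can be sketched as follows; the flip stand-ins, the seed and the sample names are illustrative, not part of the patent:

```python
import random

def augment_and_split(samples, flip_prob=0.2, seed=0):
    """S114-S115 sketch: randomly flip each (image, annotation) sample with a given
    probability, then split into training/validation/test sets at a 2:1:1 ratio."""
    rng = random.Random(seed)
    out = []
    for img, ann in samples:
        if rng.random() < flip_prob:
            img, ann = f"flip({img})", f"flip({ann})"  # stand-in for a real flip
        out.append((img, ann))
    rng.shuffle(out)
    n = len(out)
    train, val = out[: n // 2], out[n // 2 : 3 * n // 4]
    test = out[3 * n // 4 :]
    return train, val, test

samples = [(f"img{i}", f"ann{i}") for i in range(8)]
train, val, test = augment_and_split(samples)
print(len(train), len(val), len(test))
```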
Preferably, S12 comprises the following sub-steps:
s121: training a network: inputting the rice spike image training set obtained in the step S116 into a grain detection network model for training, training the network by adopting different super parameters, and continuously updating parameters of the neural network in the training process until the specified iteration times are reached, and stopping training;
s122: network validation: inputting the rice spike image validation set obtained in S116 into the grain detection network model for validation, obtaining the network precision under different hyper-parameters according to the object detection evaluation indexes, and then obtaining the optimal grain detection network;
s123: network test: and (3) inputting the rice spike image test set obtained in the step (S116) into the optimal grain detection network obtained in the step (S122) for testing, obtaining the grain number on each rice spike sample, and calculating the average absolute percentage error and the correlation coefficient R of the network predicted value and the true value.
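The two evaluation metrics named in S123 — mean absolute percentage error and the correlation coefficient R — can be sketched as follows (the count values are illustrative):

```python
import math

def mape(pred, true):
    """Mean absolute percentage error between predicted and true grain counts."""
    return 100.0 * sum(abs(p - t) / t for p, t in zip(pred, true)) / len(true)

def pearson_r(xs, ys):
    """Correlation coefficient R between network predictions and ground truth."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

pred = [98, 120, 75]   # illustrative predicted grain counts per spike
true = [100, 118, 80]  # illustrative ground-truth counts
print(round(mape(pred, true), 2), round(pearson_r(pred, true), 4))
```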
Preferably, the feature extraction network in S12 includes: the input layer is used for inputting the collected rice ear image; the convolution layer is used for extracting information in the image; the pooling layer is used for selecting the information extracted by the convolution layer and reducing the dimension of the information; an activation layer for improving the nonlinear fitting capability of the network; the output layer is used for outputting a feature map which is rich in image high-level semantic information after being activated by a plurality of convolution pools;
the feature pyramid network includes: the convolution layer is used for reducing the dimension of the feature map output by each stage; upsampling layer: the method comprises the steps of up-sampling a characteristic diagram obtained by a network;
the region proposal network includes: a sliding window, which slides over the feature maps obtained by the feature pyramid network to generate anchor boxes of different sizes for target prediction; a classification channel, composed of several convolution layers, for classifying targets within the anchor boxes; a regression channel, composed of several convolution layers, for fine-tuning the anchor-box positions; and non-maximum suppression, for eliminating duplicate detections and anchor boxes that exceed the image boundary;
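The sliding-window anchor generation described above can be sketched as follows; the scales, aspect ratios and stride are illustrative values, not those used in the patent:

```python
def generate_anchors(feat_h, feat_w, stride, scales=(32, 64), ratios=(0.5, 1.0)):
    """RPN sketch: at every sliding-window position on the feature map, emit
    anchor boxes (x1, y1, x2, y2) for each scale/aspect-ratio combination."""
    anchors = []
    for fy in range(feat_h):
        for fx in range(feat_w):
            # center of this feature-map cell in image coordinates
            cx, cy = fx * stride + stride / 2, fy * stride + stride / 2
            for s in scales:
                for r in ratios:
                    w, h = s * r ** 0.5, s / r ** 0.5  # area ~ s^2, w/h ratio = r
                    anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors

a = generate_anchors(2, 2, stride=16)
print(len(a))  # positions x scales x ratios
```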
the classification regression network includes: a classification channel, composed of several convolution layers, for classifying objects in the proposal regions generated by the region proposal network; a regression channel, composed of several convolution layers, for fine-tuning the positions of the proposal regions generated by the region proposal network; and non-maximum suppression, for eliminating redundant detection boxes and outputting the final object detection results.
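The greedy non-maximum suppression used by both networks can be sketched as follows (the IoU threshold and the boxes are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy NMS: keep highest-scoring boxes, drop boxes overlapping a kept one."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))
```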
Compared with the prior art, the invention has the following beneficial effects:
1. according to the invention, the counting of the grain of the spike part and the accurate positioning of the grain of the spike part can be realized with high precision under the condition that the rice spike is not separated, so that subjective errors caused by traditional manual measurement are avoided, the time required for manually separating the rice spike and branch stems is saved, the damage to the natural form of rice is avoided, and the workload and the labor cost are saved.
2. According to the invention, rice spike grains occluded in their natural form can be restored by an image inpainting technique based on a generative adversarial network, so that occluded grains recover their natural form; this largely removes the bottleneck of occlusion preventing direct phenotype measurement, and improves the efficiency and accuracy of spike grain-shape trait extraction.
Drawings
Fig. 1 is a schematic diagram of the overall structure of integrated ear grain shape trait extraction provided according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an image acquisition apparatus and an acquisition process provided according to an embodiment of the present invention;
FIG. 3 is a block diagram of a spike grain detection network model provided in accordance with an embodiment of the present invention;
FIG. 4 is an overall block diagram of a grain repair network model provided in accordance with an embodiment of the present invention;
fig. 5 is a schematic diagram of a grain shape trait extraction process provided according to an embodiment of the present invention.
Detailed Description
Hereinafter, embodiments of the present invention will be described with reference to fig. 1 to 5. In the following description, like modules are denoted by like reference numerals. In the case of the same reference numerals, their names and functions are also the same. Therefore, a detailed description thereof will not be repeated.
The present invention will be further described in detail with reference to fig. 1 to 5 and the specific embodiments thereof in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not to be construed as limiting the invention.
An extraction method of grain shape of integrated rice spike grains comprises the following steps:
S1: ear grain detection.
S11: the method for acquiring the original rice spike image under the natural form and constructing the rice spike image data set comprises the following sub-steps:
s111: collecting rice ear images: the method comprises the steps of obtaining main rice ears after harvesting mature rice, randomly tiling the rice ears on visible light scanning imaging equipment, setting scanning precision to be 600dpi, imaging a rice ear sample, storing a rice ear sample image in a tiff format, and enabling the imaging time of a single plant rice ear to be about 60 s. The image acquisition process is shown in fig. 2, wherein a graph a is that rice ears are randomly tiled on a scanning imaging device, b is that a rice ear sample is imaged, c is that an original rice ear image is obtained through scanning, and d is that a cut rice ear image is obtained.
S112: original image cropping: based on the rice ear image acquired in S111, redundant portions in the rice ear image are removed, and collectively cut into images of 5700×6800 pixel size.
S113: ear grain image labeling: and (3) marking all grain of the ears by using a labelme software based on the cut image obtained in the step (S112) in a boundary frame marking mode, recording the left upper corner coordinates, the right lower corner coordinates, the boundary frame area and the target type in the frame of the boundary frame after marking each grain, and storing the grain on the whole rice ears in a json format after marking.
S114: data enhancement: and (3) randomly carrying out horizontal overturning or vertical overturning on the rice ear images and the labeling files obtained in the S112 and the S113 with the probability of 0.2, wherein each sample in the data set comprises the cut rice ear images and the corresponding labeling files.
S115: data set partitioning: after the data enhancement in S114, the samples in the data set are divided into training, validation and test sets in a ratio of 2:1:1.
S116: data set format conversion: the data set samples divided by S115 are respectively converted into formats prescribed for training of the deep learning object detection method.
S12: training, verifying and testing the rice spike image data set through the grain detection network model, wherein the method comprises the following sub-steps:
s121: training a network: inputting the rice spike image training set obtained in the step S116 into a grain detection network model for training, training the network by adopting different super parameters, and continuously updating parameters of the neural network in the training process until the specified iteration times are reached, and stopping training;
s122: network validation: inputting the rice spike image validation set obtained in S116 into the grain detection network model for validation, obtaining the network precision under different hyper-parameters according to the object detection evaluation indexes, and then obtaining the optimal grain detection network;
s123: network test: and (3) inputting the rice spike image test set obtained in the step (S116) into the optimal grain detection network obtained in the step (S122) for testing, obtaining the grain number on each rice spike sample, and calculating the average absolute percentage error and the correlation coefficient R of the network predicted value and the true value.
The grain detection network in this embodiment is based on Faster R-CNN, a convolutional neural network, and the grain detection network model comprises a feature extraction network, a feature pyramid network, a region proposal network and a classification regression network.
The feature extraction network is used for extracting image features of the rice ears, and comprises:
the input layer is used for inputting the collected rice ear image;
the convolution layer is used for extracting information in the image;
the pooling layer is used for selecting the information extracted by the convolution layer and reducing the dimension of the information;
an activation layer for improving the nonlinear fitting capability of the network;
and the output layer, used for outputting a feature map rich in high-level semantic image information after several rounds of convolution, pooling and activation.
The feature pyramid network is used for fusing features extracted by the feature extraction network at different depths, and comprises:
the convolution layer, the convolution kernel size is 1×1, is used for reducing the dimension of the feature map output by each stage;
upsampling layer: used for upsampling the feature maps obtained by the network so that feature maps from different stages share the same resolution and can be fused;
The region proposal network is used for generating prediction boxes where targets may exist, and comprises:
a sliding window, which slides over the feature maps obtained from the feature pyramid network to generate anchor boxes of different sizes for predicting targets;
the classification channel, composed of several convolution layers, used for classifying the targets in the anchor boxes and outputting the probability that each target belongs to a given class;
the regression channel, composed of several convolution layers, used for fine-tuning the position of the anchor boxes to better match the true target positions;
non-maximum suppression, used for eliminating duplicate detected anchor boxes and anchor boxes extending beyond the image boundary.
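Greedy non-maximum suppression as described above can be sketched as follows; this is an illustrative numpy implementation, and the IoU threshold in the test is an assumed value rather than one taken from the embodiment:

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.7):
    """Greedy non-maximum suppression: repeatedly keep the highest-scoring box
    and drop every remaining box whose IoU with it exceeds iou_thresh.

    boxes: N x 4 array of (x1, y1, x2, y2); scores: length-N array.
    Returns the indices of the kept boxes, highest score first.
    """
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # intersection of box i with every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        areas = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + areas - inter)
        order = rest[iou <= iou_thresh]  # suppress heavy overlaps
    return keep
```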
The classification regression network is used for classifying targets detected by the network and optimizing the positions of the detected target boundary boxes, and comprises:
the classification channel, identical in structure to the classification channel in the region proposal network and composed of several convolution layers, is used for classifying the targets in the proposal regions generated by the region proposal network and outputting the probability that each target belongs to a given class;
the regression channel, identical in structure to the regression channel in the region proposal network and composed of several convolution layers, is used for fine-tuning the position of the proposal regions generated by the region proposal network to better match the true target positions;
non-maximum suppression, identical to that in the region proposal network, is used for eliminating redundant detection boxes and outputting the final object detection results.
The structure of the ear grain detection network is shown in fig. 3. The objective function of the grain detection network model is as follows:

$$L(\{p_i\},\{t_i\})=\frac{1}{N_{cls}}\sum_i L_{cls}(p_i,p_i^*)+\lambda\frac{1}{N_{reg}}\sum_i p_i^*\,L_{reg}(t_i,t_i^*)$$

wherein $p_i$ represents the probability that the $i$-th anchor is predicted as a target; $p_i^*$ is the corresponding ground-truth label, equal to 1 if an object exists in the anchor and 0 otherwise; $t_i$ is the coordinate parameter vector representing the offset between the predicted and ground-truth bounding-box coordinates; $t_i^*$ represents the coordinates of the ground-truth bounding box for an anchor in which an object exists; $N_{cls}$ indicates the batch size during training; $N_{reg}$ represents the number of anchors generated by the region proposal network;

$L_{cls}$ classifies whether an object is contained in an anchor; the specific calculation formula is as follows:

$$L_{cls}(p_i,p_i^*)=-\left[p_i^*\log p_i+(1-p_i^*)\log(1-p_i)\right];$$

$L_{reg}$ is the regression loss, through which the network obtains a more accurate bounding box; the specific calculation formula is as follows:

$$L_{reg}(t_i,t_i^*)=\operatorname{smooth}_{L1}(t_i-t_i^*),\qquad \operatorname{smooth}_{L1}(x)=\begin{cases}0.5x^2, & |x|<1\\ |x|-0.5, & \text{otherwise.}\end{cases}$$
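A minimal numpy sketch of the two loss terms, binary cross-entropy for classification and smooth L1 for regression as standardly defined for Faster R-CNN, may help clarify the objective; the function names and toy inputs are illustrative assumptions:

```python
import numpy as np

def smooth_l1(x):
    """Smooth L1: 0.5 x^2 for |x| < 1, |x| - 0.5 otherwise."""
    x = np.abs(x)
    return np.where(x < 1, 0.5 * x ** 2, x - 0.5)

def rpn_loss(p, p_star, t, t_star, n_cls, n_reg, lam=1.0):
    """Combined detection objective: cross-entropy over anchor scores plus
    smooth-L1 regression applied only to positive anchors (p_star == 1)."""
    p = np.clip(p, 1e-7, 1 - 1e-7)  # numerical safety for the logs
    l_cls = -(p_star * np.log(p) + (1 - p_star) * np.log(1 - p)).sum() / n_cls
    l_reg = (p_star[:, None] * smooth_l1(t - t_star)).sum() / n_reg
    return l_cls + lam * l_reg
```

For a single positive anchor scored 0.5 with a 0.5 offset in one coordinate, the loss is log 2 + 0.125.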
s2: and (5) shielding grain repair.
S21: acquiring the occlusion cereal grain image and obtaining a paired cereal grain image training set comprising an unoccluded cereal grain image and a corresponding occluded cereal grain image, wherein the paired cereal grain image training set comprises the following substeps:
S211: occluded grain image acquisition: whole rice ears are obtained and placed naturally on the cover plate of a scanner; one occluded grain on the ear is selected and fixed, and the fixed occluded grain is imaged together with the ear to obtain an occluded grain image.
S212: unoccluded grain image acquisition: the rice ear in S211 is removed, leaving only the grain fixed on the scanner cover plate, and the scan is repeated to obtain the corresponding unoccluded grain image.
S213: paired synthetic occluded grain images: for the unoccluded grain image, the red channel is extracted and binarized, the image contour is obtained, and the bounding rectangle of the grain contour is computed; the corresponding occluded sub-image is then cut from the occluded image according to the bounding-rectangle coordinates, and the two sub-images are spliced horizontally to obtain the synthetic occluded-grain repair data set.
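The pairing step in S213 can be sketched as follows. This illustration uses numpy only: the bounding rectangle is taken as the extent of the binarized red-channel mask, which for a single grain coincides with the bounding rectangle of its contour; the threshold value and function name are assumptions:

```python
import numpy as np

def make_paired_sample(unoccluded, occluded, thresh=128):
    """Build one training pair: locate the grain in the unoccluded scan via
    its red channel, cut the same region out of the occluded scan, and
    concatenate the two crops side by side (occluded | unoccluded).

    Both inputs are H x W x 3 RGB arrays; thresh is an assumed fixed
    binarization threshold.
    """
    red = unoccluded[:, :, 0]
    mask = red > thresh                       # binarize the red channel
    ys, xs = np.where(mask)
    y1, y2 = ys.min(), ys.max() + 1           # bounding rectangle of the mask
    x1, x2 = xs.min(), xs.max() + 1
    crop_clean = unoccluded[y1:y2, x1:x2]     # unoccluded grain sub-image
    crop_occ = occluded[y1:y2, x1:x2]         # same region, occluded
    return np.hstack([crop_occ, crop_clean])  # horizontal splice
```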
S214: data enhancement: the synthesized occlusion grain images obtained in S213 are randomly flipped horizontally or vertically to amplify the dataset, the training set construction is completed and each training set image comprises paired grain images, i.e. one non-occluded grain image and one corresponding occluded grain image.
S215: dividing the data set: the paired grain images obtained in S214 are taken as a dataset, and the images in the dataset are divided into a training set and a test set according to the number ratio of 4:1.
S22: training and testing the training set of paired grain images through the grain repair network model, comprising the following sub-steps:
S221: the paired grain image training set obtained in S215 is input into the grain repair network model for training, and the parameters of the neural network are updated until the specified number of iterations or the specified accuracy is reached, at which point training stops.
S222: the synthetic occluded grain images in the test set obtained in S215 are input into the grain repair network model to obtain repaired complete grain images, which are compared with the actual complete grain images in the test set.
Four grain parameters, namely grain length, grain width, area and perimeter, are selected for the comparison test, and the mean absolute percentage error and correlation coefficient R between the repaired values and the true values of the four parameters are calculated.
The grain restoration network model comprises a generator and a discriminator, wherein the generator is used for restoring an input occlusion image into a non-occlusion image, and the generator comprises:
the input layer is used for inputting a synthetic grain shielding image or an actual grain shielding image subgraph;
the coding layer is used for coding the input image, downsampling the input image and increasing the number of channels at the same time so as to extract high-level semantic information in the image;
the decoding layer is used for decoding the image: it upsamples the encoded feature map while reducing the number of channels, and captures semantic information from different levels, integrating it by feature concatenation; the encoding and decoding layers adopt a skip-connection structure in which, for a network with n layers in total, a skip connection is added between each i-th layer and the (n-i)-th layer, avoiding the information bottleneck of a plain encoder-decoder structure;
and the output layer is used for outputting the repaired single grain image, and the size of the output image is the same as that of the input image.
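The channel and resolution bookkeeping of such a skip-connected encoder-decoder can be traced with a short sketch; the depth, input size and base channel count below are illustrative assumptions, not values from the embodiment:

```python
def unet_shapes(depth=4, size=256, base_ch=64):
    """Trace (resolution, channels) through an encoder-decoder with skip
    connections between layer i and layer n-i: each decoder stage upsamples,
    then concatenates the matching encoder feature map, which adds the
    encoder's channel count before the next convolution."""
    enc = []
    ch, res = base_ch, size
    for _ in range(depth):                  # encoder: halve resolution, double channels
        enc.append((res, ch))
        res, ch = res // 2, ch * 2
    bottleneck = (res, ch)
    dec = []
    for skip_res, skip_ch in reversed(enc):  # decoder mirrors the encoder
        res, ch = res * 2, ch // 2
        dec.append((res, ch + skip_ch))      # concatenated skip channels
    return enc, bottleneck, dec
```

The final decoder stage returns to the input resolution, matching the statement that the output image has the same size as the input.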
The discriminator is used for judging the authenticity of the non-occlusion image generated by the generator, and comprises:
an input layer for inputting the grain image or the real grain image generated by the generator;
the full convolution layers, which process each region of the input image independently so as to extract the features of each region and help the network judge authenticity;
and the output layer, which maps the input image, after the full convolution layers, to an output matrix in which each value represents the probability that the corresponding region of the input image belongs to a real image.
The objective function of the grain repair network model is as follows:

$$G^*=\arg\min_G\max_D\;\mathcal{L}_{cGAN}(G,D)+\lambda\,\mathcal{L}_{L1}(G)$$

wherein $D$ is the discriminator, $G$ is the generator, $z$ is noise, $x$ is the real image, and $y$ is the constraint condition;

$\mathcal{L}_{cGAN}$ is the objective function of the conditional generative adversarial network:

$$\mathcal{L}_{cGAN}(G,D)=\mathbb{E}_{x,y}\left[\log D(y,x)\right]+\mathbb{E}_{y,z}\left[\log\left(1-D(y,G(y,z))\right)\right];$$

$\mathcal{L}_{L1}$ is the L1 loss function of the network:

$$\mathcal{L}_{L1}(G)=\mathbb{E}_{x,y,z}\left[\lVert x-G(y,z)\rVert_1\right].$$
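The discriminator and generator objectives can be sketched numerically as follows. This is a numpy illustration only: the weight `lam` on the L1 term is an assumed value, and the discriminator is represented by its output patch-probability matrix rather than a trained network:

```python
import numpy as np

def d_loss(d_real, d_fake):
    """Discriminator objective, negated into a loss: it should score real
    images near 1 (d_real) and generated images near 0 (d_fake)."""
    eps = 1e-7
    return float(-(np.log(d_real + eps) + np.log(1.0 - d_fake + eps)).mean())

def g_loss(d_fake, fake, target, lam=100.0):
    """Generator objective: fool the discriminator, plus a lam-weighted L1
    distance between the repaired image and the ground-truth unoccluded
    image (lam is an assumed weight, not taken from the embodiment)."""
    eps = 1e-7
    adv = float(-np.log(d_fake + eps).mean())
    l1 = float(np.abs(fake - target).mean())
    return adv + lam * l1
```

A perfectly repaired image has zero L1 term, leaving only the adversarial part of the generator loss.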
In the occluded-grain repair method described above, image pairs made from an occluded grain image and the corresponding unoccluded grain image are obtained by two successive scans, so that once the repair network is trained, its generator can restore an occluded grain image to a complete unoccluded grain image, with an overall error below 4% when recovering grains of different rice varieties. The method repairs the image information lost to occlusion at its source and effectively eliminates the measurement error caused by occluded grains, providing a high-precision measurement path for grain shape, the most important of the rice yield traits.
S3: extracting grain type characters to obtain grain length, width, perimeter and grain projection area, wherein the grain type characters comprise the following substeps:
S31: the single grain image is converted to grayscale by extracting the R channel, in which the contrast between grain and background is greatest, for subsequent shape extraction;
s32: binarizing the grain gray level image by using a threshold segmentation algorithm to obtain a grain binary image;
s33: extracting the outline of the grain based on the grain binary image, wherein the outline comprises coordinates of each point on the outline;
s34: and calculating the length, width, perimeter and projection area of the grain based on the contour coordinates obtained in the step S33.
The calculation method of the length, width, perimeter and projection area of the grain in S34 is as follows:
grain length: based on points on the grain contour, calculating the distance between any two points, and taking the maximum distance as the grain length;
grain width: from the two points used to calculate the grain length, the equation of the line through them is computed and the slope of its normal is derived; the distances between the intersection points of the normal and the contour are then computed at different intercepts, and the maximum of these distances is taken as the grain width;
perimeter of grain: based on the points on the grain contour, if two points are adjacent vertically or horizontally, the distance between them is defined as 1 pixel; if two points are adjacent diagonally (upper left, lower left, upper right or lower right), the distance between them is defined as √2 pixels; the distances between all adjacent pixel points on the contour are summed to obtain the grain perimeter;
grain area: based on points on the grain contour, the grain area is the number of pixels within the grain contour.
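The four trait computations above can be sketched in one function. This is an illustrative numpy implementation in which the grain width is obtained as the projection extent perpendicular to the length axis, a simplification of the intercept search described above:

```python
import numpy as np

def grain_traits(contour, mask):
    """Compute grain length, width, perimeter and projected area.

    contour: N x 2 array of (x, y) points in order along the boundary,
    8-connected; mask: binary image of the grain.
    """
    pts = np.asarray(contour, dtype=float)
    # length: maximum pairwise distance between contour points
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    i, j = np.unravel_index(d.argmax(), d.shape)
    length = float(d[i, j])
    # width: maximum extent perpendicular to the length axis
    axis = (pts[j] - pts[i]) / length
    normal = np.array([-axis[1], axis[0]])
    proj = pts @ normal
    width = float(proj.max() - proj.min())
    # perimeter: 1 per axis-aligned step, sqrt(2) per diagonal step
    steps = np.abs(np.diff(np.vstack([pts, pts[:1]]), axis=0))
    diag = (steps == 1).all(axis=1)
    perimeter = float(np.where(diag, np.sqrt(2), steps.sum(axis=1)).sum())
    # area: number of pixels inside the contour
    area = int(mask.sum())
    return length, width, perimeter, area
```

For a 3 x 3 square grain the length is the diagonal 2√2, the perimeter is 8 and the area is 9 pixels.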
S4: an integrated ear grain shape extraction model is obtained. Details of grain trait extraction are shown in fig. 5: diagram a shows the original grain image; diagram b the grayscale image extracted from the R channel; diagram c the binary image obtained by binarizing the grayscale image; diagram d the grain contour extracted from the binary image; and diagram e the extracted grain shape traits.
S41: acquiring an optimal grain detection network model obtained in the step S1;
s42: and (3) acquiring the optimal grain restoration network model obtained in the step (S2).
S43: the grain repair network model is integrated into the grain detection network model, placed after the region proposal network and in parallel with the classification regression network, to obtain the integrated ear grain detection and repair model.
S44: and integrating the character extraction pipeline into an integrated spike grain detection and repair model to obtain an integrated spike grain character extraction model.
It should be appreciated that various forms of the flows shown above may be used to reorder, add, or delete steps. For example, the steps described in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure are achieved, and are not limited herein.
The above embodiments do not limit the scope of the present invention. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present invention should be included in the scope of the present invention.
Claims (10)
1. An integrated extraction method of grain shape of rice spike grains is characterized by comprising the following steps:
s1: detecting ear grains;
s11: collecting an original rice spike image in a natural form and constructing a rice spike image data set;
s12: training, verifying and testing the rice spike image data set through the grain detection network model;
the grain detection network model comprises a feature extraction network for extracting features of rice spike images, a feature pyramid network for fusing the features extracted by the feature extraction network at different depths, a region proposal network for generating a prediction frame with a possible target, and a classification regression network for classifying the target detected by the network and optimizing the position of the detected target boundary frame;
s2: repairing the shelter grains;
s21: acquiring the shielded grain images and obtaining a paired grain image training set comprising an unobstructed grain image and a corresponding shielded grain image;
s22: training and testing the training set of paired grain images through a grain repair network model;
the grain restoration network model comprises a generator for restoring an input occlusion image into an unoccluded image and a discriminator for judging the authenticity of the unoccluded image generated by the generator;
s3: extracting grain type characters to obtain grain length, width, perimeter and grain projection area;
s4: acquiring an integrated ear grain shape extraction model;
s41: acquiring an optimal grain detection network model obtained in the step S1;
s42: acquiring an optimal grain restoration network model obtained in the step S2;
s43: integrating the grain repair network model into the grain detection network model, placing it after the region proposal network and in parallel with the classification regression network, to obtain the integrated ear grain detection and repair model;
s44: and integrating the character extraction pipeline into an integrated spike grain detection and repair model to obtain an integrated spike grain character extraction model.
2. The method for extracting grain shape of integrated rice spike grains according to claim 1, wherein the objective function of the grain detection network model in S12 is as follows:

$$L(\{p_i\},\{t_i\})=\frac{1}{N_{cls}}\sum_i L_{cls}(p_i,p_i^*)+\lambda\frac{1}{N_{reg}}\sum_i p_i^*\,L_{reg}(t_i,t_i^*)$$

wherein $p_i$ represents the probability that the $i$-th anchor is predicted as a target; $p_i^*$ is the corresponding ground-truth label, equal to 1 if an object exists in the anchor and 0 otherwise; $t_i$ is the coordinate parameter vector representing the offset between the predicted and ground-truth bounding-box coordinates; $t_i^*$ represents the coordinates of the ground-truth bounding box for an anchor in which an object exists; $N_{cls}$ indicates the batch size during training; $N_{reg}$ represents the number of anchors generated by the region proposal network;

$L_{cls}$ classifies whether an object is contained in an anchor; the specific calculation formula is as follows:

$$L_{cls}(p_i,p_i^*)=-\left[p_i^*\log p_i+(1-p_i^*)\log(1-p_i)\right];$$

$L_{reg}$ is the regression loss, through which the network obtains a more accurate bounding box; the specific calculation formula is as follows:

$$L_{reg}(t_i,t_i^*)=\operatorname{smooth}_{L1}(t_i-t_i^*),\qquad \operatorname{smooth}_{L1}(x)=\begin{cases}0.5x^2, & |x|<1\\ |x|-0.5, & \text{otherwise.}\end{cases}$$
3. The method for extracting grain shape of integrated rice spike grains according to claim 2, wherein the objective function of the grain repair network model in S22 is as follows:

$$G^*=\arg\min_G\max_D\;\mathcal{L}_{cGAN}(G,D)+\lambda\,\mathcal{L}_{L1}(G)$$

wherein $D$ is the discriminator, $G$ is the generator, $z$ is noise, $x$ is the real image, and $y$ is the constraint condition;

$\mathcal{L}_{cGAN}$ is the objective function of the conditional generative adversarial network:

$$\mathcal{L}_{cGAN}(G,D)=\mathbb{E}_{x,y}\left[\log D(y,x)\right]+\mathbb{E}_{y,z}\left[\log\left(1-D(y,G(y,z))\right)\right];$$

$\mathcal{L}_{L1}$ is the L1 loss function of the network:

$$\mathcal{L}_{L1}(G)=\mathbb{E}_{x,y,z}\left[\lVert x-G(y,z)\rVert_1\right].$$
4. the method for extracting grain shape from integrated rice ear grain according to any one of claims 1 to 3, wherein S21 comprises the following sub-steps:
s211: and (3) shielding grain image acquisition: obtaining whole rice ears, naturally placing the rice ears on a cover plate of a scanner, selecting one shielded grain on the rice ears, fixing the shielded grain, and imaging the fixed shielded grain and the rice ears together to obtain a shielded grain image;
s212: image acquisition of the unobstructed grain: taking down the rice ears in S211, only keeping grains fixed on a cover plate of the scanner, and scanning again to obtain corresponding non-shielded grain images;
s213: paired synthetic occluded grain images: extracting the red channel of the unoccluded grain image, binarizing it, obtaining the image contour and computing the bounding rectangle of the grain contour; then cutting the corresponding occluded sub-image from the occluded image according to the bounding-rectangle coordinates, and splicing the two sub-images horizontally to obtain the synthetic occluded-grain repair data set;
s214: data enhancement: randomly horizontally or vertically overturning the synthesized occlusion grain image obtained in the step S213 to amplify a data set, wherein the training set is constructed completely and each training set image comprises paired grain images, namely an unoccluded grain image and a corresponding occluded grain image;
s215: dividing the data set: the paired grain images obtained in S214 are taken as a dataset, and the images in the dataset are divided into a training set and a test set according to the number ratio of 4:1.
5. The method for extracting grain shape from integrated rice ear grain according to claim 4, wherein S22 comprises the sub-steps of:
s221: inputting the paired grain image training set obtained in the step S215 into a grain repair network model for training, updating parameters in a neural network until the specified iteration times or the specified accuracy are reached, and stopping training;
s222: inputting the synthesized occlusion grain images in the test set obtained in the step S215 into a grain restoration network model to obtain restored complete grain images, and comparing the complete grain images with the actual complete grain images in the test set;
selecting four grain parameters, namely grain length, grain width, area and perimeter, for the comparison test, and calculating the mean absolute percentage error and correlation coefficient R between the repaired values and the true values of the four parameters.
6. The method for extracting grain shape character of integrated rice ear grain according to claim 5, wherein S3 comprises the following sub-steps:
s31: graying the single grain image by extracting the grayscale image of the R channel, in which the contrast between grain and background is greatest, for subsequent shape extraction;
s32: binarizing the grain gray level image by using a threshold segmentation algorithm to obtain a grain binary image;
s33: extracting the outline of the grain based on the grain binary image, wherein the outline comprises coordinates of each point on the outline;
s34: and calculating the length, width, perimeter and projection area of the grain based on the contour coordinates obtained in the step S33.
7. The method for extracting grain shape from integrated rice ear according to claim 6, wherein the calculation method of the grain length, width, perimeter and projected area of the grain in S34 is as follows:
grain length: based on points on the grain contour, calculating the distance between any two points, and taking the maximum distance as the grain length;
grain width: from the two points used to calculate the grain length, computing the equation of the line through them and deriving the slope of its normal; then computing the distances between the intersection points of the normal and the contour at different intercepts, the maximum of which is the grain width;
perimeter of grain: based on the points on the grain contour, if two points are adjacent vertically or horizontally, the distance between them is defined as 1 pixel; if two points are adjacent diagonally (upper left, lower left, upper right or lower right), the distance between them is defined as √2 pixels; summing the distances between all adjacent pixel points on the contour to obtain the grain perimeter;
grain area: based on points on the grain contour, the grain area is the number of pixels within the grain contour.
8. The method for extracting grain shape character from integrated rice ear grain according to claim 7, wherein S11 comprises the following sub-steps:
s111: collecting rice ear images: harvesting mature rice to obtain main rice ears, randomly tiling the main rice ears on visible light scanning imaging equipment, and imaging and storing a rice ear sample;
s112: original image cropping: removing redundant parts in the rice spike image based on the rice spike image acquired in the step S111, and cutting the rice spike image into an image with uniform pixel size;
s113: ear grain image labeling: marking all grain of the rice ears by software based on the cut image obtained in the step S112 by adopting a boundary frame marking mode, recording the left upper corner coordinate, the right lower corner coordinate, the boundary frame area and the target type in the frame of the boundary frame after marking each grain, and storing the grain on the whole rice ears after marking;
s114: data enhancement: carrying out random horizontal overturning or vertical overturning on the rice ear images and the labeling files obtained in the S112 and the S113 according to a certain probability, wherein each sample in the data set comprises the cut rice ear images and the corresponding labeling files;
s115: data set partitioning: after the data enhancement in S114, dividing the samples in the data set into a training set, a verification set and a test set in a ratio of 2:1:1;
s116: data set format conversion: the data set samples divided by S115 are respectively converted into formats prescribed for training of the deep learning object detection method.
9. The method for extracting grain shape character from integrated rice ear grain according to claim 8, wherein S12 comprises the following sub-steps:
s121: network training: inputting the rice spike image training set obtained in S116 into the grain detection network model for training, training the network with different hyperparameters, and continuously updating the parameters of the neural network during training until the specified number of iterations is reached, then stopping training;
s122: network validation: inputting the rice spike image validation set obtained in S116 into the grain detection network model for validation, obtaining the network accuracy under different hyperparameters from object detection evaluation metrics, and thereby obtaining the optimal grain detection network;
s123: network testing: inputting the rice spike image test set obtained in S116 into the optimal grain detection network obtained in S122 for testing, obtaining the number of grains on each rice spike sample, and calculating the mean absolute percentage error and correlation coefficient R between the network predictions and the true values.
10. The method for extracting grain shape character from integrated rice spike and grain of claim 9 wherein the feature extraction network in S12 comprises: the input layer is used for inputting the collected rice ear image; the convolution layer is used for extracting information in the image; the pooling layer is used for selecting the information extracted by the convolution layer and reducing the dimension of the information; an activation layer for improving the nonlinear fitting capability of the network; the output layer is used for outputting a feature map which is rich in image high-level semantic information after being activated by a plurality of convolution pools;
the feature pyramid network includes: the convolution layer is used for reducing the dimension of the feature map output by each stage; upsampling layer: the method comprises the steps of up-sampling a characteristic diagram obtained by a network;
the area proposal network comprises: sliding window, sliding on the feature graph obtained by the feature pyramid network to generate anchor frames with different sizes for predicting the target; the classifying channel is composed of a plurality of convolution layers and is used for classifying targets in the anchor frame; the regression channel is formed by a plurality of convolution layers and is used for finely adjusting the position of the anchor frame; non-maximum suppression, which is used for eliminating the anchor frame of repeated detection and the anchor frame exceeding the image boundary;
the classification regression network includes: a classification channel composed of a plurality of convolution layers for classifying objects in a proposal area generated by the regional proposal network; the regression channel is composed of a plurality of convolution layers and is used for finely adjusting the position of a proposal area generated by the area proposal network; and the non-maximum value inhibition is used for eliminating redundant detection frames and outputting the final target detection result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310900787.6A CN116612178A (en) | 2023-07-21 | 2023-07-21 | Method for extracting grain shape of integrated rice spike grains |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116612178A true CN116612178A (en) | 2023-08-18 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8779233B1 (en) * | 2010-07-14 | 2014-07-15 | Iowa State University Research Foundation, Inc. | QTL regulating ear productivity traits in maize |
CN207408272U (en) * | 2017-08-10 | 2018-05-25 | 武汉谷丰光电科技有限公司 | Rice grain shape parameter measuring apparatus based on linear array camera and X-ray Double-mode imaging |
CN110969654A (en) * | 2018-09-29 | 2020-04-07 | 北京瑞智稷数科技有限公司 | Corn high-throughput phenotype measurement method and device based on harvester and harvester |
US20220121919A1 (en) * | 2020-10-16 | 2022-04-21 | X Development Llc | Determining Cereal Grain Crop Yield Based On Cereal Grain Trait Value(s) |
Non-Patent Citations (2)
Title |
---|
LEJUN YU ET AL.: "An integrated rice panicle phenotyping method based on X-ray and RGB scanning and deep learning", 《THE CROP JOURNAL》, pages 42 - 56 * |
YU LEJUN: "Research on Key Techniques for Rice Panicle Trait Extraction", 《China Doctoral Dissertations Full-text Database, Agricultural Science and Technology》, pages 047 - 17 *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118658046A (en) * | 2024-08-19 | 2024-09-17 | 安徽高哲信息技术有限公司 | Multi-view cereal image processing method, storage medium and electronic device |
CN118658046B (en) * | 2024-08-19 | 2024-10-29 | 安徽高哲信息技术有限公司 | Multi-view cereal image processing method, storage medium and electronic device |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | Application publication date: 20230818
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | |