
CN114863198A - Crayfish quality grading method based on neural network


Info

Publication number: CN114863198A (application CN202210198733.5A; granted publication CN114863198B)
Authority: CN (China)
Prior art keywords: crayfish, target, classification, frame, color
Legal status: Granted (status assumed; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN114863198B (en)
Inventors: 王淑青, 鲁濠, 汤璐, 鲁东林, 黄剑锋, 金浩博, 张子言, 朱文鑫, 柯洋洋, 张子蓬
Current Assignee: Hubei University of Technology
Original Assignee: Hubei University of Technology
Application filed by Hubei University of Technology
Priority to CN202210198733.5A
Publication of CN114863198A, application granted, publication of CN114863198B
Current legal status: Active

Classifications

    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/045: Neural network architectures; combinations of networks
    • G06N 3/08: Neural network learning methods
    • G06T 7/11: Image analysis; region-based segmentation
    • G06V 10/25: Image preprocessing; determination of region of interest [ROI] or a volume of interest [VOI]
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 10/762: Recognition or understanding using pattern recognition or machine learning, using clustering, e.g. of similar faces in social networks
    • G06V 10/764: Recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V 10/82: Recognition or understanding using pattern recognition or machine learning, using neural networks
    • G06T 2207/20081: Indexing scheme for image analysis; training; learning
    • G06T 2207/20084: Indexing scheme for image analysis; artificial neural networks [ANN]
    • Y02A 40/81: Adaptation technologies in fisheries management; aquaculture, e.g. of fish

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Engineering & Computer Science (AREA)
  • Molecular Biology (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a crayfish quality grading method based on a neural network, comprising the following steps. (1) Acquire and construct a crayfish data set and randomly divide it into a training set, a validation set and a test set. (2) For the self-collected images, improve the positioning precision of the anchor detection frame with a whale optimization algorithm and build a ResNet152 classification network under the PyTorch framework; train the improved model with repeated validation and testing; locate and identify the crayfish; extract the crayfish skeleton with a distance-based thinning algorithm; finish the shape classification of the crayfish according to posture and obtain the freshness and color classification results. (3) Calculate the proportion of shrimp pixels by converting the crayfish picture into a binary image, and classify according to specified thresholds. (4) Sort into different grades according to the results of (2) and (3). The invention integrates deep learning with image processing; it locates the crayfish more accurately and detects their form with higher accuracy and speed.

Description

Crayfish quality grading method based on neural network
Technical Field
The invention belongs to the technical field of deep learning and target detection, and particularly relates to a crayfish quality grading method based on a neural network.
Background
In recent years the industrial scale of Chinese crayfish has grown rapidly, with output reaching 2.459 million tons in 2019, the yield turning over nearly twice within five years. Meanwhile, crayfish processing has been continuously optimized and the industry keeps developing toward standardization. Market surveys of current Chinese crayfish processing show three main products: shrimp balls, shrimp meat and whole shrimp. For shrimp-ball production the processing plant must cook the crayfish to a semi-cooked state, i.e. the shell is cooked red but the meat is not yet fully cooked. In this state the crayfish shape differs from before production: if the shrimp was alive before cooking, its tail curls after processing, whereas if it was dead, the tail keeps its original straight shape. To guarantee product quality and taste, the processed products must be classified, distinguishing unqualified dead shrimp from qualified live shrimp. At the same time, to grade the quality of processed crayfish products, the sizes and colors of qualified crayfish must be classified, which makes it convenient to price crayfish of different qualities.
At this stage the crayfish processing industry is still labor intensive, and the expenditure for hiring labor accounts for a large share of the total production cost. Grading and sorting of crayfish products is mostly done manually: the workload is large, efficiency is low, labor cost is high and reliance on technology and equipment is low. Research and systems for automatic food grading are not uncommon, and machine vision is already mature in grading fruits, gems, crops and similar products, but research on crayfish quality grading is scarce. With the recent development of deep learning, combining it with traditional machine learning algorithms can reduce the difficulty of automatically grading crayfish products and improve grading precision. Existing crayfish grading methods distinguish crayfish only by color or by size, do not consider these jointly with the degree of tail curl, and can hardly grade crayfish quality from several aspects such as color, size and live/dead state at once, so their reliability is low. Designing a method that grades crayfish quality by several appearance attributes is therefore of great significance for automating the crayfish processing industry.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides a crayfish quality grading method based on a neural network that comprehensively analyzes characteristics of processed crayfish such as curl shape, color and size and grades the crayfish according to these multiple characteristics, thereby solving the problems of coarse quality grades and low reliability in existing crayfish processing and effectively realizing crayfish quality grading.
In order to achieve the aim, the technical scheme provided by the invention is a crayfish quality grading method based on a neural network, which comprises the following steps:
step 1, acquiring a crayfish image data set, carrying out manual labeling, and dividing the crayfish image data set into a training set and a test set according to a certain proportion;
step 2, using a YOLOv5 network as the detection model to locate the crayfish target frame area, cutting out that area, further judging the crayfish posture and tail shape, and judging whether the crayfish quality is qualified according to the tail shape;
step 3, carrying out color attribute classification on the qualified crayfish images by adopting a ResNet152 classification network, wherein the crayfish images are classified into bright red and dark red;
step 4, classifying the qualified crayfish images into a large type, a middle type and a small type;
and 5, after the color and size classification tasks of the crayfishes are finished, counting the classes of the two attributes of the crayfishes, and dividing the crayfishes into first-level products, second-level products, third-level products and qualified products.
Further, in step 2 the input crayfish image is automatically divided into S × S grids by the YOLOv5 network; if the center coordinate of a crayfish to be detected falls into a grid, that grid is responsible for detecting the crayfish target. During detection, each of the S × S grid cells predicts N detection frames, where N can be adjusted in the parameters for a specific data set; each detection frame contains 5 predicted values: x, y, w, h and a confidence, where x and y are the coordinates of the center point of the detection frame and w and h are its width and height.
Further, the specific implementation manner of positioning the crayfish target frame area with the YOLOv5 network in step 2 is as follows;
the anchor positioning in the YOLOv5 network is improved by an adaptive anchor calculation method based on K-means and the whale optimization algorithm, specifically as follows;
dividing the target anchors into K cluster centers and selecting the fitness function as:

$$\mathrm{fitness} = \frac{1}{N}\sum_{i=1}^{N}\min_{1\le j\le K} D_{ciou}(box_i,\ center_j)\tag{1}$$

where $box_i$ and $center_j$ denote the $i$th target frame and the $j$th cluster center and $D_{ciou}$ is the CIOU distance formula; the approximate distance between a target frame and a cluster center is computed with the CIOU distance, i.e. when the target frame and the cluster center coincide completely with identical width and height, the fitness is best and the positioning is most accurate;
initializing a population and updating an optimal target frame by adopting a whale algorithm, initializing position information of N target frames, calculating the fitness of each target frame, selecting the target frame with the minimum fitness as the current optimal solution, and then performing next iteration until reaching a fitness stopping threshold value to complete screening;
the target frame is approached via the optimal target frame position, calculated as:

$$x_i^{n+1} = x_{best} - A\cdot\left|\,C\cdot x_{best} - x_i^{n}\,\right|\tag{2}$$

where $n$ is the iteration number, $x_i^{n}$ is the current individual value, $x_{best}$ is the current optimal target frame position, $A$ is a multidimensional uniformly distributed random number and $C$ is a random number uniformly distributed in (0, 2);
or the position of a random target frame is approached, calculated as:

$$x_i^{n+1} = x_{rand} - A\cdot\left|\,C\cdot x_{rand} - x_i^{n}\,\right|\tag{3}$$

where $x_{rand}$ is a target frame at a random position;
finally the center-point coordinates and normalized width and height $(x, y, w, h)$ of the optimal crayfish target frame are obtained, and the top-left and bottom-right coordinates of the target frame are calculated as:

$$x_1 = x - \tfrac{w}{2},\quad y_1 = y - \tfrac{h}{2},\quad x_2 = x + \tfrac{w}{2},\quad y_2 = y + \tfrac{h}{2}\tag{4}$$

where $(x_1, y_1)$ and $(x_2, y_2)$ are the top-left and bottom-right coordinates of the target frame.
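For illustration only, the K-means plus whale-algorithm anchor screening described above can be prototyped as follows. This is a minimal sketch under stated assumptions, not the patent's implementation: all names and hyper-parameters are invented for the example, and a plain 1 - IoU stands in for the CIOU distance $D_{ciou}$.

```python
import numpy as np

def ciou_distance(boxes, centers):
    # boxes: (N, 2) target-frame widths/heights; centers: (K, 2) cluster centers.
    # With a shared origin, CIoU reduces toward 1 - IoU; plain 1 - IoU is used
    # here as a simplification (an assumption, not the patent's exact D_ciou).
    inter = (np.minimum(boxes[:, None, 0], centers[None, :, 0]) *
             np.minimum(boxes[:, None, 1], centers[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (centers[:, 0] * centers[:, 1])[None, :] - inter
    return 1.0 - inter / union                       # shape (N, K)

def fitness(boxes, centers):
    # Mean distance from each target frame to its nearest cluster center,
    # matching formula (1); the minimum fitness is the current optimum.
    return ciou_distance(boxes, centers).min(axis=1).mean()

def whale_anchor_search(boxes, k=9, pop=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    # Each individual is one candidate set of K anchor (w, h) centers.
    whales = rng.uniform(boxes.min(0), boxes.max(0), size=(pop, k, 2))
    best = min(whales, key=lambda w: fitness(boxes, w)).copy()
    for n in range(iters):
        a = 2.0 * (1 - n / iters)                    # encircling coefficient decay
        for i in range(pop):
            A = rng.uniform(-a, a, size=(k, 2))      # multidimensional random A
            C = rng.uniform(0.0, 2.0, size=(k, 2))   # C uniform in (0, 2)
            if rng.random() < 0.5:                   # approach the optimum, formula (2)
                whales[i] = best - A * np.abs(C * best - whales[i])
            else:                                    # approach a random frame, formula (3)
                x_rand = whales[rng.integers(pop)]
                whales[i] = x_rand - A * np.abs(C * x_rand - whales[i])
            whales[i] = np.clip(whales[i], 1.0, None)
            if fitness(boxes, whales[i]) < fitness(boxes, best):
                best = whales[i].copy()
    return best                                      # K optimized anchor sizes
```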
Further, the loss function of YOLOv5 in step 2 is composed of three components:

$$L = L_{ciou} + L_{conf} + L_{class}\tag{5}$$

where $L_{ciou}$ is the bounding-box loss, used to calculate the deviation between the crayfish prediction frame and the real frame; $L_{conf}$ is the confidence loss, which determines whether a crayfish target exists in the prior frame; $L_{class}$ is the classification loss, which calculates the deviation of the crayfish classification; if no crayfish target exists in the prior frame only the confidence loss is calculated, and if a target exists all three losses are calculated; $S^2$ is the scale of the feature map, $B$ is the number of prior frames and $\lambda_{noobj}$ is a weight coefficient; the indicators $\mathbb{1}_{ij}^{obj}$ and $\mathbb{1}_{ij}^{noobj}$ take the values 1 and 0 respectively if a target exists at the $j$th prior frame of the $i$th grid, and 0 and 1 otherwise; $\rho(\cdot)$ is the Euclidean distance and $c$ is the diagonal distance of the smallest region enclosing the prediction frame and the real frame; $b$, $w$ and $h$ are the center coordinates, width and height of the prediction frame, and $b^{gt}$, $w^{gt}$ and $h^{gt}$ those of the real frame; $\hat{C}_i^j$ and $C_i^j$ are the confidences of the prediction frame and the manually labeled frame, and $\hat{p}_i^j$ and $p_i^j$ their category probabilities.
Further, in the step 2, the target frame area of the crayfish is cut, the posture and the shape of the tail of the crayfish are further judged, and whether the quality of the crayfish is qualified or not is judged according to the shape of the tail of the crayfish, wherein the specific implementation mode is as follows;
clamping the abscissa of the top-left corner of the crayfish target frame to be always no less than 0, so that cutting of the crayfish target can be completed;
then, carrying out binarization processing on the cut crayfish picture, calculating the distance from a non-zero pixel point to the nearest zero pixel point, namely calculating the distance from a white pixel point to the nearest black edge in a binary image, and extracting all areas connected with the pixel points farthest from the black pixel points to obtain an image skeleton outline;
the Euclidean distance is used as the distance criterion and the distance is limited to the (0-128) pixel range; the larger the distance, the more likely the point is a skeleton structure point. To extract structure points more accurately, the 8 neighboring pixels P1, P2, P3, P4, P6, P7, P8 and P9 of each candidate structure point are examined; a point most likely to be the central pixel $f(x,y)$ should satisfy the following two conditions:

$$P_1\,|\,P_3\,|\,P_7\,|\,P_9 \ge f(x,y)\ \ \&\ \ P_1\,|\,P_3\,|\,P_7\,|\,P_9 \le f(x,y)\tag{7}$$

$$P_2 \approx P_8 \le f(x,y)\ \ |\ \ P_4 \approx P_6 \le f(x,y)\tag{8}$$

Formula (7) states that an extracted skeleton pixel must be brighter than at least one surrounding pixel and darker than at least one surrounding pixel; the differences of the four diagonal vertices are calculated to avoid taking an isolated point as the center. Formula (8) further selects central, highlighted pixels on the basis of formula (7): around the center point there must be at least one pair of pixels that are close in value and both smaller than the central pixel, which raises the probability that the point is the true structural center;
selecting the center points that satisfy both conditions to form a skeleton contour curve; positioning on this curve yields the head and tail points and the maximum-bending point; the skeleton contour curve is placed into a 640 × 640 plane coordinate system and the angle between the line from the head point to the maximum-bending point and the line from the tail point to the maximum-bending point approximates the crayfish body angle: an angle of at most 90° is qualified, and an angle greater than 90° is unqualified.
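A rough illustration of this distance-based skeleton extraction follows. It is a sketch under assumptions (the function name, Otsu binarization and the tolerance standing in for the ≈ comparison of formula (8) are all choices made for the example, not fixed by the patent):

```python
import cv2
import numpy as np

def skeleton_candidates(crop_bgr, tol=1.0):
    # Binarize the cut crayfish picture, then compute the distance from every
    # non-zero (white) pixel to the nearest zero pixel, limited to (0-128).
    gray = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dist = np.clip(cv2.distanceTransform(binary, cv2.DIST_L2, 5), 0, 128)
    points = []
    h, w = dist.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            f = dist[y, x]
            if f == 0:
                continue
            p1, p3 = dist[y - 1, x - 1], dist[y - 1, x + 1]
            p7, p9 = dist[y + 1, x - 1], dist[y + 1, x + 1]
            p2, p8 = dist[y - 1, x], dist[y + 1, x]
            p4, p6 = dist[y, x - 1], dist[y, x + 1]
            corners = (p1, p3, p7, p9)
            # Condition (7): higher than at least one diagonal neighbour and
            # lower than at least one, so isolated peaks are rejected.
            cond7 = any(c <= f for c in corners) and any(c >= f for c in corners)
            # Condition (8): one opposite pair close in value, both below f.
            cond8 = ((abs(p2 - p8) <= tol and p2 <= f and p8 <= f) or
                     (abs(p4 - p6) <= tol and p4 <= f and p6 <= f))
            if cond7 and cond8:
                points.append((x, y))
    return points  # candidate structure points forming the skeleton contour
```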
Further, the specific implementation manner of step 3 is as follows;
firstly, a crayfish color classification model is constructed: a ResNet152 classification network is used to classify the crayfish image colors, with an Adam optimizer, a learning rate of 0.00015 and the SeLU function as the network activation function:

$$\mathrm{SeLU}(x) = \lambda\begin{cases} x, & x > 0\\ \alpha\,(e^{x}-1), & x \le 0\end{cases}\tag{9}$$

where $\alpha$ and $\lambda$ are constants and $x$ is the input; then the qualified crayfish images are input into the constructed classification network for the crayfish color classification task, training is completed, the optimal classification model is obtained, and that model performs color classification on the crayfish images to be tested.
Further, the specific implementation manner of step 4 is as follows;
step 4.1, inputting the qualified crayfish image detected in the step 2;
step 4.2, converting the RGB image into an HSV color model;
the original image is an RGB image; HSV describes the color of an object more intuitively, where H is the hue channel, S the saturation channel and V the value (brightness) channel, so the HSV model makes color segmentation of the picture target more convenient;
the formulas for converting the RGB model into the HSV model are as follows:

$$H = \begin{cases}60°\times\frac{G'-B'}{\Delta}\bmod 360°, & C_{max}=R'\\[2pt] 60°\times\left(\frac{B'-R'}{\Delta}+2\right), & C_{max}=G'\\[2pt] 60°\times\left(\frac{R'-G'}{\Delta}+4\right), & C_{max}=B'\end{cases}\tag{10}$$

$$S = \begin{cases}0, & C_{max}=0\\ \Delta/C_{max}, & C_{max}\ne 0\end{cases}\tag{11}$$

$$V = C_{max}\tag{12}$$

where $R' = R/255$, $G' = G/255$ and $B' = B/255$ are the normalized values, $C_{max}$ is the maximum of the three normalized values and $\Delta$ is the difference between the maximum and minimum of the three normalized values;
step 4.3, segmenting the crayfish target and converting it into a binary image;
in the HSV color space the red range is (0, 43, 46) to (10, 255, 255); by thresholding, pixels of the red area are set to 255 and pixels of other color areas to 0, giving the crayfish mask and converting the original image into a binary image; ANDing the mask with the original image segments the crayfish target out of the original image;
step 4.4, performing dilation and erosion operations to obtain the crayfish connected region;
the mask is processed with a morphological dilation operation, fusing all the separate connected regions in the image into one connected region, and then with a morphological erosion operation using a kernel matrix of the same size, restoring the single connected region to the size of the original crayfish region;
step 4.5, calculating the proportion of crayfish pixels relative to the original image before cutting;
counting the number of white pixels with value 255 in the converted binary image, i.e. the number of pixels occupied by the shrimp target, and taking the ratio of this count to the number of pixels of the original image;
step 4.6, finishing the large/medium/small classification according to the proportion;
combining manual classification with the calculated ratio, the crayfish is classed as small when its proportion of the original image is at most n1, medium when it is greater than n1 but at most n2, and large when it is greater than n2, as sketched below.
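The following sketch strings steps 4.2 through 4.6 together. It is illustrative only: the function name and kernel size are assumptions, and the default thresholds n1 and n2 are the example values 0.02 and 0.03 used later in the embodiment.

```python
import cv2
import numpy as np

def size_class(crop_bgr, orig_pixels, n1=0.02, n2=0.03):
    hsv = cv2.cvtColor(crop_bgr, cv2.COLOR_BGR2HSV)        # step 4.2
    # Step 4.3: red range (0,43,46)-(10,255,255); red pixels -> 255, rest -> 0.
    mask = cv2.inRange(hsv, (0, 43, 46), (10, 255, 255))
    kernel = np.ones((15, 15), np.uint8)                   # size is an assumption
    mask = cv2.dilate(mask, kernel)                        # step 4.4: fuse regions
    mask = cv2.erode(mask, kernel)                         # restore original extent
    # Step 4.5: ratio of shrimp pixels to the pre-cut original image.
    ratio = int(np.count_nonzero(mask)) / float(orig_pixels)
    # Step 4.6: threshold the ratio into the three size classes.
    if ratio <= n1:
        return "small", ratio
    if ratio <= n2:
        return "medium", ratio
    return "large", ratio
```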
Further, the specific implementation manner of step 5 is as follows;
the two attribute classes of each crayfish are counted: a bright-red large crayfish is a first-grade product, a bright-red medium one a second-grade product and a bright-red small one a third-grade product; a dark-red large crayfish is a second-grade product, a dark-red medium one a third-grade product, and a dark-red small one only a qualified product.
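Since step 5 is a direct table lookup, it can be sketched in a few lines (the label strings are assumptions made for the example):

```python
def quality_grade(color, size):
    # Transcription of the step-5 grading rules.
    table = {
        ("bright_red", "large"):  "first-grade",
        ("bright_red", "medium"): "second-grade",
        ("bright_red", "small"):  "third-grade",
        ("dark_red",   "large"):  "second-grade",
        ("dark_red",   "medium"): "third-grade",
        ("dark_red",   "small"):  "qualified",
    }
    return table[(color, size)]
```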
Compared with the prior art, the invention has the following advantages:
1) More representative: compared with traditional lobster grading methods, the deep-learning-based crayfish quality grading method of the invention classifies crayfish comprehensively according to multiple attributes, so the resulting quality grade is more representative.
2) Safer and more reliable: compared with traditional contact inspection, the method is non-contact; it combines a deep convolutional neural network with image processing, reduces the classification difficulty through target cutting, and its pixel-proportion size classification is more accurate and reliable than other methods.
3) Low cost and high accuracy: the invention builds a new network model based on YOLOv5 and uses an independent convolutional-neural-network classification model, so crayfish with different surface conditions are detected better and the detection cost is lower than with traditional machine vision algorithms.
Drawings
FIG. 1 is a graph of crayfish grading according to the present invention.
FIG. 2 is a flow chart of the preceding steps for shrimp tail shape discrimination according to the present invention.
FIG. 3 is a flow chart of the crayfish size classification steps of the present invention.
FIG. 4 is an overall flowchart of the crayfish classification steps of the present invention.
FIG. 5 is a skeleton extraction result diagram for crayfish tail shape discrimination in accordance with the present invention.
FIG. 6 is a graph showing the results of the crayfish size classification step of the present invention.
FIG. 7 is a graph of training indicators for crayfish data set classification detection in accordance with the present invention.
FIG. 8 is a diagram showing the results of the class detection of the crayfish validation set of the present invention.
Detailed Description
The invention provides a crayfish quality grading method based on a neural network, and the technical scheme of the crayfish quality grading method is further explained by combining the attached drawings and the embodiment.
As shown in fig. 1, the process of the embodiment of the present invention includes the following steps:
step 1, acquiring a crayfish data set.
Step 1.1, shooting production-line video containing crayfish with an industrial camera, taking screenshots from the video and selecting 640 × 640 pictures as the initial crayfish data set.
Step 1.2, expanding the crayfish data set with a background rendering strategy: the conveyor belt of the production line in the initial data set is blue, so white, green and other backgrounds are added to improve the generalization of the detection model and enlarge the data set.
Step 1.3, manually labeling the crayfish data set with a labeling tool and dividing it into a training set and a test set at a ratio of 8:2, e.g. as in the sketch below.
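A sketch of the 8:2 split (the directory layout, file extension and seed are assumptions):

```python
import random
from pathlib import Path

def split_dataset(image_dir, train_ratio=0.8, seed=42):
    images = sorted(Path(image_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)        # fixed seed for reproducibility
    cut = int(len(images) * train_ratio)
    return images[:cut], images[cut:]          # training set, test set
```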
Step 2, judging whether the crayfish quality is qualified according to the tail shape: a spiral (curled) tail is qualified, otherwise it is unqualified.
Step 2.1, YOLOv5 positioning algorithm improvement
The YOLOv5 network is mainly divided into four parts: Input, Backbone, Neck and Prediction.
Input: Mosaic data enhancement is embedded; as in YOLOv4, images are spliced by random cropping, scaling and arrangement to help small-target detection. Adaptive anchor frame calculation is added, adaptively selecting the most suitable anchor values for different data sets to initialize the anchor frames. Adaptive picture scaling crops and pads unmatched pictures appropriately, normalizing the size and improving the network's inference speed.
Backbone: a Focus structure is proposed, which slices the picture and the feature map and samples separate pixel values to keep the original information of the picture. Height and width information is moved into the channels, raising the input channels to 4 times the original, and a convolution on the new picture yields a twice-downsampled picture without information loss, as sketched below. Two CSPNet structures are adopted, integrating gradient changes into the feature map, enhancing gradient expression and reducing computation.
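For reference, the Focus slicing can be written as below; this is a sketch consistent with the public YOLOv5 design, with placeholder channel sizes:

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    # Every 2x2 pixel block is redistributed into channels (4x channels,
    # half height and width), so the following convolution sees a
    # twice-downsampled picture with no information loss.
    def __init__(self, c_in=3, c_out=64, k=3):
        super().__init__()
        self.conv = nn.Conv2d(4 * c_in, c_out, k, stride=1, padding=k // 2)

    def forward(self, x):                      # x: (B, C, H, W)
        x = torch.cat([x[..., ::2, ::2],       # even rows, even cols
                       x[..., 1::2, ::2],      # odd rows,  even cols
                       x[..., ::2, 1::2],      # even rows, odd cols
                       x[..., 1::2, 1::2]],    # odd rows,  odd cols
                      dim=1)                   # (B, 4C, H/2, W/2)
        return self.conv(x)
```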
Neck: an SPP module is adopted; from top to bottom, the upsampled high- and low-level feature layers are concatenated to realize feature fusion and obtain a new feature map, and then PAN (Path Aggregation Network) passes features from bottom to top, from weak to strong, so the feature layers achieve richer fusion.
Output: GIOU_Loss is adopted as the loss function of the bounding box (versus the CIOU_Loss adopted by YOLOv4), and the non-maximum-suppression problem is optimized.
The anchor positioning of the YOLOv5 framework uses the unsupervised K-means clustering algorithm, and the N initial prediction-frame size types obtained on the COCO data set are kept during pre-training. In repeated experiments the algorithm behaved very differently on the collected crayfish data sets, probably because the clustering algorithm is sensitive to the initial seeds and the crayfish data set differs from the COCO data set. Therefore, to improve detection stability, the invention proposes an adaptive anchor calculation method based on K-means and the whale optimization algorithm.
The whale optimization algorithm simulates the natural hunting behavior of a whale pod: the whole process is hunting the prey and ejecting a bubble net to drive it. In the invention the improved algorithm divides the target anchors into K cluster centers, with the fitness function:

$$\mathrm{fitness} = \frac{1}{N}\sum_{i=1}^{N}\min_{1\le j\le K} D_{ciou}(box_i,\ center_j)\tag{1}$$

where $D_{ciou}$ is the CIOU distance formula and $box_i$ and $center_j$ denote the $i$th target frame and the $j$th cluster center; the approximate distance between a target frame and a cluster center uses the CIOU distance, i.e. when the target frame and the cluster center coincide completely with identical width and height, the fitness is best and the positioning is most accurate.
Initializing the population and updating the optimal target frames by adopting a whale algorithm, initializing position information of N target frames, calculating the fitness of each target frame, selecting the target frame with the minimum fitness as the current optimal solution, and then performing the next iteration until reaching the fitness stopping threshold value to complete screening.
The target frame is approached via the optimal target frame position, calculated as:

$$x_i^{n+1} = x_{best} - A\cdot\left|\,C\cdot x_{best} - x_i^{n}\,\right|\tag{2}$$

where $n$ is the iteration number, $x_i^{n}$ is the current individual value, $x_{best}$ is the current optimal target frame position, $A$ is a multidimensional uniformly distributed random number and $C$ is a random number uniformly distributed in (0, 2).
Or the position of a random target frame is approached, calculated as:

$$x_i^{n+1} = x_{rand} - A\cdot\left|\,C\cdot x_{rand} - x_i^{n}\,\right|\tag{3}$$

where $x_{rand}$ is a target frame at a random position.
The positioning effect is shown in fig. 8, and it can be seen that the improved anchor positioning is more stable and accurate, the influence of the target frame on the data set preselection frame is reduced, and the stability of the detected target positioning is improved.
Step 2.2, training a model and obtaining an optimal detection model;
the spiral shape of the crayfish is judged by adopting a modified YOLOv5 deep convolution neural network. The input crawfish image is automatically divided into S multiplied by S grids by using a YOLOv5 network, and the grid is responsible for detecting the crawfish target when the center coordinate of the crawfish to be detected falls into a certain grid. These predict N bounding boxes per S × S grid cell, N being adjustable in parameters according to a particular data set. Each bounding box contains 5 predictors: x, y, w, h (x, y are coordinates of the center point of the frame, and w, h are the width and height of the frame) and confidence. In the detection process, in order to solve the problem that the initial positioning of the prediction frame is not accurate, 9 real frames with uniform centers and scales are set in network initialization, the range of the prediction frame is close to the range of the 9 frames at the initial iteration, the deviation is not too large, and the 9 real frames are called prior frames. A Focus module is added in front of a backbone network and is used for carrying out slicing operation on an input image, so that the channel of the image is increased, the size of the image is reduced, the calculation parameters can be reduced, and the training speed of the model is improved.
The loss function of YOLOv5 consists of three components:

$$L = L_{ciou} + L_{conf} + L_{class}\tag{5}$$

where $L_{ciou}$ is the bounding-box loss, used to calculate the deviation between the crayfish prediction frame and the real frame; $L_{conf}$ is the confidence loss, which determines whether a crayfish target exists in the prior frame; $L_{class}$ is the classification loss, which calculates the deviation of the crayfish classification; if no crayfish target exists in the prior frame only the confidence loss is calculated, and if a target exists all three losses are calculated; $S^2$ is the scale of the feature map, $B$ is the number of prior frames and $\lambda_{noobj}$ is a weight coefficient; the indicators $\mathbb{1}_{ij}^{obj}$ and $\mathbb{1}_{ij}^{noobj}$ take the values 1 and 0 respectively if a target exists at the $j$th prior frame of the $i$th grid, and 0 and 1 otherwise; $\rho(\cdot)$ is the Euclidean distance and $c$ is the diagonal distance of the smallest region enclosing the prediction frame and the real frame; $b$, $w$ and $h$ are the center coordinates, width and height of the prediction frame, and $b^{gt}$, $w^{gt}$ and $h^{gt}$ those of the real frame; $\hat{C}_i^j$ and $C_i^j$ are the confidences of the prediction frame and the manually labeled frame, and $\hat{p}_i^j$ and $p_i^j$ their category probabilities.
After the convolution operations, YOLOv5 applies a ReLU activation function, mainly used for nonlinear mapping of the features:

$$\mathrm{ReLU}(x) = \max(0, x)$$

where $x$ is the result of each convolution operation.
The ReLU activation function is used in the forward propagation of the network model; before propagation stops, the loss value of the model, i.e. the deviation between the real value and the predicted value, is calculated by the loss function. Based on the loss value and the loss function, back-propagation is carried out through the chain rule of partial derivatives, continuously updating the network model's parameters and reducing its loss value.
The network parameters are set by modifying the yolov5_createfish_detection.cfg configuration file: the input image width and height are 416 × 416, batch is 64 and subdivisions is 16, i.e. each iteration trains 64 images divided into 16 blocks, with a mosaic enhancement strategy, for 20000 iterations. After iterative training the loss value falls to about 0.1, indicating a small deviation between predicted and real values, and model training is finished.
Step 2.3, testing the model with the test-set images.
For detection, the network configuration file and weight file from step 2.2 are needed, and the obj_createfish_detection.names file is still needed as the basis of the labels; it holds the qualified and unqualified labels used to judge crayfish quality. The detection model, trained through the whale algorithm of step 2.1, outputs the center-point coordinates and normalized width and height $(x, y, w, h)$ of the optimal crayfish bounding box, from which the top-left and bottom-right corner coordinates are calculated:

$$x_1 = x - \tfrac{w}{2},\quad y_1 = y - \tfrac{h}{2},\quad x_2 = x + \tfrac{w}{2},\quad y_2 = y + \tfrac{h}{2}\tag{4}$$

where $(x_1, y_1)$ and $(x_2, y_2)$ are the top-left and bottom-right coordinates of the bounding box; the network then detects and classifies the crayfish target according to the learned tail-curvature features. A decoding sketch follows.
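A decoding sketch for formula (4); scaling the normalized values back to a 416 × 416 pixel grid follows the training setting of step 2.2 and is an assumption about how the outputs are used:

```python
def decode_box(x, y, w, h, img_w=416, img_h=416):
    # (x, y, w, h) are the normalized center and size of the bounding box.
    x1 = (x - w / 2) * img_w
    y1 = (y - h / 2) * img_h
    x2 = (x + w / 2) * img_w
    y2 = (y + h / 2) * img_h
    return x1, y1, x2, y2    # top-left and bottom-right corners in pixels
```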
Step 2.4, cutting the crayfish detection-frame area detected in step 2.3.
Since the center coordinates of some crayfish targets lie near the left edge of the image, the top-left corner coordinate often falls outside the picture, i.e. $x_1$ is less than zero; clamping $x_1$ to be no less than 0 lets the crayfish target be cut out smoothly.
Step 2.5, judging the crayfish posture and the shrimp-tail shape.
The cut crayfish picture is binarized and the distance from each non-zero pixel to the nearest zero pixel is calculated, i.e. the distance from a white pixel to the nearest black edge in the binary image; all regions connected to the pixels farthest from the black pixels are extracted to obtain the image skeleton outline.
The Euclidean distance is used as the distance criterion and the distance is limited to the (0-128) pixel range; the larger the distance, the more likely the point is a skeleton structure point. To extract structure points more accurately, the 8 neighboring pixels P1, P2, P3, P4, P6, P7, P8 and P9 of each candidate structure point are extracted. A point most likely to be the central pixel $f(x,y)$ should satisfy the following two conditions:

$$P_1\,|\,P_3\,|\,P_7\,|\,P_9 \ge f(x,y)\ \ \&\ \ P_1\,|\,P_3\,|\,P_7\,|\,P_9 \le f(x,y)\tag{7}$$

$$P_2 \approx P_8 \le f(x,y)\ \ |\ \ P_4 \approx P_6 \le f(x,y)\tag{8}$$

Formula (7) states that an extracted skeleton pixel must be brighter than at least one surrounding pixel and darker than at least one surrounding pixel; the differences of the four diagonal vertices are calculated to avoid taking an isolated point (a pixel that is the highest but whose surroundings are not a connected region) as the center. Formula (8) further selects central, highlighted pixels on the basis of formula (7): around the center point there must be at least one pair of pixels that are close in value and both smaller than the central pixel, which greatly raises the probability that the point is the structural center.
Center points satisfying both conditions are selected to form a skeleton contour curve, from which the head and tail points and the maximum-bending point are easily located. The skeleton contour curve is placed into a 640 × 640 plane coordinate system and the angle between the line from the head point to the maximum-bending point and the line from the tail point to the maximum-bending point approximates the crayfish body angle: at most 90° is qualified (crayfish tail curled) and greater than 90° unqualified (crayfish tail not curled), e.g. computed as in the sketch below. The results are shown in FIG. 5.
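The body-angle test reduces to the angle between two line segments; a minimal sketch (head, bend and tail are assumed to be (x, y) points taken from the skeleton curve):

```python
import numpy as np

def body_angle(head, bend, tail):
    # Angle at the maximum-bending point between the head-to-bend and
    # tail-to-bend segments, in degrees.
    v1 = np.asarray(head, float) - np.asarray(bend, float)
    v2 = np.asarray(tail, float) - np.asarray(bend, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# body_angle(...) <= 90 means a curled, qualified tail per the criterion above.
```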
Step 3, classifying the colors of the qualified crayfish into bright red and dark red (brownish red).
Step 3.1, constructing the crayfish color classification model.
Color classification of the crayfish uses a ResNet152 classification network; ResNet152 is a deep residual neural network that extracts rich image feature information while preventing vanishing and exploding gradients. An Adam optimizer is adopted with a learning rate of 0.00015, and the SeLU function is set as the network activation function:

$$\mathrm{SeLU}(x) = \lambda\begin{cases} x, & x > 0\\ \alpha\,(e^{x}-1), & x \le 0\end{cases}\tag{9}$$

In this embodiment $\alpha$ is 1.67326, $\lambda$ is 1.05070 and $x$ is the layer input. Compared with a network classification model using the ReLU activation function, the confidence in the crayfish classification test improves by 0.2 on average. A model-construction sketch follows.
Step 3.2, detecting the training set with the detection model: the weight file obtained in step 2.2 is used to detect crayfish targets on the crayfish training set made in step 1.3.
Step 3.3, cutting the training-set crayfish and labeling them bright red or dark red.
The crayfish targets in the training set are cut as in step 2.3, manually divided into bright red and dark red, and placed into two different folders, completing the crayfish color classification data set.
Step 3.4, training the model and obtaining the optimal classification model.
The classification model constructed in step 3.1 performs the crayfish color classification task; the classification network is trained from the pre-training model resnet152-pre.pth for 200 rounds of iterative training.
Step 3.5, finishing training and obtaining the optimal classification model.
After training of the ResNet classification network finishes, the final classification model Resnet152_crayfish_classification is obtained. The final classification results are shown in Table 1.
TABLE 1 Color classification of crayfish

Network type | Epoch | Accuracy (%) | Recall (%)
VGG Net      | 200   | 86.3         | 88.4
ResNet152    | 200   | 90.2         | 89.7
Step 3.6, classifying the cut test-set images with the classification model.
The crayfish targets cut from the test set are color-classified with the trained classification model, i.e. after the test-set crayfish finish curl-degree detection and cutting, color classification is performed directly.
Step 4, classifying the crayfish into three sizes: large, medium and small.
Step 4.1, inputting the cut test-set crayfish images and classifying the size of the crayfish targets cut in step 2.3.
Step 4.2, converting to the HSV color model.
The original image is an RGB image; HSV describes the color of an object more intuitively, where H is the hue channel, S the saturation channel and V the value (brightness) channel, so the HSV model makes color segmentation of the picture target more convenient.
The formulas for converting the RGB model into the HSV model are as follows:

$$H = \begin{cases}60°\times\frac{G'-B'}{\Delta}\bmod 360°, & C_{max}=R'\\[2pt] 60°\times\left(\frac{B'-R'}{\Delta}+2\right), & C_{max}=G'\\[2pt] 60°\times\left(\frac{R'-G'}{\Delta}+4\right), & C_{max}=B'\end{cases}\tag{10}$$

$$S = \begin{cases}0, & C_{max}=0\\ \Delta/C_{max}, & C_{max}\ne 0\end{cases}\tag{11}$$

$$V = C_{max}\tag{12}$$

where $R' = R/255$, $G' = G/255$ and $B' = B/255$ are the normalized values, $C_{max}$ is the maximum of the three normalized values and $\Delta$ is the difference between the maximum and minimum of the three normalized values.
Step 4.3, segmenting the crayfish target and converting it into a binary image.
In the HSV color space the red range is (0, 43, 46) to (10, 255, 255). By thresholding, pixels of the red area are set to 255 (white) and pixels of other color areas to 0 (black), giving the crayfish mask; the original image is converted into a binary image, and ANDing the mask with the original image segments the crayfish target out of the original image.
Step 4.4, performing dilation and erosion operations to obtain the crayfish connected region.
The mask is processed with a morphological dilation operation, fusing all the separate connected regions in the image into one connected region, and then with a morphological erosion operation using a kernel matrix of the same size, restoring the single connected region to the size of the original crayfish region.
Step 4.5, calculating the proportion of crayfish pixels relative to the original image before cutting.
The number of white pixels with value 255 in the converted binary image is counted, i.e. the number of pixels occupied by the shrimp target, and its ratio to the pixel count of the original image is computed. The original crayfish image used in this embodiment is 1280 × 924, so its binary image contains 1,182,720 pixels in total.
Step 4.6, finishing the large/medium/small classification according to the proportion.
Combining manual classification with the calculated results, a crayfish whose proportion of the original image is at most 0.02 is classed as small, more than 0.02 but at most 0.03 as medium, and more than 0.03 as large, as shown in FIG. 6.
After the color and size classification tasks are finished, the two attribute classes of each crayfish are counted: a bright-red large crayfish is a first-grade product, a bright-red medium one a second-grade product and a bright-red small one a third-grade product; a dark-red large crayfish is a second-grade product, a dark-red medium one a third-grade product, and a dark-red small one only a qualified product, as shown in Table 2.
TABLE 2 Crayfish quality grades by color and size

Color      | Large        | Medium       | Small
Bright red | First-grade  | Second-grade | Third-grade
Dark red   | Second-grade | Third-grade  | Qualified
The experimental configuration of the invention is as follows:
1. and constructing a crayfish data set.
The data set used in the experiment is self-made, derived from the crayfish sorting process captured on site by an industrial camera; frames were intercepted from the video at a set frequency and 2500 sample pictures were selected. Because the images acquired by the camera are limited, Gaussian noise, Gaussian blur, background transformation and other augmentations were applied to improve the generalization and feature fusion of the network model, expanding the scarce, highly varied crayfish samples. After conversion and amplification the data set contains 8000 crayfish images with samples of different sizes and colors; the positive and negative samples number 6000 respectively, positive being qualified and negative unqualified. Before training, the data set was labeled with the LabelImg labeling tool. The classification categories are: qualification class 0, label "qualified"; reject class 1, label "unqualified". Label information is generated so that each line represents one crayfish in the picture: the first number is the label class (0 or 1) of the annotated object and the last four numbers are the center coordinates and relative width and height of the labeling frame (values normalized to the whole picture), as parsed in the sketch below.
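A parsing sketch for this label format (the numeric values in the comment are made-up illustrations):

```python
def parse_label_line(line):
    # One line per crayfish: "<class> <cx> <cy> <w> <h>", e.g.
    # "0 0.512 0.430 0.210 0.180"; all but the class are normalized
    # to the whole picture. Class 0 = qualified, 1 = unqualified.
    fields = line.split()
    cls = int(fields[0])
    cx, cy, w, h = map(float, fields[1:5])
    return cls, cx, cy, w, h
```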
2. Experimental Environment configuration
Table 3 shows the experimental environment configuration. A pre-trained model is selected for the improved algorithm. Optimization uses the Adam algorithm with a Batch-Size of 32; the number of iterations is set to 800; the momentum factor is 0.9; the weight-decay coefficient is set to 0.0005; the learning rate is updated with warm restarts from an initial value of 0.001. The training effect is shown in FIG. 7.
TABLE 3 Experimental Environment configuration
(The table content is rendered as an image in the original publication.)
The values on the detection boxes in FIG. 8 are confidences, i.e. the probability with which the network judges the target to belong to the predicted class; for accuracy checking, a higher confidence means a better detection effect. According to the detection results, under different backgrounds, including broken limbs and multiple scales, the network model accurately locates the crayfish in the picture, the detection time improves to 5 ms and the detection confidence exceeds 95%. Under different illumination conditions, too-dark or too-bright lighting makes segmenting the detected target from the background difficult, affects recognition accuracy and slightly lowers the detection confidence. Compared with other methods, the proposed method improves production-line crayfish positioning accuracy, provides a faster and simpler discrimination method for grading tail shape and specification, and offers a new solution for large-scale automatic crayfish grading.
In specific implementation, the above process can be run automatically using computer software technology.
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or ambit of the invention as defined in the appended claims.

Claims (8)

1. A crayfish quality grading method based on a neural network is characterized by comprising the following steps:
step 1, acquiring a crayfish image data set, carrying out manual labeling, and then dividing the crayfish image data set into a training set and a test set according to a certain proportion;
step 2, using a YOLOv5 network as the detection model to locate the crayfish target frame area, cutting out that area, further judging the crayfish posture and tail shape, and judging whether the crayfish quality is qualified according to the tail shape;
step 3, carrying out color attribute classification on the qualified crayfish images by adopting a ResNet152 classification network, wherein the crayfish images are classified into bright red and dark red;
step 4, classifying the qualified crayfish images into a large type, a middle type and a small type;
and 5, after the color and size classification tasks of the crayfishes are finished, counting the classes of the two attributes of the crayfishes, and dividing the crayfishes into first-level products, second-level products, third-level products and qualified products.
2. The crayfish quality classification method based on the neural network as claimed in claim 1, wherein: in step 2 the input crayfish image is automatically divided into S × S grids by the YOLOv5 network, and if the center coordinate of a crayfish to be detected falls into a grid, that grid is responsible for detecting the crayfish target; during detection, each of the S × S grid cells predicts N detection frames, where N can be adjusted in the parameters for a specific data set, and each detection frame contains 5 predicted values: x, y, w, h and a confidence, where x and y are the coordinates of the center point of the detection frame and w and h are its width and height.
3. The crayfish quality classification method based on the neural network as claimed in claim 1, wherein: the specific implementation mode of positioning the crayfish target frame area by adopting a YOLOv5 network in the step 2 is as follows;
the anchor positioning in the YOLOv5 network is improved by an adaptive anchor calculation method based on K-means and the whale optimization algorithm, specifically as follows;
dividing the target anchors into K cluster centers and selecting the fitness function as:

$$\mathrm{fitness} = \frac{1}{N}\sum_{i=1}^{N}\min_{1\le j\le K} D_{ciou}(box_i,\ center_j)\tag{1}$$

where $box_i$ and $center_j$ denote the $i$th target frame and the $j$th cluster center and $D_{ciou}$ is the CIOU distance formula; the approximate distance between a target frame and a cluster center is computed with the CIOU distance, i.e. when the target frame and the cluster center coincide completely with identical width and height, the fitness is best and the positioning is most accurate;
initializing a population and updating an optimal target frame by adopting a whale algorithm, initializing position information of N target frames, calculating the fitness of each target frame, selecting the target frame with the minimum fitness as the current optimal solution, and then performing next iteration until the fitness stopping threshold is reached to complete screening;
the target frame is approached via the optimal target frame position, calculated as:

$$x_i^{n+1} = x_{best} - A\cdot\left|\,C\cdot x_{best} - x_i^{n}\,\right|\tag{2}$$

where $n$ is the iteration number, $x_i^{n}$ is the current individual value, $x_{best}$ is the current optimal target frame position, $A$ is a multidimensional uniformly distributed random number and $C$ is a random number uniformly distributed in (0, 2);
or the position of a random target frame is approached, calculated as:

$$x_i^{n+1} = x_{rand} - A\cdot\left|\,C\cdot x_{rand} - x_i^{n}\,\right|\tag{3}$$

where $x_{rand}$ is a target frame at a random position;
finally the center-point coordinates and normalized width and height $(x, y, w, h)$ of the optimal crayfish target frame are obtained, and the top-left and bottom-right coordinates of the target frame are calculated as:

$$x_1 = x - \tfrac{w}{2},\quad y_1 = y - \tfrac{h}{2},\quad x_2 = x + \tfrac{w}{2},\quad y_2 = y + \tfrac{h}{2}\tag{4}$$

where $(x_1, y_1)$ and $(x_2, y_2)$ are the top-left and bottom-right coordinates of the target frame.
4. The crawfish quality grading method based on neural network as claimed in claim 1, wherein: the loss function of YOLOv5 in step 2 consists of three components, and the formula is:
L = L_ciou + L_conf + L_class

where L_ciou is the bounding-box loss, used to calculate the deviation between the crayfish prediction frame and the ground-truth frame; L_conf is the confidence loss, which determines whether a crayfish target exists in a prior frame; L_class is the classification loss, which calculates the deviation of the crayfish classification; if no crayfish target exists in the prior frame, only the confidence loss is calculated, and if a crayfish target exists, all three losses are calculated; S^2 is the scale of the feature map, B is the number of prior frames, and λ_noobj is a weight coefficient; the indicator terms I_ij^obj and I_ij^noobj take the values 1 and 0 respectively if a target exists at the j-th prior frame of the i-th grid, and 0 and 1 if no target exists; ρ(·) is the Euclidean distance, and c is the diagonal distance of the closure region of the prediction frame and the ground-truth frame; b, w and h are the center coordinates, width and height of the prediction frame; b^gt, w^gt and h^gt are the center coordinates, width and height of the ground-truth frame; C_i^j and Ĉ_i^j are the confidences of the prediction frame and the manually labeled frame; P_i^j and P̂_i^j are the category probabilities of the prediction frame and the manually labeled frame.
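The bounding-box component L_ciou named in claim 4 is the standard CIOU penalty. A single-box sketch, without the summation over the S^2 grid cells and B prior frames, might look like this:

import math

def ciou_loss(pred, gt):
    # pred, gt: (cx, cy, w, h) of the prediction frame and ground-truth frame.
    # L_ciou = 1 - IoU + rho^2(b, b_gt) / c^2 + alpha * v, where rho is the
    # centre distance and c the diagonal of the smallest enclosing box.
    px1, py1 = pred[0] - pred[2] / 2, pred[1] - pred[3] / 2
    px2, py2 = pred[0] + pred[2] / 2, pred[1] + pred[3] / 2
    gx1, gy1 = gt[0] - gt[2] / 2, gt[1] - gt[3] / 2
    gx2, gy2 = gt[0] + gt[2] / 2, gt[1] + gt[3] / 2
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))
    inter = iw * ih
    union = pred[2] * pred[3] + gt[2] * gt[3] - inter + 1e-9
    iou = inter / union
    rho2 = (pred[0] - gt[0]) ** 2 + (pred[1] - gt[1]) ** 2
    c2 = (max(px2, gx2) - min(px1, gx1)) ** 2 \
       + (max(py2, gy2) - min(py1, gy1)) ** 2 + 1e-9
    v = 4 / math.pi ** 2 * (math.atan(gt[2] / gt[3])
                            - math.atan(pred[2] / pred[3])) ** 2
    alpha = v / (1 - iou + v + 1e-9)
    return 1 - iou + rho2 / c2 + alpha * v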
5. The crayfish quality grading method based on the neural network as claimed in claim 1, wherein: the crayfish target frame region is cropped, the posture of the crayfish and the shape of the crayfish tail are further judged, and whether the crayfish quality is qualified is determined according to the tail shape, with the specific implementation as follows:
the abscissa of the upper left corner of the crayfish target frame is constrained to always be greater than 0, and the crayfish target is then cropped from the image;
the cropped crayfish picture is then binarized, the distance from every non-zero pixel to the nearest zero pixel is calculated, i.e. the distance from every white pixel to the nearest black edge in the binary image, and all regions connecting the pixels farthest from the black pixels are extracted to obtain the skeleton contour of the image;
the Euclidean distance is used as the distance criterion and is limited to the (0–128) pixel range; the larger the distance, the higher the probability that a pixel is a skeleton structure point; to extract structure points more accurately, the 8 neighboring pixels P1, P2, P3, P4, P6, P7, P8 and P9 of each candidate structure point are examined, and a point most likely to be the central pixel f(x, y) should satisfy the following two conditions:
(P_1 | P_3 | P_7 | P_9 ≥ f(x,y)) & (P_1 | P_3 | P_7 | P_9 ≤ f(x,y))  (7)

(P_2 ≈ P_8 ≤ f(x,y)) | (P_4 ≈ P_6 ≤ f(x,y))  (8)
formula (7) states that an extracted skeleton pixel must be brighter than at least one surrounding pixel and darker than at least one surrounding pixel, the differences with the vertices in the four diagonal directions being computed so that the central point is not an isolated point; formula (8) further selects, on the basis of formula (7), a central and highlighted pixel: around the specified central point there must be at least one pair of approximately equal pixels smaller than the central pixel, which raises the probability that the central point is a structural center point;
the central points satisfying both conditions are selected to form a skeleton contour curve, on which the head and tail position points and the maximum bending point are located; the skeleton contour curve is placed into a 640 × 640 plane coordinate system, and the angle between the line connecting the head coordinate point to the maximum bending point and the line connecting the tail coordinate point to the maximum bending point is calculated to approximate the crayfish body angle; the crayfish is qualified when the angle is less than or equal to 90 degrees and unqualified when it is greater than 90 degrees.
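The tail-posture check of claim 5 can be sketched with OpenCV's distance transform. The ridge test below is a crude stand-in for the P1..P9 neighbourhood conditions of formulas (7) and (8), and the leftmost/rightmost head-and-tail heuristic and the choice of the deepest interior point as the maximum bending point are assumptions for illustration only.

import cv2
import numpy as np

def body_angle(binary_mask):
    # binary_mask: single-channel uint8 image, 255 on the crayfish, 0 elsewhere.
    # Distance from every white pixel to the nearest black pixel.
    dist = cv2.distanceTransform(binary_mask, cv2.DIST_L2, 5)
    # Keep ridge points: pixels that are local maxima of the distance map.
    ridge = (dist >= cv2.dilate(dist, np.ones((3, 3), np.float32))) & (dist > 0)
    ys, xs = np.nonzero(ridge)
    pts = np.stack([xs, ys], axis=1)
    head = pts[pts[:, 0].argmin()]        # assumed head: leftmost skeleton point
    tail = pts[pts[:, 0].argmax()]        # assumed tail: rightmost skeleton point
    bend = pts[dist[ys, xs].argmax()]     # assumed maximum bending point
    v1, v2 = head - bend, tail - bend
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Qualification test from the claim: qualified = body_angle(mask) <= 90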
6. The crayfish quality grading method based on the neural network as claimed in claim 1, wherein: the specific implementation of step 3 is as follows:
firstly, a crayfish color classification model is constructed: a ResNet152 classification network is used to classify the color of the crayfish image, the Adam optimizer is adopted with a learning rate of 0.00015, and the SeLU function is set as the network activation function:

SeLU(x) = λ·x, if x > 0;  λ·α·(e^x − 1), if x ≤ 0

where α and λ are constant values and x is the input; the qualified crayfish images are then fed into the constructed classification network for the crayfish color classification task, training is completed to obtain the optimal classification model, and the optimal classification model is used to classify the color of the crayfish images to be tested.
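A minimal PyTorch sketch of the colour-classification setup in claim 6 follows. The two-class output (bright red / dark red), the explicit SeLU constants and the bare-bones model surgery are assumptions for illustration; the claim's training procedure is not reproduced.

import torch
import torch.nn as nn
from torchvision import models

def selu(x, alpha=1.6733, lam=1.0507):
    # SeLU as in the formula above: lam * x for x > 0,
    # lam * alpha * (exp(x) - 1) for x <= 0.
    return lam * torch.where(x > 0, x, alpha * (torch.exp(x) - 1))

model = models.resnet152(weights=None)                 # ResNet152 backbone
model.fc = nn.Linear(model.fc.in_features, 2)          # bright red / dark red
optimizer = torch.optim.Adam(model.parameters(), lr=0.00015)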
7. The crayfish quality grading method based on the neural network as claimed in claim 1, wherein: the specific implementation of step 4 is as follows:
step 4.1, inputting the qualified crayfish images detected in step 2;
step 4.2, converting the RGB image into an HSV color model;
the original image is an RGB image; HSV describes the color of an object more intuitively, where H is the hue channel, S is the saturation channel and V is the value (brightness) channel, so the HSV model makes color segmentation of the picture target more convenient;
the formulas for converting the RGB model into the HSV model are as follows:

H = 60° × ((G′ − B′)/Δ mod 6), if C_max = R′;  60° × ((B′ − R′)/Δ + 2), if C_max = G′;  60° × ((R′ − G′)/Δ + 4), if C_max = B′  (9)

S = 0, if C_max = 0;  Δ/C_max, otherwise  (10)

V = C_max  (11)

where:

R′ = R/255,  G′ = G/255,  B′ = B/255,  C_max = max(R′, G′, B′),  C_min = min(R′, G′, B′),  Δ = C_max − C_min  (12)

in the formulas, R′, G′ and B′ are the normalized values, C_max is the maximum of the three normalized values, and Δ is the difference between the maximum and minimum of the three normalized values;
step 4.3, segmenting the crayfish target and converting it into a binary image;
in the HSV color space, the red range is (0, 43, 46) to (10, 255, 255); by thresholding, the pixel values in the red region are set to 255 and the pixel values in the other color regions are set to 0, giving the crayfish mask and converting the original image into a binary image; the mask is then ANDed with the original image to segment the crayfish target out of the original image;
step 4.4, performing dilation and erosion operations to obtain the crayfish connected region;
the mask is processed with a morphological dilation operation to fuse all the separate connected regions in the image into one connected region, and then processed with a morphological erosion operation using a kernel matrix of the same size to restore the fused region to the size of the original crayfish region;
step 4.5, calculating the proportion of crayfish pixels in the original image before cropping;
the number of white pixels with a value of 255 in the converted binary image, i.e. the number of pixels occupied by the crayfish target, is counted and divided by the number of pixels in the original image;
step 4.6, completing the large, medium and small classification according to the proportion;
combining manual grading with the calculated proportion, the crayfish is classified as small when its proportion of the original image is less than or equal to n1, medium when the proportion is less than or equal to n2, and large when the proportion is greater than n2.
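The HSV segmentation and proportion-based size classification of claim 7 can be sketched with OpenCV as follows. The thresholds n1 and n2 are left symbolic in the claim, so the defaults below are purely illustrative, as is the 15 × 15 morphological kernel.

import cv2
import numpy as np

def size_class(bgr_image, n1=0.05, n2=0.12):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)         # step 4.2
    mask = cv2.inRange(hsv, (0, 43, 46), (10, 255, 255))     # step 4.3: red range
    kernel = np.ones((15, 15), np.uint8)
    mask = cv2.dilate(mask, kernel)   # step 4.4: fuse separate connected regions
    mask = cv2.erode(mask, kernel)    #           then restore the original extent
    ratio = np.count_nonzero(mask) / mask.size               # step 4.5
    if ratio <= n1:                                          # step 4.6
        return "small"
    if ratio <= n2:
        return "medium"
    return "large"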
8. The crayfish quality grading method based on the neural network as claimed in claim 1, wherein: the specific implementation of step 5 is as follows:
the categories of the two crayfish attributes are counted: if the crayfish color is bright red, it is classified as a first-grade product when the size is large, a second-grade product when the size is medium, and a third-grade product when the size is small; if the crayfish color is dark red, it is classified as a second-grade product when the size is large, a third-grade product when the size is medium, and only a qualified product when the size is small.
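Putting the two attributes together, the grade table of claim 8 reduces to a simple lookup; the label strings below are illustrative, taking the colour and size outputs of claims 6 and 7 as inputs.

def grade(color, size):
    # color: "bright_red" or "dark_red"; size: "large", "medium" or "small".
    table = {
        ("bright_red", "large"): "first-grade",
        ("bright_red", "medium"): "second-grade",
        ("bright_red", "small"): "third-grade",
        ("dark_red", "large"): "second-grade",
        ("dark_red", "medium"): "third-grade",
        ("dark_red", "small"): "qualified",
    }
    return table[(color, size)]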