CN116563889A - Device and method for estimating weight of laying hen based on machine vision
- Publication number: CN116563889A
- Application number: CN202310537634.XA
- Authority: CN (China)
- Prior art keywords: image, module, area, laying hen, data
- Legal status: Withdrawn (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- A01K45/00—Other aviculture appliances, e.g. devices for determining whether a bird is about to lay
- G01N21/84—Systems specially adapted for particular applications (investigating or analysing materials by the use of optical means)
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/267—Segmentation of patterns in the image field by performing operations on regions, e.g. growing, shrinking or watersheds
- G06V10/42—Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
- G06V10/54—Extraction of image or video features relating to texture
- G06V10/774—Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
- G06V10/776—Validation; performance evaluation
- G06V10/806—Fusion of extracted features, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level
- G06V10/82—Image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
- G06V2201/07—Indexing scheme for target detection
Abstract
Description
Technical Field

The invention relates to the technical field of machine vision, and in particular to a device and method for estimating the body weight of laying hens based on machine vision.

Background Art

Animal farming is an important branch of agriculture, mainly comprising poultry farming, livestock farming, and aquaculture. Poultry farming chiefly produces poultry meat and eggs. In recent years, intelligent farming has become an important research field in livestock and poultry production, and in poultry farming body weight has a strong influence on meat and egg production performance, making it one of the important indicators of the feeding status of a chicken house. Real-time monitoring of poultry weight data can maximize the production and economic benefits of a farm and is a valuable reference for poultry feeding, virus prevention and control, and environmental control. With the continuous development of machine vision technology, image analysis based on machine vision has become one of the important means of monitoring the weight of laying hens.

Existing devices and methods for estimating the weight of laying hens require manual modeling and parameter searching, give inaccurate weight estimates, and are difficult to operate. In addition, their target detection efficiency is low and the subsequent data processing workload is large. For these reasons, we propose a device and method for estimating the weight of laying hens based on machine vision.
Summary of the Invention

The object of the present invention is to provide a device and method for estimating the body weight of laying hens based on machine vision, so as to overcome the defects of the prior art.

To achieve the above object, the present invention adopts the following technical solution:

A machine vision based device for estimating the body weight of laying hens, comprising a monitoring platform, an image acquisition module, an image enhancement module, a target extraction module, a weight estimation module, a parameter update module, an alarm module, and a block storage module;

the monitoring platform is used to verify the identity of farm staff and to receive feedback data from each module for the staff to review;

the image acquisition module is used to collect image information of the breeding area;

the image enhancement module is used to optimize the collected image information of the breeding area;

the target extraction module is used to extract laying hen images from the breeding area image information;

the weight estimation module is used to estimate the body weight of each laying hen from the collected laying hen images;

the parameter update module is used to collect operating information of the weight estimation module and to update its parameters;

the alarm module is used to report abnormal laying hen information to the farm staff and to mark the hen concerned;

the block storage module is used to store the laying hen detection information on a blockchain.
As a further aspect of the invention, the image enhancement module optimizes the breeding area image information through the following specific steps:

Step 1: extract the collected breeding area image information frame by frame to obtain several groups of picture data, partition each picture into blocks according to its display ratio, then analyse and extract the high frequency components of each block by Fourier transform and smooth the result with Gaussian filtering;

Step 2: move a window of a prescribed pixel size over each group of image information, compute the gray level co-occurrence matrix under the window at each position, and compute the texture features of the corresponding image information from the gray level co-occurrence matrix (a sketch of these two steps is given below).
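The following Python sketch illustrates steps 1 and 2 under stated assumptions: it relies on OpenCV and scikit-image, the block-wise Fourier analysis is omitted and only the Gaussian smoothing is shown, and the frame sampling step, window size, gray level count and GLCM offset are illustrative choices rather than values specified by the patent.

```python
# Illustrative sketch only: frame sampling, Gaussian smoothing, and sliding-window
# GLCM texture features. Window size, gray levels and offsets are assumptions.
import cv2
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def sampled_gray_frames(video_path, step=30):
    """Yield grayscale, smoothed frames sampled from the breeding-area video."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            yield cv2.GaussianBlur(gray, (5, 5), 0)  # smoothing step of step 1
        idx += 1
    cap.release()

def sliding_glcm_features(gray, win=32, levels=32):
    """Return (y, x, contrast, correlation) for each window position (step 2)."""
    q = (gray.astype(np.float32) / 256.0 * levels).astype(np.uint8)  # quantize gray levels
    feats = []
    for y in range(0, q.shape[0] - win + 1, win):
        for x in range(0, q.shape[1] - win + 1, win):
            glcm = graycomatrix(q[y:y + win, x:x + win], distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            feats.append((y, x,
                          float(graycoprops(glcm, "contrast")[0, 0]),
                          float(graycoprops(glcm, "correlation")[0, 0])))
    return feats
```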
As a further aspect of the invention, the texture features in step 2 are calculated with formulas (1) to (4) below, in which P(i, j) denotes the value for the pixel pair with gray levels i and j and L denotes the number of gray levels.

Formula (1) computes the texture energy: coarse textures have a large energy moment and fine textures a small one. Formula (2) computes the texture entropy: if the image contains no texture the entropy is close to zero, if it is full of fine texture the entropy is maximal, and if it contains little texture the entropy is small. Formula (3) computes the texture contrast: the larger the contrast, the sharper the visual appearance of the image. Formula (4) computes the texture correlation, which measures the linear dependence of neighbouring gray levels.
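Under the assumption that P(i, j) is the normalized gray level co-occurrence matrix entry and that μx, μy, σx, σy are its marginal means and standard deviations, the descriptions of formulas (1) to (4) correspond to the standard GLCM texture descriptors, restated here for reference:

```latex
% Standard GLCM texture descriptors matching the descriptions of formulas (1)-(4).
\begin{align*}
&\text{Energy:}      && E = \sum_{i=0}^{L-1}\sum_{j=0}^{L-1} P(i,j)^{2} \tag{1} \\
&\text{Entropy:}     && H = -\sum_{i=0}^{L-1}\sum_{j=0}^{L-1} P(i,j)\,\log P(i,j) \tag{2} \\
&\text{Contrast:}    && C = \sum_{i=0}^{L-1}\sum_{j=0}^{L-1} (i-j)^{2}\,P(i,j) \tag{3} \\
&\text{Correlation:} && R = \frac{\sum_{i=0}^{L-1}\sum_{j=0}^{L-1} i\,j\,P(i,j) - \mu_x\mu_y}{\sigma_x\sigma_y} \tag{4}
\end{align*}
```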
As a further aspect of the invention, the target extraction module extracts laying hen images through the following specific steps:

Step (1): apply scale normalization to the optimized image information with an image pyramid and extract the features of each group of image information, then perform feature fusion with a bidirectional feature pyramid to obtain the target detection boxes;

Step (2): enlarge and crop each image according to its target detection box to obtain the target image, then obtain the texture correlation and contrast computed by the sliding window and store the resulting feature values, in array form, at the corresponding pixel positions;

Step (3): when the correlation and contrast satisfy the preset conditions, judge the current pixel region to be a target laying hen and label it 1; otherwise judge the current pixel region to be background. Separate the background of the target image according to this judgment to extract the target laying hen image;

Step (4): compute the shape factor of each laying hen image and select the images whose shape factor tends to 0; such an image is judged to contain regions of adhering hens, and the number of hens in each adhesion region is estimated. Fill each adhesion region with its convex hull, then obtain the convexity defects of the region from the convex hull area and the adhesion region area so as to obtain the concave points inside each defect;

Step (5): for two adhering hens, draw a straight line through the two concave points to split the adhering bodies directly; for adhesion of several individuals, connect the detected concave points at random, each concave point being used only once. After each connection, compute the number of connected regions and the area of each connected region; when the area of every connected region is smaller than the maximum area, the matching is judged complete and the adhering hens are segmented according to the matching result (see the sketch below).
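A minimal sketch of the concave-point splitting in step (5), for the two-hen case only, is given below. It assumes OpenCV is available; the choice of the two deepest convexity defects as the concave points and the cut thickness are illustrative assumptions rather than parameters from the patent.

```python
# Illustrative sketch only: split a binary blob of two touching hens along the
# line joining its two deepest concave points (convexity defects).
import cv2
import numpy as np

def split_two_adhered(mask):
    """mask: uint8 binary image (0/255) containing one blob of two touching hens."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)
    hull_idx = cv2.convexHull(contour, returnPoints=False)
    defects = cv2.convexityDefects(contour, hull_idx)
    if defects is None or len(defects) < 2:
        return mask  # no usable concave points, nothing to split
    # Pick the two deepest defects; their farthest points are the concave points.
    deepest = np.argsort(defects[:, 0, 3])[::-1][:2]
    p1 = tuple(int(v) for v in contour[defects[deepest[0], 0, 2]][0])
    p2 = tuple(int(v) for v in contour[defects[deepest[1], 0, 2]][0])
    cut = mask.copy()
    cv2.line(cut, p1, p2, color=0, thickness=2)  # erase pixels along the cut line
    return cut
```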
As a further aspect of the invention, the weight estimation module estimates the body weight of the laying hens through the following specific steps:

Step I: the weight estimation module receives each group of laying hen images, extracts the edges of the binary image of each group with the Roberts operator, fits an ellipse to the edge contour by least squares, fills the pixel values of the fitted elliptical region, subtracts the elliptical region from the binary image region, and extracts the small binary image regions lying outside the ellipse;

Step II: extract the edge information of each extracted small region, compute the minimum distance from each edge point to the ellipse boundary, take the maximum of these distances and the coordinates of the corresponding point, and perform mask elimination centred on that point; repeat these operations until the maximum distance is smaller than the preset target value;

Step III: the weight estimation module extracts several groups of laying hen weight data from the block storage module, integrates them into a sample data set, computes the standard deviation of the data set to remove abnormal data, and then standardizes and normalizes the remaining data;

Step IV: divide the normalized data into a training set, a test set, and a validation set according to a fixed ratio, assign values to a group of parameter setting vectors of a convolutional neural network, determine the number of neurons in each network layer and the corresponding activation functions, and input the training set into the convolutional neural network for iterative training to obtain an estimation model;

Step V: verify the accuracy of the estimation model with the validation set, then input the test set into the estimation model to test its performance and compute its evaluation index; if the evaluation index exceeds the prescribed threshold, update the parameters of the estimation model through the parameter update module;

Step VI: after the update is complete, feed the adhesion-segmented laying hen images into the estimation model and output the estimated weight values; record the weights estimated over several passes and the average weight, and record the numbers of the laying hens whose weight meets the standard (a sketch of steps III to VI is given below).
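The following sketch illustrates the data preparation and training flow of steps III to VI under stated assumptions: the 3-sigma outlier rule, the 70/15/15 split, and the small Keras network layout are illustrative choices, not values specified by the patent.

```python
# Illustrative sketch only: outlier removal, normalization, data split and a small
# CNN regressor for body weight. All numeric choices below are assumptions.
import numpy as np
import tensorflow as tf

def prepare(images, weights, sigma=3.0):
    """Drop weights beyond sigma standard deviations, scale images and weights."""
    mu, sd = weights.mean(), weights.std()
    keep = np.abs(weights - mu) <= sigma * sd
    w = weights[keep]
    w = (w - w.min()) / (np.ptp(w) + 1e-8)        # normalize weights to [0, 1]
    return images[keep].astype(np.float32) / 255.0, w

def split(x, y, seed=0):
    """70 % training, 15 % test, 15 % validation."""
    idx = np.random.default_rng(seed).permutation(len(y))
    n1, n2 = int(0.70 * len(y)), int(0.85 * len(y))
    tr, te, va = idx[:n1], idx[n1:n2], idx[n2:]
    return (x[tr], y[tr]), (x[te], y[te]), (x[va], y[va])

def build_model(shape=(128, 128, 1)):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=shape),
        tf.keras.layers.Conv2D(16, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),                 # estimated (normalized) body weight
    ])

# (x_tr, y_tr), (x_te, y_te), (x_va, y_va) = split(*prepare(images, weights))
# model = build_model(); model.compile(optimizer="adam", loss="mse", metrics=["mae"])
# model.fit(x_tr, y_tr, validation_data=(x_va, y_va), epochs=50)
# print(model.evaluate(x_te, y_te))
```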
A machine vision based method for estimating the body weight of laying hens, the method comprising:

(1) collecting and optimizing image information of the breeding area;

(2) computing and recording the image texture features of the breeding area;

(3) extracting the laying hen images from the breeding area images and performing adhesion segmentation;

(4) building an estimation model and optimizing its parameters;

(5) estimating and recording the body weight of each laying hen with the estimation model;

(6) reporting abnormal laying hen weights and storing the estimation information on a blockchain.
As a further aspect of the invention, the estimation model parameters in step (4) are optimized through the following specific steps:

Step I: the parameter update module initializes the network connection weights within the prescribed interval of the estimation model, submits training samples from the set of input and output pairs used during training, computes the output of the estimation model, compares the expected network output with the actual network output, and computes the local error of every neuron;

Step II: when the local error exceeds the preset threshold, train and update the weights of the estimation model according to the learning rule equation, and list all possible data results according to the preset learning rate and step size;

Step III: for each group of data, take any one subset as the test set and the remaining subsets as the training set; after the test model is trained, evaluate the test set and record the root mean square error of the results;

Step IV: replace the test set with another subset and take the remaining subsets as the training set, then compute the root mean square error again, until every subset has been predicted once; select the parameter combination with the smallest root mean square error as the optimal parameters within the data interval and use it to replace the original parameters of the estimation model (see the sketch below).
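The following sketch illustrates the k-fold rotation described in steps III and IV under stated assumptions: the candidate grid of learning rates and epoch counts, the number of folds k, and the model interface are illustrative, not parameters from the patent.

```python
# Illustrative sketch only: k-fold rotation over a candidate parameter grid,
# keeping the combination with the smallest root mean square error.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def cv_search(x, y, build_and_train, grid, k=5, seed=0):
    """build_and_train(params, x_train, y_train) must return a fitted model with .predict()."""
    folds = np.array_split(np.random.default_rng(seed).permutation(len(y)), k)
    best_params, best_err = None, np.inf
    for params in grid:
        errors = []
        for i in range(k):
            test_idx = folds[i]
            train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
            model = build_and_train(params, x[train_idx], y[train_idx])
            errors.append(rmse(y[test_idx], model.predict(x[test_idx]).ravel()))
        if np.mean(errors) < best_err:
            best_params, best_err = params, float(np.mean(errors))
    return best_params, best_err

# grid = [{"lr": lr, "epochs": e} for lr in (1e-2, 1e-3) for e in (20, 50)]
# best_params, best_rmse = cv_search(x, y, build_and_train, grid)
```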
Compared with the prior art, the beneficial effects of the present invention are:

1. The system receives each group of laying hen images through the weight estimation module and performs ellipse fitting on them. The weight estimation module extracts several groups of laying hen weight data from the block storage module, preprocesses them, and divides the data; the training set is fed into a convolutional neural network for iterative training to obtain an estimation model, and the accuracy of the estimation model is verified with the validation set. The test set is then input into the estimation model to test its performance and compute its evaluation index; if the evaluation index exceeds the prescribed threshold, the parameter update module updates the estimation model parameters by cross-validation. After the update, the adhesion-segmented laying hen images are fed into the estimation model and the estimated weight values are output; the weights estimated over several passes and the average weight are recorded, together with the numbers of the laying hens whose weight meets the standard. The system can therefore build the model and search its parameters by itself, which effectively improves the accuracy of weight estimation, reduces the difficulty of operation, makes the device convenient for farm staff to use, and improves the user experience.

2. The invention applies scale normalization to the optimized image information with an image pyramid, extracts the features of each group of image information, and performs feature fusion with a bidirectional feature pyramid to obtain target detection boxes. Each image is enlarged and cropped according to its detection box to obtain the target image; the texture correlation and contrast computed by the sliding window are then used to separate the background and extract the target laying hen image, and the number of hens in each adhesion region is estimated. Each adhesion region is filled with its convex hull, the convexity defects of the region are obtained from the convex hull area and the adhesion region area, and the concave points inside each defect are extracted. For two adhering hens, a straight line through the two concave points splits the bodies directly; for several adhering individuals, the detected concave points are connected at random, each point being used only once, and after each connection the number and area of the connected regions are computed. When the area of every connected region is smaller than the maximum area, the matching is judged complete and the adhering hens are segmented according to the matching result. This cascaded analysis accurately obtains the image data that contain laying hens, improves target detection efficiency, reduces the subsequent data processing workload, and shortens the waiting time.
Brief Description of the Drawings

The accompanying drawings are provided to give a further understanding of the invention and constitute a part of the specification; together with the embodiments, they serve to explain the invention and do not limit it.

Fig. 1 is a system block diagram of the machine vision based laying hen weight estimation device proposed by the invention;

Fig. 2 is a flow chart of the machine vision based laying hen weight estimation method proposed by the invention.
Detailed Description of the Embodiments

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention.
Embodiment 1
Referring to Fig. 1, a machine vision based device for estimating the body weight of laying hens comprises a monitoring platform, an image acquisition module, an image enhancement module, a target extraction module, a weight estimation module, a parameter update module, an alarm module, and a block storage module.

The monitoring platform is used to verify the identity of farm staff and to receive feedback data from each module for the staff to review; the image acquisition module is used to collect image information of the breeding area.

The image enhancement module is used to optimize the collected image information of the breeding area.

Specifically, the collected breeding area image information is extracted frame by frame to obtain several groups of picture data, and each picture is partitioned into blocks according to its display ratio; the high frequency components of each block are analysed and extracted by Fourier transform and smoothed with Gaussian filtering. A window of a prescribed pixel size is moved over each group of image information, the gray level co-occurrence matrix under the window is computed at each position, and the texture features of the corresponding image information are computed from the gray level co-occurrence matrix.
It should be further noted that the texture features are calculated with formulas (1) to (4) above, in which P(i, j) denotes the value for the pixel pair with gray levels i and j and L denotes the number of gray levels. Formula (1) computes the texture energy: coarse textures have a large energy moment and fine textures a small one. Formula (2) computes the texture entropy: if the image contains no texture the entropy is close to zero, if it is full of fine texture the entropy is maximal, and if it contains little texture the entropy is small. Formula (3) computes the texture contrast: the larger the contrast, the sharper the visual appearance of the image. Formula (4) computes the texture correlation, which measures the linear dependence of neighbouring gray levels.
The target extraction module is used to extract laying hen images from the breeding area image information.

Specifically, scale normalization is applied to the optimized image information with an image pyramid and the features of each group of image information are extracted; feature fusion is then performed with a bidirectional feature pyramid to obtain the target detection boxes. Each image is enlarged and cropped according to its detection box to obtain the target image, the texture correlation and contrast computed by the sliding window are obtained, and the resulting feature values are stored, in array form, at the corresponding pixel positions. When the correlation and contrast satisfy the preset conditions, the current pixel region is judged to be a target laying hen and labelled 1; otherwise it is judged to be background, and the background of the target image is separated according to this judgment to extract the target laying hen image (see the sketch below). The shape factor of each laying hen image is computed, the images whose shape factor tends to 0 are selected and judged to contain regions of adhering hens, and the number of hens in each adhesion region is estimated. Each adhesion region is filled with its convex hull, and the convexity defects of the region are obtained from the convex hull area and the adhesion region area so as to obtain the concave points inside each defect. For two adhering hens, a straight line through the two concave points is drawn directly to split the adhering bodies; for adhesion of several individuals, the detected concave points are connected at random, each concave point being used only once, and after each connection the number of connected regions and the area of each connected region are computed. When the area of every connected region is smaller than the maximum area, the matching is judged complete and the adhering hens are segmented according to the matching result.
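A minimal sketch of the texture-threshold labelling described above follows; it assumes the per-window (contrast, correlation) features produced by the sliding-window step sketched earlier, and the threshold values corr_min and contrast_max are illustrative assumptions only.

```python
# Illustrative sketch only: mark window positions whose GLCM correlation and
# contrast satisfy preset conditions as target hen (1), the rest as background (0).
import numpy as np

def foreground_mask(shape, window_feats, win=32, corr_min=0.6, contrast_max=50.0):
    """window_feats: iterable of (y, x, contrast, correlation) per window position."""
    mask = np.zeros(shape, dtype=np.uint8)
    for y, x, contrast, correlation in window_feats:
        if correlation >= corr_min and contrast <= contrast_max:
            mask[y:y + win, x:x + win] = 1   # label the window as target laying hen
    return mask

# hen_only = gray_image * foreground_mask(gray_image.shape, sliding_glcm_features(gray_image))
```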
The weight estimation module is used to estimate the body weight of each laying hen from the collected laying hen images.

Specifically, the weight estimation module receives each group of laying hen images, extracts the edges of the binary image of each group with the Roberts operator, fits an ellipse to the edge contour by least squares, fills the pixel values of the fitted elliptical region, subtracts the elliptical region from the binary image region, and extracts the small binary image regions outside the ellipse. The edge information of each extracted small region is then extracted, the minimum distance from each edge point to the ellipse boundary is computed, the maximum of these distances and the coordinates of the corresponding point are taken, and mask elimination is performed centred on that point; these operations are repeated until the maximum distance is smaller than the preset target value. The weight estimation module extracts several groups of laying hen weight data from the block storage module, integrates them into a sample data set, computes the standard deviation of the data set to remove abnormal data, and standardizes and normalizes the remaining data. The normalized data are divided into a training set, a test set, and a validation set according to a fixed ratio; values are assigned to a group of parameter setting vectors of a convolutional neural network, the number of neurons in each network layer and the corresponding activation functions are determined, and the training set is input into the convolutional neural network for iterative training to obtain an estimation model. The accuracy of the estimation model is verified with the validation set, the test set is then input into the estimation model to test its performance and compute its evaluation index, and if the evaluation index exceeds the prescribed threshold the parameters of the estimation model are updated through the parameter update module. After the update is complete, the adhesion-segmented laying hen images are fed into the estimation model and the estimated weight values are output; the weights estimated over several passes and the average weight are recorded, together with the numbers of the laying hens whose weight meets the standard (a sketch of the edge and ellipse step is given below).
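A minimal sketch of the Roberts edge extraction and least-squares ellipse fitting described above follows; it assumes OpenCV and scikit-image are available, uses cv2.fitEllipse as the least-squares fit, and reduces the iterative mask-elimination loop to a single pass for brevity.

```python
# Illustrative sketch only: Roberts edges, least-squares ellipse fit of the hen
# silhouette, and the residual area lying outside the fitted ellipse.
import cv2
import numpy as np
from skimage.filters import roberts

def body_ellipse_features(mask):
    """mask: uint8 binary silhouette (0/1). Returns (ellipse, outside_area, edge_map)."""
    edge_map = (roberts(mask.astype(float)) > 0).astype(np.uint8)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)   # needs at least 5 points for fitEllipse
    ellipse = cv2.fitEllipse(contour)              # least-squares ellipse fit
    filled = np.zeros_like(mask)
    cv2.ellipse(filled, ellipse, color=1, thickness=-1)
    residual = cv2.subtract(mask, filled)          # silhouette pixels outside the ellipse
    return ellipse, int(residual.sum()), edge_map

# ellipse, outside_area, edges = body_ellipse_features(hen_mask)
```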
The parameter update module is used to collect operating information of the weight estimation module and to update its parameters; the alarm module is used to report abnormal laying hen information to the farm staff and to mark the hen concerned; the block storage module is used to store the laying hen detection information on a blockchain.
Embodiment 2

Referring to Fig. 2, a machine vision based method for estimating the body weight of laying hens proceeds as follows.

Collect and optimize the image information of the breeding area.

Compute and record the image texture features of the breeding area.

Extract the laying hen images from the breeding area images and perform adhesion segmentation.

Build the estimation model and optimize its parameters.

Specifically, the parameter update module initializes the network connection weights within the prescribed interval of the estimation model, submits training samples from the set of input and output pairs used during training, computes the output of the estimation model, compares the expected network output with the actual network output, and computes the local error of every neuron. When the local error exceeds the preset threshold, the weights of the estimation model are trained and updated according to the learning rule equation, and all possible data results are listed according to the preset learning rate and step size. For each group of data, any one subset is taken as the test set and the remaining subsets as the training set; after the test model is trained, the test set is evaluated and the root mean square error of the results is recorded. The test set is then replaced with another subset, the remaining subsets again form the training set, and the root mean square error is computed once more, until every subset has been predicted once; the parameter combination with the smallest root mean square error is selected as the optimal parameters within the data interval and replaces the original parameters of the estimation model.

Estimate and record the body weight of each laying hen with the estimation model.

Report abnormal laying hen weights and store the estimation information on a blockchain.
Claims (7)
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310537634.XA | 2023-05-12 | 2023-05-12 | Device and method for estimating weight of laying hen based on machine vision |

Applications Claiming Priority (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202310537634.XA | 2023-05-12 | 2023-05-12 | Device and method for estimating weight of laying hen based on machine vision |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| CN116563889A | 2023-08-08 |

Family

ID=87491248

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202310537634.XA | Device and method for estimating weight of laying hen based on machine vision | 2023-05-12 | 2023-05-12 |

Country Status (1)

| Country | Link |
|---|---|
| CN | CN116563889A (en) |
Cited By (4)

| Publication Number | Priority Date | Publication Date | Title |
|---|---|---|---|
| CN116915527A | 2023-08-15 | 2023-10-20 | A new minimalist conference system with no sense of operation |
| CN116949432A | 2023-08-14 | 2023-10-27 | Method for cultivating hair carbon source into diamond by MPCVD device |
| CN117953034A | 2024-01-26 | 2024-04-30 | Machine vision-based laying hen weight estimation system and method |
| CN119027984A | 2024-10-29 | 2024-11-26 | A visually assisted detection method for the health status of large-scale chicken farming |
Legal Events

- PB01: Publication
- SE01: Entry into force of request for substantive examination
- WW01: Invention patent application withdrawn after publication (application publication date: 2023-08-08)