CN101034440A - Identification method for spherical fruit and vegetables - Google Patents
- Publication number
- CN101034440A (application CN200710066695A)
- Authority
- CN
- China
- Prior art keywords
- fruit
- leaf
- gray level
- branch
- asm
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
Description
(1) Technical field
The invention relates to a method for identifying spherical fruits and vegetables.
(2) Background art
Harvesting fruits and vegetables is labor-intensive work. It is strongly seasonal, the working environment is harsh, and the physical workload is heavy. At present, harvesting still relies essentially on manual labor, its efficiency is low, and labor costs account for a considerable share of total fruit and vegetable production costs. Automating fruit and vegetable harvesting therefore has practical application value.
So far, no fruit and vegetable picking robot in China has actually been put into production use, and one of the main reasons is that the technical difficulties of the vision system have not been fully solved. A picking robot's vision system typically works as follows: a digital image of the fruit or vegetable is acquired first, and image-processing algorithms are then used to identify the fruit or vegetable and determine its position in the image. Sensors are the most important components of a machine vision system; they mainly include image sensors and distance sensors. The image sensor may be a monochrome CCD camera, a color camera or a stereo camera, and it is generally mounted on the manipulator or the end effector. Distance sensors include laser rangefinders and ultrasonic, radio and infrared sensors.
Most early vision systems for picking robots were two-dimensional. When the color contrast between leaves and fruit is strong, such a system can successfully detect the fruit among the leaves; however, when several fruits overlap or the fruit color is close to the background, recognition becomes difficult. In that case the fruit is usually detected from the different spectral reflectance characteristics of its surface, after which a three-dimensional vision sensor is used for precise identification. This is the most commonly used spectral-reflectance method, but under natural illumination it is often unsatisfactory because of noise and other interference in the image. Some researchers have used fruit shape to identify and locate the fruit; shape-based localization generally requires the target to have a complete boundary, which is hard to guarantee when the fruit is occluded. A method based on the Hough transform has also been proposed; it does not need the complete contour and locates the fruit center from the curvature of the target's shape, but it is very time-consuming. In short, because of the high complexity of the environment, current picking-robot vision systems perform well under controlled conditions, such as known illumination, but are far less successful under natural conditions such as natural lighting. To summarize, current methods for identifying fruit and vegetable targets mainly extract the target with color feature parameters after a color-space transformation, and the range of recognition problems they can solve is very limited.
(3) Contents of the invention
To overcome the shortcomings of existing methods for identifying spherical fruits and vegetables, namely ineffective recognition, complicated computation and low recognition accuracy, the present invention provides a method for identifying spherical fruits and vegetables that recognizes them effectively, is simple to compute and achieves high recognition accuracy.
The technical solution adopted by the present invention to solve this technical problem is as follows.
A method for identifying spherical fruits and vegetables, comprising the following steps:
(1) Acquire an image of the fruits and vegetables in a natural scene.
(2) Transform the acquired image into both the 2r-g-b color model and the LCD color model; establish a 2r-g two-dimensional coordinate system for the 2r-g-b color model and a Y-Cr two-dimensional coordinate system for the LCD color model.
(3) Following the classifier principle, construct discriminants F1 and F2 for the feature attributes Y and Cr of the Y-Cr coordinate system, using the mean vector m_fruit of the fruit target, the mean vector m_leaf of the leaves and the mean vector m_branch of the branches for the attribute values Y and Cr, according to formulas (1) and (2):

F1 = [Y, Cr]^T (m_fruit - m_leaf) - (1/2)[(m_fruit^T m_fruit) - (m_leaf^T m_leaf)]    (1)
F2 = [Y, Cr]^T (m_fruit - m_branch) - (1/2)[(m_fruit^T m_fruit) - (m_branch^T m_branch)]    (2)

For the feature attributes 2r and g of the 2r-g coordinate system, construct discriminants F3 and F4, using the mean vector m'_fruit of the fruit target, the mean vector m'_leaf of the leaves and the mean vector m'_branch of the branches for the attribute values 2r and g, according to formulas (3) and (4):

F3 = [2r, g]^T (m'_fruit - m'_leaf) - (1/2)[(m'_fruit^T m'_fruit) - (m'_leaf^T m'_leaf)]    (3)
F4 = [2r, g]^T (m'_fruit - m'_branch) - (1/2)[(m'_fruit^T m'_fruit) - (m'_branch^T m'_branch)]    (4)

From these discriminants the separating lines are obtained (the line separating leaves from branches is omitted), and the input image is divided into equally sized blocks of L × L pixels, where L is an odd number.
(4) Select two blocks B1 and B2 in sequence and compute their gray-level co-occurrence matrices in the four directions. Let one region of the image have a size of Nc × Nr pixels and let the gray levels be G = 0, 1, ..., Nq-1. The co-occurrence matrix P(d,q) is a square matrix of size Nq × Nq containing the frequencies of all pixel pairs at distance d, in direction q, with gray levels a and b; an element of P(d,q) is written P(a, b | d, q). Two pixels (k, l) and (m, n) are taken in the region, where k, m = 1, 2, ..., Nc and l, n = 1, 2, ..., Nr.
Compute the two feature values of each gray-level co-occurrence matrix, the entropy ENT and the energy ASM, according to formulas (5) and (6):

ENT = -∑_a ∑_b p(a, b) · log p(a, b)    (5)
ASM = ∑_a ∑_b p(a, b)^2    (6)

where a and b denote pixel gray levels and p(a, b) denotes the gray-level co-occurrence matrix. The feature values of the co-occurrence matrices of the four directions are then averaged to obtain the mean feature values ENT and ASM.
(5) For two adjacent blocks B1 and B2: if the discriminants F1, F2, F3 and F4 are greater than 0, or the differences between the mean ENT and ASM values of B1 and B2 are smaller than a preset threshold T, then B1 is retained and confirmed as fruit; if the texture difference is greater than the threshold T and the discriminants F1, F2, F3 and F4 are less than 0, the whole block B1 is discarded. The relevant parameters of B2 are then assigned to B1, the parameters of B2 are cleared, the next block in sequence is taken as B2, and the steps are repeated until all blocks have been processed.
As a preferred scheme: in step (5), for the last block, if the discriminants F1, F2, F3 and F4 are greater than 0 it may be retained directly as fruit by default; otherwise it is regarded as background and discarded directly.
The technical idea of the invention is as follows: since fruit pixels cluster in color space, the scheme draws on the principle of the average-distance classifier to classify and recognize fruit and vegetable images taken in natural scenes.
The average-distance classifier assigns image data to classes by analyzing the numerical features of different fruit and vegetable images. The classification algorithm has a training phase and a testing phase. In the training phase, the feature attributes that distinguish particular images are identified and, based on these attributes, a unique description of each class is produced; that is, a training set is generated and discriminants are constructed. In general, from feature attributes m and n, a minimum-distance classifier, an average-distance classifier or another classifier yields a discriminant of the form F(m, n) = am + bn + c, where a, b and c are constants and F(m, n) = 0 defines the class boundary. In the testing phase these feature spaces, i.e. the discriminants, are used; each discriminant splits the image into two parts, which achieves the goal of recognizing the image.
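To make the classifier construction above concrete, the following is a minimal sketch of a two-class distance-based discriminant of the form F(x) = x^T(m_i - m_j) - (1/2)(m_i^T m_i - m_j^T m_j); the function names and the sample training pixels are illustrative assumptions, not values taken from the patent.

```python
import numpy as np

def class_mean(samples):
    """Mean vector m_j of one training class (rows are feature vectors)."""
    return np.mean(np.asarray(samples, dtype=float), axis=0)

def pairwise_discriminant(m_i, m_j):
    """Return F(x) = x^T (m_i - m_j) - 1/2 (m_i^T m_i - m_j^T m_j).

    F(x) > 0 assigns x to class i, F(x) < 0 to class j, and F(x) = 0 is the
    separating line between the two classes."""
    m_i = np.asarray(m_i, dtype=float)
    m_j = np.asarray(m_j, dtype=float)
    offset = 0.5 * (m_i @ m_i - m_j @ m_j)
    return lambda x: float(np.asarray(x, dtype=float) @ (m_i - m_j) - offset)

# Illustrative use with made-up [Y, Cr] training pixels
m_fruit = class_mean([[95, 115], [93, 113]])
m_leaf  = class_mean([[78, 219], [76, 218]])
F1 = pairwise_discriminant(m_fruit, m_leaf)   # fruit vs. leaf
print(F1([90, 120]))   # positive value -> classified as fruit
```

The same construction, applied to the (2r, g) features, would give the discriminants used in the second coordinate system.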
For the fruit target, the leaves and the branches, a classifier is designed in each color space; three discriminants per color space would separate them completely: F(fruit/leaf), abbreviated F1, separates fruit from leaves; F(fruit/branch), abbreviated F2, separates fruit from branches; and F(leaf/branch), abbreviated F3, separates leaves from branches. Since leaves and branches are both treated as background, only the two discriminants F1 and F2 actually need to be constructed.
Combining the characteristics of the 2r-g-b color model and the LCD (luminance and color difference) color model yields a new color model, LNM (LCD color model combined with Normalized-RGB color model). This model avoids the sensitivity of the LCD color model to illumination and, at the same time, avoids the drawback that the 2r-g-b color model cannot recognize fruit whose red component is weak. The LNM features of a pixel are obtained from its RGB values: the normalized chromaticity components 2r and g together with the luminance and chrominance components Y and Cr.
Because two color spaces are fused, a total of four discriminants are constructed in the recognition process.
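The patent's exact LNM-to-RGB conversion expressions are given as a formula image and are not reproduced in this text; the sketch below therefore only illustrates one plausible way to compute the four per-pixel features used in the two coordinate systems, assuming standard normalized-RGB chromaticities and BT.601-style Y/Cr coefficients, which are assumptions of this sketch.

```python
import numpy as np

def to_lnm_features(rgb_image):
    """Per-pixel features for the two 2-D coordinate systems:
    (2r, g) from the normalized-RGB (2r-g-b) model and (Y, Cr) from a
    luminance/color-difference model.  BT.601-style coefficients are
    assumed here; the patent's own LNM expressions may differ."""
    img = rgb_image.astype(np.float64)
    R, G, B = img[..., 0], img[..., 1], img[..., 2]
    s = R + G + B + 1e-9                 # avoid division by zero
    r, g = R / s, G / s                  # normalized chromaticities
    Y  = 0.299 * R + 0.587 * G + 0.114 * B
    Cr = 0.5 * R - 0.419 * G - 0.081 * B + 128.0
    return 2.0 * r, g, Y, Cr
```

Each pixel (or block mean) then supplies the [Y, Cr] and [2r, g] vectors that are fed into the discriminants F1 to F4.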
At the same time, the gray-level co-occurrence matrix is used to extract image texture features. The concept of the co-occurrence matrix is given first: let a region of the image have a size of Nc × Nr pixels and let the gray levels be G = 0, 1, ..., Nq-1. The co-occurrence matrix P(d,q) is then a square matrix of size Nq × Nq that contains the frequencies of all pixel pairs at distance d, in direction q, with gray levels a and b. An element of P(d,q) is written P(a, b | d, q). Take any two pixels (k, l) and (m, n) in the region, where k, m = 1, 2, ..., Nc and l, n = 1, 2, ..., Nr. Then:

P(a, b | d, q) = ∑ [(k, l), (m, n)]

where the indicator [(k, l), (m, n)] = 1 if the pixel pair satisfies the offset defined by the distance d and the direction q and g(k, l) = a, g(m, n) = b, and [(k, l), (m, n)] = 0 otherwise. Here the function g(k, l) denotes the gray level of the pixel at (k, l). The direction q can take the values 0°, 45°, 90° and 135°, so for a given distance d there are four gray-level co-occurrence matrices:

p(a, b | d, 0°)   = ∑ [(k, l), (m, n)],  k - m = 0, |l - n| = d, g(k, l) = a, g(m, n) = b
p(a, b | d, 45°)  = ∑ [(k, l), (m, n)],  (k - m = d, l - n = -d) or (k - m = -d, l - n = d), g(k, l) = a, g(m, n) = b
p(a, b | d, 90°)  = ∑ [(k, l), (m, n)],  |k - m| = d, l - n = 0, g(k, l) = a, g(m, n) = b
p(a, b | d, 135°) = ∑ [(k, l), (m, n)],  (k - m = d, l - n = d) or (k - m = -d, l - n = -d), g(k, l) = a, g(m, n) = b

The fruits and leaves of several kinds of fruit and vegetable are sampled, and the gray-level co-occurrence matrix is used to extract image features. The texture feature values that can distinguish fruit from leaves are the entropy and the energy, given by expressions (5) and (6):

ENT = -∑_a ∑_b p(a, b) · log p(a, b)    (5)
ASM = ∑_a ∑_b p(a, b)^2    (6)

where a and b denote pixel gray levels and p(a, b) denotes the gray-level co-occurrence matrix.
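As an illustration of the texture features just defined, the following sketch computes a gray-level co-occurrence matrix for one direction and the entropy and energy averaged over the four directions. The offset table, the number of gray levels and the normalization are assumptions of this sketch, not values fixed by the patent.

```python
import numpy as np

# Assumed pixel offsets (dk, dl) for the four directions at unit distance
OFFSETS = {0: (0, 1), 45: (-1, 1), 90: (-1, 0), 135: (-1, -1)}

def glcm(gray, levels, direction, d=1):
    """Symmetric, normalized co-occurrence matrix P(a, b | d, direction).
    `gray` is a 2-D array already quantized to values 0..levels-1."""
    dk, dl = OFFSETS[direction]
    dk, dl = dk * d, dl * d
    P = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = gray.shape
    for k in range(rows):
        for l in range(cols):
            m, n = k + dk, l + dl
            if 0 <= m < rows and 0 <= n < cols:
                P[gray[k, l], gray[m, n]] += 1.0
                P[gray[m, n], gray[k, l]] += 1.0   # count the pair both ways
    total = P.sum()
    return P / total if total > 0 else P

def ent_asm(P, eps=1e-12):
    """Entropy ENT and energy (angular second moment) ASM of one GLCM."""
    ent = -np.sum(P * np.log(P + eps))
    asm = np.sum(P * P)
    return ent, asm

def mean_texture_features(gray, levels=16, d=1):
    """Average ENT and ASM over the four directions, as used for each block."""
    feats = [ent_asm(glcm(gray, levels, q, d)) for q in OFFSETS]
    ents, asms = zip(*feats)
    return float(np.mean(ents)), float(np.mean(asms))
```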
A sub-block-based region-growing method is then used, which largely overcomes the under-segmentation that occurs when a single color parameter is used.
The beneficial effects of the invention are mainly that spherical fruits and vegetables can be identified effectively, the computation is simple and the recognition accuracy is high.
(4) Description of the drawings
Figure 1 is a schematic flow chart of the invention;
Figure 2 is a schematic diagram of the classifier;
Figure 3 is a schematic diagram of the 2r-g-b classifier;
Figure 4 is a schematic diagram of the LCD classifier;
Figure 5 shows the attribute distribution of fruit and background;
Figure 6 shows the separating lines obtained from the discriminants.
(5) Specific embodiments
The invention is further described below with reference to the accompanying drawings.
Referring to Figures 1 to 6, a method for identifying spherical fruits and vegetables comprises the following steps. (1) Acquire an image of the fruits and vegetables in a natural scene. (2) Transform the acquired image into both the 2r-g-b color model and the LCD color model; establish a 2r-g two-dimensional coordinate system for the 2r-g-b color model and a Y-Cr two-dimensional coordinate system for the LCD color model. (3) Following the classifier principle, construct discriminants F1 and F2 for the feature attributes Y and Cr of the Y-Cr coordinate system, using the mean vector m_fruit of the fruit target, the mean vector m_leaf of the leaves and the mean vector m_branch of the branches for the attribute values Y and Cr, according to formulas (1) and (2):

F1 = [Y, Cr]^T (m_fruit - m_leaf) - (1/2)[(m_fruit^T m_fruit) - (m_leaf^T m_leaf)]    (1)
F2 = [Y, Cr]^T (m_fruit - m_branch) - (1/2)[(m_fruit^T m_fruit) - (m_branch^T m_branch)]    (2)

For the feature attributes 2r and g of the 2r-g coordinate system, construct discriminants F3 and F4, using the mean vector m'_fruit of the fruit target, the mean vector m'_leaf of the leaves and the mean vector m'_branch of the branches for the attribute values 2r and g, according to formulas (3) and (4):

F3 = [2r, g]^T (m'_fruit - m'_leaf) - (1/2)[(m'_fruit^T m'_fruit) - (m'_leaf^T m'_leaf)]    (3)
F4 = [2r, g]^T (m'_fruit - m'_branch) - (1/2)[(m'_fruit^T m'_fruit) - (m'_branch^T m'_branch)]    (4)

From these discriminants the separating lines are obtained (the line separating leaves from branches is omitted), and the input image is divided into equally sized blocks of L × L pixels, where L is an odd number.
(4) Select two blocks B1 and B2 in sequence and compute their gray-level co-occurrence matrices in the four directions. Let one region of the image have a size of Nc × Nr pixels and let the gray levels be G = 0, 1, ..., Nq-1. The co-occurrence matrix P(d,q) is a square matrix of size Nq × Nq containing the frequencies of all pixel pairs at distance d, in direction q, with gray levels a and b; an element of P(d,q) is written P(a, b | d, q). Two pixels (k, l) and (m, n) are taken in the region, where k, m = 1, 2, ..., Nc and l, n = 1, 2, ..., Nr.
Compute the two feature values of each gray-level co-occurrence matrix, the entropy ENT and the energy ASM, according to formulas (5) and (6):

ENT = -∑_a ∑_b p(a, b) · log p(a, b)    (5)
ASM = ∑_a ∑_b p(a, b)^2    (6)

where a and b denote pixel gray levels and p(a, b) denotes the gray-level co-occurrence matrix. The feature values of the co-occurrence matrices of the four directions are then averaged to obtain the mean feature values ENT and ASM.
(5) For two adjacent blocks B1 and B2: if the discriminants F1, F2, F3 and F4 are greater than 0, or the differences between the mean ENT and ASM values of B1 and B2 are smaller than a preset threshold T, then B1 is retained and confirmed as fruit; if the texture difference is greater than the threshold T and the discriminants F1, F2, F3 and F4 are less than 0, the whole block B1 is discarded. The relevant parameters of B2 are then assigned to B1, the parameters of B2 are cleared, the next block in sequence is taken as B2, and the steps are repeated until all blocks have been processed.
In step (5), for the last block, if the discriminants F1, F2, F3 and F4 are greater than 0 it may be retained directly as fruit by default; otherwise it is regarded as background and discarded directly.
The concrete steps of this embodiment are as follows.
(1) Acquire images of the fruits and vegetables in a natural scene with a digital camera or a video camera.
(2) Transform the acquired image into the 2r-g-b color model and the LCD color model simultaneously.
(3) Following the classifier principle, construct discriminants for the feature attributes 2r and g of the 2r-g coordinate system of the 2r-g-b model and for the feature attributes Y and Cr of the Y-Cr coordinate system of the LCD model, giving the four discriminants F1, F2, F3 and F4. At the same time, divide the input image into equally sized blocks of L × L pixels, where L is an odd number.
By way of example:
Assume that each training class is represented by a mean vector

m_j = (1/N_j) ∑_{x ∈ W_j} x,  j = 1, 2, ..., M

where N_j is the number of training pattern vectors of class W_j. Suppose there are a fruit target, leaves and branches characterized by the feature attribute values Y and Cr, and that these attributes are described in a two-dimensional feature space, as shown in Figure 5. The mean vectors are obtained as m_fruit = [94.14, 114.26]^T, m_leaf = [77.02, 218.45]^T and m_branch = [52.88, 33.67]^T; they are marked with # in Figure 5.
On this basis, any given pattern x can be assigned to a class by determining the prototype closest to it. If the Euclidean distance is used to measure similarity, the distance to each prototype is

D_j(x) = ||x - m_j||  for j = 1, 2, ..., M

and choosing the smallest distance is equivalent to choosing the largest decision function

F_j(x) = x^T m_j - (1/2)(m_j^T m_j)  for j = 1, 2, ..., M.

In this example, simply assume that the obtained mean of fruit is [a, b], the mean of leaf is [c, d] and the mean of branch is [e, f]. Then m_fruit = [a, b]^T, m_leaf = [c, d]^T and m_branch = [e, f]^T, and the following decision functions are obtained:

F_fruit(x)  = a·x1 + b·x2 - (1/2)(a^2 + b^2)
F_leaf(x)   = c·x1 + d·x2 - (1/2)(c^2 + d^2)
F_branch(x) = e·x1 + f·x2 - (1/2)(e^2 + f^2)

Finally, from these decision functions, the discriminant separating classes w_i and w_j satisfies

F_i(x) - F_j(x) = 0,

that is,

x^T (m_i - m_j) - (1/2)(m_i^T m_i - m_j^T m_j) = 0.

In this example the discriminants F1, F2 and F3 are therefore:

F1 = (a - c)·x1 + (b - d)·x2 - (1/2)(a^2 + b^2 - c^2 - d^2)
F2 = (a - e)·x1 + (b - f)·x2 - (1/2)(a^2 + b^2 - e^2 - f^2)
F3 = (c - e)·x1 + (d - f)·x2 - (1/2)(c^2 + d^2 - e^2 - f^2)

From these discriminants the separating lines can be drawn; the line separating leaves from branches is omitted, as shown in Figure 6.
If a different prototype distance is used, for example the standardized Euclidean distance or the Mahalanobis distance, the resulting discriminants are different.
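Plugging the sample means quoted above (m_fruit = [94.14, 114.26], m_leaf = [77.02, 218.45], m_branch = [52.88, 33.67]) into the general expression F_i(x) - F_j(x) gives the numeric separating lines; the short check below, which is not part of the patent text, simply evaluates those coefficients.

```python
import numpy as np

m_fruit  = np.array([94.14, 114.26])   # mean [Y, Cr] of fruit samples (from the text)
m_leaf   = np.array([77.02, 218.45])
m_branch = np.array([52.88, 33.67])

def line_coeffs(m_i, m_j):
    """Coefficients (w, c) of F(x) = w . x + c = x^T(m_i - m_j) - 1/2(m_i^T m_i - m_j^T m_j)."""
    return m_i - m_j, -0.5 * (m_i @ m_i - m_j @ m_j)

w1, c1 = line_coeffs(m_fruit, m_leaf)     # F1: fruit vs. leaf
w2, c2 = line_coeffs(m_fruit, m_branch)   # F2: fruit vs. branch
print(f"F1(Y,Cr) = {w1[0]:.2f}*Y {w1[1]:+.2f}*Cr {c1:+.2f}")
print(f"F2(Y,Cr) = {w2[0]:.2f}*Y {w2[1]:+.2f}*Cr {c2:+.2f}")
# A pixel with F1 > 0 and F2 > 0 falls on the fruit side of both lines.
```

With these values the lines come out approximately as F1 ≈ 17.12·Y - 104.19·Cr + 15867.4 and F2 ≈ 41.26·Y + 80.59·Cr - 8993.9.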
(4) Select two blocks B1 and B2 in sequence and compute, for each, the gray-level co-occurrence matrices of the four directions and the two feature values of each matrix, the entropy ENT and the energy ASM, according to formulas (5) and (6):

ENT = -∑_a ∑_b p(a, b) · log p(a, b)    (5)
ASM = ∑_a ∑_b p(a, b)^2    (6)

where a and b denote pixel gray levels and p(a, b) denotes the gray-level co-occurrence matrix.
(5) Average the feature values of the co-occurrence matrices of the four directions to obtain the mean feature values ENT and ASM, which serve as the texture features for the decision.
(6) For two adjacent blocks B1 and B2, decide according to the discriminants F1, F2, F3 and F4 constructed from the color models and according to the texture feature values. If all the discriminants are greater than 0, or the differences between the mean ENT and ASM values of B1 and B2 are smaller than the preset threshold T, retain B1 and push it onto a stack; at the same time assign the relevant parameters of B2 to B1, clear the parameters of B2, take the next block in sequence as B2 and repeat the above steps. If the texture difference is greater than the threshold T and the discriminants F1, F2, F3 and F4 are all less than 0, discard the whole block B1. Continue until all blocks have been processed.
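The block-by-block decision of steps (4) to (6) can be summarized by the following sketch; the block representation, the single threshold T applied to both the ENT and ASM differences, and all function names are assumptions made for illustration rather than the patent's implementation.

```python
def classify_blocks(blocks, discriminants, texture_of, T):
    """Sequentially compare adjacent blocks B1, B2 and keep those judged as fruit.

    blocks        : list of image patches of size L x L
    discriminants : callables F1..F4; F(block) > 0 means 'fruit side'
    texture_of    : returns (mean ENT, mean ASM) of a block
    T             : texture-difference threshold
    """
    if not blocks:
        return []
    fruit_blocks = []          # acts as the stack of retained (fruit) blocks
    b1 = blocks[0]
    for b2 in blocks[1:]:
        color_ok = all(F(b1) > 0 for F in discriminants)
        ent1, asm1 = texture_of(b1)
        ent2, asm2 = texture_of(b2)
        texture_ok = abs(ent1 - ent2) < T and abs(asm1 - asm2) < T
        if color_ok or texture_ok:
            fruit_blocks.append(b1)        # B1 is confirmed as fruit
        # otherwise B1 is discarded as background
        b1 = b2                            # B2's parameters become the new B1
    if all(F(b1) > 0 for F in discriminants):
        fruit_blocks.append(b1)            # last block: kept only if the color test passes
    return fruit_blocks
```

In practice the discriminants would be evaluated on each block's mean [Y, Cr] and [2r, g] vectors, and texture_of would call the co-occurrence-matrix routine sketched earlier.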
Claims (2)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CNB2007100666953A CN100463001C (en) | 2007-01-12 | 2007-01-12 | A method for identifying spherical fruits and vegetables |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101034440A true CN101034440A (en) | 2007-09-12 |
CN100463001C CN100463001C (en) | 2009-02-18 |
Family
ID=38730987
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CNB2007100666953A Expired - Fee Related CN100463001C (en) | 2007-01-12 | 2007-01-12 | A method for identifying spherical fruits and vegetables |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN100463001C (en) |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011106854A1 (en) * | 2010-03-03 | 2011-09-09 | Agência De Inovação - Inova - Universidade Estadual De Campinas - Unicamp | Method and system for recognition and classification of objects in semi-controlled environments |
CN102289671A (en) * | 2011-09-02 | 2011-12-21 | 北京新媒传信科技有限公司 | Method and device for extracting texture feature of image |
CN104573664A (en) * | 2015-01-21 | 2015-04-29 | 深圳华侨城文化旅游科技有限公司 | Reconstruction system and method of 3D scene of shooting path |
CN108271531A (en) * | 2017-12-29 | 2018-07-13 | 湖南科技大学 | The fruit automation picking method and device of view-based access control model identification positioning |
CN108875675A (en) * | 2018-06-28 | 2018-11-23 | 西南科技大学 | A kind of intelligent fruits recognition methods can be applied to supermarket self-checkout system |
CN108901540A (en) * | 2018-06-28 | 2018-11-30 | 重庆邮电大学 | Fruit tree light filling and fruit thinning method based on artificial bee colony fuzzy clustering algorithm |
CN110598539A (en) * | 2019-08-02 | 2019-12-20 | 焦作大学 | Sports goods classification device and method based on computer vision recognition |
CN111401442A (en) * | 2020-03-16 | 2020-07-10 | 中科立业(北京)科技有限公司 | Fruit identification method based on deep learning |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6296186B1 (en) * | 1998-11-19 | 2001-10-02 | Ncr Corporation | Produce recognition system including a produce shape collector |
EP1161526A4 (en) * | 1999-03-12 | 2002-06-05 | Exelixis Plant Sciences Inc | Trait-associated gene identification method |
CN1120656C (en) * | 2000-08-22 | 2003-09-10 | 中国农业大学 | Automatic recognizer of seedling leaf direction |
CN1394699A (en) * | 2002-08-03 | 2003-02-05 | 浙江大学 | Fruit quality real time detection and grading robot system |
CN100337243C (en) * | 2003-12-31 | 2007-09-12 | 中国农业大学 | A fruit surface image collection system and method |
- 2007-01-12: CN application CNB2007100666953A, granted as CN100463001C (en); status: not active, Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
CN100463001C (en) | 2009-02-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101034440A (en) | Identification method for spherical fruit and vegetables | |
Zhu et al. | Lesion detection of endoscopy images based on convolutional neural network features | |
CN1260680C (en) | method and apparatus for digital image segmentation | |
CN111652292B (en) | Similar object real-time detection method and system based on NCS and MS | |
CN107945200B (en) | Image Binarization Segmentation Method | |
CN1932847A (en) | Method for detecting colour image human face under complex background | |
CN1977286A (en) | Object recognition method and apparatus therefor | |
CN1794265A (en) | Method and device for distinguishing face expression based on video frequency | |
CN110991389B (en) | A Matching Method for Determining the Appearance of Target Pedestrians in Non-overlapping Camera Views | |
CN1207924C (en) | Method for testing face by image | |
CN1777915A (en) | Face image candidate region retrieval method, retrieval system and retrieval program | |
CN107909081A (en) | The quick obtaining and quick calibrating method of image data set in a kind of deep learning | |
CN1761205A (en) | System for detecting eroticism and unhealthy images on network based on content | |
CN110032946B (en) | Aluminum/aluminum blister packaging tablet identification and positioning method based on machine vision | |
CN111126240A (en) | A three-channel feature fusion face recognition method | |
CN110749598A (en) | Silkworm cocoon surface defect detection method integrating color, shape and texture characteristics | |
CN116229189B (en) | Image processing method, device, equipment and storage medium based on fluorescence endoscope | |
CN112464983A (en) | Small sample learning method for apple tree leaf disease image classification | |
Rodríguez-Sánchez et al. | A deep learning approach for detecting and correcting highlights in endoscopic images | |
CN104504161B (en) | A kind of image search method based on robot vision platform | |
CN105374010A (en) | A panoramic image generation method | |
CN109840498B (en) | A real-time pedestrian detection method, neural network and target detection layer | |
CN1632823A (en) | Automatic fingerprint classification system and method | |
CN110188811A (en) | Underwater target detection method based on normed gradient feature and convolutional neural network | |
Shweta et al. | External feature based quality evaluation of Tomato using K-means clustering and support vector classification |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
C17 | Cessation of patent right | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20090218 Termination date: 20120112 |