CN112581483B - Self-learning-based plant leaf vein segmentation method and device - Google Patents
- Publication number: CN112581483B (application CN202011528023.1A)
- Authority: CN (China)
- Prior art keywords: leaf, map, vein, extraction module, leaf vein
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
All classifications fall under G (Physics) > G06 (Computing; Calculating or Counting) > G06T (Image data processing or generation, in general):

- G06T7/00 Image analysis > G06T7/10 Segmentation; Edge detection > G06T7/12 Edge-based segmentation
- G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/20 Special algorithmic details > G06T2207/20081 Training; Learning
- G06T2207/00 > G06T2207/20 > G06T2207/20084 Artificial neural networks [ANN]
- G06T2207/00 > G06T2207/30 Subject of image; Context of image processing > G06T2207/30181 Earth observation > G06T2207/30188 Vegetation; Agriculture
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
Description
Technical Field

The present application relates to the technical field of data processing, and in particular to a self-learning-based method and device for segmenting plant leaf veins.

Background

Leaves are important plant organs, and the leaf outline and veins are key components of leaf morphology. They carry a plant's intrinsic attributes and important genetic information; veins in particular are regarded as the "fingerprint" of a leaf, serving both as important parameters for measuring biochemical processes such as plant growth and development, growth status, and genetic traits, and as a widely used basis for plant classification and identification in agricultural production and scientific research. Given the enormous number of plant species on Earth, extracting leaf veins is of great significance to botany, agricultural production, and horticulture. Traditional vein segmentation is performed manually, for example with chemical reagents, high-resolution scanners, or X-rays; these approaches depend on specialized technicians and complex equipment and are inefficient. With the development of artificial intelligence and computer vision, deep learning methods show promise for this problem; however, existing methods usually require a large number of finely annotated images for training, and the annotation process is tedious and labor-intensive.
Summary of the Invention

The present application aims to solve, at least to some extent, one of the technical problems in the related art.

To this end, the first objective of the present application is to propose a self-learning-based method for segmenting plant leaf veins. Using a deep learning framework, the method clearly and accurately extracts the leaf outline and veins from a picture containing a plant leaf. It can directly process photos taken against a simple background, without any special image preprocessing, and trains the neural network by iterative self-learning using as few labeled picture samples as possible (e.g., 10 pictures) together with a large number of unlabeled picture samples (e.g., hundreds of pictures); through continuous iteration, increasingly clear and complete outline and vein segmentation maps are obtained. The algorithm greatly reduces the training process's need for large amounts of labeled data and makes full use of the information in the unlabeled pictures themselves, so that over repeated iterations the model learns features related to the outline and veins and extracts them more clearly and accurately. Ultimately, the outline and veins of an input picture can be segmented end to end, which not only removes the complex image preprocessing of earlier approaches and improves generalization, but also fully exploits the information in unlabeled data, helping the model extract the characteristic information of leaves.
The second objective of the present application is to propose a self-learning-based device for segmenting plant leaf veins.

To achieve the above objectives, an embodiment of the first aspect of the present application proposes a self-learning-based method for segmenting plant leaf veins, including:

obtaining labeled plant leaf picture samples, and training on the labeled plant leaf samples with a deep neural network model to obtain a feature extraction module, a coarse vein extraction module, and a fine vein extraction module;

obtaining an unlabeled plant leaf picture and feeding it to the feature extraction module, the coarse vein extraction module, and the fine vein extraction module for processing, to obtain a coarse vein map and a fine vein map of the unlabeled plant leaf picture;

fusing the coarse vein map and the fine vein map to obtain a vein segmentation map of the unlabeled plant leaf picture, using the vein segmentation map as annotation information for the unlabeled plant leaf picture, and training the deep neural network model according to a preset loss function, so that the trained deep neural network model processes a plant leaf picture to be processed and obtains a plant leaf segmentation result.

In the self-learning-based plant leaf vein segmentation method of the embodiments of the present application, labeled plant leaf samples are used to train a deep neural network model, yielding a feature extraction module, a coarse vein extraction module, and a fine vein extraction module that process unlabeled plant leaf pictures to obtain coarse and fine vein maps; the coarse and fine vein maps are fused into a vein segmentation map that serves as annotation information for the unlabeled plant leaf pictures, and the deep neural network model is trained according to a preset loss function so that the trained model processes plant leaf pictures to be processed and obtains plant leaf segmentation results. In this way, a very small number of labeled pictures lets the model automatically learn the information in a large number of unlabeled pictures, improving generalization and increasing the efficiency and accuracy of plant leaf vein segmentation.
In an embodiment of the present application, obtaining an unlabeled plant leaf picture and feeding it to the feature extraction module, the coarse vein extraction module, and the fine vein extraction module for processing, to obtain a coarse vein map and a fine vein map of the unlabeled plant leaf picture, includes:

the feature extraction module performing feature extraction on the unlabeled plant leaf picture to obtain a feature map;

the coarse vein extraction module processing the feature map to obtain intermediate-layer feature maps and the coarse vein map;

the fine vein extraction module processing the feature map and the intermediate-layer feature maps to obtain the fine vein map.

In an embodiment of the present application, using the vein segmentation map as annotation information for the unlabeled plant leaf picture and training the deep neural network model according to a preset loss function includes:

inputting the vein segmentation map, as annotation information for the unlabeled plant leaf picture, into the deep neural network model for training, computing the difference between the training result and the annotation information according to the loss function, and adjusting the parameters of the deep neural network model until a training condition is met.
In an embodiment of the present application, a confidence map is generated from the coarse vein segmentation map, and the loss is computed between the high-confidence regions of the confidence map and the corresponding regions of the labeled plant leaf picture samples. The loss function is:

$l(x, y) = L = \{l_1, l_2, \ldots, l_N\}^T$  (1)

$l_n = -w_n \left[ y_n \cdot \log x_n \right]$  (2)

where $x_n$ is the predicted value of the $n$-th pixel output by the coarse vein extraction module, $y_n$ is the label value at the corresponding position of the labeled picture, and $w_n \in \{0, 1\}$ is the confidence weight: $w_n$ is 1 when the corresponding output position has high confidence, and 0 otherwise.
In an embodiment of the present application, the loss between the fine vein map and the corresponding regions of the labeled plant leaf picture samples is computed. The loss function is:

$l(x, y) = L = \{l_1, l_2, \ldots, l_N\}^T$  (3)

$l_n = -y_n \cdot \log x_n$  (4)

where $x_n$ is the predicted value of the $n$-th pixel output by the fine vein extraction module and $y_n$ is the label value at the corresponding position of the labeled picture.
To achieve the above objectives, an embodiment of the second aspect of the present application proposes a self-learning-based device for segmenting plant leaf veins, including:

an acquisition-and-training module, configured to obtain labeled plant leaf picture samples and to train on the labeled plant leaf samples with a deep neural network model to obtain a feature extraction module, a coarse vein extraction module, and a fine vein extraction module;

an acquisition module, configured to obtain an unlabeled plant leaf picture and feed it to the feature extraction module, the coarse vein extraction module, and the fine vein extraction module for processing, to obtain a coarse vein map and a fine vein map of the unlabeled plant leaf picture;

a processing module, configured to fuse the coarse vein map and the fine vein map to obtain a vein segmentation map of the unlabeled plant leaf picture, to use the vein segmentation map as annotation information for the unlabeled plant leaf picture, and to train the deep neural network model according to a preset loss function, so that the trained deep neural network model processes a plant leaf picture to be processed and obtains a plant leaf segmentation result.

In the self-learning-based plant leaf vein segmentation device of the embodiments of the present application, labeled plant leaf samples are used to train a deep neural network model, yielding a feature extraction module, a coarse vein extraction module, and a fine vein extraction module that process unlabeled plant leaf pictures to obtain coarse and fine vein maps; the coarse and fine vein maps are fused into a vein segmentation map that serves as annotation information for the unlabeled plant leaf pictures, and the deep neural network model is trained according to a preset loss function so that the trained model processes plant leaf pictures to be processed and obtains plant leaf segmentation results. In this way, a very small number of labeled pictures lets the model automatically learn the information in a large number of unlabeled pictures, improving generalization and increasing the efficiency and accuracy of plant leaf vein segmentation.
In an embodiment of the present application, the acquisition module is specifically configured such that:

the feature extraction module performs feature extraction on the unlabeled plant leaf picture to obtain a feature map;

the coarse vein extraction module processes the feature map to obtain intermediate-layer feature maps and the coarse vein map;

the fine vein extraction module processes the feature map and the intermediate-layer feature maps to obtain the fine vein map.

In an embodiment of the present application, the processing module is specifically configured to:

input the vein segmentation map, as annotation information for the unlabeled plant leaf picture, into the deep neural network model for training, compute the difference between the training result and the annotation information according to the loss function, and adjust the parameters of the deep neural network model until a training condition is met.
In an embodiment of the present application, a confidence map is generated from the coarse vein segmentation map, and the loss is computed between the high-confidence regions of the confidence map and the corresponding regions of the labeled plant leaf picture samples. The loss function is:

$l(x, y) = L = \{l_1, l_2, \ldots, l_N\}^T$  (1)

$l_n = -w_n \left[ y_n \cdot \log x_n \right]$  (2)

where $x_n$ is the predicted value of the $n$-th pixel output by the coarse vein extraction module, $y_n$ is the label value at the corresponding position of the labeled picture, and $w_n \in \{0, 1\}$ is the confidence weight: $w_n$ is 1 when the corresponding output position has high confidence, and 0 otherwise.
In an embodiment of the present application, the loss between the fine vein map and the corresponding regions of the labeled plant leaf picture samples is computed. The loss function is:

$l(x, y) = L = \{l_1, l_2, \ldots, l_N\}^T$  (3)

$l_n = -y_n \cdot \log x_n$  (4)

where $x_n$ is the predicted value of the $n$-th pixel output by the fine vein extraction module and $y_n$ is the label value at the corresponding position of the labeled picture.
Additional aspects and advantages of the present application will be set forth in part in the following description, and in part will become apparent from the following description or be learned by practice of the present application.

Brief Description of the Drawings

The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of embodiments taken in conjunction with the accompanying drawings, in which:

FIG. 1 is a schematic flowchart of a self-learning-based plant leaf vein segmentation method provided in Embodiment 1 of the present application;

FIG. 2 is an example diagram of the self-learning-based plant leaf vein segmentation method according to an embodiment of the present application;

FIG. 3 is an example diagram of picture data according to an embodiment of the present application;

FIG. 4 is a flowchart of extracting vein segmentation pseudo-labels from unlabeled pictures with pre-trained models according to an embodiment of the present application;

FIG. 5 is a schematic structural diagram of a self-learning-based plant leaf vein segmentation device provided by an embodiment of the present application.
Detailed Description

Embodiments of the present application are described in detail below; examples of these embodiments are illustrated in the accompanying drawings, in which the same or similar reference numerals throughout denote the same or similar elements, or elements having the same or similar functions. The embodiments described below with reference to the drawings are exemplary and intended to explain the present application; they should not be construed as limiting it.

The self-learning-based plant leaf vein segmentation method and device of the embodiments of the present application are described below with reference to the accompanying drawings.

The self-learning-based plant leaf vein segmentation method of the present application accomplishes outline and vein segmentation with only a very small number of labeled samples. Through iterative learning and training, the model fully learns the characteristic information of the leaves in unlabeled pictures, achieving a coarse-to-fine segmentation effect; training is not confined to the few labeled samples, which also gives the method good generalization.
FIG. 1 is a schematic flowchart of a self-learning-based plant leaf vein segmentation method provided in Embodiment 1 of the present application.

As shown in FIG. 1, the self-learning-based plant leaf vein segmentation method includes the following steps.

Step 101: obtain labeled plant leaf picture samples, and train on the labeled plant leaf samples with a deep neural network model to obtain a feature extraction module, a coarse vein extraction module, and a fine vein extraction module.

Step 102: obtain an unlabeled plant leaf picture and feed it to the feature extraction module, the coarse vein extraction module, and the fine vein extraction module for processing, to obtain a coarse vein map and a fine vein map of the unlabeled plant leaf picture.

Step 103: fuse the coarse vein map and the fine vein map to obtain a vein segmentation map of the unlabeled plant leaf picture, use the vein segmentation map as annotation information for the unlabeled plant leaf picture, and train the deep neural network model according to a preset loss function, so that the trained deep neural network model processes a plant leaf picture to be processed and obtains a plant leaf segmentation result.
Specifically, the present application uses a deep learning model to extract leaf information, uses these intermediate features to infer the vein segmentation, and adopts a self-learning mode in which a very small number of labeled pictures lets the model automatically learn the information in a large number of unlabeled pictures, improving generalization. This makes the present application more robust and insensitive to local noise. Users of the algorithm need not understand the principles behind it; the trained model completes the vein segmentation task.

Using a deep learning framework, the present application clearly and accurately extracts the leaf outline and veins from pictures containing plant leaves, and performs iterative self-learning training of the neural network with as few labeled picture samples as possible (e.g., 10 pictures) and a large number of unlabeled picture samples (e.g., hundreds of pictures), improving network performance and generalization.

Specifically, as shown in FIG. 2: first, pre-training is performed with a small number of labeled pictures, and training the deep neural network model yields three modules, namely the feature extraction module, the coarse vein extraction module, and the fine vein extraction module; second, the feature extraction module extracts features from newly arrived unlabeled pictures, and based on these features the coarse vein extraction module and the fine vein extraction module infer high-confidence pseudo-labels of different granularities, which are then fused; third, in a self-learning iterative process, the resulting pseudo-labels are used as annotations of the input pictures to train the neural network again, and through continuous iterative training, increasingly clear and complete outline and vein segmentation maps are obtained.

Specifically, a small set of labeled picture samples is shown in FIG. 3, where the white parts of the annotation pictures are the outline and veins to be segmented; no modeling or preprocessing of the pictures is required. A large number of unlabeled samples refers to samples that have only the input picture on the left, without the manually annotated picture on the right.

In the embodiments of the present application, the feature extraction module performs feature extraction on the unlabeled plant leaf picture to obtain a feature map; the coarse vein extraction module processes the feature map to obtain intermediate-layer feature maps and the coarse vein map; and the fine vein extraction module processes the feature map and the intermediate-layer feature maps to obtain the fine vein map.
In the embodiments of the present application, a confidence map is generated from the coarse vein segmentation map, and the loss is computed between the high-confidence regions of the confidence map and the corresponding regions of the labeled plant leaf picture samples. The loss function is:

$l(x, y) = L = \{l_1, l_2, \ldots, l_N\}^T$  (1)

$l_n = -w_n \left[ y_n \cdot \log x_n \right]$  (2)

where $x_n$ is the predicted value of the $n$-th pixel output by the coarse vein extraction module, $y_n$ is the label value at the corresponding position of the labeled picture, and $w_n \in \{0, 1\}$ is the confidence weight: $w_n$ is 1 when the corresponding output position has high confidence and 0 otherwise, i.e., no loss is propagated at low-confidence positions.
In the embodiments of the present application, the loss between the fine vein map and the corresponding regions of the labeled plant leaf picture samples is computed. The loss function is:

$l(x, y) = L = \{l_1, l_2, \ldots, l_N\}^T$  (3)

$l_n = -y_n \cdot \log x_n$  (4)

where $x_n$ is the predicted value of the $n$-th pixel output by the fine vein extraction module and $y_n$ is the label value at the corresponding position of the labeled picture; this inference result is considered to have high confidence, so there is no corresponding weight $w_n$.

Thus, by minimizing the above two loss functions, the relevant models are trained with the backpropagation algorithm.
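Equations (1)-(4) are per-pixel negative log-likelihood losses, with the coarse loss masked by the confidence weight. A minimal NumPy sketch of both (the clipping constant is an added numerical-safety detail, not part of the patent):

```python
import numpy as np

def coarse_vein_loss(x, y, w):
    # Eq. (2): l_n = -w_n [ y_n * log x_n ]; only high-confidence
    # pixels (w_n = 1) propagate a loss.
    x = np.clip(x, 1e-12, 1.0)  # numerical safety, not in the patent
    return -(w * y * np.log(x))

def fine_vein_loss(x, y):
    # Eq. (4): l_n = -y_n * log x_n; every re-inferred pixel counts,
    # since the fine result is treated as high confidence.
    x = np.clip(x, 1e-12, 1.0)
    return -(y * np.log(x))
```

In training, the mean of these per-pixel vectors would be minimized by gradient backpropagation, as the surrounding text describes.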
Specifically, to obtain features containing rich image and semantic information, a feature extraction module is designed that converts a fixed-size input image into a fixed-size feature map. To obtain clear and accurate veins, separating mesophyll from veins while also distinguishing the leaf from the background, a coarse vein extraction module and a fine vein extraction module are designed. The input of the coarse vein extraction module is the feature map output by the feature extraction module, and its output is a coarse segmentation map of the same size as the original input leaf image, i.e., the probability of each point being the extraction target; a confidence value can also be obtained for each pixel. Based on this confidence map, the fine vein extraction module re-infers the most uncertain regions, using multi-scale feature maps to obtain local fine vein maps. To improve the generalization ability of the modules of the present application, the coarse vein extraction module and the fine vein extraction module share the image feature extraction module; in addition, the fine vein extraction module also uses the multi-scale feature maps of the intermediate layers of the coarse vein extraction module.
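The selection of the "most uncertain regions" can be sketched as follows. The patent leaves the confidence measure and sampling scheme open (it explicitly allows uniform, non-uniform, and other sampling elsewhere), so the distance-from-0.5 proxy and top-N selection below are illustrative assumptions:

```python
import numpy as np

def sample_uncertain_points(prob_map, n):
    # Confidence proxy: distance of the predicted probability from 0.5
    # (a pixel near 0.5 is most uncertain). This measure is one common
    # choice, not mandated by the patent.
    conf = np.abs(prob_map - 0.5)
    idx = np.argsort(conf.ravel())[:n]          # n least-confident pixels
    rows, cols = np.unravel_index(idx, prob_map.shape)
    return list(zip(rows.tolist(), cols.tolist()))
```

The returned pixel coordinates are the positions the fine vein extraction module would re-infer.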
The first step uses convolutional neural network (CNN) methods to build the three modules: the feature extraction module, the coarse vein extraction module, and the fine vein extraction module. These modules on the one hand describe the shape features of plant leaf images and on the other hand use those features to infer global and local vein segmentation maps at different granularities, which together accomplish the segmentation of leaf outline and veins. The basic performance of the models obtained by pre-training in this step is the foundation for further learning by the whole algorithm. To improve effectiveness, common deep learning methods increase the amount of sample data to improve model performance; the present application, by contrast, is designed for a small amount of labeled data. With the data kept this small, a more efficient network model, the residual network (ResNet), is chosen; its advantage is that rich spatial-structure and semantic features for outline and vein segmentation can be extracted without hand-designed feature extraction methods. The residual network structure parameters of the feature extraction module as specifically implemented in the present application are shown in Table 1: conv2 through conv5 consist of 6, 8, 12, and 6 convolution kernels of the same size, respectively. The first kernel of each stage has stride 2 and the remaining kernels have stride 1 (conv2 is the exception, with all strides equal to 1, because pooling 1 has stride 2 and has already reduced the feature map size at that step). The coarse vein extraction module uses deconvolution to gradually increase the resolution of the feature map and finally obtains a coarse vein map of the same size as the input image; its network parameters are shown in Table 2. The fine vein extraction module is implemented with 1x1 convolution kernels, a simple and effective network structure serving the purpose of secondary inference; its network parameters are shown in Table 3.
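Since the full tables are not reproduced in this text, the overall downsampling implied by the stride schedule just described can still be checked: pooling 1 has stride 2, conv2 keeps stride 1 throughout, and conv3 through conv5 each begin with one stride-2 kernel. A small sketch (the 224-pixel map size is an assumed example, not stated in the patent):

```python
def downsampling_factor(stage_strides):
    # The overall spatial reduction is the product of the per-stage strides.
    factor = 1
    for s in stage_strides:
        factor *= s
    return factor

# Strides as described in the text: pool1, conv2, conv3, conv4, conv5.
described_strides = [2, 1, 2, 2, 2]
```

A 224-pixel map entering pooling 1 would thus be reduced by a factor of 16, to 14 pixels, before the deconvolution layers of the coarse vein extraction module upsample the result back to input resolution.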
Table 1: Structure parameters of the feature extraction module

Table 2: Structure parameters of the coarse vein extraction module

Table 3: Structure parameters of the fine vein extraction module

It should be noted that when the feature extraction module extracts features from and segments the original image, different deep learning network models may be used, and different schemes may likewise be adopted for the size of the feature map.
The second step extracts feature maps from newly arrived unlabeled pictures using the above feature extraction module; based on these features, the coarse vein extraction module and the fine vein extraction module then infer high-confidence pseudo-labels of different granularities, which are fused. The whole flow is shown in FIG. 4. The feature extraction module operates on the input picture to obtain feature maps with rich semantic information. The coarse vein extraction module gradually increases resolution while using the information in these feature maps to perform segmentation, producing a vein segmentation map of the same size as the original input image; since only part of the result has high confidence, it is called the coarse vein segmentation map. The low-confidence parts are sampled and re-inferred by the fine vein extraction module, whose inputs include the feature map from the feature extraction module, the intermediate-layer feature maps of the coarse vein module, and the points sampled from the coarse vein confidence map; this information is combined to generate fine vein segmentation results at the sampled points. Finally, the coarse and fine veins are fused into the final vein segmentation map, which is the pseudo-label of the unlabeled leaf picture.
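The fusion at the end of this flow can be sketched minimally. The patent allows additive, multiplicative, maximum, and minimum fusion among others; the replacement scheme below, where the fine predictions overwrite the re-inferred pixels, is just one simple illustrative choice:

```python
import numpy as np

def fuse_pseudo_label(coarse_map, fine_points):
    # Overwrite the re-inferred (previously low-confidence) pixels with the
    # fine module's predictions; all other pixels keep the coarse value.
    fused = coarse_map.copy()
    for (r, c), v in fine_points.items():
        fused[r, c] = v
    return fused
```

The fused map is then used as the pseudo-label of the unlabeled input picture.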
It should be noted that the rough vein extraction module may also use different deep learning network models to segment the features; the confidence map may be sampled uniformly, non-uniformly, or in other ways; the fine vein extraction module may likewise use different deep learning models for the secondary inference over the sampled point features; and the pseudo-labels may be fused additively, multiplicatively, by taking the maximum, by taking the minimum, or in other ways.
In the third step, the vein segmentation map obtained in the second step, i.e., the pseudo-label, is treated as the label of the unlabeled image and fed into the model obtained in the first step, and the feature extraction module and the rough vein extraction module are repeatedly retrained according to the algorithm flow below. As mentioned above, the feature extraction module uses a residual network (ResNet) as its backbone, and the rough vein extraction module comprises multiple deconvolution layers. Once training has converged, the vein segmentation procedure for any test image is similar to the flow above: the direct output is the segmentation result, and no further iterative training is needed.
Specifically, the self-learning iteration proceeds as follows. Input: unlabeled leaf images I, the pre-trained feature extraction module m_e, rough vein extraction module m_c, and fine vein extraction module m_f, and the secondary-inference scale N. For each unlabeled leaf image: extract the image's feature map ε = m_e(I) with the feature extraction module; compute the leaf's rough vein map and confidence map D_c with the rough vein extraction module; sample the N most uncertain points according to the confidence map; locate the positions in each feature map corresponding to these points; combining the feature map ε, the rough vein map, and the intermediate-layer features of the rough vein extraction module, perform secondary inference with the fine vein extraction module to obtain the fine veins; fuse the rough vein map and the fine veins to obtain the pseudo-label; using the pseudo-label as the label of I, compute the loss function ℓ and update the model parameters by gradient back-propagation. Return to the first step and iterate until convergence.
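The pseudo-label-generation part of the iteration above can be sketched as one function. The call signatures of `m_e`, `m_c`, and `m_f` are illustrative assumptions (the patent names the modules but not their interfaces), and max-fusion is used as one of the admissible fusion rules.

```python
import numpy as np

def generate_pseudo_label(image, m_e, m_c, m_f, n_points):
    """One pass of the self-learning loop, up to pseudo-label generation.

    m_e(image)        -> feature map eps
    m_c(eps)          -> (rough vein map, confidence map D_c)
    m_f(eps, rough, idx) -> fine vein map re-inferred at the sampled points
    """
    feats = m_e(image)                  # eps = m_e(I)
    coarse, conf = m_c(feats)           # rough vein map and confidence map D_c
    n = min(n_points, conf.size)
    idx = np.argsort(conf.ravel())[:n]  # the N most uncertain points
    fine = m_f(feats, coarse, idx)      # secondary inference at those points
    return np.maximum(coarse, fine)     # fused pseudo-label (max rule here)
```

The returned map would then serve as the training target for I: compute the loss ℓ against the model's prediction, back-propagate, and repeat until convergence, as the text describes.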
Thus, for a plant leaf whose veins are to be extracted, it suffices that the whole leaf lies against a simple background and is fully contained in the photograph. The present application uses a deep learning model to automatically learn and extract leaf features, and then uses these intermediate features for automated vein segmentation, obtaining a high-confidence vein segmentation map through two-stage inference. The secondary inference lets the model re-infer local information and correct errors from the first inference, which makes the results extracted by the present application more accurate and more robust.
In addition, the present application uses a self-learning mode in which a very small number of labeled images lets the model automatically learn the information contained in a large number of unlabeled images, which improves generalization. It also reduces the dependence on large amounts of labeled images during model training and makes full use of unlabeled data, which broadens the application scenarios of the present application.
Based on the description of the above embodiments, the present application uses deep learning to extract multi-scale feature information of leaves. For a plant leaf, a deep learning network, the rough vein extraction module, extracts the leaf's outline and veins; likewise, a deep learning network, the fine vein extraction module, extracts the leaf's outline and veins. The obtained outlines and veins are further inferred and segmented by sampling the confidence map and performing confidence-based inference. The pre-training of each module requires only a very small number of labeled leaf images, and the self-learning process needs only unlabeled leaf images for the iterative self-learning algorithm to converge, without additional labeled data. Confidence inference is combined with self-learning: confidence inference safeguards the pseudo-labels used for self-learning, while the self-learned model in turn provides a more convincing confidence map for the inference process. The pseudo-labels are generated by fusing the rough vein map and the fine vein map.
In the related art, most methods place extremely high demands on the quality of the collected images, such as hyperspectral images or transmission-scanned images, and some further require manual and complex processing, such as manual cropping, point-cloud conversion, denoising, and binarization, to obtain the data format the algorithm needs. The drawback is the high demand on acquisition equipment: in real life the most convenient approach is simply to shoot with a handheld device, and the inability to input such simply captured original images directly makes these methods inconvenient to apply, while the processed images lose fine characteristics such as the original color and texture. Moreover, these methods rely on image processing algorithms that need manual parameter tuning, which often cannot remove noise completely, leaving many burrs in the segmented veins; this in turn makes the final result particularly dependent on the quality of the input image, and the whole pipeline cannot be automated. More importantly, leaves in real scenes often have rather fine veins, and these methods either need special equipment that captures high-definition images to obtain fine results, or can extract only extremely simple, coarse veins from casually captured images, which makes them neither convenient nor generalizable. Meanwhile, deep learning segmentation algorithms similar to the present invention usually require a large amount of carefully labeled training data to perform well, so a method that can both automate the processing and reduce the dependence on manually labeled data is urgently needed at this stage.
For the plant leaves whose veins are to be extracted, the present application needs only leaves with a clean background photographed by an ordinary handheld device (a mobile phone, a camera, etc.); the veins can be extracted automatically without any preprocessing of the images, and only a very small amount of labeled data is needed to train the model. Compared with other solutions, this offers great advantages and convenience and saves time and labor.
The present application uses a deep learning model to extract leaf features. Only images captured with a portable handheld camera are needed, without special sensors or complex preprocessing, and test images can be fed directly into the trained model. Compared with approaches that demand high image quality, both the results and the generalization are better.
The present application uses a deep learning model to learn parameters automatically, without manual parameter tuning, and achieves better denoising. No special sensors are needed to acquire images, which greatly lowers the difficulty of application and broadens the application scenarios; moreover, the neural network can capture richer feature information in the images, avoiding the information loss that occurs in the related art.
The present application avoids tedious manual preprocessing, is closer to real scenarios, and overcomes the limitations of manually designed parameters in the related art; furthermore, the deep learning neural network algorithm used in the present application has small error, strong robustness, and a wide range of use.
The deep learning network used in the present application denoises far better than Canny-operator filtering, is robust, avoids burrs forming at vein edges, and reduces broken vein regions; in the end, automated vein segmentation is completed simply by feeding the test image into the trained model, with no human intervention in the intermediate processing.
The deep learning network used in the present application processes images automatically, avoiding the many thresholds that would otherwise need manual setting; moreover, the algorithm of the present application has small error, produces no burrs, and is robust and flexible.
Both the training and test images of the present application are taken directly with an ordinary camera; no point cloud needs to be obtained through preprocessing, avoiding the information loss that preprocessing causes. Whereas the related art can obtain only the single main vein in the middle, the present application can obtain finer multi-level veins, with better results.
The present application does not require such high-quality images; images taken with an ordinary camera suffice, which greatly lowers the difficulty of applying the technique. In addition, the error of the deep learning algorithm in the present application is smaller than that of the skeletonization algorithms in the related art, the resulting vein segmentation map is finer, and the problem of veins becoming too thin or too thick through the continual thinning of skeletonization is avoided, so the pixel width of the veins is closer to that in the original image.
The self-learning-based plant leaf vein segmentation method of the embodiments of the present application trains labeled plant leaf samples with a deep neural network model to obtain a feature extraction module, a rough vein extraction module, and a fine vein extraction module; processes unlabeled plant leaf images with these modules to obtain a rough vein map and a fine vein map; fuses the rough vein map and the fine vein map to obtain a vein segmentation map that serves as the label information of the unlabeled plant leaf images; and trains the deep neural network model according to a preset loss function, so that the trained deep neural network model processes the plant leaf images to be handled and obtains the plant leaf segmentation result. Thus, a very small number of labeled images lets the model automatically learn the information in a large number of unlabeled images, improving generalization and improving the efficiency and accuracy of plant leaf vein segmentation.
To implement the above embodiments, the present application further proposes a self-learning-based plant leaf vein segmentation device.
FIG. 5 is a schematic structural diagram of a self-learning-based plant leaf vein segmentation device according to an embodiment of the present application.
As shown in FIG. 5, the self-learning-based plant leaf vein segmentation device includes an acquisition-and-training module 510, an acquisition module 520, and a processing module 530.
The acquisition-and-training module 510 is configured to acquire labeled plant leaf image samples and to train the labeled plant leaf samples with a deep neural network model, obtaining the feature extraction module, the rough vein extraction module, and the fine vein extraction module.
The acquisition module 520 is configured to acquire unlabeled plant leaf images and input them into the feature extraction module, the rough vein extraction module, and the fine vein extraction module for processing, obtaining the rough vein map and the fine vein map of the unlabeled plant leaf images.
The processing module 530 is configured to fuse the rough vein map and the fine vein map to obtain the vein segmentation map of the unlabeled plant leaf images, use the vein segmentation map as the label information of the unlabeled plant leaf images, and train the deep neural network model according to a preset loss function, so that the trained deep neural network model processes the plant leaf images to be handled and obtains the plant leaf segmentation result.
The self-learning-based plant leaf vein segmentation device of the embodiments of the present application trains labeled plant leaf samples with a deep neural network model to obtain a feature extraction module, a rough vein extraction module, and a fine vein extraction module; processes unlabeled plant leaf images with these modules to obtain a rough vein map and a fine vein map; fuses the rough vein map and the fine vein map to obtain a vein segmentation map that serves as the label information of the unlabeled plant leaf images; and trains the deep neural network model according to a preset loss function, so that the trained deep neural network model processes the plant leaf images to be handled and obtains the plant leaf segmentation result. Thus, a very small number of labeled images lets the model automatically learn the information in a large number of unlabeled images, improving generalization and improving the efficiency and accuracy of plant leaf vein segmentation.
It should be noted that the foregoing explanation of the embodiment of the self-learning-based plant leaf vein segmentation method also applies to the self-learning-based plant leaf vein segmentation device of this embodiment, and is not repeated here.
In the description of this specification, a description referring to the terms "one embodiment", "some embodiments", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic expressions of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the specific features, structures, materials, or characteristics described may be combined in a suitable manner in any one or more embodiments or examples. In addition, provided they do not contradict one another, those skilled in the art may combine different embodiments or examples described in this specification and the features thereof.
In addition, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, a feature qualified by "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, "a plurality of" means at least two, for example two or three, unless expressly and specifically defined otherwise.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code that includes one or more executable instructions for implementing the steps of a custom logical function or process; and the scope of the preferred embodiments of the present application includes additional implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order according to the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
The logic and/or steps represented in a flowchart, or otherwise described herein, may for example be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them). For the purposes of this specification, a "computer-readable medium" may be any means that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include: an electrical connection (an electronic device) with one or more wires, a portable computer disk cartridge (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber optic device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program can be printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that the parts of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented with software or firmware stored in a memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or a combination of the following techniques known in the art may be used: a discrete logic circuit with logic gate circuits for implementing logic functions on data signals, an application-specific integrated circuit with suitable combinational logic gate circuits, a programmable gate array (PGA), a field programmable gate array (FPGA), and so on.
Those of ordinary skill in the art can understand that all or part of the steps carried by the methods of the above embodiments can be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium, and when executed, the program includes one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist physically alone, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. If the integrated module is implemented in the form of a software functional module and sold or used as an independent product, it may also be stored in a computer-readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic disk, an optical disc, or the like. Although the embodiments of the present application have been shown and described above, it can be understood that the above embodiments are exemplary and shall not be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.
Claims (6)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011528023.1A CN112581483B (en) | 2020-12-22 | 2020-12-22 | Self-learning-based plant leaf vein segmentation method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112581483A CN112581483A (en) | 2021-03-30 |
CN112581483B true CN112581483B (en) | 2022-10-04 |