CN116563598A - Network degree detection method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN116563598A
CN116563598A
Authority
CN
China
Prior art keywords
image
target
spinning
node
semantic segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310330426.2A
Other languages
Chinese (zh)
Inventor
李智
彭添强
郑广智
肖计春
张颖
柴天佑
张敏喆
万学军
王开仕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeastern University China
Beijing Sanlian Hope Shin Gosen Technical Service Co
Original Assignee
Northeastern University China
Beijing Sanlian Hope Shin Gosen Technical Service Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeastern University China, Beijing Sanlian Hope Shin Gosen Technical Service Co filed Critical Northeastern University China
Priority to CN202310330426.2A priority Critical patent/CN116563598A/en
Publication of CN116563598A publication Critical patent/CN116563598A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G06V10/765 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects using rules for classification or partitioning the feature space
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30124 - Fabrics; Textile; Paper
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 - Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Evolutionary Computation (AREA)
  • Databases & Information Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The application discloses a network degree detection method and device, a storage medium, and electronic equipment. The method comprises: performing image processing on an image to be tested to obtain a plurality of target contour images corresponding to the image to be tested, wherein each target contour image contains the contour of a target filament; performing network node feature extraction on the image to be tested based on a preset semantic segmentation model to obtain a node prediction image corresponding to the image to be tested; and detecting the node prediction image using each target contour image as a mask image to obtain a network degree detection result for each target filament. This image-processing-based method for obtaining the network degree indicator is highly efficient, and its detection results are more accurate.

Description

Network Degree Detection Method and Device, Storage Medium, and Electronic Equipment

Technical Field

The invention relates to the field of chemical fibers, and in particular to a network degree detection method, device, storage medium, and electronic equipment.

Background Art

Chemical fiber products are closely tied to daily life: everyday clothing and living environments depend on them. China's chemical fiber industry is large and long-established, with enormous market potential for intelligent transformation and industrial upgrading, and as the global fiber industry pursues innovation-driven strategies, the convergence of fiber production with emerging technologies places new demands on chemical fiber manufacturers. Quality inspection of chemical fiber products therefore has a major influence on subsequent production stages. Network degree, one of the key quality indicators of chemical fiber products, refers to the number of network nodes per unit length of chemical fiber filament. For chemical fiber filaments, the network degree per unit length must fall within a certain range. Too many network nodes cause slackness: during processing the yarn cannot be fully untwisted, and uneven dye uptake during dyeing produces network spots on the fabric surface. Too few network nodes cause the interlace points to loosen and fluff during weaving. A correct network degree is therefore essential for subsequent production stages.

At present, many chemical fiber spinning factories still use the traditional manual method for measuring the network degree indicator: multiple filaments are placed in parallel in a water bath, and the network degree is obtained by visual counting. In practice this method has major limitations. First, manual inspection relies on subjective human evaluation and is highly unstable, unreliable, and non-quantitative, being affected by the inspector's mood and attention as well as by lighting conditions, which introduces many unstable and unreliable factors into product quality evaluation. Second, the human eye cannot meet the inspection throughput that high-speed production requires.

Summary of the Invention

In view of this, the present invention provides a network degree detection method, device, electronic equipment, and storage medium, whose main purpose is to solve the existing problem that manual network degree detection is insufficiently accurate.

To solve the above problems, the present application provides a network degree detection method, comprising:

performing image processing on an image to be tested to obtain a plurality of target contour images corresponding to the image to be tested, wherein each target contour image contains the contour of a target filament;

performing network node feature extraction on the image to be tested based on a preset semantic segmentation model to obtain a node prediction image corresponding to the image to be tested; and

detecting the node prediction image using each target contour image as a mask image to obtain a network degree detection result for each target filament.

Optionally, performing image processing on the image to be tested to obtain the target contour image corresponding to the target filament in the image to be tested specifically comprises:

pre-processing the image to be tested to obtain a pre-processed first image;

performing threshold processing on the first image to obtain a second image in which the filament region is separated from the non-filament region;

performing dilation and erosion processing on the second image to obtain a filament contour image; and

performing contour detection on the filament contour image to obtain the target contour image corresponding to the target filament in the image to be tested.

Optionally, before performing network node feature extraction on the image to be tested based on the preset semantic segmentation model to obtain the node prediction image corresponding to the image to be tested, the method further comprises training to obtain the semantic segmentation model, which specifically includes:

acquiring several historical images and a label image corresponding to each historical image;

grouping the historical images and their label images to obtain a first data set containing several first historical images and the label images corresponding to each first historical image, and a second data set containing several second historical images and the label images corresponding to each second historical image; and

using each first data set as training samples and applying a preset semantic segmentation method for model training to obtain a current semantic segmentation model and current model parameters, and validating the current semantic segmentation model and current model parameters on the second data set to obtain the semantic segmentation model.

Optionally, using each first data set as training samples and applying the preset semantic segmentation method for model training to obtain the current semantic segmentation model and current model parameters, and validating the current semantic segmentation model and current model parameters on the second data set to obtain the semantic segmentation model, specifically comprises the following steps:

Step 1: using each first data set as training samples, perform model training with the preset semantic segmentation method to obtain a current semantic segmentation model;

Step 2: determine whether the current model accuracy value meets a preset accuracy value; when the current model accuracy value is less than the preset accuracy value, execute Step 3; when the current model accuracy value is greater than or equal to the preset accuracy value, repeat Step 1 to obtain an updated current semantic segmentation model and updated current model parameters;

Step 3: validate the current semantic segmentation model and current model parameters on the second data set to obtain the semantic segmentation model.

Optionally, acquiring the label image corresponding to each historical image specifically comprises:

performing image annotation on each historical image to obtain a label file corresponding to each historical image, the label file including node region information and annotation category information of the historical image; and

generating, based on the node region information and label category information corresponding to each historical image, a label image from the label file corresponding to each historical image, to obtain the label image corresponding to each historical image.

Optionally, detecting the node prediction image using each target contour image as a mask image to obtain the network degree detection result of each target filament specifically comprises:

detecting the node prediction image using each target contour image as a mask image, and extracting the node image corresponding to the target filament; and

calculating, based on the node image corresponding to the target filament, the network degree of the target filament to obtain the network degree detection result.

Optionally, calculating the network degree of the target filament based on the node image corresponding to the target filament to obtain the network degree detection result specifically comprises:

obtaining the number of nodes of the target filament based on the node image corresponding to the target filament;

calculating the actual length of the target filament based on the number of pixels in the node image and the actual distance corresponding to each pixel; and

calculating the network degree of the target filament based on the number of nodes of the target filament and the actual length of the target filament, to obtain the network degree detection result.

To solve the above problems, the present application provides a network degree detection device, comprising:

an image processing module, configured to perform image processing on the image to be tested to obtain several target contour images corresponding to the image to be tested, wherein each target contour image contains the contour of a target filament;

a feature extraction module, configured to perform network node feature extraction on the image to be tested based on a preset semantic segmentation model to obtain a node prediction image corresponding to the image to be tested; and

a detection module, configured to detect the node prediction image using each target contour image as a mask image to obtain the network degree detection result of each target filament.

To solve the above problems, the present application provides a storage medium storing a computer program which, when executed by a processor, implements the steps of the above network degree detection method.

To solve the above problems, the present application provides an electronic device comprising at least a memory and a processor, the memory storing a computer program, the processor implementing the steps of the above network degree detection method when executing the computer program on the memory.

In the present application, image processing is performed on an image of the filaments under test to obtain the target contour image corresponding to each target filament; features are extracted from the image under test using a preset semantic segmentation model to obtain a node prediction image; and the node prediction image is detected using the target contour images as mask images to obtain the network degree detection result for each target filament. This image-processing-based method for obtaining the network degree indicator is highly efficient, and its detection results are more accurate.

The above description is only an overview of the technical solution of the present invention. To allow the technical means of the present invention to be understood more clearly and implemented according to the contents of this specification, and to make the above and other objects, features, and advantages of the present invention more evident and comprehensible, specific embodiments of the present invention are set forth below.

Brief Description of the Drawings

Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings serve only to illustrate the preferred embodiments and are not to be considered limiting of the invention. Throughout the drawings, the same reference numerals designate the same parts. In the drawings:

Fig. 1 is a flowchart of a network degree detection method according to an embodiment of the present application;

Fig. 2 is a flowchart of a network degree detection method according to another embodiment of the present application;

Fig. 3 is a structural block diagram of a network degree detection device according to yet another embodiment of the present application;

Fig. 4(a) is an image to be tested in the present application;

Fig. 4(b) is a filament contour image obtained after erosion and dilation processing in the present application;

Fig. 4(c) is a node prediction image obtained after network node feature extraction is performed on the image to be tested by the semantic segmentation model in the present application;

Fig. 4(d) is a target contour image corresponding to a target filament in the present application;

Fig. 4(e) is a node image corresponding to a target filament in the present application;

Fig. 4(f) is an image obtained by marking the bounding rectangles of node contours on the image to be tested.

Detailed Description of Embodiments

Various aspects and features of the present application are described herein with reference to the accompanying drawings.

It should be understood that various modifications may be made to the embodiments disclosed herein. Accordingly, the above description should not be regarded as limiting, but merely as exemplification of embodiments. Those skilled in the art will envision other modifications within the scope and spirit of the present application.

The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the application and, together with the general description of the application given above and the detailed description of the embodiments given below, serve to explain the principles of the application.

These and other characteristics of the present application will become apparent from the following description of preferred forms of embodiment, given as non-limiting examples, with reference to the accompanying drawings.

It should also be understood that, although the application has been described with reference to some specific examples, those skilled in the art can certainly implement many other equivalent forms of the application.

The above and other aspects, features, and advantages of the present application will become more apparent in view of the following detailed description taken in conjunction with the accompanying drawings.

Specific embodiments of the present application are described hereinafter with reference to the accompanying drawings; however, it should be understood that the disclosed embodiments are merely examples of the application, which may be implemented in various ways. Well-known and/or repetitive functions and structures are not described in detail, to avoid obscuring the application with unnecessary or redundant detail. Therefore, the specific structural and functional details disclosed herein are not intended to be limiting, but merely serve as a basis for the claims and as a representative basis for teaching those skilled in the art to variously employ the application in virtually any suitable detailed structure.

This specification may use the phrases "in one embodiment", "in another embodiment", "in yet another embodiment", or "in other embodiments", each of which may refer to one or more of the same or different embodiments according to the present application.

An embodiment of the present application provides a network degree detection method, as shown in Fig. 1, comprising:

Step S101: performing image processing on an image to be tested to obtain several target contour images corresponding to the image to be tested, wherein each target contour image contains the contour of a target filament.

In specific implementation, this step first pre-processes the image to be tested: mean filtering is applied to smooth prominent noise, and the image is resized to obtain the first image. Threshold processing is then performed on the first image; the Otsu threshold detection method may be used, and through threshold detection the filament regions in the image are separated from the non-filament regions to obtain the second image. Erosion and dilation are then applied to the second image to separate the filament regions from the background, yielding the filament contour image. Finally, contour detection is performed on the filament contour image to obtain the contour corresponding to each filament, thereby obtaining the target contour image corresponding to each target filament.

Step S102: performing network node feature extraction on the image to be tested based on a preset semantic segmentation model, to obtain a node prediction image corresponding to the image to be tested.

In specific implementation, this step extracts network node features from the image to be tested based on the preset semantic segmentation model and its model parameters. The semantic segmentation model first loads the already-trained model parameters; the image to be tested is then fed into the trained model, which outputs the node prediction image containing the network node features. In the node prediction image, node regions are marked white and all other regions black.
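The patent does not name a particular network architecture; in practice the trained parameters would be loaded first (for a PyTorch model, via `model.load_state_dict`, an assumed framework choice). Assuming only that the model emits per-pixel class logits with channel 0 as background and channel 1 as network node, converting those logits into the white-on-black node prediction image is a small post-processing step:

```python
import numpy as np

def logits_to_node_image(logits):
    """Turn per-pixel class logits of shape (2, H, W) into the node
    prediction image: node pixels white (255), all others black (0).
    The channel layout (0 = background, 1 = node) is an assumption."""
    pred = np.argmax(logits, axis=0)        # per-pixel predicted class index
    return pred.astype(np.uint8) * 255      # class 1 -> 255, class 0 -> 0
```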

Step S103: detecting the node prediction image using each target contour image as a mask image, to obtain the network degree detection result of each target filament.

In specific implementation, this step uses each target contour image as a mask image to detect the node prediction image, extracting the node image corresponding to each target filament. Based on the node image of the target filament, nodes are screened according to their length and width, yielding the number of nodes of the target filament. The actual length of the target filament is then calculated from the number of pixels in the node image and the actual distance corresponding to each pixel. Finally, the network degree of the target filament is calculated from the number of nodes of the target filament and its actual length, giving the network degree detection result.

In the present application, image processing is performed on an image of the filaments under test to obtain the target contour image corresponding to each target filament; features are extracted from the image under test using a preset semantic segmentation model to obtain a node prediction image; and the node prediction image is detected using the target contour images as mask images to obtain the network degree detection result for each target filament. This image-processing-based method for obtaining the network degree indicator is highly efficient, and its detection results are more accurate.

Another embodiment of the present application provides a network degree detection method, as shown in Fig. 2, comprising:

Step S201: acquiring several historical images.

In specific implementation, an industrial camera may be used to collect historical filament images. The historical filaments are placed in a water tank by the network degree detection sampling equipment, and once the filaments have spread out in the tank into a good configuration, several historical filament images are captured with the camera. To enrich the sample set, these images are cropped and unified to a specified size. For example, 100 historical filament images captured with the camera are cropped to yield 276 sample images, which are uniformly converted to 512*512 pixels to obtain the 276 historical images. This embodiment places no limit on the number of images captured by the camera or on the number of samples obtained by cropping; both can be adjusted according to actual needs.
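One possible way to produce the uniform 512*512 samples described above is non-overlapping tiling; the tiling strategy itself is an assumption, since the embodiment only states that the captures are cropped and resized:

```python
import numpy as np

def crop_tiles(img, tile=512):
    """Cut an image into non-overlapping tile x tile patches, one way to turn
    a small number of captures into many uniform 512x512 samples."""
    h, w = img.shape[:2]
    tiles = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            tiles.append(img[y:y + tile, x:x + tile])
    return tiles
```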

Step S202: obtaining, based on each historical image, a label image corresponding to each historical image.

In a specific implementation of this step, image annotation is first performed on each historical image to obtain a label file corresponding to it; the label file contains the node-region information and the annotation category information of that historical image. The labelme software can be used for this annotation: the node regions in each image are marked manually and the category of each marked region is recorded as "network node region", producing one label file per historical image. Each label file is a JSON file recording the point sets of the marked node regions in the image together with their category information. For example, annotating the 276 historical images described above with labelme yields 276 label files. Then, based on the node-region information and label category information of each historical image, a label image is produced from the corresponding label file, giving the label image for each historical image.
Specifically, from the node-region information and label category information in the label file, a Python program draws the label image: pixels inside the node regions are set to 255 and all other pixels to 0, producing the label image corresponding to that label file. This embodiment places no limit on the pixel values assigned; the assignment simply makes the node regions stand out.
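The 255/0 label drawing can be illustrated with a dependency-free polygon rasterizer. In practice the Python program would more likely call `cv2.fillPoly` or `PIL.ImageDraw` on the labelme point sets; the ray-casting helper below is only a self-contained stand-in for that step.

```python
import numpy as np

def point_in_poly(x, y, poly):
    """Ray-casting test: is point (x, y) inside the polygon?"""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge crosses the horizontal ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def rasterize_label(poly, h, w):
    """Draw a single annotated node region as a 255/0 label image."""
    label = np.zeros((h, w), dtype=np.uint8)
    for row in range(h):
        for col in range(w):
            # sample the pixel center, as a fill routine would
            if point_in_poly(col + 0.5, row + 0.5, poly):
                label[row, col] = 255
    return label
```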

Step S203: based on the historical images and the label images, train and obtain a semantic segmentation model.

In a specific implementation of this step, the historical images and their label images are first grouped to obtain a first data set containing several first historical images with their corresponding label images, and a second data set containing several second historical images with their corresponding label images. For example, the 276 historical images described above and their label images are grouped by randomly selecting 221 first historical images together with their label images to form the first data set; the remaining 55 second historical images and their label images form the second data set.
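The random 221/55 grouping can be sketched as follows; the fixed seed and the representation of each sample as an (image, label) pair are illustrative assumptions.

```python
import random

def split_dataset(pairs, n_train, seed=0):
    """Randomly split (image, label) pairs into a training set
    and a validation set of sizes n_train and len(pairs) - n_train."""
    rng = random.Random(seed)
    indices = list(range(len(pairs)))
    rng.shuffle(indices)
    train = [pairs[i] for i in indices[:n_train]]
    val = [pairs[i] for i in indices[n_train:]]
    return train, val
```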

Next, the first data set is used as training samples and a preset semantic segmentation method is applied to train the model, yielding the current semantic segmentation model and current model parameters; the current model and parameters are then validated on the second data set to obtain the semantic segmentation model. The specific steps are as follows:

Step 1: using the first data set as training samples, train a model with the preset semantic segmentation method to obtain the current semantic segmentation model.

Specifically, the first data set serves as the training samples and the model is trained with the U-Net deep semantic segmentation method using the BCEWithLogitsLoss loss function; training produces the current semantic segmentation model.
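BCEWithLogitsLoss fuses the sigmoid with binary cross-entropy in a numerically stable form. The sketch below reproduces the per-element value it computes (mean-reduced), assuming targets in {0, 1}; it is a reference illustration, not the training code of the embodiment.

```python
import numpy as np

def bce_with_logits(logits, targets):
    """Numerically stable binary cross-entropy on raw logits:
    max(z, 0) - z*y + log(1 + exp(-|z|)), averaged over elements."""
    z = np.asarray(logits, dtype=float)
    y = np.asarray(targets, dtype=float)
    loss = np.maximum(z, 0) - z * y + np.log1p(np.exp(-np.abs(z)))
    return loss.mean()
```

A logit of 0 against a positive target gives -log(0.5) ≈ 0.693, and the loss shrinks as the logit grows more confidently positive.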

Step 2: judge whether the current model accuracy value meets the preset accuracy value. When the current model accuracy value is less than or equal to the preset accuracy value, execute Step 3; when the current model accuracy value is greater than the preset accuracy value, repeat Step 1 to obtain an updated current semantic segmentation model and updated current model parameters.

In a specific implementation of this step, the preset accuracy value may be set to 0.01: when the current model accuracy value is less than or equal to 0.01, Step 3 is executed to obtain the semantic segmentation model; when it is greater than 0.01, Step 1 is repeated to train an updated current semantic segmentation model. The preset accuracy value can be adjusted according to actual needs.

Step 3: validate the current semantic segmentation model and current model parameters on the second data set to obtain the semantic segmentation model.

The semantic segmentation model obtained after the iterative training is validated on the second data set: each second historical image is fed into the model, and the output is compared with the label image corresponding to that second historical image. When the similarity between the output image and the corresponding label image meets the requirement, the semantic segmentation model is obtained.
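The embodiment does not name the similarity measure used for this comparison; the Dice coefficient is one common choice for binary segmentation masks and is shown here purely as an illustrative assumption.

```python
import numpy as np

def dice_similarity(pred, label):
    """Dice coefficient between two binary masks (nonzero = node region)."""
    a = np.asarray(pred) > 0
    b = np.asarray(label) > 0
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return 2.0 * np.logical_and(a, b).sum() / denom
```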

Step S204: perform image processing on the image to be tested to obtain several target contour images corresponding to it, where each target contour image contains the contour of a target spinning.

In a specific implementation of this step, an industrial camera first captures an image of the spinning sample to be tested, and the size of the captured image is adjusted; in this scheme, to suit the input of the semantic segmentation network, the image size is adjusted to a multiple of 32, yielding the image to be tested shown in Fig. 4(a). The image to be tested is then pre-processed: mean filtering smooths prominent noise and the image size is adjusted, giving a first image. Threshold processing is applied to the first image; the Otsu threshold detection method may be used, and the threshold detection separates the spinning regions of the image from the non-spinning regions, giving a second image. The second image is then eroded and dilated to separate the filament regions from the background, producing the spinning contour image shown in Fig. 4(b). Finally, contour detection is performed on the spinning contour image to obtain the contour of each filament, giving the target contour image corresponding to the target spinning shown in Fig. 4(d).
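The Otsu threshold detection named above is usually a one-liner via `cv2.threshold(..., cv2.THRESH_OTSU)`; the sketch below makes explicit what that call computes, namely the threshold maximizing the between-class variance of the grayscale histogram.

```python
import numpy as np

def otsu_threshold(img):
    """Exhaustive Otsu: return the threshold t maximizing the
    between-class variance w0 * w1 * (m0 - m1)^2 of the histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        between = w0 * w1 * (m0 - m1) ** 2
        if between > best_var:
            best_var, best_t = between, t
    return best_t
```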

Step S205: perform network node feature extraction on the image to be tested based on the semantic segmentation model, obtaining a node prediction image corresponding to the image to be tested.

In a specific implementation of this step, network node features are extracted from the image to be tested using the semantic segmentation model and its model parameters. The model first loads the trained parameters; the image to be tested is then fed into the trained semantic segmentation model, whose output marks node regions white and all other regions black. After filtering and dilation, the node features become more prominent, giving the node prediction image shown in Fig. 4(c).

Step S206: use each target contour image as a mask image to detect the node prediction image, extracting the node image corresponding to the target spinning.

In a specific implementation of this step, the node prediction image is detected with each target contour image serving as a mask image, and the contours of the nodes in the node prediction image that fall within that target contour are extracted, yielding the node image corresponding to the target spinning shown in Fig. 4(e).
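The mask-based extraction amounts to keeping node-prediction pixels only inside the target contour; a minimal sketch is shown below (on real images, OpenCV's `bitwise_and` with a `mask` argument would serve the same purpose).

```python
import numpy as np

def mask_nodes(node_pred, target_mask):
    """Keep node-prediction pixels only where the target contour
    mask is set; everything else becomes background (0)."""
    return np.where(np.asarray(target_mask) > 0, np.asarray(node_pred), 0)
```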

Step S207: based on the node image corresponding to the target spinning, calculate the network degree of the target spinning and obtain the network degree detection result.

In a specific implementation of this step, first, the number of nodes corresponding to the target spinning is obtained from its node image. Specifically, candidate nodes are screened by length and width to discard interfering nodes that do not match the node characteristics; a bounding rectangle is fitted to each remaining node contour to obtain the node count of the target spinning, and the screened nodes are marked on the image to be tested, as shown in Fig. 4(f). Then, the actual length of the target spinning is calculated from the pixel count of the node image and the actual distance corresponding to each pixel. For example, the 512×512-pixel image described above is composed of 512 pixels along each side, so the actual spinning length is approximately the product of 512 pixels and the actual length represented by each pixel; if the per-pixel distance derived from the camera parameters is 0.025 cm, the actual spinning length is 512 × 0.025 = 12.8 cm. Finally, the network degree of the target spinning is calculated from the node count and the actual length of the target spinning, giving the network degree detection result.
Specifically, the node count of the target spinning is divided by its actual length to obtain the network degree of the target spinning, giving the network degree detection result.
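The final computation can be sketched directly from the worked example. Only the 512-pixel length and the 0.025 cm/pixel figure come from the text; the node count of 16 used below is hypothetical.

```python
def network_degree(node_count, n_pixels, cm_per_pixel):
    """Network degree = interlacing nodes per unit filament length (nodes/cm)."""
    length_cm = n_pixels * cm_per_pixel
    return node_count / length_cm

length = 512 * 0.025  # 12.8 cm, as in the worked example
```

With a hypothetical count of 16 nodes over the 12.8 cm filament, the network degree would be 1.25 nodes/cm.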

In this application, image processing of the image of the spinning to be tested yields the target contour image corresponding to the target spinning; a semantic segmentation model is constructed and used to extract features from the image to be tested, giving a node prediction image; and, with the target contour image as a mask image, the node prediction image is detected to obtain the network degree detection result of the target spinning. This image-processing-based method of obtaining the network degree index is efficient and yields more accurate detection results.

A further embodiment of this application provides a network degree detection device, as shown in Fig. 3, comprising:

Image processing module 1: configured to perform image processing on the image to be tested to obtain several target contour images corresponding to it, where each target contour image contains the contour of a target spinning;

Feature extraction module 2: configured to perform network node feature extraction on the image to be tested based on a preset semantic segmentation model, obtaining a node prediction image corresponding to the image to be tested;

Detection module 3: configured to use each target contour image as a mask image to detect the node prediction image, obtaining the network degree detection result of each target spinning.

In a specific implementation, the image processing module is specifically configured to: pre-process the image to be tested to obtain a pre-processed first image; apply threshold processing to the first image to obtain a second image in which spinning regions are separated from non-spinning regions; apply dilation and erosion to the second image to obtain a spinning contour image; and perform contour detection based on the spinning contour image to obtain the target contour image corresponding to the target spinning in the image to be tested.

In a specific implementation, the network degree detection device further comprises a model training module, specifically configured to: acquire several historical images and the label images corresponding to each historical image; group the historical images and label images to obtain a first data set containing several first historical images with their corresponding label images, and a second data set containing several second historical images with their corresponding label images; use the first data set as training samples and train a model with a preset semantic segmentation method to obtain the current semantic segmentation model and current model parameters; and validate the current semantic segmentation model and current model parameters on the second data set to obtain the semantic segmentation model.

In a specific implementation, the model training module is further configured to use the first data set as training samples, train with a preset semantic segmentation method to obtain the current semantic segmentation model and current model parameters, and validate them on the second data set to obtain the semantic segmentation model, specifically comprising: Step 1: using the first data set as training samples, train with the preset semantic segmentation method to obtain the current semantic segmentation model; Step 2: judge whether the current model accuracy value meets the preset accuracy value, executing Step 3 when the current model accuracy value is less than the preset accuracy value, and repeating Step 1 when it is greater than or equal to the preset accuracy value, to obtain an updated current semantic segmentation model and updated current model parameters; Step 3: validate the current semantic segmentation model and current model parameters on the second data set to obtain the semantic segmentation model.

In a specific implementation, the model training module is further configured to: perform image annotation on each historical image to obtain a label file corresponding to it, the label file containing the node-region information and annotation category information of the historical image; and, based on the node-region information and label category information corresponding to each historical image, produce a label image from the corresponding label file, obtaining the label image corresponding to each historical image.

In a specific implementation, the detection module 3 is specifically configured to: use each target contour image as a mask image to detect the node prediction image, extracting the node image corresponding to the target spinning; and calculate the network degree of the target spinning based on that node image, obtaining the network degree detection result.

In a specific implementation, the detection module 3 is further configured to: obtain the node count of the target spinning based on its node image; calculate the actual length of the target spinning based on the pixel count of the node image and the actual distance corresponding to each pixel; and calculate the network degree of the target spinning based on its node count and actual length, obtaining the network degree detection result.

Another embodiment of this application provides a storage medium storing a computer program which, when executed by a processor, implements the following method steps:

Step 1: perform image processing on the image to be tested to obtain several target contour images corresponding to it, where each target contour image contains the contour of a target spinning;

Step 2: perform network node feature extraction on the image to be tested based on a preset semantic segmentation model, obtaining a node prediction image corresponding to the image to be tested;

Step 3: use each target contour image as a mask image to detect the node prediction image, obtaining the network degree detection result of each target spinning.

For the specific implementation of the above method steps, reference may be made to any of the foregoing embodiments of the network degree detection method; details are not repeated here.

In this application, image processing of the image of the spinning to be tested yields the target contour image corresponding to the target spinning; features are extracted from the image to be tested based on a preset semantic segmentation model, giving a node prediction image; and, with the target contour image as a mask image, the node prediction image is detected to obtain the network degree detection result of the target spinning. This image-processing-based method of obtaining the network degree index is efficient and yields more accurate detection results.

Another embodiment of this application provides an electronic device comprising at least a memory and a processor, the memory storing a computer program; when executing the computer program on the memory, the processor implements the following method steps:

Step 1: perform image processing on the image to be tested to obtain several target contour images corresponding to it, where each target contour image contains the contour of a target spinning;

Step 2: perform network node feature extraction on the image to be tested based on a preset semantic segmentation model, obtaining a node prediction image corresponding to the image to be tested;

Step 3: use each target contour image as a mask image to detect the node prediction image, obtaining the network degree detection result of each target spinning.

For the specific implementation of the above method steps, reference may be made to any of the foregoing embodiments of the network degree detection method; details are not repeated here.

In this application, image processing of the image of the spinning to be tested yields the target contour image corresponding to the target spinning; features are extracted from the image to be tested based on a preset semantic segmentation model, giving a node prediction image; and, with the target contour image as a mask image, the node prediction image is detected to obtain the network degree detection result of the target spinning. This image-processing-based method of obtaining the network degree index is efficient and yields more accurate detection results.


The above embodiments are merely exemplary embodiments of this application and are not intended to limit it; the scope of protection of this application is defined by the claims. Those skilled in the art may make various modifications or equivalent replacements to this application within its spirit and scope of protection, and such modifications or equivalent replacements shall also be deemed to fall within the scope of protection of this application.

Claims (10)

1. A network degree detection method, comprising:
performing image processing on an image to be tested to obtain a plurality of target contour images corresponding to the image to be tested, wherein each of the target contour images comprises a contour of a target spinning;
performing network node feature extraction processing on the image to be tested based on a preset semantic segmentation model to obtain a node prediction image corresponding to the image to be tested;
and detecting the node prediction image by taking each target contour image as a mask image to obtain a network degree detection result of each target spinning.
2. The method of claim 1, wherein the performing image processing on the image to be tested to obtain the target contour image corresponding to the target spinning in the image to be tested specifically comprises:
pre-processing the image to be tested to obtain a pre-processed first image;
performing threshold processing on the first image to obtain a second image with a spinning area separated from a non-spinning area;
performing expansion corrosion treatment on the second image to obtain a spinning contour image;
and performing contour detection based on the spinning contour image to obtain a target contour image corresponding to target spinning in the image to be tested.
3. The method according to claim 1, wherein, before the performing network node feature extraction processing on the image to be tested based on a preset semantic segmentation model to obtain a node prediction image corresponding to the image to be tested, the method further comprises training to obtain the semantic segmentation model, which specifically comprises:
acquiring a plurality of historical images and tag images corresponding to the historical images;
grouping each history image and each label image to obtain a first data set containing a plurality of first history images and the label images corresponding to each first history image; and a second dataset comprising a plurality of second history images and a label image corresponding to each second history image;
and taking each first data set as a training sample, carrying out model training by adopting a preset semantic segmentation method to obtain a current semantic segmentation model and current model parameters, and detecting the current semantic segmentation model and the current model parameters based on a second data set to obtain the semantic segmentation model.
4. The method of claim 3, wherein the taking each of the first data sets as a training sample and performing model training by a preset semantic segmentation method to obtain a current semantic segmentation model and current model parameters, and detecting the current semantic segmentation model and the current model parameters based on the second data set to obtain the semantic segmentation model, specifically comprises the following steps:
step one: based on each first data set as a training sample, performing model training by adopting a preset semantic segmentation method to obtain a current semantic segmentation model;
step two: judging whether the precision value of the current model meets a preset precision value or not, and executing the third step when the precision value of the current model is smaller than the preset precision value; repeatedly executing the first step under the condition that the precision value of the current model is larger than or equal to the preset precision value, and obtaining an updated current semantic segmentation model and updated current model parameters;
step three: and detecting the current semantic segmentation model and current model parameters based on a second data set to obtain the semantic segmentation model.
5. The method of claim 3, wherein the acquiring the label image corresponding to each of the history images specifically comprises:
performing image marking processing on each historical image to obtain a tag file corresponding to each historical image, wherein the tag file comprises node area information and marking category information of the historical image;
and carrying out label image making on the label file corresponding to each history image based on the node area information and the label category information corresponding to each history image, and obtaining a label image corresponding to each history image.
6. The method according to claim 1, wherein the detecting the node prediction image based on each of the target contour images as a mask image to obtain a network degree detection result of each of the target spinning specifically includes:
detecting the node prediction images by taking each target contour image as a mask image, and extracting to obtain node images corresponding to target spinning;
and calculating and obtaining the network degree of the target spinning based on the node image corresponding to the target spinning, and obtaining a detection result of the network degree.
7. The method of claim 6, wherein the calculating the network degree of the target spinning based on the node image corresponding to the target spinning to obtain a detection result of the network degree specifically comprises:
obtaining the node number corresponding to the target spinning based on the node image corresponding to the target spinning;
calculating and obtaining the actual length of the target spinning based on the number of the pixel points of the node image and the actual distance corresponding to each pixel point;
and calculating and obtaining the network degree of the target spinning based on the number of nodes corresponding to the target spinning and the actual length of the target spinning, and obtaining a detection result of the network degree.
8. A network degree detection device, characterized by comprising:
an image processing module, configured to perform image processing on an image to be tested to obtain a plurality of target contour images corresponding to the image to be tested, wherein each of the target contour images comprises a contour of a target spinning;
a feature extraction module, configured to perform network node feature extraction processing on the image to be tested based on a preset semantic segmentation model to obtain a node prediction image corresponding to the image to be tested;
and a detection module, configured to detect the node prediction image by taking each of the target contour images as a mask image to obtain a network degree detection result of each target spinning.
9. A storage medium storing a computer program which, when executed by a processor, implements the steps of the network degree detection method according to any one of claims 1-7.
10. An electronic device comprising at least a memory and a processor, the memory having a computer program stored thereon, wherein the processor, when executing the computer program on the memory, implements the steps of the network degree detection method according to any one of claims 1-7.
CN202310330426.2A 2023-03-30 2023-03-30 Network degree detection method and device, storage medium and electronic equipment Pending CN116563598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310330426.2A CN116563598A (en) 2023-03-30 2023-03-30 Network degree detection method and device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310330426.2A CN116563598A (en) 2023-03-30 2023-03-30 Network degree detection method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN116563598A true CN116563598A (en) 2023-08-08

Family

ID=87488747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310330426.2A Pending CN116563598A (en) 2023-03-30 2023-03-30 Network degree detection method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN116563598A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118411700A (en) * 2024-06-27 2024-07-30 浙江恒逸石化有限公司 Detection method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
WO2021238455A1 (en) Data processing method and device, and computer-readable storage medium
CN108596046A (en) A kind of cell detection method of counting and system based on deep learning
CN113239930B (en) Glass paper defect identification method, system, device and storage medium
CN114757900B (en) Artificial intelligence-based textile defect type identification method
CN110472082B (en) Data processing method, data processing device, storage medium and electronic equipment
CN107564002A (en) Plastic tube detection method of surface flaw, system and computer-readable recording medium
JP2019039773A (en) Classification method for unknown compound using machine learning
CN112837290A (en) A Crack Image Automatic Recognition Method Based on Seed Filling Algorithm
CN111079620A (en) Leukocyte image detection and identification model construction method based on transfer learning and application
CN114663383B (en) Blood cell segmentation and identification method and device, electronic equipment and storage medium
CN109145955B (en) Method and system for wood identification
CN105389581A (en) Germinated rice germ integrity intelligent identification system and identification method thereof
CN109117703A (en) It is a kind of that cell category identification method is mixed based on fine granularity identification
CN118366000A (en) Cultural relic health management method based on digital twinning
JP2017221555A (en) Quality evaluation support system of corneal endothelial cell
CN113850339A (en) A method and device for predicting roughness level based on multi-light source surface images
CN111369526A (en) Multi-type old bridge crack identification method based on semi-supervised deep learning
CN116563598A (en) Network degree detection method and device, storage medium and electronic equipment
CN114092935A (en) Textile fiber identification method based on convolutional neural network
CN112329664A (en) A method for assessing the number of pronuclei in prokaryotic embryos
CN114299040B (en) Ceramic tile flaw detection method and device and electronic equipment
CN116228661A (en) Cloth flaw online detection method and system
CN110349119A (en) Pavement disease detection method and device based on edge detection neural network
JP2008535519A (en) Method for analyzing cell structure and its components
JP2007048006A (en) Image processor and image processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination