
CN112037178A - Cylinder two-dimensional image generation method based on multi-view camera - Google Patents

Cylinder two-dimensional image generation method based on multi-view camera

Info

Publication number
CN112037178A
CN112037178A (application CN202010794749.3A)
Authority
CN
China
Prior art keywords
image
cylinder
images
point
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010794749.3A
Other languages
Chinese (zh)
Other versions
CN112037178B (en)
Inventor
梁宝添
陆真国
吉祥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quanzhou Aolegg Electronics Co ltd
Original Assignee
Quanzhou Aolegg Electronics Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quanzhou Aolegg Electronics Co ltd filed Critical Quanzhou Aolegg Electronics Co ltd
Priority to CN202010794749.3A priority Critical patent/CN112037178B/en
Publication of CN112037178A publication Critical patent/CN112037178A/en
Application granted granted Critical
Publication of CN112037178B publication Critical patent/CN112037178B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/60 Rotation of whole images or parts thereof
    • G06T 3/604 Rotation of whole images or parts thereof using coordinate rotation digital computer [CORDIC] devices
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20212 Image combination
    • G06T 2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to a cylinder two-dimensional image generation method based on a multi-view camera. A plurality of cameras are arranged around the cylinder to be inspected; the cameras capture images of the cylinder from different angles; the images of the cylinder captured by the cameras at the same moment are combined into a single planar image; and defect detection is performed on the combined planar image. The method offers fully automated machine inspection without manual intervention, high detection coverage, low cost and high detection efficiency.

Description

A method for generating a two-dimensional image of a cylinder based on a multi-view camera

Technical Field

The present invention relates to the technical field of image processing, and in particular to a method for generating a two-dimensional image of a cylinder based on a multi-view camera.

Background Art

In daily life, many everyday items are cylindrical, such as candles, wine bottles and beverage bottles. During production, the finished products usually need to be inspected to check whether the pattern on the surface is damaged or printed incorrectly. Products such as candles and wine bottles carry labels on the outside, and such products have strict requirements on their labels and patterns. Damage to the label or the outer surface, or a printing error in the pattern, directly affects the user's perception of the product and calls its quality into question. Products therefore need to be inspected rigorously before leaving the factory.

The conventional inspection method is manual inspection or random sampling, and some products even leave the factory without any appearance inspection. Each product is inspected visually to check whether it matches the design pattern or a design sample. This approach suffers from low efficiency, missed detections and high labor intensity, so an intelligent inspection system is needed to solve these problems.

Computer vision uses computers and related equipment to simulate biological vision: images or videos captured by cameras are processed to obtain two-dimensional and three-dimensional information about the scene. In modern smart factories, cameras are often used to capture images of cylindrical products, and the captured images are compared with standard images by image processing to check whether the products have quality problems.

Chinese patent application 201910859955.5 discloses a machine-vision-based device for detecting surface defects on cylindrical metal parts, comprising a base with a discharge channel, a conveying device and a feeding channel mounted on it in sequence and connected to one another; the conveying device includes a fixing frame; the input end of the transmission is connected to the output shaft of a servo motor fixed on the base; and a camera is mounted on the base beside the side-monitoring station. That patent only discloses the structure of the inspection device and does not mention any algorithm or method for detecting product defects through image processing.

None of the above prior-art solutions can intelligently inspect the shape, surface defects and packaging defects of cylindrical products through image processing.

Summary of the Invention

The purpose of the present invention is to provide a method for generating a two-dimensional image of a cylinder based on a multi-view camera, which detects the appearance of the cylinder by capturing images of the cylinder, processing them and comparing the resulting image with a reference.

The technical solution of the present invention for solving the above technical problems is as follows.

A method for generating a two-dimensional image of a cylinder based on a multi-view camera comprises the following steps:

Step 1: arrange a plurality of cameras around the cylinder to be inspected;

Step 2: the cameras capture images of the cylinder to be inspected from different angles;

Step 3: combine the images of the cylinder captured by the cameras at the same moment into a single planar image;

Step 4: perform defect detection on the combined planar image.

In step 1, the fields of view of two adjacent cameras overlap.

In step 1, after the cameras are set up, the internal parameters of each camera and the external parameters between the cameras are obtained.

In step 3, the process of combining multiple images into one planar image comprises the following steps:

Step 301: process the images to obtain the spatial point cloud of the overlapping regions of images captured by adjacent cameras;

Step 302: through spatial coordinate transformation, compute the pose of the standard point cloud and the pose of the measured point cloud;

Step 303: flatten the image to obtain the planar image of the unrolled cylinder.

In step 301, the spatial point cloud of the overlapping region is obtained as follows:

Step 3011: segment the two images captured by two adjacent cameras into regions;

Step 3012: identify a number of feature points in each of the two segmented images;

Step 3013: match the feature points of the two images against each other;

Step 3014: screen each matched point pair and remove mismatched pairs;

Step 3015: using the successfully matched point pairs, compute the position coordinates of the corresponding spatial points on the cylinder to be inspected.

In step 302, computing the pose of the standard point cloud and the pose of the measured point cloud comprises the following steps:

Step 3021: apply principal component analysis to the point cloud and compute the measurement-space coordinate system;

Step 3022: convert points in the measurement space into points in the design-space coordinate system.

In step 303, flattening the image comprises the following steps:

Step 3031: compute the size of the unrolled image and convert its physical size into pixels;

Step 3032: convert the pixel positions of the unrolled image into position coordinates in the design space;

Step 3033: project the points corresponding to the design-space coordinates onto the camera imaging plane.

In step 4, the defect detection process comprises the following steps:

Step 401: perform template matching between the generated image and the standard image to find the correspondence;

Step 402: based on the correspondence, perform block-by-block comparison between the generated image and the standard image.

Alternatively, in step 4, the defect detection process comprises the following steps:

Step 401': collect a large number of images of qualified products and build a training set;

Step 402': train a deep learning network on the training set;

Step 403': use the trained network to perform defect detection on the combined planar image.

The beneficial effects of the solution of the present invention are as follows.

The present invention relates to a method for generating a two-dimensional image of a cylinder based on a multi-view camera. A plurality of cameras are arranged around the cylinder to be inspected; the cameras capture images of the cylinder from different angles; the images captured at the same moment are combined into one planar image; and defect detection is performed on the combined planar image.

The method of the present invention has the following technical features.

1. Using image-vision algorithms and image comparison, the method quickly confirms whether a defect exists and, if one does, accurately locates it; the detection efficiency is high.

2. The method uses multiple cameras whose adjacent fields of view overlap, so full-coverage detection is achieved; the detection coverage is greatly improved, the cost is low, and the detection efficiency is far higher than manual inspection.

3. The method uses a computer vision pipeline throughout, without manual intervention.

4. The solution places low requirements on the hardware and no special requirements on the camera characteristics, so the implementation cost is low.

The multi-view-camera-based method for generating a two-dimensional image of a cylinder according to the present invention has the advantages of fully automated machine inspection throughout, no manual intervention, high detection coverage, low cost and high detection efficiency.

Brief Description of the Drawings

FIG. 1 is a schematic diagram of the camera arrangement of the present invention.

FIG. 2 is a flow chart of the camera calibration of the present invention.

FIG. 3 is a flow chart of the image synthesis of the present invention.

FIG. 4 is a flow chart of point cloud acquisition for image synthesis according to the present invention.

FIG. 5 is a flow chart of pose calculation for image synthesis according to the present invention.

FIG. 6 is a flow chart of the image flattening process of the image synthesis of the present invention.

FIG. 7 is a schematic diagram of the triangulation principle in the point cloud acquisition process of the present invention.

FIG. 8 is a schematic diagram of unrolling the cylinder into a plane in the image flattening process of the present invention.

FIG. 9 is a schematic diagram of the correspondence between coordinate positions on the cylinder and coordinate positions in the planar image in the image flattening process of the present invention.

FIG. 10 is a schematic diagram of the conversion between the spatial coordinates of the cylinder and the planar image coordinates in the image flattening process of the present invention.

In the accompanying drawings, the components represented by the reference numerals are listed as follows:

Detailed Description of the Embodiments

The principles and features of the present invention are described below with reference to the accompanying drawings; the examples given are only intended to explain the present invention, not to limit its scope.

As shown in FIGS. 1-10, a method for generating a two-dimensional image of a cylinder based on a multi-view camera according to the present invention comprises the following four steps, steps 1-4.

Step 1: arrange a plurality of cameras around the cylinder to be inspected;

Step 2: the cameras capture images of the cylinder to be inspected from different angles;

Step 3: combine the images of the cylinder captured by the cameras at the same moment into a single planar image;

Step 4: perform defect detection on the combined planar image.

In step 1, the fields of view of two adjacent cameras overlap.

In the present invention, ordinary fixed-focus cameras can be used; there is no need for high-frame-rate, high-resolution industrial cameras. The fields of view of adjacent cameras must overlap, with an overlap ratio of 20%-40%; the optimal overlap ratio is 30%. In practice, the higher the overlap ratio, the better the detection result, but the more computing resources are required. The lower the overlap ratio, the faster the computation, but detection failures become more likely: although the camera fields of view overlap, it cannot be guaranteed that the same part of the side of the candle is captured simultaneously. An overlap ratio of 30% guarantees the detection success rate while keeping the computational complexity relatively low. FIG. 1 shows an example with three cameras, but the method is not limited to three cameras; any number of cameras may be arranged as long as the required overlap between fields of view is satisfied and the full circumference is covered. When the invention is deployed on a production line, all cameras must capture images at the same moment; a control processor triggers all cameras synchronously so that the images are acquired simultaneously.

In step 1, after the cameras are set up, the internal parameters of each camera and the external parameters between the cameras are obtained.

After the cameras are installed, they must first be adjusted. During adjustment, the internal parameters of each camera and the external parameters between the cameras need to be obtained. The internal parameters of a camera include its focal length, optical center, and radial and tangential distortion parameters. The external parameters between cameras are the rotation and translation between them.

After the cameras have been adjusted, they are calibrated. The camera calibration follows Zhang Zhengyou's calibration algorithm, and proceeds as follows.

First, prepare a checkerboard: the number of squares is odd and the board is sufficiently flat; a high-precision checkerboard may be used. Then calibrate each camera individually, making sure the checkerboard is captured at a different angle in each shot to improve the calibration accuracy; at least 20 valid images are captured. When the cameras are fixed on the production line, a candle to be inspected is placed at a suitable position so that the camera angles can be set quickly. The checkerboard is then captured in the overlapping regions of the cameras; to improve the final calibration accuracy, the checkerboard should be captured in its entirety whenever possible. Finally, for the multi-camera calibration, the cameras are first calibrated pairwise, and the result is then refined globally with an optimization algorithm such as Bundle Adjustment or Levenberg-Marquardt to improve the overall calibration accuracy.
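
As an illustration of this calibration step, the sketch below uses OpenCV's implementation of Zhang's method to calibrate a single camera from checkerboard images. The board geometry (9x6 inner corners, 20 mm squares) and the file pattern are assumptions for illustration, not values taken from the patent.

```python
import glob
import cv2
import numpy as np

# Assumed checkerboard geometry: 9x6 inner corners, 20 mm squares (illustrative values).
BOARD_SIZE = (9, 6)
SQUARE_MM = 20.0

# 3D coordinates of the inner corners in the checkerboard's own plane (z = 0).
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2) * SQUARE_MM

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib/cam0_*.png"):  # assumed file layout
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Intrinsics K and distortion coefficients (radial and tangential), as described above.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS:", rms)
```

The pairwise external parameters between adjacent cameras can then be estimated, for example with cv2.stereoCalibrate on checkerboard views captured in their overlapping region, before the global refinement mentioned above.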

In step 3, the process of combining multiple images into one planar image comprises the following steps:

Step 301: process the images to obtain the spatial point cloud of the overlapping regions of images captured by adjacent cameras;

Step 302: through spatial coordinate transformation, compute the pose of the standard point cloud and the pose of the measured point cloud;

Step 303: flatten the image to obtain the planar image of the unrolled cylinder.

During image synthesis, the camera arrangement on the production line already guarantees overlapping regions between cameras. First, the discrete point cloud of the overlapping region is computed using the basic theory of multi-view geometry from computer vision. Second, the spatial point cloud of the non-overlapping regions is estimated: in the first sub-step, the standard three-dimensional design model of the electronic candle is coarsely registered to the measured point cloud of the overlapping region; in the second sub-step, the ICP (Iterative Closest Point) algorithm is used for global fine adjustment. When computing the combined planar image of the unrolled cylinder, a back-projection algorithm is used, combined with bilinear interpolation, to obtain the final pixel values.

In step 301, the spatial point cloud of the overlapping region is obtained as follows:

Step 3011: segment the two images captured by two adjacent cameras into regions;

Step 3012: identify a number of feature points in each of the two segmented images;

Step 3013: match the feature points of the two images against each other;

Step 3014: screen each matched point pair and remove mismatched pairs;

Step 3015: using the successfully matched point pairs, compute the position coordinates of the corresponding spatial points on the cylinder to be inspected.

As shown in FIG. 4, the detailed process of point cloud acquisition is as follows.

1. First, segment the valid region. In the production-line setup there is a large color difference between the surrounding environment and the measured object, so the color values are used for fast segmentation.

2. Feature point detection. SIFT (Scale-Invariant Feature Transform) is usually used for feature point detection. SIFT is an algorithm used in the image processing field; it is scale-invariant and detects key feature points in an image.

3. Feature point matching. Brute-force matching can be used, or a kd-tree can be built and combined with KNN for fast matching.
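
The sketch below illustrates steps 2 and 3 with OpenCV's SIFT detector and a FLANN kd-tree matcher using a KNN search with Lowe's ratio test; the ratio threshold of 0.75 is an assumption, not a value given in the patent.

```python
import cv2

def match_features(img_a, img_b, ratio=0.75):
    """Detect SIFT keypoints in two overlapping views and match them with a kd-tree/KNN matcher."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    # FLANN with a kd-tree index; query the 2 nearest neighbours of every descriptor.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    knn = flann.knnMatch(des_a, des_b, k=2)

    # Lowe's ratio test keeps only clearly distinctive matches.
    good = [m for m, n in knn if m.distance < ratio * n.distance]
    pts_a = [kp_a[m.queryIdx].pt for m in good]
    pts_b = [kp_b[m.trainIdx].pt for m in good]
    return pts_a, pts_b
```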

Using the camera calibration results, the fundamental matrix F between two cameras (hereinafter F) can be computed as follows:

$$F = K'^{-T}\,[t]_{\times}\,R\,K^{-1} \qquad (1)$$

In formula (1), K and K' are the internal parameter matrices of the two adjacent cameras:

$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$

where f_x and f_y are the two focal lengths of the first camera, and c_x and c_y are its optical center, i.e. the offset of the camera's optical axis in the image coordinate system;

$$K' = \begin{bmatrix} f'_x & 0 & c'_x \\ 0 & f'_y & c'_y \\ 0 & 0 & 1 \end{bmatrix}$$

where f'_x and f'_y are the two focal lengths of the second camera, and c'_x and c'_y are its optical center, i.e. the offset of that camera's optical axis in the image coordinate system.

The distortion parameters are not part of this formula; in practice the captured images are first undistorted and then processed further. K in formula (1) is the same internal parameter matrix as in formula (9). R is the rotation matrix between the cameras; the vector t is the translation between the cameras; [t]x denotes the operator that turns the vector t into its antisymmetric (cross-product) matrix. These quantities can be computed in advance once the assembly calibration is finished and need not be recomputed for every inspection.
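
A minimal sketch of formula (1), assuming the calibration has already produced the intrinsic matrices K1 and K2 of the two cameras and the relative rotation R and translation t between them:

```python
import numpy as np

def skew(t):
    """Antisymmetric (cross-product) matrix [t]x of a 3-vector t."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def fundamental_from_calibration(K1, K2, R, t):
    """Formula (1): F = K2^{-T} [t]x R K1^{-1} for a calibrated camera pair."""
    F = np.linalg.inv(K2).T @ skew(t) @ R @ np.linalg.inv(K1)
    return F / np.linalg.norm(F)  # the scale of F is arbitrary; normalise for stability
```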

4. Screen the matched points and remove mismatches. Directly applying a generic feature point extraction and matching algorithm usually produces many wrongly matched point pairs, which severely disturb the subsequent point cloud computation. Using the spatial constraint between the cameras, such interfering pairs can be excluded: the epipolar distance d is computed and, when it exceeds a set threshold, the pair is discarded.

The epipolar distance is computed by formula (2):

$$d = \frac{\left|\,x'^{\top} F x\,\right|}{\sqrt{(Fx)_1^{2} + (Fx)_2^{2}}} \qquad (2)$$

Here x and x' are a matched point pair; F is the fundamental matrix (Fundamental Matrix in the drawings, abbreviated F-Matrix); and (Fx)_1 and (Fx)_2 are the first two components of the vector Fx.
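
A sketch of this screening step under the same assumptions; the threshold of 1.5 pixels is illustrative, not a value from the patent.

```python
import numpy as np

def epipolar_filter(pts_a, pts_b, F, threshold=1.5):
    """Keep matches whose epipolar distance (formula (2)) is below a threshold."""
    kept = []
    for (xa, ya), (xb, yb) in zip(pts_a, pts_b):
        x = np.array([xa, ya, 1.0])    # homogeneous point in the first image
        xp = np.array([xb, yb, 1.0])   # homogeneous point in the second image
        Fx = F @ x                     # epipolar line of x in the second image
        d = abs(xp @ Fx) / np.hypot(Fx[0], Fx[1])
        if d < threshold:
            kept.append(((xa, ya), (xb, yb)))
    return kept
```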

5. Using the matched point pairs and the known spatial geometric relationship between the cameras, the spatial position of each matched pair can be computed by the triangulation principle. FIG. 7 is a schematic diagram of the triangulation principle: the two parallelograms represent the imaging planes of the two cameras, C and C' are the optical centers of the cameras, x and x' are a matched point pair, and point P is the spatial point to be computed.
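
A triangulation sketch using OpenCV, assuming the first camera is the reference frame and (R, t) map points from the first to the second camera frame:

```python
import cv2
import numpy as np

def triangulate_pairs(K1, K2, R, t, pts_a, pts_b):
    """Triangulate matched pixel pairs into 3D points expressed in the first camera's frame."""
    P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])   # projection matrix of camera 1
    P2 = K2 @ np.hstack([R, t.reshape(3, 1)])            # projection matrix of camera 2
    a = np.asarray(pts_a, dtype=np.float64).T            # 2 x N pixel coordinates
    b = np.asarray(pts_b, dtype=np.float64).T
    X_h = cv2.triangulatePoints(P1, P2, a, b)            # 4 x N homogeneous points
    return (X_h[:3] / X_h[3]).T                          # N x 3 point cloud of the overlap region
```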

In step 302, computing the pose of the standard point cloud and the pose of the measured point cloud comprises the following steps:

Step 3021: apply principal component analysis to the point cloud and compute the measurement-space coordinate system;

Step 3022: convert points in the measurement space into points in the design-space coordinate system.

As shown in FIG. 5, the specific pose calculation proceeds as follows.

1. Principal component analysis (PCA) is applied to the computed point cloud. Since the data are three-dimensional, the result yields three mutually perpendicular directions, which define the measurement-space coordinate system; its coordinate axes can be represented by a rotation matrix R_c and its coordinate origin by t_c.
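
A minimal sketch of this PCA step with NumPy, where points is assumed to be the N x 3 point cloud obtained in step 301:

```python
import numpy as np

def measurement_frame(points):
    """PCA of an N x 3 point cloud: returns the rotation R_c (axes as columns) and origin t_c."""
    t_c = points.mean(axis=0)                     # origin: centroid of the cloud
    centered = points - t_c
    cov = centered.T @ centered / len(points)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    R_c = eigvecs[:, ::-1]                        # principal directions, strongest first
    if np.linalg.det(R_c) < 0:                    # keep a right-handed frame
        R_c[:, -1] *= -1
    return R_c, t_c
```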

2. At product design time there is a user-defined design-space coordinate system; its coordinate axes are represented by a rotation matrix R_s and its coordinate origin by t_s.

Furthermore, if a point in the product design space is denoted x_s, its coordinate x_c in the measurement-space coordinate system can be expressed by formula (3):

$$x_c = R_c^{-1}\left(R_s\,x_s + t_s - t_c\right) \qquad (3)$$

Rearranging formula (3), the conversion relationship between the two coordinate systems can be expressed by formulas (4) and (5):

$$R_{sc} = R_c^{-1} R_s \qquad (4)$$

$$t_{sc} = R_c^{-1}\left(t_s - t_c\right) \qquad (5)$$

According to formula (6) below, the standard product design model can be converted into the measurement space:

$$x_c = R_{sc}\,x_s + t_{sc} \qquad (6)$$

Here x_s is a point of the model in the design space, and x_c is that point converted into the measurement space.

The coordinate-system conversion between the measurement space and the design space obtained by formula (3) above is only coarse, and further fine adjustment is needed in practice. The fine adjustment generally uses the Iterative Closest Point (ICP) algorithm: the design-space point cloud is aligned to the point cloud acquired in the measurement space, which yields a more accurate transformation and improves the accuracy of the computation.
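
A compact point-to-point ICP sketch of this fine adjustment, using NumPy and SciPy; the iteration count and convergence tolerance are illustrative assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(source, target, R0, t0, iterations=30, tol=1e-6):
    """Refine a coarse alignment (R0, t0) of `source` onto `target` by point-to-point ICP."""
    R, t = R0.copy(), t0.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iterations):
        moved = source @ R.T + t
        dist, idx = tree.query(moved)            # closest target point for every source point
        matched = target[idx]

        # Best rigid transform for the current correspondences (Kabsch / SVD).
        mu_s, mu_t = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        dR = Vt.T @ D @ U.T
        dt = mu_t - dR @ mu_s

        R, t = dR @ R, dR @ t + dt               # accumulate the incremental update
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R, t
```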

In step 303, flattening the image comprises the following steps:

Step 3031: compute the size of the unrolled image and convert its physical size into pixels;

Step 3032: convert the pixel positions of the unrolled image into position coordinates in the design space;

Step 3033: project the points corresponding to the design-space coordinates onto the camera imaging plane.

The image flattening process is shown in FIG. 6; the specific steps are as follows.

1. As shown in FIG. 8, the size of the unrolled image is computed first, which requires converting the physical size into pixels; FIG. 8 is a schematic diagram of the cylinder unrolled into a plane. The cylinder radius R and height H are both fixed at the design stage.

Assuming the width of the final image is width pixels, the number of height pixels, height, is computed by formula (7):

$$height = \frac{H}{2\pi R}\;width \qquad (7)$$

2. As shown in FIG. 9, the pixel positions of the unrolled image are converted into spatial coordinate positions.

Assuming an arbitrary point P of the unrolled image has pixel position (w, h), the corresponding coordinate point Q in the spatial coordinate system is computed by formula (8):

$$\alpha = \frac{2\pi\,w}{width}$$

$$x = R\cos(\alpha)$$

$$y = R\sin(\alpha)$$

$$z = \frac{h}{height}\,H \qquad (8)$$

In formula (8), width and height are the dimensions of the unrolled image, in pixels; H is the designed physical height of the cylinder and R its designed radius, in physical length units; (w, h) is the position of point P in the unrolled image, in pixels; Q is the point on the cylindrical surface corresponding to P, and its coordinates (x, y, z) obtained from the above computation are in physical length units; α is the angle shown in FIG. 9, namely the angle between the x-axis and the projection onto the xy plane of the line from the origin of the XYZ coordinate system of FIG. 9 to the point Q.
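
A sketch of formulas (7) and (8): choosing the unrolled-image size and mapping a pixel (w, h) of the unrolled image onto the cylinder surface. The caller chooses the image width; everything else follows from R and H.

```python
import numpy as np

def unrolled_image_size(width_px, R, H):
    """Formula (7): pick the image height so that a pixel covers the same physical length
    along the circumference (2*pi*R -> width_px) as along the axis (H -> height_px)."""
    height_px = int(round(width_px * H / (2.0 * np.pi * R)))
    return width_px, height_px

def pixel_to_cylinder(w, h, width_px, height_px, R, H):
    """Formula (8): convert pixel (w, h) of the unrolled image to the point Q = (x, y, z)
    on the cylindrical surface, in the same physical units as R and H."""
    alpha = 2.0 * np.pi * w / width_px    # angle around the cylinder axis
    x = R * np.cos(alpha)
    y = R * np.sin(alpha)
    z = H * h / height_px                 # position along the cylinder axis
    return np.array([x, y, z])
```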

3. FIG. 10 is a schematic diagram of projecting the spatial point cloud onto the camera plane.

In FIG. 10, a point Q on the cylindrical surface is projected to the point M on the camera imaging plane according to formula (9):

$$\lambda\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = K\left(R\begin{bmatrix} x \\ y \\ z \end{bmatrix} + t\right) \qquad (9)$$

In formula (9), K is the internal calibration parameter matrix of the camera, the same as in formula (1). The distortion parameters are not included in K: before entering the algorithm, the images are first undistorted. The distortion handling is an implicit step; with an ordinary camera lens the distortion is small and may be left untreated, but with a wide-angle lens the distortion is severe and must be corrected. This is adjusted flexibly according to the actual situation.

R and t are the rotation matrix and the translation vector, i.e. the external calibration parameters of the camera, which are fixed once the camera has been configured; u and v are the coordinates of the point M; x, y and z are the coordinates of the point Q; λ is a scale factor that does not affect the final result.
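
A sketch of formula (9), projecting a cylinder point Q into a camera with intrinsics K and extrinsics (R, t); the in-image check of the next step is included for convenience.

```python
import numpy as np

def project_to_camera(Q, K, R, t, image_size):
    """Formula (9): project a 3D point Q onto the camera image plane.
    Returns (u, v) and a flag telling whether the projection lies inside the image."""
    p_cam = R @ Q + t                         # point in the camera coordinate system
    if p_cam[2] <= 0:                         # behind the camera: cannot be seen
        return None, False
    uvw = K @ p_cam
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]   # the scale factor lambda cancels here
    w_px, h_px = image_size
    inside = 0.0 <= u < w_px and 0.0 <= v < h_px
    return (u, v), inside
```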

4. Determine whether the projected point M lies within the camera imaging plane.

A point Q on the cylindrical surface cannot be seen by all cameras at the same time, so these spurious projections must be excluded. The exclusion is simple: the point M(u, v) obtained in the previous step must lie within the captured image area.

The projected position (u, v) on the camera imaging plane is usually a floating-point value, while camera pixels lie at discrete integer positions, so the pixel value at the projected position must be interpolated. In practice, bilinear interpolation is usually used to obtain the pixel value at the projected position.
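
A minimal bilinear-interpolation sketch for sampling the captured image at the sub-pixel position (u, v), assuming the position has already been checked to lie inside the image:

```python
import numpy as np

def bilinear_sample(image, u, v):
    """Bilinearly interpolate `image` (H x W or H x W x C) at the floating-point
    position (u, v), where u is the column and v the row coordinate."""
    x0, y0 = int(np.floor(u)), int(np.floor(v))
    x1 = min(x0 + 1, image.shape[1] - 1)
    y1 = min(y0 + 1, image.shape[0] - 1)
    du, dv = u - x0, v - y0

    top = (1.0 - du) * image[y0, x0] + du * image[y0, x1]
    bottom = (1.0 - du) * image[y1, x0] + du * image[y1, x1]
    return (1.0 - dv) * top + dv * bottom
```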

5. For pixels in the overlapping regions, weighted interpolation is performed according to the distance from the projected point M to the camera center.

6. The final computed pixel value is filled into the flattened image at the corresponding position, i.e. the position of point P in FIG. 9.

In step 4, the defect detection process comprises the following steps:

Step 401: perform template matching between the generated image and the standard image to find the correspondence;

Step 402: based on the correspondence, perform block-by-block comparison between the generated image and the standard image.

The block-by-block comparison can be computed by differencing the corresponding regions; when the difference exceeds a threshold, an anomaly is reported and a defect is deemed to exist.
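
A sketch of this comparison-based detection with OpenCV: the standard image is located inside the generated image by template matching, and the aligned region is then compared block by block. The block size of 64 pixels and the threshold of 25 grey levels are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_defects(generated, standard, block=64, threshold=25.0):
    """Align `generated` to `standard` by template matching (step 401), then flag blocks
    whose mean absolute difference exceeds `threshold` (step 402)."""
    result = cv2.matchTemplate(generated, standard, cv2.TM_CCOEFF_NORMED)
    _, _, _, (x0, y0) = cv2.minMaxLoc(result)        # best match location
    h, w = standard.shape[:2]
    aligned = generated[y0:y0 + h, x0:x0 + w]

    diff = cv2.absdiff(aligned, standard).astype(np.float32)
    defects = []
    for y in range(0, h, block):
        for x in range(0, w, block):
            if diff[y:y + block, x:x + block].mean() > threshold:
                defects.append((x, y))               # top-left corners of defective blocks
    return defects
```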

Alternatively, in step 4, the defect detection process comprises the following steps:

Step 401': collect a large number of images of qualified products and build a training set;

Step 402': train a deep learning network on the training set;

Step 403': use the trained network to perform defect detection on the combined planar image.

When a deep learning network is used for defect detection, a large amount of image data must first be collected and annotated; the deep learning network is then trained; finally, the trained network is used for defect detection.
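
The patent does not name a particular network, so the following PyTorch sketch shows one plausible realization: a small convolutional autoencoder trained only on images of qualified products, with a defect flagged when the reconstruction error exceeds a threshold. The architecture, the assumed 256x256 grayscale input and the error threshold are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Small convolutional autoencoder trained on defect-free images only."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),   # 256 -> 128
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())  # 64 -> 32
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, loader, epochs=20, lr=1e-3):
    """Step 402': minimise the reconstruction error on qualified-product images."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for batch in loader:                  # batch: (N, 1, 256, 256) tensors scaled to [0, 1]
            opt.zero_grad()
            loss = loss_fn(model(batch), batch)
            loss.backward()
            opt.step()

def is_defective(model, image, threshold=0.01):
    """Step 403': a high reconstruction error on the synthesized planar image suggests a defect."""
    with torch.no_grad():
        err = ((model(image) - image) ** 2).mean().item()
    return err > threshold
```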

In the method of the present invention, a plurality of cameras are arranged around the cylinder to be inspected; the cameras capture images of the cylinder from different angles; the images of the cylinder captured by the cameras at the same moment are combined into one planar image; and defect detection is performed on the combined planar image.

The method of the present invention has the following technical features.

1. Using image-vision algorithms, the images captured by the multiple cameras are stitched and integrated into a single planar image.

2. The planar image can be directly template-matched against the original design drawing to quickly confirm whether a defect exists and, if one does, to accurately locate it.

3. The method uses multi-camera detection with overlapping regions between adjacent cameras, which greatly improves the detection coverage; the cost is low and the detection efficiency is far higher than manual inspection.

4. The method uses a computer vision pipeline throughout, without manual intervention.

5. The solution places low requirements on the hardware and no special requirements on the camera characteristics, so the implementation cost is low.

The multi-view-camera-based method for generating a two-dimensional image of a cylinder according to the present invention has the advantages of fully automated machine inspection throughout, no manual intervention, high detection coverage, low cost and high detection efficiency.

It will be apparent to those skilled in the art that the present invention is not limited to the details of the above exemplary embodiments and that it may be embodied in other specific forms without departing from its spirit or essential characteristics. The embodiments are therefore to be regarded in all respects as illustrative and not restrictive; the scope of the invention is defined by the appended claims rather than by the foregoing description, and all changes falling within the meaning and range of equivalence of the claims are intended to be embraced by the invention. Any reference signs in the claims shall not be construed as limiting the claims involved.

In addition, it should be understood that, although this specification is described in terms of embodiments, not every embodiment contains only one independent technical solution. This manner of description is adopted only for the sake of clarity; the specification should be taken as a whole, and the technical solutions of the individual embodiments may be combined appropriately to form other implementations that can be understood by those skilled in the art.

Claims (9)

1. A method for generating a two-dimensional image of a cylinder based on a multi-view camera, characterized by comprising the following steps:
Step 1: arrange a plurality of cameras around the cylinder to be inspected;
Step 2: the cameras capture images of the cylinder to be inspected from different angles;
Step 3: combine the images of the cylinder captured by the cameras at the same moment into a single planar image;
Step 4: perform defect detection on the combined planar image.

2. The method for generating a two-dimensional image of a cylinder based on a multi-view camera according to claim 1, characterized in that in step 1, the fields of view of two adjacent cameras overlap.

3. The method according to claim 1, characterized in that in step 1, after the cameras are set up, the internal parameters of each camera and the external parameters between the cameras are obtained.

4. The method according to claim 1, characterized in that in step 3, combining multiple images into one planar image comprises the following steps:
Step 301: process the images to obtain the spatial point cloud of the overlapping regions of images captured by adjacent cameras;
Step 302: through spatial coordinate transformation, compute the pose of the standard point cloud and the pose of the measured point cloud;
Step 303: flatten the image to obtain the planar image of the unrolled cylinder.

5. The method according to claim 4, characterized in that in step 301, the spatial point cloud of the overlapping region is obtained as follows:
Step 3011: segment the two images captured by two adjacent cameras into regions;
Step 3012: identify a number of feature points in each of the two segmented images;
Step 3013: match the feature points of the two images against each other;
Step 3014: screen each matched point pair and remove mismatched pairs;
Step 3015: using the successfully matched point pairs, compute the position coordinates of the corresponding spatial points on the cylinder to be inspected.

6. The method according to claim 4, characterized in that in step 302, computing the pose of the standard point cloud and the pose of the measured point cloud comprises the following steps:
Step 3021: apply principal component analysis to the point cloud and compute the measurement-space coordinate system;
Step 3022: convert points in the measurement space into points in the design-space coordinate system.

7. The method according to claim 4, characterized in that in step 303, flattening the image comprises the following steps:
Step 3031: compute the size of the unrolled image and convert its physical size into pixels;
Step 3032: convert the pixel positions of the unrolled image into position coordinates in the design space;
Step 3033: project the points corresponding to the design-space coordinates onto the camera imaging plane.

8. The method according to claim 1, characterized in that in step 4, the defect detection process comprises the following steps:
Step 401: perform template matching between the generated image and the standard image to find the correspondence;
Step 402: based on the correspondence, perform block-by-block comparison between the generated image and the standard image.

9. The method according to claim 1, characterized in that in step 4, the defect detection process comprises the following steps:
Step 401': collect a large number of images of qualified products and build a training set;
Step 402': train a deep learning network on the training set;
Step 403': use the trained network to perform defect detection on the combined planar image.
CN202010794749.3A 2020-08-10 2020-08-10 Cylindrical two-dimensional image generation method based on multi-view camera Active CN112037178B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010794749.3A CN112037178B (en) 2020-08-10 2020-08-10 Cylindrical two-dimensional image generation method based on multi-view camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010794749.3A CN112037178B (en) 2020-08-10 2020-08-10 Cylindrical two-dimensional image generation method based on multi-view camera

Publications (2)

Publication Number Publication Date
CN112037178A true CN112037178A (en) 2020-12-04
CN112037178B CN112037178B (en) 2024-10-29

Family

ID=73576830

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010794749.3A Active CN112037178B (en) 2020-08-10 2020-08-10 Cylindrical two-dimensional image generation method based on multi-view camera

Country Status (1)

Country Link
CN (1) CN112037178B (en)


Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN201689060U (en) * 2010-05-19 2010-12-29 山东明佳包装检测科技有限公司 Device used for detecting quality of container trademark
CN102183524A (en) * 2011-01-10 2011-09-14 哈尔滨工业大学 Double-CCD (Charge Coupled Device) detecting method and system for apparent defect assessment of civil engineering structure
CN105358966A (en) * 2013-05-23 2016-02-24 材料开发中心股份公司 Method for the surface inspection of long products and apparatus suitable for carrying out such a method
CN104422700A (en) * 2013-09-03 2015-03-18 西安奇维科技股份有限公司 Beverage can defect detection system and method based on spatial expansion
KR20150128300A (en) * 2014-05-09 2015-11-18 한국건설기술연구원 method of making three dimension model and defect analysis using camera and laser scanning
CN104574339A (en) * 2015-02-09 2015-04-29 上海安威士科技股份有限公司 Multi-scale cylindrical projection panorama image generating method for video monitoring
CN105046746A (en) * 2015-08-05 2015-11-11 西安新拓三维光测科技有限公司 Digital-speckle three-dimensional quick scanning method of human body
CN105809626A (en) * 2016-03-08 2016-07-27 长春理工大学 Self-adaption light compensation video image splicing method
US20170337672A1 (en) * 2016-05-20 2017-11-23 Shenyang Neusoft Medical Systems Co., Ltd. Image splicing
CN105979241A (en) * 2016-06-29 2016-09-28 深圳市优象计算技术有限公司 Cylinder three-dimensional panoramic video fast inverse transformation method
FR3058816A1 (en) * 2016-11-16 2018-05-18 Safran METHOD FOR NON-DESTRUCTIVE CONTROL OF METAL PIECE
CN108072668A (en) * 2016-11-18 2018-05-25 中国科学院沈阳自动化研究所 Bullet open defect automatic recognition system based on Photoelectric Detection
US20180211373A1 (en) * 2017-01-20 2018-07-26 Aquifi, Inc. Systems and methods for defect detection
CN106952257A (en) * 2017-03-21 2017-07-14 南京大学 A Method of Surface Label Broken Defect Detection Based on Template Matching and Similarity Calculation
US20180322623A1 (en) * 2017-05-08 2018-11-08 Aquifi, Inc. Systems and methods for inspection and defect detection using 3-d scanning
KR101915729B1 (en) * 2017-06-20 2018-11-06 주식회사 아이닉스 Apparatus and Method for Generating 360 degree omni-directional view
CN107764839A (en) * 2017-10-12 2018-03-06 中南大学 A kind of steel wire rope surface defect online test method and device based on machine vision
CN109345620A (en) * 2018-08-13 2019-02-15 浙江大学 Improved ICP point cloud splicing method of object to be measured by fusing fast point feature histogram
CN108921848A (en) * 2018-09-29 2018-11-30 长安大学 Bridge Defect Detecting device and detection image joining method based on more mesh cameras
US20200160505A1 (en) * 2018-11-20 2020-05-21 Bnsf Railway Company Systems and methods for determining defects in physical objects
CN111353997A (en) * 2019-04-11 2020-06-30 南京理工大学 Real-time three-dimensional surface defect detection method based on fringe projection
CN110084754A (en) * 2019-06-25 2019-08-02 江苏德劭信息科技有限公司 A kind of image superimposing method based on improvement SIFT feature point matching algorithm
CN111080627A (en) * 2019-12-20 2020-04-28 南京航空航天大学 2D +3D large airplane appearance defect detection and analysis method based on deep learning
CN111179173A (en) * 2019-12-26 2020-05-19 福州大学 An Image Mosaic Method Based on Discrete Wavelet Transform and Slope Fusion Algorithm
CN111192198A (en) * 2019-12-26 2020-05-22 台州学院 Pipeline panoramic scanning method based on pipeline robot
CN111340754A (en) * 2020-01-18 2020-06-26 中国人民解放军国防科技大学 A method for detection and classification of surface defects based on aircraft skin

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
殷润民; 李伯虎; 柴旭东: "Adaptive cylindrical panorama stitching" (自适应柱状全景图拼接), Journal of Image and Graphics (中国图象图形学报), no. 06, 15 June 2008 *
胡郁; 吴丽萍: "Panoramic image generation algorithm based on feature extraction for virtual reality" (基于特征提取的虚拟现实中全景图像生成算法), Science Technology and Engineering (科学技术与工程), no. 32, 18 November 2017 *

Also Published As

Publication number Publication date
CN112037178B (en) 2024-10-29

Similar Documents

Publication Publication Date Title
CN110276808B (en) Method for measuring unevenness of glass plate by combining single camera with two-dimensional code
US8600192B2 (en) System and method for finding correspondence between cameras in a three-dimensional vision system
US11488322B2 (en) System and method for training a model in a plurality of non-perspective cameras and determining 3D pose of an object at runtime with the same
CN104537707B (en) Image space type stereoscopic vision moves real-time measurement system online
CN111667536A (en) Parameter calibration method based on zoom camera depth estimation
CN114029946A (en) Method, device and equipment for guiding robot to position and grab based on 3D grating
WO2017092631A1 (en) Image distortion correction method for fisheye image, and calibration method for fisheye camera
CN106643555B (en) Connector recognition methods based on structured light three-dimensional measurement system
CN101354796B (en) Omnidirectional stereo vision three-dimensional rebuilding method based on Taylor series model
CN106500625B (en) A kind of telecentricity stereo vision measurement method
CN116802688A (en) Apparatus and method for correspondence analysis within an image
US11882262B2 (en) System and method for stereoscopic image analysis
CN114299156A (en) Method for calibrating and unifying coordinates of multiple cameras in non-overlapping area
CN113706635B (en) Long-focus camera calibration method based on point feature and line feature fusion
CN106952262A (en) A Method of Analyzing Ship Plate Machining Accuracy Based on Stereo Vision
CN114820563B (en) A method and system for industrial component size estimation based on multi-viewpoint stereo vision
CN111462246A (en) Equipment calibration method of structured light measurement system
CN119516362A (en) Ship target recognition method in zoom binocular system images based on homography matrix mapping
JP7230722B2 (en) Image processing device and image processing method
CN118729988A (en) Precision measurement method of workpiece based on binocular structured light
CN117830435A (en) Multi-camera system calibration method based on 3D reconstruction
Georgiev et al. A fast and accurate re-calibration technique for misaligned stereo cameras
CN112037178B (en) Cylindrical two-dimensional image generation method based on multi-view camera
CN117437393A (en) Active alignment algorithm for micro LED chip
CN117333367A (en) Image stitching method, system, medium and device based on image local features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant