
CN104537707A - Image space type stereo vision on-line movement real-time measurement system - Google Patents

Image space type stereo vision on-line movement real-time measurement system

Info

Publication number
CN104537707A
CN104537707A, CN104537707B (application CN201410745020.1A)
Authority
CN
China
Prior art keywords: stereo, image, images, camera, points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410745020.1A
Other languages
Chinese (zh)
Other versions
CN104537707B (en)
Inventor
邢帅
王栋
徐青
葛忠孝
李鹏程
耿迅
张军军
侯晓芬
周杨
夏琴
江腾达
李建胜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
PLA Information Engineering University
Original Assignee
PLA Information Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by PLA Information Engineering University filed Critical PLA Information Engineering University
Priority claimed from CN201410745020.1A (granted as CN104537707B)
Publication of CN104537707A
Application granted
Publication of CN104537707B
Legal status: Expired - Fee Related
Anticipated expiration


Landscapes

  • Image Processing (AREA)

Abstract

The invention relates to an image-space stereo vision online mobile real-time measurement system. First, camera calibration is performed; as the stereo camera moves, several groups of stereo images are acquired, followed by image preprocessing, feature extraction and stereo matching, three-dimensional reconstruction, and stereo image model connection. For the stereo image of any moment, the same-name (corresponding) image points are found in the stereo image of the adjacent moment and used as tie points between the two image groups; forward intersection then yields the same-name model points of the two stereo models, and a spatial similarity transformation brings both models into one spatial coordinate system. The stereo image of each subsequent moment is processed in the same way, so that all stereo image models are connected into one overall model of the entire scene.

Description

Image-space stereo vision online mobile real-time measurement system

Technical field

The invention relates to an image-space stereo vision online mobile real-time measurement system.

Background

The basic principle of stereo vision is to observe the same scene from two or more viewpoints, obtaining images of the object under different view angles, and to recover three-dimensional information by triangulation from the positional offset between corresponding image pixels (i.e., the parallax). Stereo vision measurement has a long history and is widely used in industrial metrology and photogrammetry.

"Robot Target Positioning Based on Stereo Vision" (Master's thesis, Nanjing University of Science and Technology) summarizes the main steps of existing stereo vision pipelines, namely:

1) Image acquisition;

2) Camera calibration;

3) Image preprocessing and feature extraction;

4) Stereo matching;

5) Three-dimensional reconstruction.

The above methods mainly have the following problems:

(1) The acquired 3-D scene information is incomplete. Some systems obtain 3-D information only for certain markers in the scene, others only for some objects in the scene; still others capture 3-D information for the whole scene but at insufficient density to express its details.

(2) The acquired 3-D scene information is not integrated. Each stereo image captured by the stereo camera can generate 3-D information for the corresponding scene, and moving the camera allows a larger space to be covered; however, each capture is processed in isolation, without considering the relationship between the 3-D information generated at different moments, so overall measurement of a large scene cannot be achieved.

(3) Real-time performance is insufficient. Among the above systems, those that obtain 3-D information only for a few markers can basically respond in real time, but systems that need fine 3-D information of the entire scene usually process it offline and cannot yet meet real-time requirements.

The image-space stereo vision online mobile real-time measurement system designed in this project optimizes stereo image processing and matching, combines continuous stereo image acquisition during camera movement with a stereo-image visual-measurement model connection algorithm, and thus achieves continuous observation of the same target while generating complete and accurate 3-D information of the entire target in real time.

Summary of the invention

The purpose of the invention is to provide an image-space stereo vision online mobile real-time measurement system that solves the prior-art problems of incomplete scene information and un-integrated 3-D scene information, and further improves real-time performance.

To achieve the above object, the scheme of the invention comprises:

An image-space stereo vision online mobile real-time measurement system comprising a stereo camera formed by two cameras together with a camera fixing and distance-adjustment device; the stereo camera is connected to a control and computing device, which controls the cameras' data acquisition and performs storage, processing and output of the results. The measurement proceeds as follows:

1) Camera calibration; 2) acquisition of several groups of stereo images as the stereo camera moves; 3) image preprocessing; 4) feature extraction and stereo matching; 5) three-dimensional reconstruction; 6) stereo image model connection. For the stereo image of any moment, the same-name image points are found in the stereo image of the adjacent moment and used as tie points between the two image groups; forward intersection then yields the same-name model points of the two stereo models, and a spatial similarity transformation brings both models into one spatial coordinate system. The stereo image of each subsequent moment is processed in the same way, so that all stereo image models are connected into one overall model of the entire scene.

The camera calibration method comprises: simultaneous acquisition of stereo images; extraction of the calibration-board corner points; and the calibration solution.

Image preprocessing comprises filtering and gray-level equalization.

Feature extraction and stereo matching comprise: obtaining tie points between stereo models with the SURF operator; computing the relative position and attitude of the two images (the relative orientation solution); determining the epipolar relationship between the stereo images and resampling them into epipolar-aligned images; and dense matching of the stereo images with the SGM algorithm to generate dense same-name image points.
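The patent names SGM for the dense-matching step. Purely as an illustration of what dense matching computes on epipolar-aligned images, the sketch below is a brute-force SSD block matcher (winner-takes-all along rows); it is a toy stand-in for SGM, not the algorithm itself, and every name in it is ours:

```python
import numpy as np

def block_match_disparity(left, right, max_d=7, win=2):
    """For each left-image pixel, scan candidate disparities along the
    same (epipolar) row of the right image and keep the lowest SSD cost.
    SGM would additionally aggregate these costs along several paths."""
    h, w = left.shape
    disp = np.zeros((h, w), dtype=int)
    for r in range(win, h - win):
        for c in range(win + max_d, w - win):
            patch_l = left[r - win:r + win + 1, c - win:c + win + 1].astype(float)
            costs = []
            for d in range(max_d + 1):
                patch_r = right[r - win:r + win + 1, c - d - win:c - d + win + 1]
                costs.append(np.sum((patch_l - patch_r) ** 2))
            disp[r, c] = int(np.argmin(costs))
    return disp

# synthetic pair: the right image is the left shifted by a known disparity of 3
rng = np.random.default_rng(0)
left = rng.integers(0, 256, size=(20, 40))
right = np.roll(left, -3, axis=1)   # left[:, x] appears at right[:, x-3]
disp = block_match_disparity(left, right)
```

On the synthetic pair the interior of the disparity map recovers the known shift of 3 pixels.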

For the three-dimensional reconstruction, the 3-D information of the target scene is reconstructed from the densely matched same-name image points by multi-image forward intersection.
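Forward intersection recovers a 3-D point from its projections in two or more views. A minimal linear (DLT least-squares) version, assuming known 3×4 projection matrices (the matrix names and test values are ours):

```python
import numpy as np

def triangulate(proj_mats, pts2d):
    """Linear forward intersection: for each view, x*P[2]-P[0] and
    y*P[2]-P[1] give two homogeneous equations in the 3-D point;
    the SVD null vector solves the stacked system in least squares."""
    rows = []
    for P, (x, y) in zip(proj_mats, pts2d):
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# synthetic check: two views with a baseline along X see the point (1, 2, 10)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([1.0, 2.0, 10.0])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_hat = triangulate([P1, P2], [project(P1, X_true), project(P2, X_true)])
```

With more than two views the same stacked system over-determines the point, which is the "multi-image" case the text refers to.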

The measurement method of the invention continuously acquires stereo images while the stereo camera moves and reconstructs the stereo model of the target scene in real time. To form a complete scene model, the results of the successive reconstructions must be connected. The principle of stereo image model connection is to obtain the same-name image points in the two groups of stereo images of adjacent moments and use them as tie points between the two image groups; forward intersection then yields the same-name model points of the two stereo models, and finally a spatial similarity transformation brings the two models into one spatial coordinate system. Processing the stereo images of subsequent moments in the same way connects all stereo image models into one overall model of the entire scene.
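The "spatial similarity transformation" that pulls two stereo models into one frame estimates a scale s, rotation R and translation t with dst ≈ s·R·src + t from the same-name model points. A least-squares sketch in the style of the Umeyama/Procrustes solution (this is a standard technique, not necessarily the patent's exact solver; at least 3 non-collinear correspondences are needed):

```python
import numpy as np

def similarity_transform(src, dst):
    """Least-squares similarity transform: centre both point sets,
    take the SVD of the cross-covariance to get the rotation, then
    recover scale from the singular values and translation from the means."""
    src = np.asarray(src, float); dst = np.asarray(dst, float)
    mu_s, mu_d = src.mean(0), dst.mean(0)
    src_c, dst_c = src - mu_s, dst - mu_d
    H = src_c.T @ dst_c
    U, S, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:   # guard against a reflection
        D[2, 2] = -1
    R = Vt.T @ D @ U.T
    s = np.trace(np.diag(S) @ D) / (src_c ** 2).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# synthetic check: a known similarity recovered from 8 matched model points
rng = np.random.default_rng(1)
src = rng.normal(size=(8, 3))
th = 0.5
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
s_true, t_true = 2.0, np.array([1.0, 2.0, 3.0])
dst = s_true * src @ R_true.T + t_true
s, R, t = similarity_transform(src, dst)
```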

Brief description of the drawings

Fig. 1 shows the camera mounting frame;

Fig. 2 is the design schematic of the hardware platform;

Fig. 3 is the software workflow;

Fig. 4 shows the checkerboard calibration board and its coordinate system;

Fig. 5 is the flowchart of the SURF operator;

Fig. 6 shows same-name projection rays intersecting at the target point after relative orientation;

Fig. 7 is the flowchart of relative orientation for a single image pair;

Fig. 8 shows the epipolar geometry of a parallel binocular stereo vision system;

Fig. 9 is the flowchart of the SGM matching algorithm;

Fig. 10 illustrates the principle of forward intersection;

Fig. 11 is the flowchart of stereo image model connection;

Fig. 12 is a schematic of the system performing mobile real-time measurement.

Detailed description of embodiments

The invention is described in further detail below with reference to the accompanying drawings.

While the stereo camera moves, stereo images can be acquired continuously and the stereo model of the target scene reconstructed in real time. Each reconstruction, however, yields only a local model of the target within the current field of view; forming a complete scene model requires connecting the results of the successive reconstructions, which is what the basic scheme of the invention achieves.

The steps of the basic scheme of the invention are: 1) camera calibration; 2) acquisition of several groups of stereo images as the stereo camera moves; 3) image preprocessing; 4) feature extraction and stereo matching; 5) three-dimensional reconstruction; 6) stereo image model connection. For the stereo image of any moment, the same-name image points are found in the stereo image of the adjacent moment and used as tie points between the two image groups; forward intersection then yields the same-name model points of the two stereo models, and a spatial similarity transformation brings both models into one spatial coordinate system. The stereo image of each subsequent moment is processed in the same way, so that all stereo image models are connected into one overall model of the entire scene.

Through stereo image model connection, the 3-D information reconstructed while the system moves is integrated into the coordinate system of the first stereo image model, forming one complete geometric model of the entire scene.

A concrete image-space stereo vision online mobile real-time measurement system is described below. The system consists of hardware and software (the software embodies the method scheme of the invention). As shown in Fig. 1, the hardware mainly comprises two cameras, lenses, data cables, acquisition/conversion equipment, the camera fixture, and the control and computing device; the software comprises camera calibration, image preprocessing, feature extraction and matching, relative orientation, stereo image model connection, epipolar image generation, dense matching and three-dimensional reconstruction.

1.1 Hardware part

As shown in Fig. 1, the hardware required to build the system is as follows:

(1) Cameras: digital industrial cameras with resolution above 1 megapixel and acquisition rate of at least 30 frames/s, powered over USB, with data transfer over Ethernet, FireWire or USB.

(2) Lenses: standard-mount lenses compatible with the cameras, with focal length of at least 24 mm.

(3) Data cables: standard gigabit Ethernet, FireWire or USB cables.

(4) Acquisition/conversion equipment: the converters and protocols used to connect the data cables to the computer.

(5) Camera fixture: a mounting frame of a certain length carrying both cameras (Fig. 1); the distance between the two cameras is adjusted by moving the mounting seats.

(6) Control and computing device: controls the cameras' data acquisition and performs storage, processing and output of the results; a tablet or laptop with the corresponding interfaces is recommended.

The design schematic of the system's hardware platform is shown in Fig. 2.

1.2 Software part

The system software consists of eight modules: camera calibration, image preprocessing, feature extraction and matching, relative orientation, stereo image model connection, epipolar image generation, dense matching and three-dimensional reconstruction. Its workflow is shown in Fig. 3.

The key technologies and methods of each module are as follows.

1.2.1 Camera calibration

Stereo camera calibration obtains the internal parameters of each camera and the positional relationship between the cameras. The internal parameters include the focal length, the principal-point coordinates and the lens distortion parameters; the positional relationship comprises the rotation and translation matrices between the cameras.

Commonly used camera calibration methods include the test-field method, Zhang Zhengyou's method, Tsai's two-step method and self-calibration. Comparison shows that Zhang's method and Tsai's two-step method are both convenient, stable and accurate, but the latter solves for fewer distortion parameters than the former; this system therefore adopts Zhang's method in order to obtain better distortion correction. The calibration steps are as follows:

Step 1: acquire stereo images simultaneously. The image information includes image size and color (gray level); the scene contains a 12×9 checkerboard calibration board with fixed square size (30 mm), which defines an object-space coordinate system (Fig. 4). The board lies near the center of the overlapping (same-name) region of the stereo images and is relatively evenly distributed.

Step 2: extract the calibration-board corners. The two-dimensional image coordinates x, y of the 88 corner points are extracted from each of the stereo images, typically with sub-pixel localization accuracy (for corner extraction methods see [张广军. 视觉测量 [M]. 科学出版社, 2008: 55-61]).

Step 3: calibration solution. Zhang Zhengyou's calibration method yields the internal parameters of both cameras, comprising the focal length f, the principal-point coordinates (c_x, c_y), the radial distortion k_1, k_2, k_3 and the tangential distortion p_1, p_2; the external parameters comprise the rotation matrix R and translation vector T between the cameras. The calibration solution principle is given below.

1. Internal parameter calibration

Let the focal length of the camera be f, the principal point (c_x, c_y), an image point (x, y) and its object point (X, Y, Z) (the checkerboard is planar, so Z = 0). The relationship between them is

$$\begin{bmatrix}x\\ y\\ 1\end{bmatrix}=sM\,[\,r_1\ \ r_2\ \ r_3\ \ T\,]\begin{bmatrix}X\\ Y\\ 0\\ 1\end{bmatrix}=sM\,[\,r_1\ \ r_2\ \ T\,]\begin{bmatrix}X\\ Y\\ 1\end{bmatrix}=H\begin{bmatrix}X\\ Y\\ 1\end{bmatrix}\qquad(1)$$

where $[r_1\ r_2\ r_3]=R$ is the rotation matrix, $H=sM[r_1\ r_2\ T]$ is the homography matrix, and $M=\begin{bmatrix}f_x&0&c_x\\0&f_y&c_y\\0&0&1\end{bmatrix}$.

Since the object-space coordinates of the checkerboard corners are known and their corresponding image coordinates are obtained by image processing, the matrix H can be solved from equation (1); writing it in column-vector form as [h1 h2 h3], the result is

$$h_1=sMr_1,\quad h_2=sMr_2,\quad h_3=sMT\;\;\Rightarrow\;\; r_1=\lambda M^{-1}h_1,\quad r_2=\lambda M^{-1}h_2,\quad T=\lambda M^{-1}h_3\qquad(2)$$

where λ = 1/s. From the properties of the rotation matrix we obtain

$$r_1^{T}r_2=0,\qquad r_1^{T}r_1=r_2^{T}r_2\qquad(3)$$

and hence

$$h_1^{T}M^{-T}M^{-1}h_2=0,\qquad h_1^{T}M^{-T}M^{-1}h_1=h_2^{T}M^{-T}M^{-1}h_2\qquad(4)$$

Let $B=M^{-T}M^{-1}$; B is a symmetric matrix, written out as

$$B=M^{-T}M^{-1}=\begin{bmatrix}B_{11}&B_{12}&B_{13}\\ B_{12}&B_{22}&B_{23}\\ B_{13}&B_{23}&B_{33}\end{bmatrix}\qquad(5)$$

Equation (4) can therefore be expressed as dot products with a 6-element vector:

$$h_i^{T}Bh_j=v_{ij}^{T}b=\begin{bmatrix}h_{i1}h_{j1}\\ h_{i1}h_{j2}+h_{i2}h_{j1}\\ h_{i2}h_{j2}\\ h_{i3}h_{j1}+h_{i1}h_{j3}\\ h_{i3}h_{j2}+h_{i2}h_{j3}\\ h_{i3}h_{j3}\end{bmatrix}^{T}\begin{bmatrix}B_{11}\\ B_{12}\\ B_{22}\\ B_{13}\\ B_{23}\\ B_{33}\end{bmatrix}\qquad(6)$$

Using equation (6), the two constraints can be written as

$$\begin{bmatrix}v_{12}^{T}\\ (v_{11}-v_{22})^{T}\end{bmatrix}b=0\qquad(7)$$

If the camera captures K checkerboard images, the system of equations Vb = 0 can be assembled, where V is a 2K×6 matrix computed from the homography matrices H. Since each image contributes two constraint equations, for K ≥ 3 the vector b can be solved by least squares, giving the camera internal parameters

$$f_x=\sqrt{\lambda/B_{11}},\quad f_y=\sqrt{\lambda B_{11}/(B_{11}B_{22}-B_{12}^{2})},\quad c_x=-B_{13}f_x^{2}/\lambda,\quad c_y=(B_{12}B_{13}-B_{11}B_{23})/(B_{11}B_{22}-B_{12}^{2})\qquad(8)$$

where $\lambda=B_{33}-\left[B_{13}^{2}+c_y(B_{12}B_{13}-B_{11}B_{23})\right]/B_{11}$. Combining these with the homography matrix of each image, the external parameters can in turn be computed.
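The chain from equations (6) to (8) can be sketched numerically: stack two constraint rows per homography, take the SVD null vector as b, and read the intrinsics off B. The function and variable names below are ours, and the three synthetic board poses are assumed values for the self-check:

```python
import numpy as np

def v_ij(H, i, j):
    """Row v_ij of equation (6); h[k] is column k+1 of H, b-ordering
    is [B11, B12, B22, B13, B23, B33]."""
    h = H.T
    return np.array([h[i][0]*h[j][0],
                     h[i][0]*h[j][1] + h[i][1]*h[j][0],
                     h[i][1]*h[j][1],
                     h[i][2]*h[j][0] + h[i][0]*h[j][2],
                     h[i][2]*h[j][1] + h[i][1]*h[j][2],
                     h[i][2]*h[j][2]])

def intrinsics_from_homographies(Hs):
    """Solve Vb = 0 (equation (7)) by SVD and apply equation (8)."""
    V = []
    for H in Hs:
        V.append(v_ij(H, 0, 1))
        V.append(v_ij(H, 0, 0) - v_ij(H, 1, 1))
    _, _, vt = np.linalg.svd(np.array(V))
    B11, B12, B22, B13, B23, B33 = vt[-1]          # b, up to scale/sign
    cy = (B12*B13 - B11*B23) / (B11*B22 - B12**2)
    lam = B33 - (B13**2 + cy*(B12*B13 - B11*B23)) / B11
    fx = np.sqrt(lam / B11)                        # ratios cancel b's scale
    fy = np.sqrt(lam * B11 / (B11*B22 - B12**2))
    cx = -B13 * fx**2 / lam
    return fx, fy, cx, cy

# synthetic check: three boards seen with known (assumed) intrinsics
M = np.array([[800.0, 0, 320], [0, 820.0, 240], [0, 0, 1]])

def _rot(ax, ay, az):
    cx_, sx = np.cos(ax), np.sin(ax); cy_, sy = np.cos(ay), np.sin(ay)
    cz, sz = np.cos(az), np.sin(az)
    Rx = np.array([[1, 0, 0], [0, cx_, -sx], [0, sx, cx_]])
    Ry = np.array([[cy_, 0, sy], [0, 1, 0], [-sy, 0, cy_]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

Hs = []
for k, (ax, ay, az) in enumerate([(0.1, 0.2, 0.3), (-0.2, 0.1, 0.4), (0.3, -0.1, -0.2)]):
    R = _rot(ax, ay, az)
    t = np.array([0.1 * k, -0.1, 2.0 + k])
    Hs.append(M @ np.column_stack([R[:, 0], R[:, 1], t]))  # H = M [r1 r2 t]

fx, fy, cx, cy = intrinsics_from_homographies(Hs)
```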

To account for lens distortion, the internal and external parameters obtained above serve as initial values for the following nonlinear system. Let (x_p, y_p) be the normalized (ideal) coordinates of a point and (x_d, y_d) its actual distorted coordinates; the model is

$$\begin{bmatrix}x_p\\ y_p\end{bmatrix}=(1+k_1r^{2}+k_2r^{4}+k_3r^{6})\begin{bmatrix}x_d\\ y_d\end{bmatrix}+\begin{bmatrix}2p_1x_dy_d+p_2(r^{2}+2x_d^{2})\\ p_1(r^{2}+2y_d^{2})+2p_2x_dy_d\end{bmatrix}\qquad(9)$$

where $r^{2}=x_d^{2}+y_d^{2}$. After re-estimating the internal and external parameters in this way, the least-squares method can be used to solve the equations iteratively, finally yielding more accurate internal parameters.
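Equation (9), as written in the document, maps distorted normalized coordinates to ideal ones. A direct sketch (function name and sample coefficients are ours):

```python
import numpy as np

def undistort_points(xd, yd, k1, k2, k3, p1, p2):
    """Apply the distortion model of equation (9): radial terms k1..k3
    and tangential terms p1, p2, with r^2 = x_d^2 + y_d^2."""
    r2 = xd**2 + yd**2
    radial = 1 + k1*r2 + k2*r2**2 + k3*r2**3
    xp = radial*xd + 2*p1*xd*yd + p2*(r2 + 2*xd**2)
    yp = radial*yd + p1*(r2 + 2*yd**2) + 2*p2*xd*yd
    return xp, yp

# with all coefficients zero the mapping is the identity;
# a small k1 displaces the point radially
xp0, yp0 = undistort_points(0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0)
xp1, yp1 = undistort_points(0.5, 0.0, 0.1, 0.0, 0.0, 0.0, 0.0)
```

Here xp1 = (1 + 0.1·0.25)·0.5 = 0.5125, i.e. a purely radial shift along the x-axis.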

2. External parameter calibration

Let the rotation between the two stereo camera coordinate systems be R and the translation be T. For a given object point P, its coordinates in the left and right camera coordinate systems are

$$P_l=R_lP+T_l,\qquad P_r=R_rP+T_r\qquad(10)$$

where R_l, T_l and R_r, T_r are the rotation and translation matrices of point P from the world coordinate system to the corresponding camera coordinate system. Since P_l and P_r represent the same point in space, we obtain

$$P_l=R^{T}(P_r-T)\qquad(11)$$

Matrix manipulation of equations (10) and (11) yields

$$R=R_r^{-T}R_l^{T},\qquad T=T_r-R^{-T}T_l\qquad(12)$$

where R_l, R_r, T_l and T_r are obtained from the preceding internal-parameter calibration.
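Equation (12) can be sketched directly. Since a rotation matrix satisfies $R^{-T}=R$, the formulas reduce to $R=R_rR_l^{T}$ and $T=T_r-RT_l$; the baseline value and poses below are assumed figures for the self-check:

```python
import numpy as np

def stereo_extrinsics(Rl, Tl, Rr, Tr):
    """Relative pose between the cameras from each camera's pose with
    respect to the board frame (equation (12), using R_r^{-T} = R_r)."""
    R = Rr @ Rl.T
    T = Tr - R @ Tl
    return R, T

# synthetic check: build the right-camera pose from a known relative pose
th = 0.3
R_rel = np.array([[np.cos(th), 0, np.sin(th)],
                  [0, 1, 0],
                  [-np.sin(th), 0, np.cos(th)]])
T_rel = np.array([-0.12, 0.0, 0.01])      # ~12 cm baseline, assumed
Rl, Tl = np.eye(3), np.array([0.2, 0.1, 2.0])
Rr = R_rel @ Rl
Tr = R_rel @ Tl + T_rel
R, T = stereo_extrinsics(Rl, Tl, Rr, Tr)
```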

1.2.2 Image preprocessing

When the stereo camera captures images, lens quality, lighting changes and other factors degrade image quality, which adversely affects the subsequent reconstruction. To mitigate this, the images are preprocessed. Preprocessing in this system mainly comprises filtering and gray-level equalization, aimed at removing noise and enhancing the images.

1. Filtering

To reduce image noise, smoothing filters are commonly used to improve the image; the basic idea is to remove or attenuate outliers by combining the target pixel with several of its neighbors. The filter template is chosen according to the noise characteristics; common templates are

$$\frac{1}{4}\begin{bmatrix}0&1&0\\ 1&1&1\\ 0&1&0\end{bmatrix}\qquad\frac{1}{9}\begin{bmatrix}1&1&1\\ 1&1&1\\ 1&1&1\end{bmatrix}\qquad\frac{1}{10}\begin{bmatrix}1&1&1\\ 1&2&1\\ 1&1&1\end{bmatrix}\qquad\frac{1}{16}\begin{bmatrix}1&2&1\\ 2&4&2\\ 1&2&1\end{bmatrix}\qquad(13)$$

The last of these is the Gaussian template.

In the filtering process, the convolution formula below can be used

$$g(x,y)=\frac{\displaystyle\sum_{i=-m}^{m}\sum_{j=-n}^{n}w(i,j)\,f(x+i,\,y+j)}{\displaystyle\sum_{i=-m}^{m}\sum_{j=-n}^{n}w(i,j)}\qquad(14)$$

to obtain the smoothed image.
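The weighted-average smoothing of equations (13)-(14) can be sketched directly (function name is ours; borders are left unfiltered for simplicity):

```python
import numpy as np

def smooth(img, template):
    """Equation (14): correlate the image with a template and normalize
    by the sum of the template weights; border pixels are kept as-is."""
    t = np.asarray(template, float)
    m, n = t.shape[0] // 2, t.shape[1] // 2
    out = img.astype(float).copy()
    h, w = img.shape
    for x in range(m, h - m):
        for y in range(n, w - n):
            win = img[x - m:x + m + 1, y - n:y + n + 1]
            out[x, y] = (t * win).sum() / t.sum()
    return out

gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]])  # the Gaussian template in (13)
flat = smooth(np.full((5, 5), 7.0), gauss)           # a constant image is unchanged
```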

2. Gray-level equalization

Manufacturing tolerances of the stereo camera cause radiometric inconsistency in the acquired stereo images, mainly an overall gray-level difference between the left and right images, so gray-level equalization is required before use. This system applies histogram equalization to stretch the gray levels of the left and right images separately.

The gray histogram is a function describing the gray values: it counts the number of pixels at each gray value, with the abscissa giving the gray level and the ordinate the frequency of that gray value. Histogram equalization transforms the histogram of the original image toward a uniform distribution so that the gray ranges of the stereo images agree, making the gray values of same-name image points in the two images essentially consistent.

In practice, the relative position of the stereo cameras means the scene content of the left and right images does not overlap completely, so image matching is used to find the overlapping region of the stereo images. Histogram equalization here is performed on this overlapping region, with the following steps:

Step 1: over the gray range [0, 255], scan the entire image and count the number of occurrences n_k of the k-th gray level, k ∈ [0, 255];

Step 2: approximate probabilities by relative frequencies, i.e., normalize the histogram: P_r(r_k) = n_k/n, 0 ≤ r_k ≤ 1, k = 0, 1, …, 255, where P_r(r_k) is the probability of gray value r_k occurring. The components of the normalized histogram sum to 1;

Step 3: compute the transformed gray value with the conversion formula

$$s_k=T(r_k)=\sum_{j=0}^{k}p_r(r_j)\approx\sum_{j=0}^{k}\frac{n_j}{n},\qquad k=0,1,\ldots,255\qquad(15)$$
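The three steps above can be sketched compactly: count the levels, normalize to probabilities, accumulate per equation (15), and rescale the cumulative mapping back to [0, 255] (the function name is ours):

```python
import numpy as np

def hist_equalize(img):
    """Histogram equalization: n_k counts, P_r(r_k) = n_k / n,
    s_k = cumulative sum (equation (15)), lookup table = round(255 * s_k)."""
    hist = np.bincount(img.ravel(), minlength=256)
    p = hist / img.size                 # P_r(r_k) = n_k / n
    s = np.cumsum(p)                    # s_k of equation (15)
    lut = np.round(255 * s).astype(np.uint8)
    return lut[img]

# a dark synthetic image is stretched to cover the full gray range
rng = np.random.default_rng(2)
img = rng.integers(0, 128, size=(32, 32)).astype(np.uint8)
eq = hist_equalize(img)
```

Because the cumulative sum reaches 1 at the brightest gray level present, the equalized image always reaches gray value 255.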

1.2.3 Feature extraction and matching

The SURF operator is one of the most widely used feature point extraction and matching algorithms in computer vision, noted for its stability, speed and high matching accuracy. This system therefore uses the SURF operator to obtain tie points between stereo models.

The core of the SURF operator is the computation of the Hessian matrix. For a function f(x, y), the Hessian matrix H consists of its partial derivatives:

$$H(f(x,y))=\begin{bmatrix}\dfrac{\partial^{2}f}{\partial x^{2}}&\dfrac{\partial^{2}f}{\partial x\partial y}\\[2ex] \dfrac{\partial^{2}f}{\partial x\partial y}&\dfrac{\partial^{2}f}{\partial y^{2}}\end{bmatrix}\qquad(16)$$

The discriminant of the H matrix is

$$\det(H)=\frac{\partial^{2}f}{\partial x^{2}}\,\frac{\partial^{2}f}{\partial y^{2}}-\left(\frac{\partial^{2}f}{\partial x\partial y}\right)^{2}\qquad(17)$$

The discriminant equals the product of the eigenvalues of H; its sign classifies the points and decides whether a point is an extremum. In the SURF operator, the image intensity I(x, y) replaces the function value f(x, y); a second-order standard Gaussian is used as the filter, and the second partial derivatives are computed by convolution with the corresponding kernels, yielding the three matrix elements L_xx, L_yy, L_xy of H:

$$H(X,t)=\begin{bmatrix}L_{xx}(X,t)&L_{xy}(X,t)\\ L_{xy}(X,t)&L_{yy}(X,t)\end{bmatrix}\qquad(18)$$

L(X,t)=G(t)*I(X)   (19)L(X,t)=G(t)*I(X) (19)

L_xx(X, t) is the representation of the image at a given scale, obtained by convolving the Gaussian kernel G(t) with the image function I(X) at the point X = (x, y); the kernel G(t) is given by equation (20), where g(t) is the Gaussian function and t the Gaussian variance; L_yy and L_xy are defined analogously. In this way the Hessian determinant response can be computed for every pixel and used to detect interest points. For efficiency, Herbert Bay proposed replacing L_xx by a box-filter approximation D_xx, introducing a scale-dependent weight w to balance the error between the exact and approximate values; the H matrix discriminant then becomes:

G(t) = \frac{\partial^2 g(t)}{\partial x^2} \qquad (20)

\det(H_{approx}) = D_{xx} D_{yy} - (w D_{xy})^2 \qquad (21)

The workflow of the SURF operator is shown in Figure 5.
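As a concrete illustration, the Hessian-determinant response of eqs. (17)-(18) can be sketched with NumPy. This is a minimal sketch with illustrative names; it uses plain finite differences for the second derivatives, whereas SURF itself uses the box-filter approximation of eq. (21).

```python
import numpy as np

def hessian_det_response(img):
    """Per-pixel Hessian-determinant blob response (eqs. 17-18).

    Derivatives are simple finite differences; SURF replaces them with
    integral-image box filters (eq. 21) for speed.
    """
    gy, gx = np.gradient(img.astype(float))
    lyy, _ = np.gradient(gy)            # second derivative along y
    lxy, lxx = np.gradient(gx)          # mixed and x second derivatives
    return lxx * lyy - lxy ** 2         # det(H) at every pixel

# A bright square on a dark background; the response map has the image shape.
img = np.zeros((21, 21))
img[8:13, 8:13] = 1.0
resp = hessian_det_response(img)
```

Interest points are then taken where this response is a local maximum across position and scale, with the sign of the determinant separating extrema from saddle points.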

1.2.4 Relative Orientation

Using the tie points obtained by matching between the two images of a stereo pair, the relative position and attitude between the images can be computed; this is an essential step in 3D reconstruction from stereo imagery.

The purpose of relative orientation is to determine the relative orientation of the stereo pair in space, described by five relative orientation elements. The principle is that, once the relative orientation of the pair is determined, corresponding epipolar lines and the baseline must be coplanar, i.e. the projection rays of corresponding image points intersect pairwise within the epipolar plane (see Figure 6).

From Figure 6, the basic vector form of the coplanarity condition is the vanishing of the scalar triple product of the baseline vector and the two projection rays.

Let the coordinates of S_2 in the S_1-X_1Y_1Z_1 coordinate system of Figure 6 be (B_X, B_Y, B_Z), the coordinates of d_1 in the S_1-X_1Y_1Z_1 system be (X_1, Y_1, Z_1), and the coordinates of d_2 in the S_2-X_2Y_2Z_2 system be (X_2, Y_2, Z_2). In coordinate form the condition is then

F = \begin{vmatrix} B_X & 0 & 0 \\ X_1 & Y_1 & Z_1 \\ X_2 & Y_2 & Z_2 \end{vmatrix} = 0 \qquad (23)

where

\begin{bmatrix} X_1 \\ Y_1 \\ Z_1 \end{bmatrix} = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix} \begin{bmatrix} x_1 \\ y_1 \\ -f \end{bmatrix}, \qquad \begin{bmatrix} X_2 \\ Y_2 \\ Z_2 \end{bmatrix} = \begin{bmatrix} a_1' & a_2' & a_3' \\ b_1' & b_2' & b_3' \\ c_1' & c_2' & c_3' \end{bmatrix} \begin{bmatrix} x_2 \\ y_2 \\ -f \end{bmatrix}

In addition, the data required for the relative orientation of a single image pair are the image-space coordinates of corresponding points on the stereo images; generally at least 6 points are needed, preferably evenly distributed. The orientation solution workflow is shown in Figure 7:

Finally, the five relative orientation parameters are solved; for the detailed solution process, see Zhang Baoming et al., Photogrammetry.
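The coplanarity condition of eq. (23) can be evaluated numerically during the adjustment. The sketch below, with illustrative names and synthetic values, computes the determinant residual for one tie point; relative orientation iteratively adjusts the five parameters until such residuals vanish for all tie points.

```python
import numpy as np

def coplanarity_residual(B, r1, r2):
    """Residual F of the coplanarity condition (eq. 23).

    B: baseline vector S1->S2; r1, r2: projection rays of a tie point,
    both already rotated into the S1 coordinate frame.
    F = det([B; r1; r2]) vanishes when the rays and baseline are coplanar.
    """
    return float(np.linalg.det(np.vstack([B, r1, r2])))

B = np.array([1.0, 0.0, 0.0])       # baseline along X
r1 = np.array([0.1, 0.2, -1.0])     # ray from S1 to the object point
r2 = r1 - B                         # ray from S2 to the same object point
res = coplanarity_residual(B, r1, r2)   # exactly coplanar: residual is 0
```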

1.2.5 Epipolar Image Generation

According to the imaging geometry of a stereo camera, corresponding image points on a stereo pair must lie on corresponding epipolar lines, and the points on corresponding epipolar lines are in one-to-one correspondence; this is essential for the subsequent dense matching. The epipolar relationship between the stereo images must therefore be determined first, and the images are then resampled into epipolar-aligned stereo images in preparation for dense matching.

Based on the fundamental matrix obtained from relative orientation, the system uses the Hartley algorithm to rectify the stereo images: a pair of 2D projective transformations is applied to the image pair so that the epipolar lines become matched and coincide with the image scan lines. After such a transformation, the v coordinates of matching point pairs in the two images are identical, and their u coordinates are made as close as possible so that the horizontal parallax is small, which reduces the search space during matching. The algorithm uses only the fundamental matrix of the image pair and does not require the camera projection matrices.

The geometry of the rectified stereo pair is shown in Figure 8; after rectification the epipolar lines coincide with the scan lines. This requires a projective transformation that maps the epipole of each image to a point at infinity while introducing minimal shear distortion. Let u_0 be the image centre; the transformation H should act approximately as a rotation and translation near u_0 so that the distortion remains small. Taking u_0 as the origin, with the epipole p = (f, 0, 1) on the x-axis, the transformation is

G = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1/f & 0 & 1 \end{bmatrix} \qquad (24)

The matrix G moves the epipole p to the point at infinity (f, 0, 0) and is approximately the identity near the origin. For an arbitrary point and epipole, H = GRT: the matrix T translates u_0 to the origin, R is a rotation that moves the epipole to the point (f, 0, 1) on the x-axis, and G sends (f, 0, 1) to infinity. The composition of these three matrices is the required projective transformation.
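The decomposition H = GRT can be sketched directly. All numbers and the function name below are illustrative; the sketch assumes a known epipole in pixel coordinates and builds the three factors described above.

```python
import numpy as np

def rectifying_homography(epipole, center):
    """Build H = G @ R @ T for one image (eq. 24 and surrounding text)."""
    cx, cy = center
    T = np.array([[1.0, 0.0, -cx],
                  [0.0, 1.0, -cy],
                  [0.0, 0.0, 1.0]])            # move the image centre to the origin
    ex, ey = epipole[0] - cx, epipole[1] - cy  # epipole in the translated frame
    theta = np.arctan2(ey, ex)
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[ c,   s,  0.0],
                  [-s,   c,  0.0],
                  [0.0, 0.0, 1.0]])            # rotate the epipole onto the x-axis
    f = np.hypot(ex, ey)                       # epipole is now (f, 0, 1)
    G = np.array([[1.0,      0.0, 0.0],
                  [0.0,      1.0, 0.0],
                  [-1.0 / f, 0.0, 1.0]])       # send (f, 0, 1) to infinity
    return G @ R @ T

H = rectifying_homography(epipole=(500.0, 260.0), center=(320.0, 240.0))
e = H @ np.array([500.0, 260.0, 1.0])          # mapped epipole: last coordinate ~ 0
```

Since the third homogeneous coordinate of the mapped epipole is (numerically) zero, the epipole has indeed been sent to infinity, which is what makes the rectified epipolar lines horizontal.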

Let the image pair to be rectified be J and J′, and apply a pair of 2D projective transformations H and H′ to the two images respectively. If λ and λ′ are a pair of corresponding epipolar lines, the required transformations satisfy:

H^{*}\lambda = H'\lambda' \qquad (25)

The expression above states that the transformed epipolar lines match.

Here H is the point-mapping transformation, and H* is the line-mapping transformation corresponding to H. Transformations satisfying (25) are called matched transformations. A transformation H′ that moves the epipole p′ to the point at infinity is found first, and then the transformation H matched to H′ is sought such that:

\min \sum_i d(Hu_i, H'u_i')^2 \qquad (26)

To obtain the transformation matrix H matched to H′, the following theorem is introduced.

Theorem: Let the fundamental matrix of the image pair J and J′ be F = [p′]_×M, and let H′ be a projective transformation applied to J′. A projective transformation H applied to J matches H′ if and only if H has the form:

H = (I + H'p'a^T)H'M \qquad (27)

where a is an arbitrary vector.

When the transformation H′ has moved the epipole p′ to the point at infinity (1, 0, 0)^T, we have

A = I + H'p'a^T = I + (1,0,0)^T a^T = \begin{bmatrix} a & b & c \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad (28)

Let H_0 = H'M; then H = AH_0.

Writing \hat{u}_i = H_0 u_i and \hat{u}_i' = H'u_i', the minimization problem above can be expressed in the following form:

\min \sum_i d(A\hat{u}_i, \hat{u}_i')^2 \qquad (29)

Solving this minimization problem yields a pair of 2D projective transformation matrices H and H′ that meet the requirements; the original images can then be resampled with grey-level interpolation to generate the new, rectified stereo image pair.

The accuracy of the algorithm depends on how accurately the epipolar geometry is recovered; the epipolar geometry can therefore be recovered offline in advance, which guarantees the rectification accuracy.

The rectification algorithm proceeds as follows:

Step 1: recover the epipolar geometry with high precision offline and locate the epipoles p and p′ in the two images;

Step 2: compute the projective transformation H′ that maps the epipole p′ to the point at infinity (1, 0, 0)^T;

Step 3: compute the projective transformation H matched to H′ such that it satisfies

\min \sum_i d(Hm_{1i}, H'm_{2i})^2 \qquad (30)

where m_{1i} = (u_1, v_1, 1) and m_{2i} = (u_2, v_2, 1);

Step 4: resample the two original images according to the projective transformations H and H′ to obtain the rectified stereo image pair.

1.2.6 Dense Matching

Among existing stereo matching methods such as the GC, SGM and BP algorithms, SGM offers fast speed, high accuracy and good stability. This system therefore adopts the SGM algorithm for dense matching of the stereo images to generate dense 3D scene information.

The basic idea of SGM is to first compute a pixel-wise matching cost based on mutual information and then approximate a 2D smoothness constraint with multiple 1D smoothness constraints [5].

Let the grey value of pixel p in the reference image be I_{bp} and the grey value of its correspondence q in the match image be I_{mq}. The function q = e_{bm}(p, d) gives the point on the epipolar line of the match image corresponding to reference pixel p, with epipolar-line parameter d. The MI-based matching cost is then

C_{MI}(p,d) = h_{I_b, f_D(I_m)}(I_{bp}, I_{mq}) - h_{I_b}(I_{bp}) - h_{f_D(I_m)}(I_{mq}) \qquad (31)

where h_{I_b}(I_{bp}) and h_{f_D(I_m)}(I_{mq}) denote the entropies of the image patches centred on pixels p and q respectively, and h_{I_b, f_D(I_m)}(I_{bp}, I_{mq}) denotes their joint entropy.
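A patch-level sketch of this mutual-information cost, with illustrative names and an 8-bin histogram, is shown below. Note this is a simplification: the real SGM of eq. (31) evaluates Taylor-expanded per-intensity entropy tables rather than whole-patch entropies.

```python
import numpy as np

def mi_cost(patch_b, patch_m, bins=8):
    """Joint entropy minus marginal entropies, in the spirit of eq. (31).

    Lower (more negative) values indicate higher mutual information,
    i.e. a better match between the two grey-value patches.
    """
    h2, _, _ = np.histogram2d(patch_b.ravel(), patch_m.ravel(),
                              bins=bins, range=[[0, 256], [0, 256]])
    p = h2 / h2.sum()
    nz = p > 0
    h_joint = -np.sum(p[nz] * np.log(p[nz]))
    px, py = p.sum(axis=1), p.sum(axis=0)
    h_b = -np.sum(px[px > 0] * np.log(px[px > 0]))
    h_m = -np.sum(py[py > 0] * np.log(py[py > 0]))
    return h_joint - h_b - h_m    # = -(mutual information)

good = mi_cost(np.arange(256.0), np.arange(256.0))    # perfectly predictable pairing
bad = mi_cost(np.arange(256.0), np.full(256, 100.0))  # uninformative pairing
```

For the perfectly correlated patches the cost equals -ln(8) (one fully occupied histogram diagonal over 8 bins), while the uninformative pairing yields 0, so minimizing this cost over disparity prefers consistent correspondences.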

Along a path in direction r, the cost L_r(p, d) of pixel p is defined recursively as

L_r(p,d) = C(p,d) + \min\left\{ L_r(p-r,d),\; L_r(p-r,d-1)+P_1,\; L_r(p-r,d+1)+P_1,\; \min_i L_r(p-r,i)+P_2 \right\} - \min_k L_r(p-r,k) \qquad (32)

where P_1 and P_2 are penalty parameters. Summing the path costs over all directions gives the total matching cost

S(p,d) = \sum_r L_r(p,d) \qquad (33)

Then, for each pixel p, the disparity is taken as d_p = \arg\min_d S(p,d). Finally, a consistency check is performed, comparing the disparity values of matched point pairs, to generate a depth map with clear contours and rich information; the implementation process is shown in Figure 9.
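The recursion of eq. (32) along one path direction can be sketched as follows. This is a minimal single-path sketch with illustrative names; a full SGM implementation runs it along several directions (typically 8 or 16) and sums the results per eq. (33).

```python
import numpy as np

def aggregate_1d(cost, p1=1.0, p2=8.0):
    """One-direction SGM cost aggregation (eq. 32) along a scanline.

    cost: (W, D) array of pixel-wise matching costs for one image row.
    Returns L_r for the left-to-right path.
    """
    w, d = cost.shape
    L = np.empty_like(cost, dtype=float)
    L[0] = cost[0]
    for x in range(1, w):
        prev = L[x - 1]
        best_prev = prev.min()
        cand = np.minimum(prev, best_prev + p2)          # same disparity, or any jump + P2
        cand[1:] = np.minimum(cand[1:], prev[:-1] + p1)  # disparity decreases by 1
        cand[:-1] = np.minimum(cand[:-1], prev[1:] + p1) # disparity increases by 1
        L[x] = cost[x] + cand - best_prev                # subtract min_k L_r(p-r, k)
    return L

L = aggregate_1d(np.zeros((5, 4)))   # flat costs incur no smoothness penalty
```

The final subtraction of the previous minimum keeps L bounded, exactly as in eq. (32); without it the accumulated cost would grow without limit along long paths.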

1.2.7 3D Reconstruction

Based on the stereo camera calibration and the dense matching results, the system reconstructs the 3D information of the target scene by the multi-image forward intersection method.

If all points on the object surface can be obtained, the shape and position of the 3D object are uniquely determined. As shown in Figure 10, assume that an arbitrary space point P has image points p_1 and p_2 under the two camera coordinate systems C_1 and C_2, i.e. p_1 and p_2 are the projections of the same space point P in the left and right images. With both cameras calibrated and projection matrices M_1 and M_2, we have

Z_{c1} \begin{bmatrix} u_1 \\ v_1 \\ 1 \end{bmatrix} = \begin{bmatrix} m^1_{11} & m^1_{12} & m^1_{13} & m^1_{14} \\ m^1_{21} & m^1_{22} & m^1_{23} & m^1_{24} \\ m^1_{31} & m^1_{32} & m^1_{33} & m^1_{34} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \qquad (34)

Z_{c2} \begin{bmatrix} u_2 \\ v_2 \\ 1 \end{bmatrix} = \begin{bmatrix} m^2_{11} & m^2_{12} & m^2_{13} & m^2_{14} \\ m^2_{21} & m^2_{22} & m^2_{23} & m^2_{24} \\ m^2_{31} & m^2_{32} & m^2_{33} & m^2_{34} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix} \qquad (35)

where (u_1, v_1, 1) and (u_2, v_2, 1) are the homogeneous image coordinates of p_1 and p_2 in their respective images; (X, Y, Z, 1) are the homogeneous coordinates of P in the world coordinate system; and m^k_{ij} (k = 1, 2; i = 1, ..., 3; j = 1, ..., 4) is the element in row i, column j of M_k.

Eliminating Z_{c1} or Z_{c2} from the two equations above yields four linear equations in X, Y, Z:

(u_1 m^1_{31} - m^1_{11})X + (u_1 m^1_{32} - m^1_{12})Y + (u_1 m^1_{33} - m^1_{13})Z = m^1_{14} - u_1 m^1_{34}
(v_1 m^1_{31} - m^1_{21})X + (v_1 m^1_{32} - m^1_{22})Y + (v_1 m^1_{33} - m^1_{23})Z = m^1_{24} - v_1 m^1_{34}
(u_2 m^2_{31} - m^2_{11})X + (u_2 m^2_{32} - m^2_{12})Y + (u_2 m^2_{33} - m^2_{13})Z = m^2_{14} - u_2 m^2_{34}
(v_2 m^2_{31} - m^2_{21})X + (v_2 m^2_{32} - m^2_{22})Y + (v_2 m^2_{33} - m^2_{23})Z = m^2_{24} - v_2 m^2_{34}   (36)

From analytic geometry, a plane in 3D space is described by a linear equation, and two simultaneous plane equations describe a space line (the intersection line of the two planes); the geometric meaning of equation (36) is therefore the ray through O_1p_1 (or O_2p_2).

There are now four equations in three unknowns; given the presence of data noise, the least-squares method can be used. Rewriting (36) in matrix form gives

\begin{bmatrix} u_1 m^1_{31} - m^1_{11} & u_1 m^1_{32} - m^1_{12} & u_1 m^1_{33} - m^1_{13} \\ v_1 m^1_{31} - m^1_{21} & v_1 m^1_{32} - m^1_{22} & v_1 m^1_{33} - m^1_{23} \\ u_2 m^2_{31} - m^2_{11} & u_2 m^2_{32} - m^2_{12} & u_2 m^2_{33} - m^2_{13} \\ v_2 m^2_{31} - m^2_{21} & v_2 m^2_{32} - m^2_{22} & v_2 m^2_{33} - m^2_{23} \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} = \begin{bmatrix} m^1_{14} - u_1 m^1_{34} \\ m^1_{24} - v_1 m^1_{34} \\ m^2_{14} - u_2 m^2_{34} \\ m^2_{24} - v_2 m^2_{34} \end{bmatrix} \qquad (37)

Equation (37) can be abbreviated as

KX = U \qquad (38)

where K is the 4×3 matrix on the left side of (37), X is the unknown 3-vector, and U is the 4×1 vector on the right side of (37). With K and U known, the least-squares solution of (37) is

X = (K^T K)^{-1} K^T U \qquad (39)

Reconstruction in the usual Euclidean sense requires rigorous camera calibration, which is accomplished by the camera calibration method introduced earlier.

A perspective projection relationship exists between the 2D images and the 3D scene, described by a projection matrix (the camera parameter matrix). First, the projection matrix can be recovered from the 3D information of a small number of image points; then, with the two projection matrices of the stereo camera, the 3D coordinates of every point are recovered by the least-squares method above, restoring the 3D appearance of the object.
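The forward intersection of eqs. (36)-(39) can be sketched with NumPy. The function name and the toy camera setup below are illustrative only; two idealized cameras look along +Z with a unit baseline along X.

```python
import numpy as np

def triangulate(M1, M2, p1, p2):
    """Linear forward intersection (eqs. 36-39).

    M1, M2: 3x4 projection matrices; p1, p2: (u, v) image points.
    Builds K X = U row by row and solves it in the least-squares sense.
    """
    rows, rhs = [], []
    for M, (u, v) in ((M1, p1), (M2, p2)):
        rows.append(u * M[2, :3] - M[0, :3]); rhs.append(M[0, 3] - u * M[2, 3])
        rows.append(v * M[2, :3] - M[1, :3]); rhs.append(M[1, 3] - v * M[2, 3])
    K, U = np.array(rows), np.array(rhs)
    X, *_ = np.linalg.lstsq(K, U, rcond=None)   # X = (K^T K)^-1 K^T U
    return X

# Toy cameras: identity intrinsics, baseline of 1 along X (illustrative numbers).
M1 = np.hstack([np.eye(3), np.zeros((3, 1))])
M2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
P = np.array([0.5, 0.2, 4.0])
p1 = (P[0] / P[2], P[1] / P[2])            # projection into camera 1
p2 = ((P[0] - 1.0) / P[2], P[1] / P[2])    # projection into camera 2
X = triangulate(M1, M2, p1, p2)            # recovers P up to numerical error
```

With noise-free correspondences the four equations are consistent and the least-squares solution returns the exact point; with noisy matches it returns the point minimizing the algebraic residual.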

1.2.8 Stereo Image Model Connection

While the stereo camera moves, the system continuously acquires stereo images and reconstructs the stereo model of the target scene in real time. Each reconstruction, however, yields only a local model of the target within the current field of view; to form a complete scene model, the results of the individual reconstructions must be connected. This is the main task of the stereo image model connection module.

The principle of stereo image model connection is as follows: the homonymous image points in two groups of stereo images acquired at adjacent moments are obtained and used as tie points between the two groups; the homonymous model points of the two stereo models are computed by forward intersection; and finally the two models are transformed into the same spatial coordinate system by a spatial similarity transformation. Applying the same processing to the stereo images acquired at each subsequent moment connects all stereo image models into one overall model of the entire scene. The computation process of stereo image model connection is shown in Figure 11.

The spatial similarity transformation used in this system is

\begin{bmatrix} X_T \\ Y_T \\ Z_T \end{bmatrix} = \lambda \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix} \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + \begin{bmatrix} X_0 \\ Y_0 \\ Z_0 \end{bmatrix} \qquad (40)

where X_T, Y_T, Z_T are model point coordinates in the coordinate system of the previous group's stereo image model; X, Y, Z are the coordinates of the homonymous model point of the adjacent next group in its own model coordinate system; X_0, Y_0, Z_0 are the coordinates of the origin of the next group's model coordinate system expressed in the previous group's model coordinate system; λ is the scale factor between the two models; and a_i, b_i, c_i are functions of the rotation angle elements. Once these seven parameters are known, the coordinate transformation between the two model coordinate systems can be carried out.
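Applying the 7-parameter transform of eq. (40) is a single line once λ, the rotation matrix, and the shift are known. The sketch below uses illustrative parameter values; in the system these would come from the least-squares adjustment over the tie points described next.

```python
import numpy as np

def similarity_transform(points, lam, R, t):
    """Apply the spatial similarity transform of eq. (40).

    points: (N, 3) model points of the next group; lam: scale factor;
    R: 3x3 rotation (the a_i, b_i, c_i matrix); t: (3,) origin shift.
    """
    return lam * points @ R.T + t

# Chain model 2 into the frame of model 1 (illustrative parameters).
theta = np.deg2rad(10.0)
c, s = np.cos(theta), np.sin(theta)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # rotation about Z
pts2 = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 2.0]])
pts1 = similarity_transform(pts2, lam=2.0, R=R, t=np.array([5.0, 0.0, 1.0]))
```

Chaining successive models is then just composing such transforms, which is why every reconstruction can be brought into the coordinate system of the first stereo pair.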

Equation (40) contains 7 unknown parameters, while each pair of homonymous points contributes 3 equations; solving for them therefore requires at least 3 homonymous feature points that do not lie on one straight line. In practice, to ensure accuracy and reliability, 4 or more homonymous feature points are usually used to solve for the transformation parameters. Since formula (40) is nonlinear, it is linearised to obtain the error equations used in the least-squares adjustment.

After stereo image model connection, the 3D information of the target reconstructed during the movement of the system is integrated into the coordinate system of the first group's stereo image model, forming a complete geometric model of the entire scene. If the scene contains several marker points with known spatial coordinates, the geometric model of the whole scene can be transformed to agree exactly with the position and size of the actual scene.

2 The Operation Process of the System

With camera calibration completed beforehand, the system continuously moves the stereo camera platform relative to the target and acquires a sequence of observation stereo pairs of the target from different angles. During photography it computes the 3D reconstruction result of each stereo pair in real time and simultaneously connects the reconstruction results of all stereo pairs to generate the 3D reconstruction model of the target. Camera calibration follows Section 1.2.1; the mobile real-time measurement process of the system is described in detail below (see Figure 12).

Step 1: capture stereo pairs with the stereo camera. To obtain a complete model of the target, the system platform must be moved continuously to acquire a sequence of stereo images of the target, keeping a suitable speed during movement so that adjacent stereo pairs contain homonymous feature points.

Step 2: feature extraction and matching on the current stereo pair. For the currently acquired stereo pair, feature point extraction and image matching algorithms yield a number of homonymous feature points on the left and right images (no fewer than 6, and not all on one straight line).

Step 3: relative orientation of the current stereo pair. From the homonymous feature points obtained in the previous step, combined with the camera calibration results, the relative position and attitude of the left and right images are computed; usually the left image serves as the reference, and the position and attitude of the right image relative to it are calculated.

Step 4: rectify the left and right images of the current stereo pair into epipolar images. Based on the relative orientation result, the left and right images are resampled according to the epipolar relationship to generate the corresponding epipolar images; homonymous image points on the left and right images then lie on the same image row.

Step 5: dense matching of the left and right epipolar images. The algorithm of Section 1.2.6 performs pixel-by-pixel matching of the epipolar images, yielding the coordinates of all homonymous image points on the left and right images.

Step 6: 3D reconstruction from the current stereo pair. From the dense matching result, the forward intersection method computes the object-space coordinates of each pair of homonymous image points, giving the 3D information of the target corresponding to the current stereo pair.

Step 7: connection of the target 3D models from two adjacent stereo pairs. Steps 1 to 6 are repeated to obtain the next stereo pair and its corresponding target 3D model; the two models overlap to some extent. The method of Section 1.2.8 establishes the geometric relationship between the 3D models from the two adjacent stereo pairs, and the model corresponding to the next pair is transformed into the unified coordinate system so that the two models merge into one.

Step 8: obtain the complete 3D geometric information of the target. Steps 1 to 7 are repeated, integrating the 3D models generated from each captured stereo pair; once all surfaces of the target have been photographed, complete and accurate 3D discrete point information of the target surface is obtained.

Specific embodiments have been given above, but the invention is not limited to the embodiments described. The basic idea of the invention lies in the scheme above; for those of ordinary skill in the art, designing various modified models, formulas and parameters according to the teaching of the invention requires no creative effort. Changes, modifications, substitutions and variations of the embodiments that do not depart from the principle and spirit of the invention still fall within the protection scope of the invention.

Claims (5)

1. The system is characterized by comprising a stereo camera consisting of two cameras and a camera fixing and distance adjusting device, wherein the stereo camera is connected with a control and calculation device used for controlling, storing, processing and outputting the processing results of the camera acquisition data; the measurement process is as follows:
1) calibrating the cameras; 2) acquiring a plurality of groups of stereo images while the stereo camera moves; 3) preprocessing the images; 4) feature extraction and stereo matching; 5) three-dimensional reconstruction; 6) connecting the stereo image models: for the stereo images at any moment, obtaining the homonymous image points in the stereo images at the adjacent moment, taking them as connection points of the two groups of stereo images, calculating the homonymous model points of the two groups of stereo models by forward intersection, and transforming the two groups of stereo models into the same spatial coordinate system through a spatial similarity transformation; processing the stereo images at each subsequent moment in the same way in turn, so that all stereo image models are connected into an overall model of the entire scene.
2. The system for on-line mobile real-time measurement of image-side stereo vision according to claim 1, wherein the camera calibration method comprises: simultaneously acquiring stereo images of the calibration plate; extracting the corner points of the calibration plate; and performing the calibration solution.
3. The system of claim 1, wherein the image preprocessing comprises filtering and grey-scale equalization.
4. The system of claim 1, wherein the feature extraction and stereo matching comprise: obtaining connection points between the stereo images using the SURF operator; computing the relative position and attitude between the two images by relative orientation; determining the epipolar relationship among the stereo images and rectifying the stereo images into epipolar-aligned stereo images; and performing dense matching on the stereo images with the SGM algorithm to generate dense homonymous image points.
5. The system of claim 1, wherein the three-dimensional reconstruction is achieved by reconstructing the three-dimensional information of the target scene by the multi-image forward intersection method from the dense homonymous image points obtained by matching.
CN201410745020.1A 2014-12-08 2014-12-08 Image space type stereoscopic vision moves real-time measurement system online Expired - Fee Related CN104537707B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410745020.1A CN104537707B (en) 2014-12-08 2014-12-08 Image space type stereoscopic vision moves real-time measurement system online


Publications (2)

Publication Number Publication Date
CN104537707A true CN104537707A (en) 2015-04-22
CN104537707B CN104537707B (en) 2018-05-04

Family

ID=52853226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410745020.1A Expired - Fee Related CN104537707B (en) 2014-12-08 2014-12-08 Image space type stereoscopic vision moves real-time measurement system online

Country Status (1)

Country Link
CN (1) CN104537707B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102607533A (en) * 2011-12-28 2012-07-25 PLA Information Engineering University Combined block adjustment positioning method for linear-array CCD optical and SAR (synthetic aperture radar) imagery
CN102693542A (en) * 2012-05-18 2012-09-26 PLA Information Engineering University Image feature matching method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
Hou Wenguang et al.: "Dense matching method for stereo images based on SURF and TPS", Journal of Huazhong University of Science and Technology (Natural Science Edition) *
Song Lihua et al.: "A new method for three-dimensional stereo model reconstruction", Application Research of Computers *
Du Shuangling: "Research and system design of three-dimensional modeling technology for archaeological excavation units based on dual cameras", China Master's Theses Full-text Database *
Wang Jianwen et al.: "An image feature extraction and matching algorithm based on image correlation", Science and Technology Innovation Herald *

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105469418B (en) * 2016-01-04 2018-04-20 中车青岛四方机车车辆股份有限公司 Large field-of-view binocular vision calibration device and method based on photogrammetry
CN106384368A (en) * 2016-09-14 2017-02-08 河南埃尔森智能科技有限公司 Distortion self-correction method for a non-metric camera lens and image sensor chip
CN108344369A (en) * 2017-01-22 2018-07-31 北京林业大学 Method for measuring tree diameters in a forest by mobile-phone stereo scanning
CN107167077A (en) * 2017-07-07 2017-09-15 京东方科技集团股份有限公司 Stereo vision measurement system and stereo vision measurement method
US10885670B2 (en) * 2017-07-07 2021-01-05 Boe Technology Group Co., Ltd. Stereo vision measuring system and stereo vision measuring method
CN107392898A (en) * 2017-07-20 2017-11-24 海信集团有限公司 Method and device for calculating pixel parallax values in binocular stereo vision
CN107392898B (en) * 2017-07-20 2020-03-20 海信集团有限公司 Method and device for calculating pixel point parallax value applied to binocular stereo vision
CN107729824A (en) * 2017-09-28 2018-02-23 湖北工业大学 Monocular vision positioning method for intelligent scoring of Chinese banquet table settings
CN111630345A (en) * 2017-12-21 2020-09-04 皮尔茨公司 Method for determining distance information from images of spatial regions
CN111630345B (en) * 2017-12-21 2022-09-02 皮尔茨公司 Method for determining distance information from images of spatial regions
CN107958469A (en) * 2017-12-28 2018-04-24 北京安云世纪科技有限公司 Dual-camera calibration method, apparatus, system, and mobile terminal
CN110148205A (en) * 2018-02-11 2019-08-20 北京四维图新科技股份有限公司 Method and apparatus for three-dimensional reconstruction based on crowdsourced images
CN108645426B (en) * 2018-04-09 2020-04-10 北京空间飞行器总体设计部 On-orbit self-calibration method for space target relative navigation vision measurement system
CN108645426A (en) * 2018-04-09 2018-10-12 北京空间飞行器总体设计部 On-orbit self-calibration method for space target relative navigation vision measurement system
CN111336073A (en) * 2020-03-04 2020-06-26 南京航空航天大学 Device and method for visual monitoring of wind turbine tower clearance
CN111336073B (en) * 2020-03-04 2022-04-05 南京航空航天大学 Device and method for visual monitoring of wind turbine tower clearance
CN111462213A (en) * 2020-03-16 2020-07-28 天目爱视(北京)科技有限公司 Apparatus and method for acquiring the 3D coordinates and dimensions of an object in motion
CN113066132A (en) * 2020-03-16 2021-07-02 天目爱视(北京)科技有限公司 3D modeling calibration method based on multi-device acquisition
CN111462213B (en) * 2020-03-16 2021-07-13 天目爱视(北京)科技有限公司 Apparatus and method for acquiring the 3D coordinates and dimensions of an object in motion
WO2021185218A1 (en) * 2020-03-16 2021-09-23 左忠斌 Method for acquiring 3d coordinates and dimensions of object during movement
CN112837411A (en) * 2021-02-26 2021-05-25 由利(深圳)科技有限公司 Method and system for three-dimensional reconstruction using the moving binocular camera of a sweeping robot
CN114820559A (en) * 2022-05-16 2022-07-29 浙江大学 Ultrasonic image defect detection method based on image entropy and recursive analysis

Also Published As

Publication number Publication date
CN104537707B (en) 2018-05-04

Similar Documents

Publication Publication Date Title
CN104537707B (en) Image-space stereo vision online mobile real-time measurement system
CN110599540B (en) Real-time three-dimensional human body shape and posture reconstruction method and device under multi-viewpoint camera
CN114399554B (en) Calibration method and system of multi-camera system
CN110189399B (en) Indoor three-dimensional layout reconstruction method and system
CN104182982B (en) Global optimization method for binocular stereo vision camera calibration parameters
CN108520537B (en) A binocular depth acquisition method based on photometric parallax
US8452081B2 (en) Forming 3D models using multiple images
CN108053437B (en) Three-dimensional model obtaining method and device based on posture
CN107155341B (en) Three-dimensional scanning system and frame
CN103278138B (en) Method for measuring three-dimensional position and posture of thin component with complex structure
CN111028155B (en) Parallax image splicing method based on multiple pairs of binocular cameras
CN106204731A (en) Multi-view three-dimensional reconstruction method based on a binocular stereo vision system
CN104484648A (en) Obstacle detection method with variable viewing angles for robots based on contour recognition
CN112509125A (en) Three-dimensional reconstruction method based on artificial markers and stereoscopic vision
CN108961410A (en) Image-based three-dimensional wireframe modeling method and device
US20200074658A1 (en) Method and system for three-dimensional model reconstruction
CN107038753B (en) Stereoscopic 3D reconstruction system and method
CN116129037B (en) Visual touch sensor, three-dimensional reconstruction method, system, equipment and storage medium thereof
CN103886595B (en) Catadioptric camera self-calibration method based on the generalized unified model
CN111612887B (en) Human body measuring method and device
CN114820563B (en) A method and system for industrial component size estimation based on multi-viewpoint stereo vision
CN111739103A (en) Multi-camera calibration system based on single-point calibration object
CN109493415A (en) Global motion initialization method and system for three-dimensional reconstruction of aerial images
CN113808273A (en) Unordered incremental sparse point cloud reconstruction method for numerical simulation of ship traveling waves
KR20160049639A (en) Stereoscopic image registration method based on a partial linear method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180504

CF01 Termination of patent right due to non-payment of annual fee