
CN105959514B - A kind of weak signal target imaging detection device - Google Patents

A kind of weak signal target imaging detection device

Info

Publication number
CN105959514B
CN105959514B · CN201610248720.9A
Authority
CN
China
Prior art keywords
image
point
points
pixel
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201610248720.9A
Other languages
Chinese (zh)
Other versions
CN105959514A (en)
Inventor
张振
顾朗朗
梁苍
孙启尧
高红民
陈哲
Current Assignee
Hohai University HHU
Original Assignee
Hohai University HHU
Priority date
Filing date
Publication date
Application filed by Hohai University HHU filed Critical Hohai University HHU
Priority to CN201610248720.9A priority Critical patent/CN105959514B/en
Publication of CN105959514A publication Critical patent/CN105959514A/en
Application granted granted Critical
Publication of CN105959514B publication Critical patent/CN105959514B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/55Optical parts specially adapted for electronic image sensors; Mounting thereof
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/20Scenes; Scene-specific elements in augmented reality scenes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50Constructional details
    • H04N23/54Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a weak target imaging detection device and method. Exploiting the difference in intensity between the light reflected and scattered by the target and by the background in a specific wavelength band and in the 0° and 90° polarization directions, a dual-channel orthogonal differential imaging scheme achieves simultaneous spectral-polarization imaging. The hardware module consists of three parts: the instrument housing, the optical system, and the FPGA main control board. The instrument housing connects the optical lenses, the circuit board, and the tripod; the optical system adopts a dual-channel structure to acquire two images with different polarization angles and wavebands; the FPGA main control board performs parameter configuration, synchronized acquisition, image buffering, and preprocessing for the dual-channel CMOS image sensors. The software module sequentially performs dual-channel image acquisition, image distortion correction, dual-channel image registration, image differential fusion, and image target detection. Compared with existing methods, the device has lower hardware cost and software complexity, and provides an effective means of detecting moving stealth targets against complex ground backgrounds.

Description

A Weak Target Imaging Detection Device

Technical Field

The invention relates to an optical imaging detection device and method, in particular to a weak target imaging detection device and method, and belongs to the field of optical imaging.

Background Art

Target detection and recognition technology refers to high-technology means of performing non-contact measurement of fixed or moving targets, accurately obtaining the targets' attribute information, and distinguishing genuine targets from false ones. Among such techniques, optical detection, being passive and therefore safe and covert, has developed rapidly and attracted great attention in recent years. However, traditional camouflage paints allow a target and its background to be approximately "of the same color and the same spectrum", so it is difficult to detect "stealthy" weak targets in complex backgrounds with traditional light-intensity detection methods.

Polarization is one of the basic properties of light: any object, in reflecting and emitting electromagnetic radiation, exhibits polarization characteristics determined by its own properties and the basic laws of optics. In the natural environment, the degree of polarization of background ground objects is generally low, while that of man-made targets is high. For example, the degree of polarization of vegetation is generally below 0.5%; that of rock, sand, bare soil, and the like lies between 0.5% and 1.5%; that of water surfaces, cement pavement, rooftops, and the like is generally above 1.5% (the water surface in particular reaches 8% to 10%); and the surfaces of certain non-metallic materials and some metallic materials reach a degree of polarization above 2% (in some cases above 10%). By imaging a scene in different polarization states, targets and backgrounds that differ in polarization and intensity can be effectively distinguished, enabling the detection and recognition of weak targets in complex backgrounds. In recent years, therefore, polarization imaging detection has received increasing attention in meteorological and environmental research, ocean development and utilization, space exploration, biomedicine, and military applications.

In polarization detection, the polarization state of the light radiated by a target is completely described by the four Stokes parameters: the total intensity I of the light wave, the linearly polarized intensity Q in the horizontal direction, the linearly polarized intensity U in the 45°/135° direction, and the circularly polarized intensity V. In practical applications V is negligible, so the degree of polarization is described as P = √(Q² + U²)/I and the polarization angle as θ = 0.5·arctan(U/Q). To obtain this polarization-state information, therefore, at least three intensity images at different polarization directions must be acquired in order to compute the parameters I, Q, and U.
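As a numerical illustration of these relations (a minimal sketch, not part of the patent; the array names and the sample values are made up), the linear Stokes parameters and the derived degree and angle of polarization can be computed from three intensity images taken at 0°, 45°, and 90°:

```python
import numpy as np

def stokes_from_intensities(i0, i45, i90):
    """Linear Stokes parameters from 0°, 45°, 90° intensity images.

    I = total intensity, Q = horizontal linear component,
    U = 45° linear component; the circular component V is neglected.
    """
    I = i0 + i90                 # total intensity
    Q = i0 - i90                 # 0° minus 90°
    U = 2.0 * i45 - (i0 + i90)   # 45° component
    return I, Q, U

def degree_and_angle(I, Q, U, eps=1e-12):
    """Degree of linear polarization P and polarization angle theta."""
    P = np.sqrt(Q ** 2 + U ** 2) / np.maximum(I, eps)
    theta = 0.5 * np.arctan2(U, Q)
    return P, theta

# Fully horizontally polarized light of unit intensity:
# i0 = 1, i45 = 0.5, i90 = 0  ->  I = 1, Q = 1, U = 0, P = 1, theta = 0
I, Q, U = stokes_from_intensities(np.array([1.0]),
                                  np.array([0.5]),
                                  np.array([0.0]))
P, theta = degree_and_angle(I, Q, U)
```

With per-pixel arrays instead of single values, the same two functions apply unchanged to whole images.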

According to this principle, four types of polarization imaging detection devices are currently in use. (1) Time-division imaging. A single imaging device acquires images at three polarization directions, 0°, 60°, and 90°, by sequentially rotating a polarizer mounted in front of the lens. The structure is simple and easy to implement, but the method applies only when both the target and the background are stationary. (2) Optical-path splitting. A beam splitter and a retarder divide the beam passing through a single lens evenly into three identical parts, which are projected through 0°, 60°, and 90° polarizers onto three independent imaging devices. Polarization images in the three directions are obtained simultaneously, but the energy incident on each imaging device is greatly reduced, significantly lowering the imaging signal-to-noise ratio. (3) Division of focal plane. An imaging device made by a special process is used, in which each pixel corresponds to one of the 0°, 60°, and 90° polarization directions, arranged in a Bayer-like pattern analogous to the RGB layout of a color image sensor. Simultaneous polarization imaging is achieved without additional splitting optics, making instrument miniaturization easy, but the fabrication process of division-of-focal-plane devices is complex and has not been commercialized. (4) Spatial registration. Three cameras form a three-channel synchronized imaging system that collects polarization images in the 0°, 60°, and 90° directions, and an image registration algorithm then aligns the pixels in the overlapping regions of the three images. The hardware complexity is low, but because the distortion parameters and viewing angles of the three channels are inconsistent, registration accuracy will be poor unless they are properly corrected, degrading the detection of weak and small targets. In fact, for weak target detection the purpose of polarization imaging is not to obtain degree-of-polarization or polarization-angle information, but to enhance the contrast between target and background efficiently and in real time. From this point of view, fusing multi-channel images via the Stokes equations is not an efficient approach.

The present invention exploits the difference in intensity between the light reflected and scattered by the target and by the background in a specific wavelength band and in the 0° and 90° polarization directions, and adopts a dual-channel orthogonal differential imaging scheme to achieve simultaneous spectral-polarization imaging. Compared with existing simultaneous polarization imaging methods, it has lower hardware cost and software complexity, and provides an effective means of detecting moving stealth targets against complex ground backgrounds.

Summary of the Invention

Addressing the shortcomings of existing systems for detecting moving stealth targets against complex ground backgrounds, the present invention provides a weak target imaging detection device and method.

The present invention is realized through the following technical solutions:

A weak target imaging detection device consists of three parts: an instrument housing, an optical system, and an FPGA main control board. The instrument housing connects the optical lenses, the circuit board, and the tripod, and comprises a housing front panel, a housing rear frame, and a tripod mount. The optical system adopts a dual-channel structure to acquire two images with different polarization angles and wavebands: channel 1 comprises a 0° linear polarizing filter, an optical lens, a C-mount lens adapter, a filter holder, a 470 nm narrow-band filter, and a CMOS image sensor; channel 2 comprises a 90° linear polarizing filter, an optical lens, a C-mount lens adapter, a filter holder, a 630 nm narrow-band filter, and a CMOS image sensor. The FPGA main control board performs parameter configuration, synchronized acquisition, image buffering, and preprocessing for the dual-channel CMOS image sensors, and transmits the data to a PC through a USB interface.

The housing front panel measures 100 mm × 50 mm × 5 mm and carries two C-mount lens adapters for fixing the optical lenses; the centers of the two adapters are 50 mm apart and their thread outer diameter is 25.1 mm. The housing rear frame measures 100 mm × 50 mm × 30 mm and is connected to the front panel by 12 Φ3*6 screws around its periphery; on its left side is a Type-B USB port used to connect the FPGA main control board to the PC. The tripod mount is located on the underside of the housing rear frame and connects to the tripod head through a central 1/4-20 screw hole.

The optical lenses of channels 1 and 2 both have a fixed focal length of 8 mm, an aperture range of F1.4-F16, and a focus range of 0.1 m-∞, and attach to the two C-mount adapters on the front panel. Two rotatable linear polarizing filters are mounted in front of the two optical lenses through M30.5×0.5 mm adapter rings; using a linear polarization calibration plate, the polarization directions of the two filters are adjusted to 0° and 90° respectively. Two narrow-band filters are mounted on the surfaces of the CMOS image sensors through the filter holders; both filters are made of mirror glass, measure 12 mm × 12 mm × 0.7 mm, have center wavelengths of 470 nm and 630 nm respectively, a half bandwidth of 20 nm, a peak transmittance above 90%, and a cut-off depth below 1%. The CMOS image sensors are 1.3-megapixel 1/2″ monochrome area-array sensors with a spectral response range of 400-1050 nm.

The FPGA main control board is built around a single non-volatile FPGA chip; using system-on-a-programmable-chip technology, a 32-bit soft-core Nios II processor and some of its peripherals are integrated on the single chip, with only one USB 2.0 interface chip and one Type-B USB port off-chip for communication with the PC. The Nios II processor controls the on-chip peripherals over the Avalon bus: user RAM, user FLASH, the USB controller, the two dual-port RAM controllers corresponding to the two channels, and the image acquisition module. The user RAM serves as the working memory of the Nios II processor; the user FLASH stores the program code executed by the Nios II processor; the USB controller handles configuration of the USB 2.0 interface chip and bus-protocol conversion; each dual-port RAM is an asynchronous FIFO used to filter and process the valid data of each image line and to keep the data synchronized during transmission. The image acquisition module comprises a configuration controller and a timing controller: the configuration controller configures the internal registers of the CMOS image sensor over the I²C bidirectional serial data bus (SCLK, SDATA), and the timing controller controls the CMOS image sensor to output data DOUT[9:0] synchronously through the timing signals STROBE, PIXCLK, L_VALID, and F_VALID and the control signals STANDBY, TRIGGER, and CLKIN.

The workflow of the FPGA main control board is as follows. After power-up, the board first performs system initialization and then puts the Nios II processor into a wait state. When the PC sends a start signal to the board over the USB interface, the Nios II processor, through the configuration controller, performs register-write operations on the two CMOS image sensors in turn, setting them to snapshot mode and configuring parameters such as image resolution, exposure time, and electronic gain. Once configuration is complete, the configuration controller's I²C bus enters the idle state and the two timing controllers send TRIGGER pulses synchronously. On receiving a TRIGGER pulse, the CMOS image sensor performs an internal row reset and then outputs a STROBE pulse whose width indicates the length of the pixel integration time. After the STROBE signal falls from 1 to 0, the sensor outputs data DOUT[7:0] normally, together with the synchronization signals F_VALID and L_VALID. On receiving the returned data and synchronization signals, the timing controller first ANDs F_VALID and L_VALID. While the result is high, the data are valid and are stored in the dual-port RAM at addresses 0-1280, clocked by the pixel clock. When the result falls from high to low, a line of valid data has been transferred; the data in the two dual-port RAMs are then packaged into 512-byte packets, output in turn to the FIFO of the USB 2.0 interface chip, and transmitted to the PC over the USB cable. After a frame of data has been transferred, the Nios II processor sets the CMOS image sensors to STANDBY mode through the configuration controller, stopping data output and waiting for the next start signal.

The detection method of the weak target imaging detection device described above comprises the following five main steps:

(1) Dual-channel image acquisition. After the task starts, the USB ports are scanned and the specified imaging device is connected. Once the connection is confirmed, control words are sent to the imaging device to set the imaging parameters, including image resolution, exposure time, and electronic gain. After setup, an acquisition command is sent and the program waits to receive image data; when the image data of both channels have been transferred, the images are saved in the losslessly compressed bitmap format.

(2) Image distortion correction. Zhang Zhengyou's method is used to calibrate the optical distortion parameters of the imaging system. The nonlinear distortion model considers only the radial distortion of the image:

δx = x(k1·r² + k2·r⁴ + k3·r⁶ + …), δy = y(k1·r² + k2·r⁴ + k3·r⁶ + …), with r² = x² + y²

where δx and δy are the distortion values, which depend on the pixel position of the projected point in the image; x and y are the normalized projection values of the image point in the imaging-plane coordinate system under the linear projection model; and k1, k2, k3, … are the radial distortion coefficients. Only second-order distortion is considered here, so the distorted coordinates are:

xd = x(1 + k1·r² + k2·r⁴), yd = y(1 + k1·r² + k2·r⁴)

Let (ud, vd) and (u, v) be the actual and ideal coordinates of a spatial point in the image coordinate system; the two are then related by:

ud = u + (u - u0)(k1·r² + k2·r⁴), vd = v + (v - v0)(k1·r² + k2·r⁴)

where (u0, v0) is the principal point. Taking the linear calibration result as the initial parameter values, the nonlinear parameters are estimated by minimizing the following objective function:

min Σi Σj ‖ mij - m̂(K, k1, k2, Ri, ti, Mj) ‖², i = 1…n, j = 1…m

where m̂(K, k1, k2, Ri, ti, Mj) is the projection of the j-th point of the calibration template onto the i-th image obtained with the estimated parameters, Mj is the coordinate of the j-th template point in the world coordinate system, m is the number of feature points per image, and n is the number of images. The Levenberg-Marquardt (LM) iterative method optimizes the resulting camera calibration parameters, finally yielding accurate radial distortion coefficients, from which the undistorted image coordinates are solved inversely.
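The second-order radial model above, and the inverse mapping used to recover undistorted coordinates, can be sketched as follows (an illustrative implementation, not the patent's code; the coefficient values and the fixed-point inversion scheme are assumptions for the example):

```python
import numpy as np

def distort(xy, k1, k2):
    """Apply the second-order radial model to normalized image
    coordinates: x_d = x*(1 + k1*r^2 + k2*r^4), same for y."""
    x, y = xy
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * factor, y * factor)

def undistort(xy_d, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: start from
    the distorted point and repeatedly divide out the distortion
    factor evaluated at the current estimate."""
    xd, yd = xy_d
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return (x, y)

k1, k2 = -0.12, 0.03          # made-up distortion coefficients
pt = (0.4, -0.3)              # a normalized image coordinate
pt_d = distort(pt, k1, k2)    # forward model
pt_u = undistort(pt_d, k1, k2)  # recovers pt to high accuracy
```

For moderate distortion the fixed-point iteration converges quickly; production code would instead use a calibrated undistortion map computed once per camera.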

(3) Dual-channel image registration, which achieves pixel alignment of the dual-channel images under different imaging fields of view, wavebands, polarization angles, and optical distortions. An image registration algorithm based on SURF feature points is adopted, comprising the following five sub-steps:

1) Detect SURF feature points. On the basis of the integral image, box filters are used to approximate second-order Gaussian filters, and the Hessian value is computed for each candidate feature point and the points around it; if the candidate has the largest Hessian value, it is taken as a feature point.

2) Generate feature description vectors. Using the gray-level information in the neighborhood of each feature point, the first-order Haar wavelet responses of the integral image are computed to obtain the gray-level distribution, from which a 128-dimensional feature description vector is produced.

3) Two-step feature point matching. A coarse matching step based on the nearest/next-nearest-neighbor ratio and a fine matching step based on RANSAC establish correct one-to-one correspondences between the feature points of the reference image and those of the image to be registered. After the feature vectors of the two images are generated, the Euclidean distance between SURF description vectors is first used as the similarity measure between key points in the two images: a K-d tree yields, for each feature point, the distance dND to its nearest neighbor and the distance dNND to its next-nearest neighbor, and if their ratio is smaller than a threshold ε, the matching pair formed by the point and its nearest neighbor is retained. Then four pairs of initial matched feature points are selected at random, the perspective transformation matrix H determined by these four pairs is computed, and this matrix is used to measure how well the remaining feature points match:

di = ‖ m′i - H·mi ‖

where t is a threshold: feature point pairs with di ≤ t are inliers of H, and pairs with di > t are outliers. The inlier set is updated continuously in this way; k rounds of RANSAC random sampling yield the largest inlier set, and with it the perspective transformation matrix H corresponding to the optimized inlier set.
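The coarse-matching ratio test can be sketched in a few lines (illustrative only; the toy descriptors and the threshold value are made up, and a brute-force search stands in for the K-d tree):

```python
import numpy as np

def ratio_test_matches(desc_a, desc_b, eps=0.7):
    """For each descriptor in desc_a, find its nearest and next-nearest
    neighbours in desc_b by Euclidean distance; keep the pair only if
    d_nearest / d_next_nearest < eps (distinctive matches survive)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        d_nd, d_nnd = dists[order[0]], dists[order[1]]
        if d_nnd > 0 and d_nd / d_nnd < eps:
            matches.append((i, int(order[0])))
    return matches

# Toy 4-dimensional descriptor sets: point 0 of A clearly matches
# point 1 of B; point 1 of A is ambiguous and should be rejected.
A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.5, 0.5, 0.0, 0.0]])
B = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.05, 0.0, 0.0],
              [0.5, 0.45, 0.0, 0.0],
              [0.5, 0.55, 0.0, 0.0]])
m = ratio_test_matches(A, B)
```

The surviving pairs would then feed the RANSAC stage, which repeatedly fits H from four random pairs and keeps the hypothesis with the largest inlier set.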

4) Coordinate transformation and resampling. The coordinates of the image pixels are linearly transformed according to the obtained perspective transformation matrix H, and the gray values of the image pixels are resampled by bilinear interpolation. Bilinear interpolation assumes that the gray-level variation within the region enclosed by the four points surrounding the interpolation point is linear, so the gray value of the interpolated point can be computed by linear interpolation from the gray values of its four neighboring pixels.
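Bilinear resampling at a non-integer coordinate can be sketched as follows (illustrative; a grayscale image is assumed, stored as a 2-D array indexed [row, col]):

```python
import numpy as np

def bilinear(img, x, y):
    """Gray value at a non-integer position (x, y) = (col, row),
    assuming the gray level varies linearly inside the 2x2
    neighbourhood of the four surrounding pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    # Weighted sum of the four neighbouring pixels
    return (img[y0, x0] * (1 - fx) * (1 - fy) +
            img[y0, x1] * fx * (1 - fy) +
            img[y1, x0] * (1 - fx) * fy +
            img[y1, x1] * fx * fy)

img = np.array([[0.0, 10.0],
                [20.0, 30.0]])
v = bilinear(img, 0.5, 0.5)   # centre of the 2x2 block -> mean
```

In the registration step, each output pixel's coordinate is mapped through H into the source image and sampled this way.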

5) Crop the image overlap region. The four boundary points after the image coordinate transformation are examined (W and H being the width and height of the image) to determine the coordinates of the four boundary points of the overlapping region after registration, (Xmin, Ymin), (Xmin, Ymax), (Xmax, Ymin), and (Xmax, Ymax); the dual-channel images are cropped to the rectangular region formed by these boundary points, giving the registered 0° and 90° polarization images I(0°) and I(90°).

(4) Image differential fusion. The orthogonal difference image obtained by dual-channel orthogonal differencing is expressed as:

Q = I(0°) - I(90°)
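The fusion step itself reduces to a pixel-wise signed difference of the two registered images. A minimal sketch (the rescaling to a displayable range is an assumption for the example; 8-bit inputs are promoted to a signed type so the difference does not wrap around):

```python
import numpy as np

def orthogonal_difference(i0, i90):
    """Q = I(0°) - I(90°), computed in int16 so negative values
    survive, then linearly rescaled to 0..255 for display."""
    q = i0.astype(np.int16) - i90.astype(np.int16)
    q_min, q_max = int(q.min()), int(q.max())
    if q_max == q_min:
        return np.zeros_like(i0)
    scaled = (q - q_min) * 255.0 / (q_max - q_min)
    return scaled.astype(np.uint8)

i0 = np.array([[100, 120], [80, 200]], dtype=np.uint8)
i90 = np.array([[90, 120], [100, 50]], dtype=np.uint8)
q = orthogonal_difference(i0, i90)
```

Compared with computing the degree of polarization from three channels, this single subtraction is the entire per-pixel cost of the fusion.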

(5) Image target detection. The system performs target detection on the orthogonal differential polarization image using morphological methods, in the following three sub-steps:

1) Binarization. The maximum between-class variance (Otsu) method adaptively selects a global threshold. The principle is as follows: suppose the image has M gray levels, with values ranging over 0 to M-1. A gray value t in this range divides the image into two groups G0 and G1: the pixels in G0 have gray values in 0~t, and those in G1 have gray values in t+1~M-1. Let N denote the total number of image pixels and ni the number of pixels with gray value i; then the probability of gray value i is pi = ni/N, the probabilities of classes G0 and G1 are ω0 = Σ pi (i = 0…t) and ω1 = 1 - ω0, and their mean gray values are μ0 = Σ i·pi/ω0 (i = 0…t) and μ1 = Σ i·pi/ω1 (i = t+1…M-1). The between-class variance is then:

σ(t)² = ω0·ω1·(μ0 - μ1)²

The optimal threshold T is the value of t that maximizes the between-class variance, that is:

T = argmax σ(t)², t ∈ [0, M-1]
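The threshold selection above can be sketched directly from the histogram (illustrative; the exhaustive search over t mirrors the definition rather than the usual incremental optimization, and the test image is synthetic):

```python
import numpy as np

def otsu_threshold(img, M=256):
    """Return the t in [0, M-1] maximizing the between-class
    variance sigma(t)^2 = w0 * w1 * (mu0 - mu1)^2."""
    hist = np.bincount(img.ravel(), minlength=M).astype(np.float64)
    p = hist / hist.sum()            # gray-level probabilities p_i
    best_t, best_var = 0, -1.0
    for t in range(M):
        w0 = p[: t + 1].sum()        # class G0 probability
        w1 = 1.0 - w0                # class G1 probability
        if w0 == 0.0 or w1 == 0.0:
            continue
        mu0 = (np.arange(t + 1) * p[: t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, M) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal test image: dark background at 10, bright blob at 200
img = np.full((8, 8), 10, dtype=np.uint8)
img[2:5, 2:5] = 200
T = otsu_threshold(img)
binary = img > T                 # foreground = the bright blob
```

Pixels above T form the binary foreground passed to the morphological stage.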

2) Opening operation. The opening operation filters out small clutter and yields a more accurate target contour; it is defined as erosion followed by dilation. Erosion eliminates irrelevant detail in an object, especially edge points, shrinking the object boundary inward; it is expressed as:

E = X ⊖ B = { (x, y) | B(x,y) ⊆ X }

where E is the eroded binary image; B is the structuring element (template), a pattern of any shape composed of 0s and 1s with a designated center point about which the erosion is performed; and X is the pixel set of the binarized image. In operation, the structuring element B slides over the image domain X; when its center coincides with a point (x, y) of X, the pixels inside the structuring element are traversed, and (x, y) is retained in E only if every pixel of B is identical to the corresponding pixel in the same position centered at (x, y); pixels that do not satisfy this condition are removed, which shrinks the boundary. Dilation acts oppositely to erosion: it expands the boundary points of the binarized object contour and can fill residual holes left in the object after segmentation, making the object complete; it is expressed as:

S = X ⊕ B = { (x, y) | B(x,y) ∩ X ≠ ∅ }

where S is the set of pixels of the dilated binary image, B is the structuring element (template), and X is the binarized pixel set. In operation, B slides over the image domain X; when the center of B moves to a point (x, y) of the image, the pixels inside the structuring element are traversed, and (x, y) is retained in S if at least one pixel of B coincides with a pixel of X; otherwise the pixel is removed. After the opening operation, the binary image is divided into multiple connected regions.
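Erosion, dilation, and the opening they compose can be sketched with a 3x3 all-ones structuring element (illustrative; border pixels are simply treated as background here, and the test object is synthetic):

```python
import numpy as np

def erode(x):
    """3x3 erosion: a pixel survives only if its whole 3x3
    neighbourhood is foreground (the element fits inside X)."""
    out = np.zeros_like(x)
    h, w = x.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = x[i - 1:i + 2, j - 1:j + 2].all()
    return out

def dilate(x):
    """3x3 dilation: a pixel is set if any pixel in its 3x3
    neighbourhood is foreground (the element hits X)."""
    out = np.zeros_like(x)
    h, w = x.shape
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i, j] = x[i - 1:i + 2, j - 1:j + 2].any()
    return out

def opening(x):
    """Opening = erosion followed by dilation."""
    return dilate(erode(x))

x = np.zeros((9, 9), dtype=np.uint8)
x[1:5, 1:5] = 1    # a 4x4 object: survives opening intact
x[6, 6] = 1        # an isolated pixel: removed by opening
y = opening(x)
```

The isolated pixel disappears (clutter suppression) while the 4x4 object is recovered at its original extent, which is exactly the behavior the step relies on.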

3) Connected-domain identification. First the connected domains of the image are segmented using the 8-adjacency criterion; an 8-adjacent connected domain is defined as a region in which, for every pixel, at least one of its 8 neighbours in the 8 surrounding directions also belongs to the region. By this definition the different connected domains of the binary image are filled with different numeric labels. The pixel perimeter of each connected domain is then extracted and compared with a preset target threshold; a domain whose perimeter falls within the threshold interval is judged a candidate target. Finally each candidate target is marked in the image with the smallest rectangle enclosing its connected-domain contour, completing target detection.

The present invention has the following beneficial effects:

1. The hardware system is easy to implement, requiring no complex optical beam-splitting design or imaging-device fabrication process.

2. The software has low computational complexity. The complicated camera calibration needs to be performed only once, in the laboratory; image fusion requires no computation of the degree of polarization, only a single simple pixel grey-level difference operation.

3. The registration accuracy of the algorithm is high, because the nonlinear distortion of the cameras is corrected before registration.

4. The method is suitable for the detection of moving targets.

Description of Drawings

Fig. 1 is a block diagram of the software and hardware functional modules of the weak target imaging detection system of the present invention.

Fig. 2 is a perspective view of the hardware structure of the weak target imaging detection device of the present invention. Reference numerals: 1, housing front panel; 2, housing rear frame; 3, tripod mount; 4, 0° linear polarizing filter; 5, 90° linear polarizing filter; 6 and 7, optical lenses; 8 and 9, C-mount lens adapter rings; 10 and 11, filter holders; 12, 470 nm narrow-band filter; 13, 630 nm narrow-band filter; 14 and 15, CMOS image sensors; 16, USB interface.

Fig. 3 is a block diagram of the hardware circuit of the FPGA main control board of the present invention.

Fig. 4 is a flow diagram of the software of the weak target imaging detection method of the present invention.

Detailed Description

The technical scheme of the present invention is described in detail below with reference to the accompanying drawings:

The block diagram of the software and hardware functional modules of the weak target imaging detection system of the present invention is shown in Fig. 1. The hardware of the weak target imaging detection device is divided into three parts: the instrument housing, the optical system and the FPGA main control board. The instrument housing connects the optical lenses, the circuit board and the tripod, and comprises the housing front panel, the housing rear frame and the tripod mount. The optical system adopts a dual-channel structure to acquire two images with different polarization angles and wavebands: channel 1 comprises a 0° linear polarizing filter, an optical lens, a C-mount lens adapter ring, a filter holder, a 470 nm narrow-band filter and a CMOS image sensor; channel 2 comprises a 90° linear polarizing filter, an optical lens, a C-mount lens adapter ring, a filter holder, a 630 nm narrow-band filter and a CMOS image sensor. The FPGA main control board performs parameter configuration, synchronous acquisition, image buffering and preprocessing for the dual-channel CMOS image sensors, and transmits the data to a PC through a USB interface. The software module runs on the PC and executes, in sequence, dual-channel image acquisition, image distortion correction, dual-channel image registration, image differential fusion and image target detection.

A perspective view of the hardware structure of the weak target imaging detection device of the present invention is shown in Fig. 2. The housing front panel 1 measures 100 mm × 50 mm × 5 mm and carries the C-mount lens adapter rings 8 and 9 that fix the optical lenses; the centre-to-centre distance of the two rings is 50 mm and the thread outer diameter is 25.1 mm. The housing rear frame 2 measures 100 mm × 50 mm × 30 mm and is fastened to the front panel by 12 Φ3×6 screws around its periphery; on its left side is a type-B USB interface 16 used to connect the FPGA main control board to the PC. The tripod mount 3 is located on the underside of the housing rear frame and attaches to the tripod head through a central 1/4-20 screw hole (outer diameter 1/4 inch, pitch 20 threads per inch). The optical lenses 6 and 7 both have a fixed focal length of 8 mm, an aperture range of F1.4-F16 and a focusing range of 0.1 m to infinity, and are attached to the C-mount lens adapter rings 8 and 9 on the front panel respectively. The two rotatable linear polarizing filters 4 and 5 are mounted in front of the optical lenses 6 and 7 through adapter rings of size M30.5×0.5 mm (outer diameter 30.5 mm, pitch 0.5 mm); using a linear polarization calibration plate, their polarization directions are adjusted to 0° and 90° respectively. The two narrow-band filters 12 and 13 are mounted on the surfaces of the CMOS image sensors 14 and 15 through the filter holders 10 and 11. Both filters are made of mirror glass and measure 12 mm × 12 mm × 0.7 mm, with centre wavelengths of 470 nm and 630 nm respectively, a half bandwidth of 20 nm, peak transmittance > 90% and cut-off depth < 1%. The CMOS image sensors 14 and 15 are both 1.3-megapixel MT9M001 devices. The MT9M001 is a 1/2″ monochrome area sensor with a spectral response range of 400-1050 nm; its imaging signal-to-noise ratio and dynamic range are 45 dB and 68.2 dB respectively, already reaching the level of a CCD; its 5.2 μm × 5.2 μm pixel size gives a high low-light sensitivity of 2.1 V/lux-sec; and its continuous capture capability of 1280×1024 at 30 fps meets the detection requirements of most moving targets.

The block diagram of the hardware circuit of the FPGA main control board of the present invention is shown in Fig. 3. To achieve synchronous acquisition and control of the dual-channel CMOS image sensors, the hardware design of the main control board is built around a single non-volatile FPGA chip and uses system-on-a-programmable-chip technology to integrate a 32-bit soft-core Nios II processor and part of its peripherals into that one chip; off chip, only a USB 2.0 interface chip and a type-B USB connector are used to communicate with the PC, which greatly increases the functional integration of the system components and lowers system-level cost. The Nios II processor is instantiated as an IP core and controls the on-chip peripherals over the Avalon bus: the user RAM, the user FLASH, the USB controller, the 2 dual-port RAM controllers corresponding to the two channels, and the image acquisition module. The user RAM serves as the run-time memory of the Nios II processor; the user FLASH stores the program code it executes; the USB controller handles configuration of the USB 2.0 interface chip and bus-protocol conversion; each dual-port RAM is an asynchronous FIFO used to screen and process the valid data of each image line and to keep the data synchronized during transfer. The image acquisition module comprises a configuration controller and a timing controller: the configuration controller configures the internal registers of the CMOS image sensors over the I2C bidirectional serial data bus (SCLK, SDATA), and the timing controller drives the CMOS image sensors to output data DOUT[9:0] synchronously through the timing signals STROBE, PIXCLK, L_VALID, F_VALID and the control signals STANDBY, TRIGGER, CLKIN.

In a concrete implementation, the FPGA is an ALTERA MAX 10 device, model 10M08E144ES. This chip is manufactured in TSMC's 55 nm embedded NOR flash technology and provides 8K logic elements, 378 Kb of embedded SRAM and 172 KB of user FLASH. Since the maximum pixel array of the CMOS image sensor is 1280×1024 with 8-bit quantization, buffering one line of data requires 10 Kbit of storage; two 10 Kb blocks are therefore allocated from the embedded SRAM to build the two dual-port RAMs, and the remaining 358 Kb is assigned to the user RAM. The USB 2.0 interface chip is a CYPRESS CY7C68013A with a 4 KB internal FIFO; the peripheral side and the USB side can operate on this FIFO simultaneously, so the FIFO can exchange data with the external circuit without involving the USB firmware, at a maximum transfer rate of 96 MB/s.

The workflow of the FPGA main control board is as follows. After power-up the board first initializes the system and then puts the Nios II processor into a waiting state. Once the PC sends a start signal over the USB interface, the Nios II processor uses the configuration controller to write the registers of the two CMOS image sensors in turn, setting them to snapshot mode and configuring parameters such as image resolution, exposure time and electronic gain. When configuration is complete, the I2C bus of the configuration controller goes idle and the two timing controllers send TRIGGER pulses synchronously. On receiving a TRIGGER pulse, a CMOS image sensor performs an internal row reset and then outputs a STROBE pulse whose width marks the length of the pixel integration time. After the STROBE signal falls from 1 to 0, the sensor outputs data DOUT[7:0] normally, together with the synchronization signals F_VALID and L_VALID. On receiving the returned data and synchronization signals, the timing controller first ANDs F_VALID with L_VALID. While the result is high the data are valid, and they are stored in the dual-port RAM at addresses 0-1280 using the pixel clock as the working clock; when the result goes from high to low, one line of valid data has been transferred, whereupon the data in the two dual-port RAMs are packed into 512-byte packets and output in turn to the FIFO of the USB 2.0 interface chip, then transmitted to the PC over the USB cable. After a frame of data has been transferred, the Nios II processor sets the CMOS image sensors to STANDBY mode through the configuration controller, stopping data output and waiting for the next start signal.

The software flow diagram of the weak target imaging detection method of the present invention is shown in Fig. 4. The method comprises the following five main steps:

(1) Dual-channel image acquisition. After the task starts, the software first scans the USB ports and connects to the specified imaging device; once the connection is confirmed, it sends control words to the device to set the imaging parameters, including image resolution, exposure time and electronic gain; it then issues an acquisition command, waits to receive the image data, and saves the images in losslessly compressed bitmap format once both channels have finished transferring.

(2) Image distortion correction. To register the dual-channel images accurately, distortion correction must be applied to the two images separately. Since the two channels of the imaging system are independent, the classic Zhang Zhengyou planar calibration method is used to calibrate the optical distortion parameters of the imaging system. Optical distortion is nonlinear and mainly comprises radial distortion, tangential distortion, decentering distortion and thin-prism distortion, so a nonlinear model is needed to estimate the distortion parameters. Radial distortion is the dominant source of image error, and its model can be approximately described as:

δx = x·(k1·r² + k2·r⁴ + k3·r⁶ + …),  δy = y·(k1·r² + k2·r⁴ + k3·r⁶ + …),  where r² = x² + y²

Here δx and δy are the distortion values, which depend on the pixel position of the projected point in the image; x and y are the normalized projection values of the image point in the imaging-plane coordinate system under the linear projection model; and k1, k2, k3, … are the radial distortion coefficients. Only the second-order distortion is considered here, and the distorted coordinates are:

xd = x·(1 + k1·r² + k2·r⁴),  yd = y·(1 + k1·r² + k2·r⁴)

Let (ud, vd) and (u, v) be, respectively, the actual and ideal coordinates of a spatial point in the image coordinate system. The relationship between the two is then:

ud = u + (u − u0)·(k1·r² + k2·r⁴),  vd = v + (v − v0)·(k1·r² + k2·r⁴),  with (u0, v0) the principal point

Taking the linear calibration result as the initial parameter values, the following objective function is minimized to estimate the nonlinear parameters:

min Σ(i=1..n) Σ(j=1..m) ‖ m_ij − m̂(K, k1, k2, R_i, t_i, M_j) ‖²

Here m̂(K, k1, k2, R_i, t_i, M_j) is the projection of the j-th point of the calibration template onto the i-th image computed with the estimated parameters, M_j is the coordinate of the j-th template point in the world coordinate system, m is the number of feature points per image, and n is the number of images. The camera calibration parameters are optimized with the Levenberg-Marquardt (LM) iteration, finally yielding accurate radial distortion coefficients, from which the undistorted image coordinates are solved inversely.
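The radial model above can be sketched in a few lines of Python. This is an illustrative sketch only, not the patent's implementation: the function names and the fixed-point inversion strategy are our own choices. It applies the two-coefficient factor 1 + k1·r² + k2·r⁴ to normalized coordinates and inverts it iteratively to recover the undistorted point.

```python
def distort(x, y, k1, k2):
    """Apply the two-coefficient radial model to normalized coordinates:
    the distorted point is the ideal point scaled by 1 + k1*r^2 + k2*r^4."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the radial model by fixed-point iteration: repeatedly
    re-estimate the radial factor at the current guess and divide it out.
    Converges quickly when the distortion is small."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        factor = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / factor, yd / factor
    return x, y
```

Because the distortion factor is close to 1 for typical lenses, the fixed-point iteration is a contraction and a handful of iterations suffices in practice.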

(3) Dual-channel image registration. Because the two channels differ in imaging field of view, waveband, polarization angle and optical distortion, the two images must be registered before the pixels to be fused can be aligned. Since SURF feature points are robust to image rotation, translation, scaling and noise, an image registration algorithm based on SURF feature points is adopted, comprising the following five sub-steps:

1) Detect SURF feature points. On the basis of the integral image, box filters are used to approximate the second-order Gaussian filters, and the Hessian response is computed for each candidate feature point and the points around it; a candidate is accepted as a feature point if its Hessian response is the maximum.

2) Generate feature description vectors, using the grey-level information in the neighbourhood of each feature point: the first-order Haar wavelet responses of the integral image are computed to obtain the grey-level distribution, from which a 128-dimensional feature description vector is produced.

3) Match feature points in two steps: a coarse matching stage based on the nearest/next-nearest-neighbour ratio test and a fine matching stage based on RANSAC, which together establish the correct one-to-one correspondences between the feature points of the reference image and of the image to be registered. Once the feature vectors of the two images have been generated, the Euclidean distance between SURF description vectors is first used as the similarity measure for key points of the two images: a K-d tree gives, for each feature point, the distance d_ND to its nearest neighbour and the distance d_NND to its next-nearest neighbour, and if their ratio is below a threshold ε, the matching pair formed by the feature point and its nearest neighbour is retained. Then 4 pairs of initial matching feature points are selected at random, the perspective transformation matrix H determined by these 4 pairs is computed, and this matrix is used to measure how well the remaining feature points match:

d = ‖ x′_i − H·x_i ‖, for each remaining matched pair (x_i, x′_i)

Here t is a threshold: feature-point pairs whose error is at most t are inliers of H, and pairs whose error exceeds t are outliers. The inlier set is updated iteratively in this way, and the k random samplings of RANSAC yield the largest inlier set, together with the perspective transformation matrix H corresponding to that optimized inlier set.
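The coarse ratio-test stage described above can be sketched as follows. This is a simplified illustration, not the patent's code: it uses a brute-force nearest-neighbour search in place of the K-d tree, omits the RANSAC refinement, and all names are our own.

```python
import numpy as np

def ratio_match(desc_ref, desc_tgt, eps=0.7):
    """Coarse matching by the nearest/next-nearest ratio test.
    desc_ref: (m, d) descriptors of the reference image.
    desc_tgt: (n, d) descriptors of the image to be registered.
    Returns a list of (i, j) index pairs passing the ratio test."""
    pairs = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_tgt - d, axis=1)  # Euclidean distances
        order = np.argsort(dists)
        d_nd, d_nnd = dists[order[0]], dists[order[1]]
        # Keep only unambiguous matches: nearest clearly beats next-nearest.
        if d_nnd > 0 and d_nd / d_nnd < eps:
            pairs.append((i, int(order[0])))
    return pairs
```

The surviving pairs would then feed the RANSAC stage, which repeatedly samples 4 pairs, estimates H, and counts inliers under the threshold t.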

4) Coordinate transformation and resampling. The coordinates of the image pixels are transformed linearly according to the obtained perspective transformation matrix H, and the grey values of the image pixels are resampled by bilinear interpolation. Bilinear interpolation assumes that the grey level varies linearly inside the region enclosed by the four points surrounding the interpolation point, so the grey value of the interpolation point can be computed by linear interpolation from the grey values of its four neighbouring pixels.
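A minimal sketch of the bilinear resampling rule follows; it handles interior points only, and the surrounding H transform and boundary handling are omitted (the function name is ours):

```python
import numpy as np

def bilinear(img, x, y):
    """Sample a 2-D array at the real-valued position (x, y), x being the
    column and y the row, by linearly blending the four surrounding pixels."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0          # fractional offsets inside the cell
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1]
            + (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])
```

For a grey surface that is already linear in x and y, this rule reproduces the surface exactly, which is why it introduces little blurring for smooth images.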

5) Crop the overlapping image region. The four boundary points after the image coordinate transformation are examined according to the following rule, determining the four corner coordinates (Xmin, Ymin), (Xmin, Ymax), (Xmax, Ymin), (Xmax, Ymax) of the overlapping region after registration:

Here W and H are the width and height of the image. The dual-channel images are cropped to the rectangle formed by the above boundary points, giving the registered 0° and 90° polarization images I(0°) and I(90°).

(4) Image differential fusion. Because the light reflected and scattered by the target and by the background differs markedly in intensity between the 0° and 90° polarization directions, fusing the two channels by orthogonal differencing not only yields a good image signal-to-noise ratio but also has extremely low software complexity. The fused orthogonal difference image is expressed as:

Q = I(0°) − I(90°)  (7)
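Equation (7) amounts to a single array subtraction. In a sketch like the following (ours, for illustration), the one practical subtlety is promoting the 8-bit channels to a signed type first, so that pixels where the 90° channel is brighter do not wrap around in unsigned arithmetic:

```python
import numpy as np

def orthogonal_difference(i0, i90):
    """Q = I(0 deg) - I(90 deg), computed in a signed type so negative
    differences survive instead of wrapping around in uint8."""
    return i0.astype(np.int16) - i90.astype(np.int16)
```

The signed result Q can then be binarized directly, since the subsequent Otsu step operates on its histogram.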

(5) Image target detection. Mathematical morphology is a mathematical approach to analysing geometric shapes and the contour structure of objects, mainly comprising dilation, erosion, opening and closing. In image processing it is used to "preserve the basic shape of an object while removing irrelevant features", extracting features useful for representing and describing shape. Morphological processing is usually carried out as a template-based neighbourhood operation: a special neighbourhood called a "structuring element" or template is defined, and at every pixel of the binary image to be processed a logical operation is performed between it and the corresponding region of the binary image; the result is the pixel value of the output image. The size and content of the structuring element and the nature of the operation all affect the result of morphological processing. The system applies a morphology-based method to detect targets in the orthogonal difference polarization image; it has clear physical meaning and high computational efficiency, and comprises three sub-steps: image binarization, the opening operation, and connected-domain identification.

1) Binarization. Binarizing the image is a precondition for morphological filtering, and choosing a suitable segmentation threshold is the key step. Here the between-class variance method is used to select a global threshold adaptively. Proposed by Otsu in 1979, this algorithm selects the threshold automatically from the statistics of the whole image and is the outstanding representative of global binarization. Its basic idea is to split the grey levels of the image into two groups with some assumed grey value; when the between-class variance of the two groups is largest, that grey value is the optimal binarization threshold. Suppose the image has M grey levels, in the range 0 to M−1. A grey value t in this range divides the image into two groups G0 and G1: G0 contains the pixels with grey values 0 to t, and G1 those with grey values t+1 to M−1. Let N be the total number of image pixels and n_i the number of pixels with grey value i; each grey value i then occurs with probability p_i = n_i/N, the classes G0 and G1 occur with probabilities ω0 = Σ(i=0..t) p_i and ω1 = 1 − ω0, and their mean grey values are μ0 = Σ(i=0..t) i·p_i / ω0 and μ1 = Σ(i=t+1..M−1) i·p_i / ω1. The between-class variance is then:

σ(t)² = ω0·ω1·(μ0 − μ1)²  (8)

The optimal threshold T is the value of t that maximizes the between-class variance, i.e.:

T = arg max σ(t)², t ∈ [0, M−1]  (9)
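Equations (8)-(9) translate directly into an exhaustive search over t. The following straightforward, unoptimized sketch (written here for illustration; production code would vectorize the search) mirrors the notation above:

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Return the t in [0, M-1] maximizing the between-class variance
    sigma(t)^2 = w0*w1*(mu0-mu1)^2 of equation (8)."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                      # p_i = n_i / N
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0 = p[:t + 1].sum()                   # probability of class G0
        w1 = 1.0 - w0                          # probability of class G1
        if w0 == 0 or w1 == 0:
            continue                           # one class empty: skip
        mu0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, levels) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

Pixels above the returned threshold are set to 1 and the rest to 0, producing the binary image passed to the opening operation.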

2) Opening operation. The opening operation filters out small interfering objects and yields a more accurate target contour. It is defined as erosion followed by dilation. The main role of erosion is to remove irrelevant detail from an object, especially edge points, shrinking the object boundary inwards. Its expression is as follows:

E = X ⊖ B = { (x, y) | B(x,y) ⊆ X },  B(x,y) denoting B translated so that its centre lies at (x, y)

Here E is the eroded binary image; B is the structuring element, i.e. the template, a pattern of any shape composed of 0s and 1s with a designated centre point about which the erosion is performed; X is the pixel set of the binarized input image. The operation slides the structuring element B over the image domain of X: when its centre coincides with a point (x, y) of X, the pixels inside the structuring element are traversed, and if every one of them matches the corresponding pixel at the same position centred on (x, y), the pixel (x, y) is retained in E; pixels failing this condition are discarded, which shrinks object boundaries. Dilation has the opposite effect of erosion: it expands the boundary points of the binarized object contour and can fill the residual holes left in the object after segmentation, making the object complete. Its expression is as follows:

S = X ⊕ B = { (x, y) | B(x,y) ∩ X ≠ ∅ }

Here S is the set of pixels of the dilated binary image; B is the structuring element, i.e. the template; X is the binarized image pixel set. The operation slides the structuring element B over the image domain of X: when the centre of B reaches a point (x, y) of X, the pixels inside the structuring element are traversed, and if at least one pixel of B matches a pixel of the image, the pixel (x, y) is kept in S; otherwise it is removed.
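The erosion and dilation rules above can be sketched directly from their set definitions. The following illustrative numpy version (ours; it assumes zero-padded borders, which is one possible boundary convention, and a structuring element with its centre at the array midpoint) makes the "all pixels fit" versus "at least one pixel overlaps" distinction explicit:

```python
import numpy as np

def erode(x, b):
    """E = X eroded by B: keep (i, j) only if every 1-pixel of b lies
    inside X when b is centred at (i, j)."""
    bh, bw = b.shape
    ph, pw = bh // 2, bw // 2
    padded = np.pad(x, ((ph, ph), (pw, pw)))   # zero border
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            window = padded[i:i + bh, j:j + bw]
            out[i, j] = int(np.all(window[b == 1] == 1))
    return out

def dilate(x, b):
    """S = X dilated by B: keep (i, j) if at least one 1-pixel of b
    overlaps X when b is centred at (i, j)."""
    bh, bw = b.shape
    ph, pw = bh // 2, bw // 2
    padded = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            window = padded[i:i + bh, j:j + bw]
            out[i, j] = int(np.any(window[b == 1] == 1))
    return out

def open_op(x, b):
    """Opening = erosion followed by dilation, as in step 2."""
    return dilate(erode(x, b), x_b := b) if False else dilate(erode(x, b), b)
```

With a 3×3 all-ones structuring element, opening removes isolated noise pixels while restoring the shape of blocks large enough to contain the element, which is exactly the speck-filtering behaviour the step relies on.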

3) Connected-domain identification. After the opening operation the binary image is divided into several connected regions. To screen candidate targets from them, the connected domains must be segmented and labelled, and features extracted for target recognition. The purpose of connected-domain segmentation is to extract the sets of mutually adjacent "1"-valued pixels from a binary raster image and to fill the different connected domains of the image with different numeric labels. The algorithms fall into two classes. One is the local-neighbourhood class, whose basic idea is to work from the local to the global: each connected component is examined in turn, a "starting point" is determined, and labels are propagated outwards into the surrounding neighbourhood. The other works from the global to the local: the distinct connected components are determined first, and each is then labelled by region filling. Here the 8-adjacency criterion is used to search and label the connected domains of the image. An 8-adjacent connected domain is defined as a region in which, for every pixel, at least one of its 8 neighbours in the 8 surrounding directions still belongs to the region. After segmentation and labelling, the pixel perimeter of each connected domain is extracted and compared with the preset target threshold; a domain whose perimeter lies within the threshold interval is judged a candidate target and is marked in the image with the smallest rectangle enclosing its connected-domain contour.
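The 8-adjacency labelling and minimal-bounding-rectangle steps can be sketched with a simple flood fill. This is an illustrative implementation (ours; the perimeter-threshold screening is omitted for brevity):

```python
import numpy as np
from collections import deque

def label_8(binary):
    """Flood-fill labelling under the 8-adjacency criterion: a pixel joins
    a region if any of its 8 neighbours belongs to it. Returns the label
    map (0 = background) and the number of regions."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] and labels[si, sj] == 0:
                current += 1                      # new region found
                labels[si, sj] = current
                queue = deque([(si, sj)])
                while queue:
                    i, j = queue.popleft()
                    for di in (-1, 0, 1):         # all 8 directions
                        for dj in (-1, 0, 1):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < h and 0 <= nj < w
                                    and binary[ni, nj]
                                    and labels[ni, nj] == 0):
                                labels[ni, nj] = current
                                queue.append((ni, nj))
    return labels, current

def bounding_boxes(labels, n):
    """Smallest axis-aligned rectangle (xmin, ymin, xmax, ymax) per region."""
    boxes = []
    for k in range(1, n + 1):
        ys, xs = np.nonzero(labels == k)
        boxes.append((int(xs.min()), int(ys.min()),
                      int(xs.max()), int(ys.max())))
    return boxes
```

Each labelled region would then have its perimeter compared against the preset threshold interval before its rectangle is drawn as a candidate target.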

Claims (5)

1. A weak target imaging detection device composed of three parts, an instrument housing, an optical system and an FPGA main control board, characterized in that: the instrument housing is used to connect the optical lenses, the circuit board and the tripod, and comprises a housing front panel, a housing rear frame and a tripod mount; the optical system adopts a dual-channel structure for acquiring two images with different polarization angles and wavebands, channel 1 comprising a 0° linear polarizing filter, an optical lens, a C-mount lens adapter ring, a filter holder, a 470 nm narrow-band filter and a CMOS image sensor, and channel 2 comprising a 90° linear polarizing filter, an optical lens, a C-mount lens adapter ring, a filter holder, a 630 nm narrow-band filter and a CMOS image sensor; and the FPGA main control board is used to perform parameter configuration, synchronous acquisition, image buffering and preprocessing for the dual-channel CMOS image sensors and to transmit the data to a PC through a USB interface.
2. The weak target imaging detection device according to claim 1, characterized in that: the housing front panel measures 100 mm × 50 mm × 5 mm and carries two C-mount lens adapter rings for fixing the optical lenses, the centre-to-centre distance of the two rings being 50 mm and the thread outer diameter 25.1 mm; the housing rear frame measures 100 mm × 50 mm × 30 mm and is connected to the front panel by 12 Φ3×6 screws around the periphery of the front panel, a type-B USB interface on its left side being used to connect the FPGA main control board to the PC; and the tripod mount is located on the underside of the housing rear frame and is connected to the tripod head through a central 1/4-20 screw hole.
3. The weak-target imaging detection device according to claim 1, characterized in that: the optical lenses of channel 1 and channel 2 both have a fixed focal length of 8 mm, an aperture range of F1.4–F16 and a focusing range of 0.1 m to infinity, and are mounted on the two C-mount adapter rings of the front panel; the two rotatable linear polarization filters are mounted in front of the two optical lenses through adapter rings of size M30.5 × 0.5 mm; using a linear polarization calibration plate, the polarization directions of the corresponding linear polarization filters are adjusted to 0° and 90°, respectively; the two narrow-band filters are mounted on the surfaces of the CMOS image sensors through the filter seats; the filters are made of mirror glass with a size of 12 mm × 0.7 mm, centre wavelengths of 470 nm and 630 nm respectively, a half bandwidth of 20 nm, a peak transmittance above 90% and a cut-off depth below 1%; the CMOS image sensor is a 1.3-megapixel 1/2-inch monochrome area-array sensor with a spectral response range of 400–1050 nm.
4. The weak-target imaging detection device according to claim 1, characterized in that: the FPGA main control board takes a non-volatile FPGA chip as its core and, using system-on-a-programmable-chip technology, integrates a 32-bit soft-core Nios II processor and part of its peripherals into a single chip; outside the chip, only a USB 2.0 interface chip and a Type-B USB interface are used to communicate with the PC; the Nios II processor controls the on-chip peripherals, including a user RAM, a user FLASH, a USB controller, two groups of dual-port RAM controllers corresponding to the two channels, and an image acquisition module, through an Avalon bus; the user RAM serves as the running memory of the Nios II processor; the user FLASH stores the program code executed by the Nios II processor; the USB controller handles configuration and bus protocol conversion for the USB 2.0 interface chip; the dual-port RAM is an asynchronous FIFO used to screen valid image-line data and keep the data synchronous during transmission; the image acquisition module comprises a configuration controller and a timing controller: the configuration controller configures the internal registers of the CMOS image sensor through the I2C bidirectional serial data buses SCLK and SDATA, and the timing controller controls the CMOS image sensor to synchronously output data DOUT[9:0] through the timing signals STROBE, PIXCLK, L_VALID and F_VALID and the control signals STANDBY, TRIGGER and CLKIN.
5. The weak-target imaging detection device according to claim 1, characterized in that the imaging detection comprises the following five main steps:
(1) dual-channel image acquisition: after the task is started, the USB ports are scanned and the specified imaging device is connected; after the connection is confirmed, control words are sent to the imaging device to set the imaging parameters, including image resolution, exposure time and electronic gain; after setting is finished, a single acquisition instruction is sent and the image data are awaited; once the image data of both channels have been transferred, the images are saved in a lossless compressed bitmap format;
(2) image distortion correction: the optical distortion parameters of the imaging system are calibrated by Zhang's (Zhang Zhengyou's) method, the nonlinear distortion model considering only the radial distortion of the image:
δx = x(k1·r² + k2·r⁴ + k3·r⁶ + …), δy = y(k1·r² + k2·r⁴ + k3·r⁶ + …), with r² = x² + y²
wherein δx and δy are the distortion values, related to the position of the projection point in the image; x and y are the normalized projection values of the image point in the imaging-plane coordinate system, obtained from the linear projection model; k1, k2, k3, … are the radial distortion coefficients; only second-order distortion is considered here, so the distorted coordinates are:
xd = x(1 + k1·r² + k2·r⁴), yd = y(1 + k1·r² + k2·r⁴)
Let (ud, vd) and (u, v) be, respectively, the actual and ideal coordinates of the space point in the image coordinate system; their relationship is:
ud = u + (u − u0)(k1·r² + k2·r⁴), vd = v + (v − v0)(k1·r² + k2·r⁴)
where (u0, v0) denotes the principal point;
the linear calibration result is taken as the initial parameter values and substituted into the following objective function, whose minimum is solved to estimate the nonlinear parameters:
min Σ_{i=1..n} Σ_{j=1..m} || m_ij − m̂(K, k1, k2, R_i, t_i, M_j) ||²
wherein m̂(K, k1, k2, R_i, t_i, M_j) is the projection point of the j-th point of the calibration template on the i-th image computed with the estimated parameters, M_j is the coordinate of the j-th template point in the world coordinate system, m is the number of feature points per image, and n is the number of images; the camera calibration parameters so obtained are optimized by the Levenberg–Marquardt (LM) iteration method to finally obtain more accurate radial distortion coefficients, from which the undistorted image coordinates are solved inversely;
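The second-order radial model of step (2) and its inversion can be illustrated with a minimal numerical sketch (not part of the claims; the function names `distort`/`undistort` and the sample coefficients are assumptions):

```python
import numpy as np

def distort(x, y, k1, k2):
    """Apply the second-order radial model: xd = x*(1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * f, y * f

def undistort(xd, yd, k1, k2, iters=20):
    """Invert the model by fixed-point iteration: repeatedly divide the
    distorted point by the distortion factor evaluated at the current estimate."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2 + k2 * r2 * r2
        x, y = xd / f, yd / f
    return x, y

# Round trip: distorting then undistorting recovers the original point.
xd, yd = distort(0.3, -0.2, k1=-0.1, k2=0.01)
x, y = undistort(xd, yd, k1=-0.1, k2=0.01)
```

For mild distortion the fixed-point map is strongly contractive, so a few iterations suffice; production code would instead use a calibrated solver.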
(3) dual-channel image registration, which achieves pixel alignment of the two channel images under different imaging fields of view, wave bands, polarization angles and optical distortion conditions; an image registration algorithm based on SURF feature points is adopted, comprising the following five substeps:
1) SURF feature point detection: on the basis of an integral image, box filtering is used to approximate second-order Gaussian filtering, and the Hessian responses of each candidate point and its surrounding points are computed; a candidate is taken as a feature point if its Hessian response is the local maximum;
2) feature description vector generation: using the grey-level information of the feature-point neighbourhood, the first-order Haar wavelet responses of the integral image are computed to obtain the grey-level distribution information, generating a 128-dimensional feature description vector;
3) two-step feature point matching: a correct one-to-one matching relationship between the feature points of the reference image and the image to be registered is established by a coarse matching step based on the nearest-neighbour ratio method and a fine matching step based on RANSAC: after the feature vectors of the two images are generated, the Euclidean distance between SURF feature description vectors is first adopted as the similarity measure of key points in the two images; using a K-d tree, the distance d_ND from a feature point to its nearest-neighbour feature point and the distance d_NND to its next-nearest neighbour are obtained, and if the ratio d_ND/d_NND is smaller than a threshold ε, the matching pair formed by the feature point and its nearest neighbour is kept; then 4 pairs of initial matching feature points are selected at random, the perspective transformation matrix H determined by these 4 pairs is computed, and the matching degree of the remaining feature points is measured by this matrix:
d(m′, H·m) = || m′ − H·m || ≤ t
wherein t is a threshold: feature point pairs whose transfer error is smaller than or equal to t are inliers of H, and those larger than t are outliers; the inlier set is continuously updated, and k rounds of RANSAC random sampling yield the largest inlier set and the perspective transformation matrix H corresponding to that optimized inlier set;
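The coarse matching step based on the nearest-neighbour ratio can be sketched as follows (illustrative only, not part of the claims; a brute-force search stands in for the K-d tree, and the descriptor values are toy data):

```python
import numpy as np

def ratio_match(desc_ref, desc_tgt, eps=0.7):
    """Nearest-neighbour ratio test: keep a pair only if d_ND / d_NND < eps,
    where d_ND and d_NND are the Euclidean distances from a reference
    descriptor to its nearest and next-nearest target descriptors."""
    matches = []
    for i, d in enumerate(desc_ref):
        dists = np.linalg.norm(desc_tgt - d, axis=1)
        order = np.argsort(dists)
        d_nd, d_nnd = dists[order[0]], dists[order[1]]
        if d_nnd > 0 and d_nd / d_nnd < eps:
            matches.append((i, int(order[0])))
    return matches

# Toy 2-D descriptors: both reference points find unambiguous matches.
ref = np.array([[1.0, 0.0], [0.0, 1.0]])
tgt = np.array([[0.95, 0.05], [0.05, 0.95], [0.9, 0.1]])
m = ratio_match(ref, tgt)
```

In practice the surviving pairs would then be passed to the RANSAC step to estimate H.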
4) coordinate transformation and resampling: the image pixel coordinates are linearly transformed according to the obtained perspective transformation matrix H, and the pixel grey values are resampled by bilinear interpolation; bilinear interpolation assumes that the grey-level change within the area enclosed by the four points surrounding the interpolation point is linear, so the grey value of the interpolation point can be computed by linear interpolation from the grey values of the four neighbouring pixels;
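Bilinear resampling as described in this substep can be sketched as (illustrative, not part of the claims; the `bilinear` helper and the sample grid are assumptions):

```python
import numpy as np

def bilinear(img, x, y):
    """Resample img at fractional coordinates (x, y): the grey value is a
    linear blend of the four surrounding pixels, weighted by distance."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    # Clamp the 2x2 neighbourhood to the image bounds.
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    return ((1 - dx) * (1 - dy) * img[y0, x0] + dx * (1 - dy) * img[y0, x1]
            + (1 - dx) * dy * img[y1, x0] + dx * dy * img[y1, x1])

grid = np.array([[0.0, 10.0], [20.0, 30.0]])
v = bilinear(grid, 0.5, 0.5)   # centre of the 2x2 block: mean of the corners
```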
5) clipping of the image overlap region: the four boundary points after image coordinate transformation are evaluated according to the following formula, determining the coordinates (Xmin, Ymin), (Xmin, Ymax), (Xmax, Ymin) and (Xmax, Ymax) of the four boundary points of the overlap region after registration:
wherein W and H are the width and height of the image; the two channel images are clipped to the rectangular region formed by the above boundary points, giving the registered 0° and 90° polarization images I(0°) and I(90°);
(4) image difference fusion: the orthogonal difference image obtained by fusing the two channels in an orthogonal difference mode is expressed as:
Q=I(0°)-I(90°);
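The fusion rule is a single pixel-wise subtraction; a sketch (illustrative, not part of the claims; the signed arithmetic is an implementation choice to avoid unsigned wrap-around):

```python
import numpy as np

def orthogonal_difference(i0, i90):
    """Fuse registered 0° and 90° polarization images: Q = I(0°) - I(90°).
    Cast to a signed type so negative differences are preserved."""
    return i0.astype(np.int16) - i90.astype(np.int16)

i0 = np.array([[100, 50]], dtype=np.uint8)
i90 = np.array([[40, 60]], dtype=np.uint8)
q = orthogonal_difference(i0, i90)
```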
(5) image target detection, performed on the orthogonal differential polarization image based on morphological methods, comprising the following three substeps:
1) binarization: a global threshold is adaptively selected by the maximum between-class variance (Otsu) method, whose principle is as follows: suppose the image has M grey levels in the range 0 to M−1; a grey value t in this range divides the pixels into two groups G0 and G1, G0 containing the pixels with grey values 0 to t and G1 those with grey values t+1 to M−1; with N the total number of image pixels and n_i the number of pixels of grey value i, each grey value i occurs with probability p_i = n_i/N; the occurrence probabilities of classes G0 and G1 are ω0 and ω1, with class mean grey values μ0 and μ1; the between-class variance is:
σ(t)² = ω0·ω1·(μ0 − μ1)²
the optimal threshold T is the value of t that maximizes the between-class variance, i.e.:
T = argmax σ(t)², t ∈ [0, M−1]
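The maximum between-class variance rule can be sketched directly from the definitions above (illustrative, not part of the claims; an exhaustive search over t rather than an optimized implementation):

```python
import numpy as np

def otsu_threshold(img, levels=256):
    """Otsu's method: pick the t that maximizes
    sigma(t)^2 = w0*w1*(mu0 - mu1)^2 over grey levels 0..levels-1."""
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                      # grey-level probabilities p_i
    best_t, best_var = 0, -1.0
    for t in range(levels - 1):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
        if w0 == 0 or w1 == 0:
            continue                           # one class empty: skip
        mu0 = (np.arange(t + 1) * p[:t + 1]).sum() / w0
        mu1 = (np.arange(t + 1, levels) * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t

# Bimodal test image: dark background (20) and bright target (200).
img = np.array([[20] * 8 + [200] * 8], dtype=np.uint8)
t = otsu_threshold(img)
binary = img > t
```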
2) opening operation: used to filter out fine interferents and obtain a more accurate target contour, defined as erosion followed by dilation; erosion eliminates irrelevant details of the object, in particular edge points, and shrinks the object boundary inwards, expressed as:
E = X ⊖ B = {(x, y) | B centred at (x, y) is entirely contained in X}
wherein E denotes the binary image after erosion; B denotes the structuring element (template), a pattern of arbitrary shape composed of 0s and 1s with a defined centre point about which the erosion is performed; X is the pixel set of the binarized original image; the operation slides B over the image domain of X, and when the centre of B coincides with a point (x, y) of the image, the pixels inside the structuring element are traversed: if every pixel of B matches the corresponding pixel in the window centred at (x, y), the point (x, y) is kept in E, otherwise it is removed, achieving the boundary-shrinking effect; dilation has the opposite effect: it expands the boundary points of the binary object contour and can fill residual holes inside the segmented object to make it complete, expressed as:
S = X ⊕ B = {(x, y) | B centred at (x, y) intersects X}
wherein S denotes the set of pixel points of the dilated binary image; B denotes the structuring element (template); X denotes the binarized image pixel set; the operation slides B over the image domain of X, and when the centre of B moves to a point (x, y) of the image, the pixels inside the structuring element are traversed: if at least one pixel of B coincides with a pixel of X, the point (x, y) is kept in S, otherwise it is removed; after the opening operation, the binary image is divided into several connected regions;
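Erosion, dilation and the opening they compose can be sketched naively (illustrative, not part of the claims; loop-based and unoptimized, with a 3×3 all-ones structuring element as an assumed template):

```python
import numpy as np

def erode(x, b):
    """Erosion: keep (i, j) only if b, centred there, fits inside the foreground."""
    h, w = x.shape
    bh, bw = b.shape
    ph, pw = bh // 2, bw // 2
    out = np.zeros_like(x)
    for i in range(ph, h - ph):
        for j in range(pw, w - pw):
            win = x[i - ph:i + bh - ph, j - pw:j + bw - pw]
            if np.all(win[b == 1] == 1):
                out[i, j] = 1
    return out

def dilate(x, b):
    """Dilation: keep (i, j) if b centred there hits any foreground pixel."""
    h, w = x.shape
    bh, bw = b.shape
    ph, pw = bh // 2, bw // 2
    out = np.zeros_like(x)
    padded = np.pad(x, ((ph, ph), (pw, pw)))
    for i in range(h):
        for j in range(w):
            win = padded[i:i + bh, j:j + bw]
            if np.any(win[b == 1] == 1):
                out[i, j] = 1
    return out

def opening(x, b):
    """Opening = erosion then dilation; removes detail smaller than b."""
    return dilate(erode(x, b), b)

b = np.ones((3, 3), dtype=int)
x = np.zeros((7, 7), dtype=int)
x[1:6, 1:6] = 1      # 5x5 block survives the opening
x[0, 6] = 1          # isolated pixel is removed
y = opening(x, b)
```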
3) connected domain identification: the connected domains in the image are first segmented using the 8-adjacency criterion, under which a pixel belongs to a connected domain if it is adjacent, in any of the 8 directions, to at least one pixel of that domain; according to this definition, the different connected domains of the binary image are filled with different numeric marks; the pixel perimeter of each connected domain is then extracted and compared with a preset target threshold, and a domain whose perimeter falls within the threshold interval is judged a candidate target; finally, each candidate target in the image is identified by the minimum rectangular frame enclosing its connected-domain contour, completing target detection.
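The 8-adjacency labelling of this substep can be sketched with a breadth-first flood fill (illustrative, not part of the claims; the perimeter screening and minimum bounding rectangle are omitted):

```python
import numpy as np
from collections import deque

def label_8_connected(binary):
    """Label connected domains under 8-adjacency: a pixel is connected to any
    of its 8 neighbours; each domain is filled with a distinct integer mark."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=int)
    current = 0
    for si in range(h):
        for sj in range(w):
            if binary[si, sj] and not labels[si, sj]:
                current += 1                      # start a new domain
                labels[si, sj] = current
                queue = deque([(si, sj)])
                while queue:                      # flood fill over 8 neighbours
                    i, j = queue.popleft()
                    for di in (-1, 0, 1):
                        for dj in (-1, 0, 1):
                            ni, nj = i + di, j + dj
                            if (0 <= ni < h and 0 <= nj < w
                                    and binary[ni, nj] and not labels[ni, nj]):
                                labels[ni, nj] = current
                                queue.append((ni, nj))
    return labels, current

img = np.array([[1, 1, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]])   # two domains; the lower-right pair touches diagonally
labels, n = label_8_connected(img)
```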
CN201610248720.9A 2016-04-20 2016-04-20 A kind of weak signal target imaging detection device Active CN105959514B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610248720.9A CN105959514B (en) 2016-04-20 2016-04-20 A kind of weak signal target imaging detection device


Publications (2)

Publication Number Publication Date
CN105959514A CN105959514A (en) 2016-09-21
CN105959514B true CN105959514B (en) 2018-09-21

Family

ID=56917746

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610248720.9A Active CN105959514B (en) 2016-04-20 2016-04-20 A kind of weak signal target imaging detection device

Country Status (1)

Country Link
CN (1) CN105959514B (en)

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108090490A (en) * 2016-11-21 2018-05-29 南京理工大学 A kind of Stealthy Target detecting system and method based on multispectral polarization imaging
CN106651802B (en) * 2016-12-24 2019-10-18 大连日佳电子有限公司 Machine vision scolding tin position finding and detection method
CN106851071A (en) * 2017-03-27 2017-06-13 远形时空科技(北京)有限公司 Sensor and heat transfer agent processing method
US11399144B2 (en) 2017-07-12 2022-07-26 Sony Group Corporation Imaging apparatus, image forming method, and imaging system
CN109427044B (en) * 2017-08-25 2022-02-25 瑞昱半导体股份有限公司 Electronic device
CN108181624B (en) * 2017-12-12 2020-03-17 西安交通大学 Difference calculation imaging device and method
CN108320303A (en) * 2017-12-19 2018-07-24 中国人民解放军战略支援部队航天工程大学 A kind of pinhole cameras detection method based on binocular detection
CN108230316B (en) * 2018-01-08 2020-06-05 浙江大学 A detection method for floating hazardous chemicals based on polarization differential magnification image processing
CN109064504B (en) * 2018-08-24 2022-07-15 深圳市商汤科技有限公司 Image processing method, apparatus and computer storage medium
CN109308693B (en) * 2018-08-29 2023-01-24 北京航空航天大学 Single-binocular vision system for target detection and pose measurement constructed by one PTZ camera
CN111161140B (en) * 2018-11-08 2023-09-19 银河水滴科技(北京)有限公司 Distortion image correction method and device
CN111242152A (en) * 2018-11-29 2020-06-05 北京易讯理想科技有限公司 Image retrieval method based on target extraction
CN109859178B (en) * 2019-01-18 2020-11-03 北京航空航天大学 A real-time target detection method for infrared remote sensing images based on FPGA
CN109934112B (en) * 2019-02-14 2021-07-13 青岛小鸟看看科技有限公司 Face alignment method and camera
CN109900719B (en) * 2019-03-04 2020-08-04 华中科技大学 Visual detection method for blade surface knife lines
CN110232694B (en) * 2019-06-12 2021-09-07 安徽建筑大学 A Threshold Segmentation Method for Infrared Polarization Thermal Image
CN113418864B (en) * 2021-06-03 2022-09-16 奥比中光科技集团股份有限公司 Multispectral image sensor and manufacturing method thereof
CN113933246B (en) * 2021-09-27 2023-11-21 中国人民解放军陆军工程大学 Compact multiband full-polarization imaging device compatible with F-mount lens
CN113945531B (en) * 2021-10-20 2023-10-27 福州大学 A dual-channel imaging gas quantitative detection method
CN115880188B (en) * 2023-02-08 2023-05-19 长春理工大学 Method, device and medium for generating statistical image of polarization direction
CN117061854A (en) * 2023-10-11 2023-11-14 中国人民解放军战略支援部队航天工程大学 Super-structured surface polarization camera structure for three-dimensional perception of space target
CN118279208B (en) * 2024-06-04 2024-08-13 长春理工大学 A polarization parameter shaping method

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2294778A (en) * 1993-07-10 1996-05-08 Siemens Plc Improved spectrometer
US5572359A (en) * 1993-07-15 1996-11-05 Nikon Corporation Differential interference microscope apparatus and an observing method using the same apparatus
US7193214B1 (en) * 2005-04-08 2007-03-20 The United States Of America As Represented By The Secretary Of The Army Sensor having differential polarization and a network comprised of several such sensors
CN102297722A (en) * 2011-09-05 2011-12-28 西安交通大学 Double-channel differential polarizing interference imaging spectrometer
CN103604945A (en) * 2013-10-25 2014-02-26 河海大学 Three-channel CMOS synchronous polarization imaging system
CN104103073A (en) * 2014-07-14 2014-10-15 中国人民解放军国防科学技术大学 Infrared polarization image edge detection method
CN204203261U (en) * 2014-11-14 2015-03-11 南昌工程学院 A kind of three light-path CMOS polarization synchronous imaging devices



Similar Documents

Publication Publication Date Title
CN105959514B (en) A kind of weak signal target imaging detection device
CN110689581B (en) Structured light module calibration method, electronic device, and computer-readable storage medium
US10260866B2 (en) Methods and apparatus for enhancing depth maps with polarization cues
WO2021071995A1 (en) Systems and methods for surface normals sensing with polarization
CN107993258B (en) Image registration method and device
US10540784B2 (en) Calibrating texture cameras using features extracted from depth images
CN107833181A (en) A kind of three-dimensional panoramic image generation method and system based on zoom stereoscopic vision
JP2008016918A (en) Image processor, image processing system, and image processing method
CN112489140B (en) Attitude measurement method
CN102901489A (en) Pavement water accumulation and ice accumulation detection method and apparatus thereof
CN102507592A (en) Fly-simulation visual online detection device and method for surface defects
Yang et al. An image-based intelligent system for pointer instrument reading
CN110487183A (en) A kind of multiple target fiber position accurate detection system and application method
CN108197521A (en) A kind of leggy Quick Response Code obtains identification device and method
CN112085793B (en) Three-dimensional imaging scanning system based on combined lens group and point cloud registration method
WO2021022696A1 (en) Image acquisition apparatus and method, electronic device and computer-readable storage medium
CN117058557B (en) Cloud and cloud shadow joint detection method based on physical characteristics and deep learning model
CN102353376A (en) Panoramic imaging earth sensor
CN110276831A (en) Method and device for constructing three-dimensional model, equipment and computer-readable storage medium
CN114519681A (en) Automatic calibration method and device, computer readable storage medium and terminal
CN205681547U (en) A kind of multichannel polarization and infrared image capturing system
CN102034091B (en) A light spot recognition method and device for a digital sun sensor
CN117113284B (en) Multi-sensor fusion data processing method and device and multi-sensor fusion method
CN116524019A (en) Camera pose determining method, camera pose recovering method, camera pose determining device, camera pose recovering device, camera pose determining device and storage medium
CN114494039A (en) A method for geometric correction of underwater hyperspectral push-broom images

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant