CN105959514B - Weak target imaging detection device - Google Patents
Weak target imaging detection device
- Publication number
- CN105959514B CN105959514B CN201610248720.9A CN201610248720A CN105959514B CN 105959514 B CN105959514 B CN 105959514B CN 201610248720 A CN201610248720 A CN 201610248720A CN 105959514 B CN105959514 B CN 105959514B
- Authority
- CN
- China
- Prior art keywords
- image
- point
- points
- pixel
- channel
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- H04N23/55 — Optical parts specially adapted for electronic image sensors; Mounting thereof
- G02B27/00 — Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G06V20/20 — Scenes; Scene-specific elements in augmented reality scenes
- H04N23/54 — Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
- H04N23/80 — Camera processing pipelines; Components thereof
Abstract
The invention discloses a weak target imaging detection device and method. Exploiting the intensity differences of the light reflected and scattered by a target and its background in specific wavebands and in the 0° and 90° polarization directions, it realizes synchronous spectral-polarization imaging through a dual-channel orthogonal-differential imaging scheme. The hardware comprises three parts: an instrument housing, an optical system and an FPGA main control board. The housing connects the optical lenses, the circuit board and a tripod; the optical system uses a dual-channel structure to acquire two images at different polarization angles and wavebands; the FPGA main control board performs parameter configuration, synchronous acquisition, image buffering and preprocessing for the dual-channel CMOS image sensors. The software module executes, in sequence, channel image acquisition, image distortion correction, channel image registration, image difference fusion and image target detection. Compared with existing methods, the scheme has lower hardware cost and software complexity and provides an effective means for detecting moving stealth targets against complex ground backgrounds.
Description
Technical Field
The invention relates to an optical imaging detection device and method, in particular to a weak target imaging detection device and method, and belongs to the field of optical imaging.
Background
Target detection and identification is a high-technology means of performing non-contact measurement on a fixed or moving target, accurately obtaining its attribute information and distinguishing genuine targets from decoys. Optical detection is passive and safe, and has therefore developed rapidly and attracted great attention in recent years. However, modern camouflage paints let a target approximate the color and spectrum of its background, so traditional intensity-based detection struggles to reveal stealthy weak targets in complex backgrounds.
Polarization is one of the fundamental properties of light: any object, in reflecting and emitting electromagnetic radiation, exhibits polarization characteristics determined by its own properties and the basic laws of optics. In natural environments the ground-object background generally has a low degree of polarization while man-made targets have a high one. The degree of polarization of vegetation is generally below 0.5%; that of rock, sand, bare soil and the like lies between 0.5% and 1.5%; that of water surfaces, cement pavement, roofs and the like is generally above 1.5% (water surfaces in particular reach 8-10%); and the surfaces of some non-metallic and certain metallic materials exceed 2% (some even exceed 10%). By imaging a scene in different polarization states, targets that differ from the background in polarization and intensity can be distinguished effectively, enabling the detection and identification of weak targets in complex backgrounds. Polarization imaging detection has therefore received growing attention in recent years in meteorology, ocean exploitation, space exploration, biomedicine, military applications and other fields.
In polarization detection, the polarization state of the target's optical radiation is completely described by the four Stokes parameters: the total intensity I of the light wave, the intensity Q of linearly polarized light in the horizontal direction, the intensity U of linearly polarized light in the 45°/135° direction, and the intensity V of circularly polarized light. In practical applications V can be ignored, so the degree of polarization is P = √(Q² + U²)/I and the polarization angle is θ = 0.5·arctan(U/Q). To obtain the polarization state, at least three intensity images at different polarization directions must therefore be acquired to compute the parameters I, Q and U.
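As an illustration of these relationships, the following Python sketch computes the linear Stokes parameters and the derived degree and angle of polarization from three intensity images; the 0°/45°/90° analyzer set used here is one common assumed choice, not something mandated by the text.

```python
# Minimal sketch: linear Stokes parameters from three polarization images.
# Assumes analyzer angles of 0, 45 and 90 degrees (an illustrative choice).
import numpy as np

def linear_stokes(i0, i45, i90):
    i0, i45, i90 = (im.astype(np.float64) for im in (i0, i45, i90))
    I = i0 + i90                    # total intensity
    Q = i0 - i90                    # 0/90 degree linear component
    U = 2.0 * i45 - I               # 45/135 degree linear component
    eps = 1e-9                      # guard against division by zero in dark pixels
    dop = np.sqrt(Q**2 + U**2) / (I + eps)   # P = sqrt(Q^2 + U^2) / I, V ignored
    aop = 0.5 * np.arctan2(U, Q)             # theta = 0.5 * arctan(U / Q)
    return I, Q, U, dop, aop
```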
Based on this principle, four types of polarization imaging detection devices are currently in use. (1) Time-sharing imaging: a single imager acquires images at three polarization directions (0°, 60° and 90°) by sequentially rotating a polarizer mounted in front of the lens; the structure is simple and easy to realize, but it suits only scenes where both target and background are stationary. (2) Optical-path splitting: a beam splitter and retarder divide the light from a single lens into three identical beams projected through 0°, 60° and 90° polarizers onto three independent imagers; polarization images in three directions are obtained simultaneously, but the energy reaching each imager is greatly reduced, significantly degrading the imaging signal-to-noise ratio. (3) Division of focal plane: an imager made by a special process assigns each pixel one of the 0°, 60° and 90° polarization directions, arranged in a Bayer-like pattern analogous to the RGB layout of a color sensor; polarization imaging is simultaneous, needs no extra splitting optics and miniaturizes easily, but the focal-plane device is complex to manufacture and not yet commercialized. (4) Spatial registration: three cameras form a three-channel synchronous imaging system collecting 0°, 60° and 90° polarization images, whose overlapping regions are pixel-aligned by an image registration algorithm; hardware complexity is low, but because the three channels differ in distortion parameters and viewing angles, registration accuracy suffers without careful correction, harming small-target detection. In fact, for weak target detection the purpose of polarization imaging is not to recover the degree or angle of polarization but to enhance the target-background contrast efficiently and in real time. From this point of view, fusing multi-channel images through the Stokes equations is not an efficient method.
The invention exploits the intensity differences of the light reflected and scattered by a target and its background in a specific waveband and in the 0° and 90° polarization directions, adopting a dual-channel orthogonal-differential imaging scheme to realize synchronous spectral-polarization imaging. Compared with existing synchronous polarization imaging schemes it has lower hardware cost and software complexity, and it provides an effective means for detecting moving stealth targets against complex ground backgrounds.
Disclosure of Invention
Addressing the shortcomings of existing systems for detecting moving stealth targets against complex ground backgrounds, the invention provides a weak target imaging detection device and method.
The invention is realized by the following technical scheme:
A weak target imaging detection device consists of three parts: an instrument housing, an optical system and an FPGA main control board. The instrument housing connects the optical lenses, the circuit board and a tripod, and comprises a housing front panel, a housing rear frame and a tripod mount. The optical system adopts a dual-channel structure to acquire two images at different polarization angles and wavebands: channel 1 comprises a 0° linear polarization filter, an optical lens, a C-mount lens adapter ring, a filter holder, a 470 nm narrow-band filter and a CMOS image sensor; channel 2 comprises a 90° linear polarization filter, an optical lens, a C-mount lens adapter ring, a filter holder, a 630 nm narrow-band filter and a CMOS image sensor. The FPGA main control board performs parameter configuration, synchronous acquisition, image buffering and preprocessing for the dual-channel CMOS image sensors and transmits the image data to a PC through a USB interface.
The housing front panel measures 100 mm × 50 mm × 5 mm and carries two C-mount lens adapter rings for fixing the optical lenses, with a center-to-center distance of 50 mm and an external thread diameter of 25.1 mm. The housing rear frame measures 100 mm × 50 mm × 30 mm and is fastened to the front panel by 12 φ3×6 screws around its periphery; its left side carries a type-B USB interface for connecting the FPGA main control board to a PC. The tripod mount sits on the underside of the rear frame and attaches to a tripod head through a central 1/4-20 screw hole.
The optical lenses of channels 1 and 2 both have a fixed 8 mm focal length, an aperture range of F1.4-F16 and a focusing range of 0.1 m to infinity, and screw into the two C-mount adapter rings on the front panel. The two rotatable linear polarization filters are mounted in front of the two lenses through M30.5 × 0.5 mm adapter rings; using a linear polarization calibration plate, their polarization directions are adjusted to 0° and 90° respectively. The two narrow-band filters are mounted on the CMOS image sensor surfaces through filter holders; they are made of optical mirror glass, have a size of 12 mm × 0.7 mm, center wavelengths of 470 nm and 630 nm respectively, a half bandwidth of 20 nm, a peak transmittance above 90% and a cut-off depth below 1%. Each CMOS image sensor is a 1.3-megapixel 1/2″ monochrome area-array sensor with a spectral response range of 400-1050 nm.
The FPGA main control board is built around a nonvolatile FPGA chip: using system-on-a-programmable-chip (SOPC) technology, a 32-bit soft-core Nios II processor and part of its peripherals are integrated into the single chip, and only a USB 2.0 interface chip and a type-B USB connector outside the chip communicate with the PC. The Nios II processor controls the on-chip peripherals (user RAM, user FLASH, USB controller, the two groups of dual-port RAM controllers corresponding to the two channels, and the image acquisition module) through the Avalon bus. The user RAM serves as the Nios II processor's working memory; the user FLASH stores the program code it executes; the USB controller handles configuration and bus-protocol conversion for the USB 2.0 interface chip; each dual-port RAM is an asynchronous FIFO used to screen valid image-line data and keep data synchronized during transmission. The image acquisition module comprises a configuration controller and a timing controller: the configuration controller configures the internal registers of the CMOS image sensors over the I²C bidirectional serial bus (SCLK, SDATA), and the timing controller drives the sensors to output data DOUT[9:0] synchronously through the timing signals STROBE, PIXCLK, L_VALID and F_VALID and the control signals STANDBY, TRIGGER and CLKIN.
The FPGA main control board works as follows. After power-up the system is initialized and the Nios II processor waits. When the PC sends a start signal through the USB interface, the Nios II processor writes the registers of the dual-channel CMOS image sensors in turn through the configuration controller, placing them in snapshot mode and configuring image resolution, exposure time, electronic gain and other parameters. Once configuration is complete, the configuration controller's I²C bus goes idle and the two timing controllers are enabled to send TRIGGER pulses synchronously. On receiving a TRIGGER pulse, each CMOS image sensor performs an internal row reset and then outputs a STROBE pulse whose width marks the pixel integration time. After STROBE falls from 1 to 0, the sensor outputs data DOUT[7:0] together with the synchronization signals F_VALID and L_VALID. On receiving the returned data and synchronization signals, the timing controller ANDs F_VALID with L_VALID: while the result is high the data are valid and are stored into the dual-port RAM at addresses 0-1280, clocked by the pixel clock; when the result falls from high to low, one line of valid data has been fully transferred, and the data in the two dual-port RAMs are packed into 512-byte packets, output in turn to the FIFO of the USB 2.0 interface chip, and transmitted to the PC over the USB cable. After one frame has been transmitted, the Nios II processor puts the CMOS image sensors into STANDBY mode through the configuration controller, stopping data output until the next start signal.
The detection method of the weak target imaging detection device comprises the following five main steps:
(1) Dual-channel image acquisition. After the task starts, the USB port is scanned and the specified imaging device is connected; once the connection is confirmed, control words are sent to the imaging device to set the imaging parameters, including image resolution, exposure time and electronic gain; a single acquisition command is then issued and image data are awaited, and after both channels' image data have been transferred, the images are saved in a losslessly compressed bitmap format.
(2) Image distortion correction. The optical distortion parameters of the imaging system are calibrated with Zhang Zhengyou's (Zhang's) method; the nonlinear distortion model considers only the radial distortion of the image:

δ_x = x·(k₁r² + k₂r⁴), δ_y = y·(k₁r² + k₂r⁴), r² = x² + y²

where δ_x and δ_y are distortion values related to the position of the projected point in the image, x and y are the normalized projections of the image point in the imaging-plane coordinate system obtained from the linear projection model, and k₁, k₂, ... are the radial distortion coefficients. Only second-order distortion is considered here, and the distorted coordinates are:

x_d = x + δ_x = x·(1 + k₁r² + k₂r⁴), y_d = y + δ_y = y·(1 + k₁r² + k₂r⁴)

Let (u_d, v_d) and (u, v) be the actual and ideal coordinates of the space point in the image coordinate system, and (u₀, v₀) the principal point; their relationship is:

u_d = u + (u − u₀)(k₁r² + k₂r⁴), v_d = v + (v − v₀)(k₁r² + k₂r⁴)
Taking the linear calibration result as the initial parameter values, the nonlinear parameters are estimated by minimizing the objective function

Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ ‖m_ij − m̂(A, k₁, k₂, R_i, t_i, M_j)‖²

where m̂(A, k₁, k₂, R_i, t_i, M_j) is the projection of the j-th calibration-template point onto the i-th image computed with the estimated parameters, M_j is the coordinate of the j-th template point in the world coordinate system, m is the number of feature points per image and n is the number of images. The camera calibration parameters are then refined with the Levenberg-Marquardt (LM) iteration, finally yielding accurate radial distortion coefficients from which the undistorted image coordinates are solved inversely.
(3) Dual-channel image registration, which aligns the pixels of the two channels' images despite their different imaging fields of view, wavebands, polarization angles and optical distortions. An image registration algorithm based on SURF feature points is adopted, with the following five substeps:
1) SURF feature point detection: on the basis of an integral image, box filtering approximates second-order Gaussian filtering; the Hessian values of each candidate point and of its surrounding points are computed, and a candidate with the local-maximum Hessian value is taken as a feature point;
2) feature description vector generation: using the gray-level information in each feature point's neighborhood, a 128-dimensional description vector capturing the gray-level distribution is generated by computing first-order Haar wavelet responses on the integral image;
3) two-step feature point matching: a correct one-to-one correspondence between the feature points of the reference image and the image to be registered is established in two stages, coarse matching by the nearest-neighbor ratio method followed by fine matching with RANSAC. After the feature vectors of both images are generated, the Euclidean distance between SURF description vectors serves as the similarity measure for key points: a K-d tree yields, for each feature point, the distance d_ND to its nearest-neighbor feature point and the distance d_NND to its next-nearest neighbor, and the pair formed with the nearest neighbor is kept if the ratio d_ND/d_NND is below the threshold ε. Then 4 pairs of initial matching points are selected at random, the perspective transformation matrix H they determine is computed, and the fit of every remaining feature point pair is measured by its transfer error under H with threshold t: pairs with error at most t are inliers of H, pairs above t are outliers. The inlier set is updated continuously; after k random samplings RANSAC returns the largest inlier set, together with the optimized perspective transformation matrix H it determines;
4) coordinate transformation and resampling: the image pixel coordinates are transformed according to the obtained perspective matrix H, and the pixel gray values are resampled by bilinear interpolation, which assumes that the gray level varies linearly within the region enclosed by the four points around the interpolation point, so that the interpolated gray value can be computed by linear interpolation from the gray values of the four neighboring pixels;
5) cropping of the image overlap region: the four boundary points after the image coordinate transformation determine the corner coordinates (X_min, Y_min), (X_min, Y_max), (X_max, Y_min), (X_max, Y_max) of the overlap region after registration, where W and H are the width and height of the image; the two channel images are cropped to the rectangle formed by these boundary points, yielding the registered 0° and 90° polarization images I(0°) and I(90°);
(4) Image difference fusion: the orthogonal difference image obtained by fusing the two channels in orthogonal-difference mode is:
Q = I(0°) − I(90°)
(5) Image target detection, performed on the orthogonal-differential polarization image with morphological methods, in the following three substeps:
1) binarization, with the global threshold selected adaptively by the maximum between-class variance (Otsu) method. Its principle: let the image have M gray levels and choose a gray value t in the range 0…M−1, dividing the pixels into two groups, G₀ with gray values 0…t and G₁ with gray values t+1…M−1. With N the total number of pixels and n_i the number of pixels at gray value i, each gray value occurs with probability p_i = n_i/N; the class probabilities are ω₀ = Σ_{i=0..t} p_i and ω₁ = 1 − ω₀, the class means are μ₀ = Σ_{i=0..t} i·p_i/ω₀ and μ₁ = Σ_{i=t+1..M−1} i·p_i/ω₁, and the between-class variance is:

σ²(t) = ω₀ω₁(μ₀ − μ₁)²

The optimal threshold T is the value of t that maximizes the between-class variance, i.e.:

T = argmax σ²(t), t ∈ [0, M−1]
2) opening, used to filter out fine interfering objects and obtain a more accurate target contour; it is defined as erosion followed by dilation. Erosion eliminates irrelevant details of the object, especially edge points, shrinking its boundary inward:

E = X ⊖ B = {(x, y) | B_(x,y) ⊆ X}

where E is the eroded binary image; B is the structuring element (template), a figure of arbitrary shape composed of 0s and 1s with a defined center point about which erosion is performed; and X is the pixel set of the binarized original image. In operation, B slides over the X image domain; when its center point coincides with a point (x, y) of X, the pixels within the structuring element are traversed, and (x, y) is kept in E only if every pixel of B matches the corresponding pixel in the region centered on (x, y); points failing this condition are removed, which produces the boundary-shrinking effect. Dilation is the opposite of erosion: it expands the boundary points of the binary object contour and can fill residual holes inside a segmented object, making it complete:

S = X ⊕ B = {(x, y) | B_(x,y) ∩ X ≠ ∅}

where S is the dilated set of binary image pixels, B the structuring element (template) and X the binarized pixel set. In operation, B slides over the X image domain; when its center point moves to a point (x, y) of X, the pixels within the structuring element are traversed, and (x, y) is kept in S if at least one pixel of B coincides with a pixel of X, otherwise it is removed. After the binary image undergoes opening, it is divided into several connected regions;
3) connected-domain identification: the connected domains in the image are first segmented by the 8-adjacency criterion, under which a region is 8-connected if each of its pixels has at least one of its 8 neighbors (in all 8 directions) also belonging to the region, and the different connected domains of the binary image are filled with different numeric labels accordingly. The pixel perimeter of each connected domain is then extracted and compared with a preset target threshold, and domains whose perimeter falls within the threshold interval are judged candidate targets. Finally each candidate is marked with the smallest rectangular box enclosing its connected-domain contour, completing target detection.
The invention has the following beneficial effects:
1. The hardware system is easy to implement: no complicated optical-path splitting design or special imaging-device manufacturing process is required.
2. The software has low computational complexity: the complex camera calibration needs to be performed only once in the laboratory, and image fusion requires no polarization-degree computation, only a single, simple pixel gray-level difference.
3. The registration accuracy of the algorithm is high. The non-linear distortion of the camera is corrected before registration.
4. The method is suitable for detecting the moving target.
Drawings
Fig. 1 is a block diagram of software and hardware functional modules of a weak target imaging detection system according to the present invention.
Fig. 2 is a schematic perspective view of the hardware structure of the weak target imaging detection apparatus of the present invention, in which: 1 is the housing front panel; 2 the housing rear frame; 3 the tripod mount; 4 the 0° linear polarization filter; 5 the 90° linear polarization filter; 6 and 7 the optical lenses; 8 and 9 the C-mount lens adapter rings; 10 and 11 the filter holders; 12 the 470 nm narrow-band filter; 13 the 630 nm narrow-band filter; 14 and 15 the CMOS image sensors; and 16 the USB interface.
FIG. 3 is a block diagram of a hardware circuit of the FPGA main control board according to the present invention.
FIG. 4 is a software flow diagram of a weak target imaging detection method according to the present invention.
Detailed Description
The technical scheme of the invention is explained in detail in the following with the accompanying drawings:
the block diagram of the software and hardware functional modules of the weak target imaging detection system of the invention is shown in fig. 1. The hardware module of the weak target imaging detection device can be divided into an instrument shell, an optical system and an FPGA main control board. The instrument shell is used for connecting an optical lens, a circuit board and a tripod and comprises a shell front panel, a shell rear frame and a tripod fixing seat; the optical system adopts a dual-channel structure and is used for acquiring two images with different polarization angles and wave bands, and a channel 1 comprises a 0-degree linear polarization filter, an optical lens, a C-port lens joint ring, a filter base, a 470nm narrow-band filter and a CMOS image sensor; the channel 2 comprises a 90-degree linear polarization filter, an optical lens, a C-port lens joint ring, a filter seat, a 630nm narrow-band filter and a CMOS image sensor; the FPGA main control board is used for carrying out parameter configuration, synchronous acquisition, image caching and preprocessing on the dual-channel CMOS image sensor and transmitting the parameters to the PC through the USB interface. The software module runs on a PC and sequentially executes tasks of dual-channel image acquisition, image distortion correction, dual-channel image registration, image difference fusion and image target detection.
Fig. 2 shows the schematic perspective view of the device's hardware structure. The housing front panel 1 measures 100 mm × 50 mm × 5 mm and carries the C-mount lens adapter rings 8 and 9 that fix the optical lenses, with a center-to-center distance of 50 mm and an external thread diameter of 25.1 mm. The housing rear frame 2 measures 100 mm × 50 mm × 30 mm and is fastened to the front panel by 12 φ3×6 screws around the panel's periphery; its left side carries the type-B USB interface 16 connecting the FPGA main control board to the PC. The tripod mount 3 sits on the underside of the rear frame and attaches to a tripod head through a central 1/4-20 screw hole (outer diameter 1/4 inch, 20 threads per inch). The optical lenses 6 and 7 both have a fixed 8 mm focal length, an aperture range of F1.4-F16 and a focusing range of 0.1 m to infinity, and screw into the C-mount adapter rings 8 and 9. The two rotatable linear polarization filters 4 and 5 are mounted in front of lenses 6 and 7 through M30.5 × 0.5 mm adapter rings (outer diameter 30.5 mm, thread pitch 0.5 mm); using a linear polarization calibration plate, their polarization directions are adjusted to 0° and 90° respectively. The narrow-band filters 12 and 13 are mounted on the surfaces of the CMOS image sensors 14 and 15 through the filter holders 10 and 11; they are made of optical mirror glass, have a size of 12 mm × 0.7 mm, center wavelengths of 470 nm and 630 nm respectively, a half bandwidth of 20 nm, a peak transmittance above 90% and a cut-off depth below 1%. The CMOS image sensors 14 and 15 are both 1.3-megapixel MT9M001 devices: a 1/2″ monochrome area-array sensor with a 400-1050 nm spectral response; an imaging signal-to-noise ratio of 45 dB and a dynamic range of 68.2 dB, reaching CCD-level performance; a 5.2 μm × 5.2 μm pixel giving a high low-light sensitivity of 2.1 V/lux-sec; and continuous capture at 1280 × 1024 @ 30 fps, sufficient for detecting most moving targets.
Fig. 3 shows the hardware circuit block diagram of the FPGA main control board. To achieve synchronous acquisition and control of the dual-channel CMOS image sensors, the board is designed around a nonvolatile FPGA chip: using system-on-a-programmable-chip (SOPC) technology, a 32-bit soft-core Nios II processor and part of its peripherals are integrated into the single chip, and only a USB 2.0 interface chip and a type-B USB connector outside the chip communicate with the PC, which greatly raises the integration of system components and lowers system-level cost. The Nios II processor is instantiated as an IP core and controls the on-chip peripherals (user RAM, user FLASH, USB controller, the two groups of dual-port RAM controllers corresponding to the two channels, and the image acquisition module) through the Avalon bus. The user RAM serves as the Nios II processor's working memory; the user FLASH stores the program code it executes; the USB controller handles configuration and bus-protocol conversion for the USB 2.0 interface chip; each dual-port RAM is an asynchronous FIFO used to screen valid image-line data and keep data synchronized during transmission. The image acquisition module comprises a configuration controller and a timing controller: the configuration controller configures the internal registers of the CMOS image sensors over the I²C bidirectional serial bus (SCLK, SDATA), and the timing controller drives the sensors to output data DOUT[9:0] synchronously through the timing signals STROBE, PIXCLK, L_VALID and F_VALID and the control signals STANDBY, TRIGGER and CLKIN.
In the specific implementation, the FPGA chip is a MAX 10 series 10M08E144ES from Altera, fabricated in TSMC's 55 nm embedded NOR FLASH process and providing 8K logic elements, 378 Kb of embedded SRAM and 172 KB of user FLASH. Since the CMOS image sensor's maximum pixel array is 1280 × 1024 at 8 quantization bits, buffering one line of data requires 10 Kbit; two 10 Kb blocks of the embedded SRAM are therefore allocated to build the two dual-port RAMs, and the remaining 358 Kb is assigned to the user RAM. The USB 2.0 interface chip is a Cypress CY7C68013A with a 4 KB internal FIFO that the peripheral side and the USB side can operate simultaneously, allowing data transfer between the FIFO and external circuitry without USB firmware involvement, at a maximum rate of 96 MB/s.
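The line-buffer sizing can be checked with a little arithmetic; the short sketch below works only from figures stated in the text.

```python
# Line-buffer budget check for the dual-port RAMs (values from the text).
W_PIXELS = 1280          # pixels per line
BITS_PER_PIXEL = 8       # quantization depth used for transfer
line_bits = W_PIXELS * BITS_PER_PIXEL          # 10240 bits ~ 10 Kbit per line
total_sram_kb = 378                            # embedded SRAM in the 10M08
dual_port_kb = 2 * 10                          # two 10 Kb line buffers
user_ram_kb = total_sram_kb - dual_port_kb     # 358 Kb left for the Nios II
print(line_bits, dual_port_kb, user_ram_kb)    # -> 10240 20 358
```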
The FPGA main control board works as follows. After power-up the system is initialized and the Nios II processor waits. When the PC sends a start signal through the USB interface, the Nios II processor writes the registers of the dual-channel CMOS image sensors in turn through the configuration controller, placing them in snapshot mode and configuring image resolution, exposure time, electronic gain and other parameters. Once configuration is complete, the configuration controller's I²C bus goes idle and the two timing controllers send TRIGGER pulses synchronously. On receiving a TRIGGER pulse, each CMOS image sensor performs an internal row reset and then outputs a STROBE pulse whose width marks the pixel integration time. After STROBE falls from 1 to 0, the sensor outputs data DOUT[7:0] together with the synchronization signals F_VALID and L_VALID. On receiving the returned data and synchronization signals, the timing controller ANDs F_VALID with L_VALID: while the result is high the data are valid and are stored into the dual-port RAM at addresses 0-1280, clocked by the pixel clock; when the result falls from high to low, one line of valid data has been fully transferred, and the data in the two dual-port RAMs are packed into 512-byte packets, output in turn to the FIFO of the USB 2.0 interface chip, and transmitted to the PC over the USB cable. After one frame has been transmitted, the Nios II processor puts the CMOS image sensors into STANDBY mode through the configuration controller, stopping data output until the next start signal.
The software flow diagram of the weak target imaging detection method of the invention is shown in fig. 4. The weak target imaging detection method comprises the following five main steps:
(1) Dual-channel image acquisition. After the task starts, the USB port is scanned and the specified imaging device is connected; once the connection is confirmed, control words are sent to the imaging device to set the imaging parameters, including image resolution, exposure time and electronic gain; a single acquisition command is then issued and image data are awaited, and after both channels' image data have been transferred, the images are saved in a losslessly compressed bitmap format.
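A host-side sketch of this step is given below using PyUSB. The patent specifies only the CY7C68013A bridge chip and the 512-byte packet size, so the vendor/product IDs, endpoint addresses, control-word layout and channel ordering here are all illustrative assumptions rather than the device's actual protocol.

```python
# Hedged host-side sketch of step (1) using PyUSB. The IDs, endpoints and
# control words below are illustrative assumptions; only the CY7C68013A
# bridge and the 512-byte packet size come from the text.
import usb.core

VID, PID = 0x04B4, 0x8613       # assumed IDs (FX2 defaults; real firmware may differ)
EP_OUT, EP_IN = 0x02, 0x86      # assumed bulk endpoints
W, H = 1280, 1024               # sensor resolution from the text

dev = usb.core.find(idVendor=VID, idProduct=PID)
if dev is None:
    raise RuntimeError("imaging device not found on the USB bus")
dev.set_configuration()

# Hypothetical control words: configure resolution/exposure/gain, then
# request a single snapshot (the real command layout is firmware-defined).
dev.write(EP_OUT, bytes([0x01, 0x00]))
dev.write(EP_OUT, bytes([0x02, 0x00]))

def read_frame():
    """Accumulate 512-byte bulk packets until one full 8-bit frame arrives."""
    buf = bytearray()
    while len(buf) < W * H:
        buf.extend(dev.read(EP_IN, 512, timeout=2000))
    return bytes(buf[:W * H])

# How the two channels interleave on the wire is firmware-defined; they are
# read back-to-back here purely for illustration.
frame_ch1 = read_frame()        # 0-degree / 470 nm channel
frame_ch2 = read_frame()        # 90-degree / 630 nm channel
```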
(2) Image distortion correction. For accurate registration of the two channel images, distortion correction must be applied to each image separately. Since the two channels of the imaging system are independent, the optical distortion parameters are calibrated with Zhang Zhengyou's classical planar calibration method. Optical distortion is nonlinear and mainly comprises radial distortion, tangential distortion, decentering distortion and thin-prism distortion, so a nonlinear model is needed to estimate the parameters. Radial distortion is the dominant source of image error, and the model can be approximated as:

δ_x = x·(k₁r² + k₂r⁴), δ_y = y·(k₁r² + k₂r⁴), r² = x² + y²

where δ_x and δ_y are distortion values related to the position of the projected point in the image, x and y are the normalized projections of the image point in the imaging-plane coordinate system obtained from the linear projection model, and k₁, k₂, ... are the radial distortion coefficients. Only second-order distortion is considered here, and the distorted coordinates are:

x_d = x·(1 + k₁r² + k₂r⁴), y_d = y·(1 + k₁r² + k₂r⁴)

Let (u_d, v_d) and (u, v) be the actual and ideal coordinates of the space point in the image coordinate system, and (u₀, v₀) the principal point; their relationship is:

u_d = u + (u − u₀)(k₁r² + k₂r⁴), v_d = v + (v − v₀)(k₁r² + k₂r⁴)
Taking the linear calibration result as the initial parameter values, the nonlinear parameters are estimated by minimizing the objective function

Σᵢ₌₁ⁿ Σⱼ₌₁ᵐ ‖m_ij − m̂(A, k₁, k₂, R_i, t_i, M_j)‖²

where m̂(A, k₁, k₂, R_i, t_i, M_j) is the projection of the j-th calibration-template point onto the i-th image computed with the estimated parameters, M_j is the coordinate of the j-th template point in the world coordinate system, m is the number of feature points per image and n is the number of images. The camera calibration parameters are then refined with the Levenberg-Marquardt (LM) iteration, finally yielding accurate radial distortion coefficients from which the undistorted image coordinates are solved inversely.
(3) Dual-channel image registration. Because the two channels differ in imaging field of view, waveband, polarization angle and optical distortion, the two images must be registered so that the pixels to be fused are aligned. SURF feature points are robust to image rotation, translation, scaling and noise, so an image registration algorithm based on SURF feature points is adopted, with the following five substeps:
1) SURF feature point detection. On the basis of an integral image, box filtering approximates second-order Gaussian filtering; the Hessian values of each candidate point and of its surrounding points are computed, and a candidate with the local-maximum Hessian value is taken as a feature point.
2) Feature description vector generation. Using the gray-level information in each feature point's neighborhood, a 128-dimensional description vector capturing the gray-level distribution is generated by computing first-order Haar wavelet responses on the integral image.
3) Two-step feature point matching. A correct one-to-one correspondence between the feature points of the reference image and the image to be registered is established in two stages: coarse matching by the nearest-neighbor ratio method, then fine matching with RANSAC. After the feature vectors of both images are generated, the Euclidean distance between SURF description vectors serves as the similarity measure for key points: a K-d tree yields, for each feature point, the distance d_ND to its nearest-neighbor feature point and the distance d_NND to its next-nearest neighbor, and the pair formed with the nearest neighbor is kept if the ratio d_ND/d_NND is below the threshold ε. Then 4 pairs of initial matching points are selected at random, the perspective transformation matrix H they determine is computed, and the fit of every remaining feature point pair is measured by its transfer error under H:
and t is a threshold value, the characteristic point pairs smaller than or equal to t are inner points of H, and the characteristic point pairs larger than t are outer points. Thus, the interior point set is continuously updated, the maximum interior point set can be obtained by k times of random sampling of RANSAC, and at the moment, the perspective transformation matrix H corresponding to the optimized interior point set is also obtained.
4) Coordinate transformation and resampling. The image pixel coordinates are transformed according to the obtained perspective matrix H, and the pixel gray values are resampled by bilinear interpolation. Bilinear interpolation assumes that the gray level varies linearly within the region enclosed by the four points around the interpolation point, so the interpolated gray value can be computed by linear interpolation from the gray values of the four neighboring pixels.
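For concreteness, the bilinear rule described above can be written out directly; cv2.warpPerspective with the INTER_LINEAR flag applies the same rule to every output pixel.

```python
# Bilinear resampling at a non-integer location (x, y) from the four
# neighbouring pixels, as described in the text.
import numpy as np

def bilinear(img, x, y):
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - x0, y - y0
    p00 = img[y0, x0].astype(np.float64)        # top-left neighbour
    p01 = img[y0, x0 + 1].astype(np.float64)    # top-right
    p10 = img[y0 + 1, x0].astype(np.float64)    # bottom-left
    p11 = img[y0 + 1, x0 + 1].astype(np.float64)  # bottom-right
    return ((1 - dx) * (1 - dy) * p00 + dx * (1 - dy) * p01
            + (1 - dx) * dy * p10 + dx * dy * p11)
```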
5) Cropping of the image overlap region. The four boundary points after the image coordinate transformation determine the corner coordinates (X_min, Y_min), (X_min, Y_max), (X_max, Y_min), (X_max, Y_max) of the overlap region after registration, where W and H are the width and height of the image. The two channel images are cropped to the rectangle formed by these boundary points, yielding the registered 0° and 90° polarization images I(0°) and I(90°).
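The five substeps can be condensed into the following OpenCV sketch. SURF is provided by the opencv-contrib package (cv2.xfeatures2d) and may be unavailable in stock builds (ORB could be substituted); the ratio threshold ε and RANSAC threshold t below are assumed values.

```python
# Sketch of step (3): SURF detection, two-step matching (nearest-neighbour
# ratio test + RANSAC homography), then coordinate transform + bilinear
# resampling. eps and t are assumed thresholds.
import cv2
import numpy as np

def register(ref, mov, eps=0.7, t=3.0):
    surf = cv2.xfeatures2d.SURF_create(extended=True)   # 128-D descriptors
    kp1, des1 = surf.detectAndCompute(ref, None)
    kp2, des2 = surf.detectAndCompute(mov, None)

    # Coarse matching: keep a pair only if d_ND / d_NND < eps.
    knn = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des2, des1, k=2)
    good = [m for m, n in knn if m.distance < eps * n.distance]

    # Fine matching: RANSAC estimates the perspective matrix H from random
    # 4-pair samples and keeps the largest consensus (inlier) set.
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, t)

    # Coordinate transform + bilinear resampling of the moving image.
    h, w = ref.shape[:2]
    warped = cv2.warpPerspective(mov, H, (w, h), flags=cv2.INTER_LINEAR)
    return warped, H
```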
(4) Image difference fusion. Because the light reflected and scattered by the target and the background differs markedly in intensity between the 0° and 90° polarization directions, dual-channel orthogonal-difference fusion yields a good image signal-to-noise ratio at very low software complexity. The fused orthogonal difference image is:
Q = I(0°) − I(90°)    (7)
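A small sketch of this fusion step follows; the one practical caveat is that the registered images are unsigned 8-bit, so the difference must be computed in a signed type, with rescaling used for display only.

```python
# Sketch of step (4): orthogonal-difference fusion with signed arithmetic.
import cv2
import numpy as np

def orthogonal_difference(i0, i90):
    # Q = I(0) - I(90); int16 avoids uint8 underflow on negative values.
    return i0.astype(np.int16) - i90.astype(np.int16)

def to_display(q):
    # Map the signed range onto 0..255 for visualisation only.
    q = cv2.normalize(q, None, 0, 255, cv2.NORM_MINMAX)
    return q.astype(np.uint8)
```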
(5) Image target detection. Mathematical morphology is a mathematical approach to analyzing the geometric shape and contour structure of objects, built mainly on dilation, erosion, opening and closing. In image processing it serves to "preserve an object's basic shape while removing irrelevant features", extracting features useful for representing and describing shape. Morphological processing is usually expressed as a template-based neighborhood operation: a special neighborhood called a "structuring element" (template) is defined, a logical operation is performed between it and the corresponding region around each pixel of the binary image, and the result becomes the pixel value of the output image. The size and content of the structuring element and the nature of the operation all affect the outcome. This system performs target detection on the orthogonal-differential polarization image with morphological methods, which have clear physical meaning and high computational efficiency, in three substeps: image binarization, opening, and connected-domain identification.
1) Binarization. Binarization is the precondition for morphological filtering, and selecting a suitable segmentation threshold is the key step. The algorithm used here was proposed by Otsu in 1979; it selects the threshold automatically from the statistics of the whole image and is the most prominent representative of global binarization. Its basic idea is to split the image gray levels into two groups at a hypothesized gray value: when the between-class variance of the two groups is maximal, that gray value is the optimal binarization threshold. Let the image have M gray levels and choose a gray value t in the range 0…M−1, dividing the pixels into two groups, G₀ with gray values 0…t and G₁ with gray values t+1…M−1. With N the total number of pixels and n_i the number of pixels at gray value i, each gray value occurs with probability p_i = n_i/N; the class probabilities are ω₀ = Σ_{i=0..t} p_i and ω₁ = 1 − ω₀, the class means are μ₀ = Σ_{i=0..t} i·p_i/ω₀ and μ₁ = Σ_{i=t+1..M−1} i·p_i/ω₁, and the between-class variance is:
σ²(t) = ω₀ω₁(μ₀ − μ₁)²    (8)
The optimal threshold T is the value of t that maximizes the between-class variance, i.e.:

T = argmax σ²(t), t ∈ [0, M−1]    (9)
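The following sketch writes Otsu's threshold out directly from formulas (8)-(9) for M = 256 gray levels; in practice cv2.threshold with the THRESH_OTSU flag returns the same result in one call.

```python
# Sketch of substep 1): Otsu's threshold from the between-class variance.
import numpy as np

def otsu_threshold(img):
    hist = np.bincount(img.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()                 # p_i = n_i / N
    i = np.arange(256)
    omega0 = np.cumsum(p)                 # class probability of G0 for each t
    mu_t = np.cumsum(i * p)               # first moment of gray values up to t
    mu_total = mu_t[-1]
    omega1 = 1.0 - omega0
    valid = (omega0 > 0) & (omega1 > 0)   # both classes must be non-empty
    mu0 = np.where(valid, mu_t / np.maximum(omega0, 1e-12), 0)
    mu1 = np.where(valid, (mu_total - mu_t) / np.maximum(omega1, 1e-12), 0)
    sigma2 = omega0 * omega1 * (mu0 - mu1) ** 2   # sigma^2(t), formula (8)
    return int(np.argmax(np.where(valid, sigma2, -1)))  # T, formula (9)

# binary = ((img > otsu_threshold(img)) * 255).astype(np.uint8)
```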
2) Opening. The opening operation filters out fine interfering objects and yields a more accurate target contour. It is defined as erosion followed by dilation. The main effect of erosion is to eliminate irrelevant details of the object, especially edge points, shrinking its boundary inward:

E = X ⊖ B = {(x, y) | B_(x,y) ⊆ X}

where E is the eroded binary image; B is the structuring element (template), a figure of arbitrary shape composed of 0s and 1s with a defined center point about which erosion is performed; and X is the pixel set of the binarized original image. In operation, B slides over the X image domain; when its center point coincides with a point (x, y) of X, the pixels within the structuring element are traversed, and (x, y) is kept in E only if every pixel of B matches the corresponding pixel in the region centered on (x, y); points failing this condition are removed, which produces the boundary-shrinking effect. Dilation is the opposite of erosion: it expands the boundary points of the binary object contour and can fill residual holes inside a segmented object, making it complete:

S = X ⊕ B = {(x, y) | B_(x,y) ∩ X ≠ ∅}
s represents a set of expanded binary image pixel points; b represents a structural element, namely a template; x represents the image pixel set after the binarization processing. The operation process is to slide the structural element B in the X image domain, when the center point of B moves to a certain point (X, y) on the X image, the pixel point in the structural element is traversed, if the pixel point in the structural element B is at least one same as the pixel point of the X image, the pixel point (X, y) is kept in S, otherwise, the pixel point is removed.
3) Connected-domain identification. After opening, the binary image is divided into several connected regions. To screen out candidate targets, the connected domains must be segmented and labeled and their features extracted for target identification. Connected-domain segmentation extracts the sets of mutually adjacent target "1" pixels in the binary image and fills different connected domains with different numeric labels. Algorithms fall broadly into two classes: local-neighborhood algorithms, which work from local to global, examining each connected component from a chosen starting point and propagating labels outward into the neighborhood; and global-to-local algorithms, which first determine the distinct connected components and then label each by region filling. Here the 8-adjacency criterion is used to search for and label the connected domains: a region is 8-connected if each of its pixels has at least one of its 8 neighbors (in all 8 directions) also belonging to the region. After segmentation and labeling, the pixel perimeter of each connected domain is extracted and compared with the preset target threshold; domains whose perimeter falls within the threshold interval are judged candidate targets, and each target is marked in the image with the smallest rectangular box enclosing its connected-domain contour.
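A closing sketch of this substep follows; contour tracing with perimeter screening stands in for explicit 8-adjacent labelling (cv2.connectedComponents with connectivity=8 gives the label map itself), and the perimeter interval is an assumed example.

```python
# Sketch of substep 3): candidate-target screening on the opened image.
import cv2

def detect_targets(opened, p_min=20.0, p_max=400.0):
    # findContours traces each connected region's outline; arcLength gives
    # its pixel perimeter; p_min/p_max form the assumed threshold interval.
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boxes = []
    for c in contours:
        perimeter = cv2.arcLength(c, closed=True)
        if p_min <= perimeter <= p_max:
            boxes.append(cv2.boundingRect(c))   # (x, y, w, h) of candidate
    return boxes

# Explicit 8-adjacent labelling, if the label map itself is needed:
# count, labels = cv2.connectedComponents(opened, connectivity=8)
```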
Claims (5)
1. A weak target imaging detection device consisting of three parts, an instrument housing, an optical system and an FPGA main control board, characterized in that: the instrument housing connects the optical lenses, the circuit board and a tripod, and comprises a housing front panel, a housing rear frame and a tripod mount; the optical system adopts a dual-channel structure to acquire two images at different polarization angles and wavebands, channel 1 comprising a 0° linear polarization filter, an optical lens, a C-mount lens adapter ring, a filter holder, a 470 nm narrow-band filter and a CMOS image sensor, and channel 2 comprising a 90° linear polarization filter, an optical lens, a C-mount lens adapter ring, a filter holder, a 630 nm narrow-band filter and a CMOS image sensor; and the FPGA main control board performs parameter configuration, synchronous acquisition, image buffering and preprocessing for the dual-channel CMOS image sensors and transmits the image data to a PC through a USB interface.
2. The weak target imaging detection device according to claim 1, characterized in that: the casing front panel measures 100 mm × 50 mm × 5 mm and carries the two C-mount lens adapter rings that fix the optical lenses, with a center-to-center distance of 50 mm and an external thread diameter of 25.4 mm; the casing rear frame measures 100 mm × 50 mm × 30 mm and is fastened to the front panel by 12 screws of specification φ3 × 6 around the panel periphery, with a Type-B USB interface on its left side for connecting the FPGA main control board to the PC; the tripod mounting seat is located on the underside of the casing rear frame and connects to the tripod head through a central 1/4-20 screw hole.
3. The weak target imaging detection device according to claim 1, characterized in that: the optical lenses of channel 1 and channel 2 both have a fixed focal length of 8 mm, an aperture range of F1.4 to F16 and a focusing range of 0.1 m to infinity, and are mounted on the two C-mount adapter rings of the front panel; the two rotatable linear polarization filters are mounted in front of the two optical lenses through adapter rings of size M30.5 × 0.5 mm; using a linear polarization calibration plate, the polarization directions of the corresponding linear polarization filters are adjusted to 0° and 90° respectively; the two narrow-band filters are mounted on the surfaces of the CMOS image sensors through the filter seats; these filters are made of mirror glass, measure 12 mm × 0.7 mm, have center wavelengths of 470 nm and 630 nm respectively, a half bandwidth of 20 nm, peak transmittance above 90% and cutoff depth below 1%; the CMOS image sensor is a 1/2-inch monochrome area-array sensor with 1.3 million pixels and a spectral response range of 400 to 1050 nm.
4. The weak target imaging detection device according to claim 1, characterized in that: the FPGA main control board takes a non-volatile FPGA chip as its core and adopts system-on-a-programmable-chip (SOPC) technology to integrate a 32-bit soft-core Nios II processor and part of its peripherals on a single chip, with only a USB 2.0 interface chip and a Type-B USB connector outside the chip for communication with the PC; the Nios II processor controls the on-chip peripherals, including a user RAM, a user FLASH, a USB controller, two groups of dual-port RAM controllers corresponding to the two channels, and an image acquisition module, through the Avalon bus; the user RAM serves as the running memory of the Nios II processor; the user FLASH stores the program code executed by the Nios II processor; the USB controller handles the configuration of the USB 2.0 interface chip and bus protocol conversion; the dual-port RAM is an asynchronous FIFO used to screen valid image-line data and keep the data synchronous during transmission; the image acquisition module comprises a configuration controller and a timing controller, wherein the configuration controller configures the internal registers of the CMOS image sensors through the I2C bidirectional serial data bus (SCLK and SDATA), and the timing controller controls the CMOS image sensors to synchronously output data DOUT[9:0] through the timing signals STROBE, PIXCLK, L_VALID and F_VALID and the control signals STANDBY, TRIGGER and CLKIN.
5. The weak target imaging detection device according to claim 1, characterized in that: the imaging detection comprises the following five main steps:
(1) dual-channel image acquisition: after the task starts, the USB ports are scanned and the specified imaging device is connected; once the connection is confirmed, control words are sent to the imaging device to set the imaging parameters, including image resolution, exposure time and electronic gain; after configuration, a single acquisition command is issued and the system waits to receive image data, and when transmission of both channels' image data is complete, the images are saved in a lossless compressed bitmap format;
(2) image distortion correction: the optical distortion parameters of the imaging system are calibrated with Zhang Zhengyou's method, and the nonlinear distortion model considers only the radial distortion of the image:

δx = x·(k1·r² + k2·r⁴ + k3·r⁶ + …), δy = y·(k1·r² + k2·r⁴ + k3·r⁶ + …), with r² = x² + y²
wherein δx and δy are the distortion values, which depend on the pixel position of the projected point in the image; x and y are the normalized projections of the image point in the imaging-plane coordinate system obtained from the linear projection model; and k1, k2, k3, … are the radial distortion coefficients. Only second-order distortion is considered here, so the distorted coordinates are:

xd = x·(1 + k1·r² + k2·r⁴), yd = y·(1 + k1·r² + k2·r⁴)
order (u)d,vd) And (u, v) are respectively an actual coordinate and an ideal coordinate corresponding to the space point under the image coordinate system, and the relationship between the actual coordinate and the ideal coordinate is as follows:
Taking the linear calibration result as the initial parameter values, the nonlinear parameters are estimated by substituting them into the following objective function and solving for the minimum:

min Σ(i=1..n) Σ(j=1..m) ‖ mij − m̂ij(K, k1, k2, Ri, ti, Mj) ‖²
wherein m̂ij is the projection of the jth point of the calibration template onto the ith image computed with the estimated parameters; Mj is the coordinate of the jth template point in the world coordinate system; m is the number of feature points per image; and n is the number of images. The camera calibration parameters are then optimized with the Levenberg-Marquardt (LM) iteration method to obtain more accurate radial distortion coefficients, from which the undistorted image coordinates are solved inversely;
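For illustration, the following sketch performs a planar-target calibration and undistortion of the same Zhang Zhengyou formulation with OpenCV, whose calibrateCamera routine performs the linear initialization and LM refinement internally; the board size, square size and helper name are assumptions, not part of the patent.

```python
import cv2
import numpy as np

def undistort_with_checkerboard(images, board_size=(9, 6), square_mm=10.0):
    # 3-D coordinates of the template corners in the world frame (Z = 0 plane)
    objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2) * square_mm

    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY) if img.ndim == 3 else img
        found, corners = cv2.findChessboardCorners(gray, board_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)

    # dist holds (k1, k2, p1, p2, k3); the text above keeps only k1 and k2
    _, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts,
                                           gray.shape[::-1], None, None)
    # invert the radial model to recover undistorted image coordinates
    return [cv2.undistort(img, K, dist) for img in images]
```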
(3) dual-channel image registration: to achieve pixel alignment of the two channels' images under different imaging fields of view, wave bands, polarization angles and optical distortion conditions, an image registration algorithm based on SURF feature points is adopted, comprising the following five substeps (a sketch of the whole pipeline follows the list):
1) SURF feature point detection: on the basis of an integral image, box filters are used to approximate second-order Gaussian filtering; the Hessian response is computed for each candidate point and the points around it, and a candidate is taken as a feature point if its Hessian value is the local maximum;
2) feature description vector generation: using the grayscale information of the feature point's neighborhood, first-order Haar wavelet responses are computed on the integral image to obtain the local gray-level distribution, from which a 128-dimensional feature description vector is generated;
3) two-step feature point matching: a correct one-to-one correspondence between the feature points of the reference image and the image to be registered is established by a coarse matching step based on the nearest-neighbor ratio method followed by a precise matching step based on RANSAC. After the feature vectors of the two images are generated, the Euclidean distance between SURF descriptors is first used as the similarity measure between keypoints: for each feature point, a K-d tree yields the distance dN to its nearest neighbor and the distance dNN to its next-nearest neighbor, and the matching pair formed with the nearest neighbor is kept only if the ratio dN/dNN is smaller than a threshold ε; then 4 pairs of initial matching feature points are selected at random, the perspective transformation matrix H they determine is computed, and the fit of the remaining feature pairs to this matrix is measured by the projection error:

d = ‖ x′ − H·x ‖
wherein t is a threshold: feature pairs with d ≤ t are inliers of H and pairs with d > t are outliers; the inlier set is updated continuously, so that after k random samples RANSAC obtains the largest inlier set and the perspective transformation matrix H refined on that optimal inlier set;
4) coordinate transformation and resampling: the image pixel coordinates are transformed according to the obtained perspective matrix H, and the pixel gray values are resampled by bilinear interpolation, which assumes that the gray level varies linearly inside the region enclosed by the four points around the interpolation position, so the gray value at that position can be computed by linear interpolation from the gray values of its four neighboring pixels;
5) cutting of the image overlap area: the four boundary points after the image coordinate transformation are checked against the image bounds, and the four corner coordinates of the overlap area after registration, (Xmin, Ymin), (Xmin, Ymax), (Xmax, Ymin) and (Xmax, Ymax), are determined by clipping the transformed corner coordinates to the ranges 0 ≤ x ≤ W and 0 ≤ y ≤ H,
wherein W and H are the width and height of the image; the two-channel images are cropped according to the rectangular region formed by the above boundary points, yielding the registered 0° and 90° polarization images I(0°) and I(90°);
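The five substeps above can be condensed into the following OpenCV sketch; SURF requires the opencv-contrib package, and the ratio threshold, RANSAC tolerance and extended 128-dimensional descriptors are assumptions mirroring the text.

```python
import cv2
import numpy as np

def register_pair(ref, mov, ratio=0.7, ransac_t=3.0):
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400, extended=True)
    kp1, des1 = surf.detectAndCompute(ref, None)   # 1) detection + 2) 128-D descriptors
    kp2, des2 = surf.detectAndCompute(mov, None)

    # 3) coarse matching: nearest-neighbor ratio test on Euclidean distance
    good = []
    for pair in cv2.BFMatcher(cv2.NORM_L2).knnMatch(des2, des1, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # precise matching: RANSAC samples 4 pairs repeatedly and keeps the
    # homography H with the largest inlier set
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransac_t)

    # 4) coordinate transformation with bilinear resampling
    h, w = ref.shape[:2]
    warped = cv2.warpPerspective(mov, H, (w, h), flags=cv2.INTER_LINEAR)

    # 5) overlap cropping: clip the transformed corners to the image bounds
    corners = np.float32([[0, 0], [w, 0], [0, h], [w, h]]).reshape(-1, 1, 2)
    tc = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    x0, y0 = np.clip(tc.min(axis=0), 0, [w, h]).astype(int)
    x1, y1 = np.clip(tc.max(axis=0), 0, [w, h]).astype(int)
    return ref[y0:y1, x0:x1], warped[y0:y1, x0:x1]
```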
(4) image difference fusion: the orthogonal differential image obtained by fusing the two channels in orthogonal-difference mode is expressed as:
Q = I(0°) - I(90°);
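This fusion is a single signed subtraction; a minimal sketch, assuming the registered channel images are stored in the placeholder files i0.png and i90.png:

```python
import cv2
import numpy as np

# signed arithmetic: widen to int16 so negative differences are preserved
i0 = cv2.imread("i0.png", cv2.IMREAD_GRAYSCALE).astype(np.int16)
i90 = cv2.imread("i90.png", cv2.IMREAD_GRAYSCALE).astype(np.int16)
Q = i0 - i90   # orthogonal differential polarization image, range [-255, 255]
```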
(5) image target detection: target detection is performed on the orthogonal differential polarization image with a morphology-based method, comprising the following three substeps:
1) binarization: a global threshold is selected adaptively by the maximum between-class variance (Otsu) method, whose principle is as follows: suppose the image has M gray levels in the range 0 to M-1; a gray value t in this range divides the pixels into two groups G0 and G1, where G0 contains the pixels with gray values 0 to t and G1 those with gray values t+1 to M-1. Let N denote the total number of image pixels and ni the number of pixels with gray value i, so each gray value i occurs with probability pi = ni/N. The occurrence probabilities of classes G0 and G1 are

ω0 = Σ(i=0..t) pi, ω1 = Σ(i=t+1..M-1) pi = 1 - ω0,

their mean gray values are

μ0 = Σ(i=0..t) i·pi / ω0, μ1 = Σ(i=t+1..M-1) i·pi / ω1,

and the between-class variance is:
σ²(t) = ω0·ω1·(μ0 - μ1)²
The optimal threshold T is the value of t that maximizes the between-class variance, i.e.:
T = argmax σ²(t), t ∈ [0, M-1]
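A direct NumPy transcription of these formulas for an 8-bit image (M = 256); a sketch of the principle, not the device firmware.

```python
import numpy as np

def otsu_threshold(img: np.ndarray, M: int = 256) -> int:
    n_i = np.bincount(img.ravel(), minlength=M).astype(np.float64)
    p = n_i / n_i.sum()                      # p_i = n_i / N
    i = np.arange(M)
    best_t, best_var = 0, -1.0
    for t in range(M - 1):
        w0, w1 = p[:t + 1].sum(), p[t + 1:].sum()
        if w0 == 0 or w1 == 0:               # skip empty classes
            continue
        mu0 = (i[:t + 1] * p[:t + 1]).sum() / w0
        mu1 = (i[t + 1:] * p[t + 1:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2     # σ²(t) = ω0·ω1·(μ0 - μ1)²
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```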
2) opening operation: used to filter out fine interfering objects and obtain a more accurate target contour, the opening is defined as erosion followed by dilation. Erosion eliminates irrelevant details in the object, particularly edge points, and shrinks the object boundary inward; it is expressed as follows:

E = X ⊖ B = {(x, y) | B(x,y) ⊆ X}
wherein E represents the binary image after erosion; B represents the structural element (template), a figure of arbitrary shape consisting of 0s and 1s with a defined central point about which the erosion is performed; and X is the pixel set of the binarized original image; the operation slides B over the X image domain, and when the central point of B coincides with a point (x, y) of X, the pixels covered by B are traversed: (x, y) is kept in E only if every pixel of B matches the corresponding pixel in the window centered on (x, y), and points failing this test are removed, which shrinks the boundary; dilation has the opposite effect: it expands the boundary points of the binary object contour and fills residual holes inside the segmented object so that the object becomes complete, and its expression is as follows:

S = X ⊕ B = {(x, y) | B(x,y) ∩ X ≠ ∅}
wherein S represents the set of pixels of the dilated binary image; B represents the structural element (template); and X represents the binarized image pixel set; the operation slides B over the X image domain, and when the central point of B reaches a point (x, y) of X, the pixels covered by B are traversed: (x, y) is kept in S if at least one pixel of B coincides with a pixel of X, and removed otherwise; after the opening operation, the binary image is divided into several connected regions;
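The opening can equivalently be performed in a single OpenCV call, chaining the two operations defined above; the 3 × 3 rectangular template is an assumption.

```python
import cv2
import numpy as np

def open_binary(binary: np.ndarray, ksize: int = 3) -> np.ndarray:
    B = cv2.getStructuringElement(cv2.MORPH_RECT, (ksize, ksize))
    # identical to cv2.dilate(cv2.erode(binary, B), B): erode, then dilate
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, B)
```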
3) connected-domain identification: first, the connected domains in the image are segmented using the 8-adjacency criterion, under which every pixel of a region has at least one of its 8 directional neighbors also belonging to the region, and the different connected domains of the binary image are filled with different numeric labels accordingly; then the pixel perimeter of each connected domain is extracted and compared with the preset target threshold, and a domain whose perimeter lies within the threshold interval is judged a candidate target; finally, each candidate target is identified in the image by the minimal rectangle enclosing the target's connected-domain contour, completing the target detection.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610248720.9A CN105959514B (en) | 2016-04-20 | 2016-04-20 | A kind of weak signal target imaging detection device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN105959514A CN105959514A (en) | 2016-09-21 |
CN105959514B true CN105959514B (en) | 2018-09-21 |
Family
ID=56917746
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610248720.9A Active CN105959514B (en) | 2016-04-20 | 2016-04-20 | A kind of weak signal target imaging detection device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105959514B (en) |
Families Citing this family (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108090490A (en) * | 2016-11-21 | 2018-05-29 | 南京理工大学 | A kind of Stealthy Target detecting system and method based on multispectral polarization imaging |
CN106651802B (en) * | 2016-12-24 | 2019-10-18 | 大连日佳电子有限公司 | Machine vision scolding tin position finding and detection method |
CN106851071A (en) * | 2017-03-27 | 2017-06-13 | 远形时空科技(北京)有限公司 | Sensor and heat transfer agent processing method |
CN110832843B (en) * | 2017-07-12 | 2021-12-14 | 索尼公司 | Image forming apparatus, image forming method, and image forming system |
CN109427044B (en) * | 2017-08-25 | 2022-02-25 | 瑞昱半导体股份有限公司 | Electronic device |
CN108181624B (en) * | 2017-12-12 | 2020-03-17 | 西安交通大学 | Difference calculation imaging device and method |
CN108320303A (en) * | 2017-12-19 | 2018-07-24 | 中国人民解放军战略支援部队航天工程大学 | A kind of pinhole cameras detection method based on binocular detection |
CN108230316B (en) * | 2018-01-08 | 2020-06-05 | 浙江大学 | Floating hazardous chemical substance detection method based on polarization differential amplification image processing |
CN109064504B (en) * | 2018-08-24 | 2022-07-15 | 深圳市商汤科技有限公司 | Image processing method, apparatus and computer storage medium |
CN109308693B (en) * | 2018-08-29 | 2023-01-24 | 北京航空航天大学 | Single-binocular vision system for target detection and pose measurement constructed by one PTZ camera |
CN111161140B (en) * | 2018-11-08 | 2023-09-19 | 银河水滴科技(北京)有限公司 | Distortion image correction method and device |
CN111242152A (en) * | 2018-11-29 | 2020-06-05 | 北京易讯理想科技有限公司 | Image retrieval method based on target extraction |
CN109859178B (en) * | 2019-01-18 | 2020-11-03 | 北京航空航天大学 | FPGA-based infrared remote sensing image real-time target detection method |
CN109934112B (en) * | 2019-02-14 | 2021-07-13 | 青岛小鸟看看科技有限公司 | Face alignment method and camera |
CN109900719B (en) * | 2019-03-04 | 2020-08-04 | 华中科技大学 | Visual detection method for blade surface knife lines |
CN110232694B (en) * | 2019-06-12 | 2021-09-07 | 安徽建筑大学 | Infrared polarization thermal image threshold segmentation method |
CN113418864B (en) * | 2021-06-03 | 2022-09-16 | 奥比中光科技集团股份有限公司 | Multispectral image sensor and manufacturing method thereof |
CN113933246B (en) * | 2021-09-27 | 2023-11-21 | 中国人民解放军陆军工程大学 | Compact multiband full-polarization imaging device compatible with F-mount lens |
CN113945531B (en) * | 2021-10-20 | 2023-10-27 | 福州大学 | Dual-channel imaging gas quantitative detection method |
CN115880188B (en) * | 2023-02-08 | 2023-05-19 | 长春理工大学 | Polarization direction statistical image generation method, device and medium |
CN117061854A (en) * | 2023-10-11 | 2023-11-14 | 中国人民解放军战略支援部队航天工程大学 | Super-structured surface polarization camera structure for three-dimensional perception of space target |
CN118279208B (en) * | 2024-06-04 | 2024-08-13 | 长春理工大学 | Polarization parameter shaping method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
GB2294778A (en) * | 1993-07-10 | 1996-05-08 | Siemens Plc | Improved spectrometer |
US5572359A (en) * | 1993-07-15 | 1996-11-05 | Nikon Corporation | Differential interference microscope apparatus and an observing method using the same apparatus |
US7193214B1 (en) * | 2005-04-08 | 2007-03-20 | The United States Of America As Represented By The Secretary Of The Army | Sensor having differential polarization and a network comprised of several such sensors |
CN102297722A (en) * | 2011-09-05 | 2011-12-28 | 西安交通大学 | Double-channel differential polarizing interference imaging spectrometer |
CN103604945A (en) * | 2013-10-25 | 2014-02-26 | 河海大学 | Three-channel CMOS synchronous polarization imaging system |
CN104103073A (en) * | 2014-07-14 | 2014-10-15 | 中国人民解放军国防科学技术大学 | Infrared polarization image edge detection method |
CN204203261U (en) * | 2014-11-14 | 2015-03-11 | 南昌工程学院 | A kind of three light-path CMOS polarization synchronous imaging devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |