
WO2009109125A1 - Image processing method and system (图像处理方法及系统) - Google Patents

Image processing method and system (图像处理方法及系统)

Info

Publication number
WO2009109125A1
WO2009109125A1 (application PCT/CN2009/070583)
Authority
WO
WIPO (PCT)
Prior art keywords
image
feature information
frame
module
images
Prior art date
Application number
PCT/CN2009/070583
Other languages
English (en)
French (fr)
Inventor
李凯 (Li Kai)
刘源 (Liu Yuan)
Original Assignee
深圳华为通信技术有限公司 (Shenzhen Huawei Communication Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳华为通信技术有限公司 (Shenzhen Huawei Communication Technologies Co., Ltd.)
Priority to EP09718449A (published as EP2252088A4)
Publication of WO2009109125A1
Priority to US12/860,339 (published as US8416314B2)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 Diagnosis, testing or measuring for television systems or their details for television cameras
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20224 Image subtraction

Definitions

  • The invention relates to an image processing method and system, and belongs to the technical field of image processing. Background Art
  • Camera calibration is the process of obtaining the geometric and optical properties of a camera, i.e. its internal parameters, together with the three-dimensional position of the camera coordinate system relative to the spatial coordinate system, i.e. its external parameters.
  • Embodiments of the present invention provide an image processing method and system that automatically select suitable sharp images from a video file or a series of images, thereby automating calibration.
  • An embodiment of the present invention provides an image processing method, including:
  • performing sharpness processing on the detected frame images that contain feature information, comparing the result with a preset sharpness standard, and retaining the sharp images.
  • An embodiment of the present invention further provides an image processing system, including:
  • An image acquisition module configured to acquire a video image or capture a series of frame images in real time;
  • The feature information detection module is connected to the image acquisition module and is configured to perform feature information detection on the acquired video image or on a frame image in the series of frame images, obtaining frame images with distinct feature information;
  • The image sharpness determination module is connected to the feature information detection module and the image acquisition module, and is configured to perform sharpness processing on the frame images containing feature information, compare the result with the preset sharpness standard, and retain the sharp images;
  • The image difference detection module is connected to the image sharpness determination module, and is configured to save the current sharp frame image when the difference between the current frame image and the previously saved frame image is greater than a preset threshold.
  • An embodiment of the present invention further provides an image processing system, including:
  • An image acquisition module configured to acquire a video image or capture a series of frame images in real time;
  • The image difference detection module is connected to the image acquisition module, and is configured to retain both frame images when the difference between two successive frame images is greater than a preset threshold;
  • a feature information detecting module configured to be connected to the image acquiring module, configured to perform feature information detection on the acquired video image or a frame image in a series of frame images, to obtain a frame image having obvious feature information;
  • The image sharpness determination module is connected to the feature information detection module and the image acquisition module, and is configured to perform sharpness processing on the frame images containing feature information and compare the result with the preset sharpness standard; when the current frame image is not sharp, the current image is discarded, and the image acquisition module selects the next frame image.
  • The embodiments of the invention perform feature detection and sharpness judgment on a single image and automatically find a sharp image satisfying the conditions, without comparing multiple images with one another. Fewer images therefore need to be processed, and image preprocessing can be applied to video files to automate the subsequent video calibration process.
  • FIG. 1 is a flowchart of Embodiment 1 of an image processing method according to the present invention.
  • FIG. 2 is a flowchart of Embodiment 2 of an image processing method according to the present invention.
  • FIG. 3 is a flowchart of Embodiment 3 of an image processing method according to the present invention.
  • FIG. 4 is a flowchart of an embodiment of feature information detection according to the present invention.
  • FIG. 5 is a schematic diagram of Embodiment 1 of image cropping according to the present invention.
  • FIG. 6 is a schematic diagram of Embodiment 2 of image cropping according to the present invention.
  • FIG. 7 is a flowchart of Embodiment 4 of an image processing method according to the present invention.
  • FIG. 8 is a schematic diagram of an embodiment of an image after cropping in an image processing method according to the present invention.
  • FIG. 9 is a schematic diagram of an embodiment of an image before cropping in an image processing method according to the present invention.
  • FIG. 10 is a schematic diagram of an embodiment of a gradient information map corresponding to a sharp cropped image in an image processing method according to the present invention.
  • FIG. 11 is a schematic diagram of an embodiment of a gradient information map corresponding to a blurred image before cropping in the image processing method of the present invention.
  • FIG. 12 is a schematic diagram of Embodiment 1 of an image processing system according to the present invention.
  • FIG. 13 is a schematic diagram of Embodiment 2 of an image processing system according to the present invention.
  • FIG. 14 is a schematic diagram of Embodiment 3 of an image processing system according to the present invention.
  • FIG. 15 is a schematic diagram of Embodiment 4 of an image processing system according to the present invention. Detailed Description
  • a flowchart of Embodiment 1 of an image processing method according to the present invention includes the following steps:
  • Step 0001 Acquire a video image or capture a series of frame images in real time.
  • Step 0002 Detect the difference between the two successive frame images.
  • Step 0003 Determine whether the difference between the two frame images is less than a preset threshold; if so, execute step 0004; otherwise, execute step 0005.
  • Step 0004 Discard one of the two images and keep only the other.
  • Step 0005 Perform feature information detection on the acquired frame image.
  • Step 0006 Perform sharpness detection on the detected frame images that contain feature information, compare the result with the preset sharpness standard, and retain the sharp images.
  • In this embodiment, the acquired images are processed as follows: when the difference between two images is smaller than the preset value, one of them is discarded, which guarantees differences among the retained images and prevents the detected images from coming from a scene that has remained stationary.
  • Each of the obtained distinctly different images is then checked for sharpness, and unclear images are discarded, so that the remaining images have distinct feature information, are sharp, and differ sufficiently from one another.
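The frame-difference test described above can be sketched as follows. This is a minimal illustration using NumPy; the mean-absolute-difference measure and the threshold value are assumptions, since the document only requires some difference measure compared against a preset threshold.

```python
import numpy as np

def frames_differ(prev_frame, curr_frame, threshold=12.0):
    """Return True when two frames differ enough for both to be kept.

    Uses the mean absolute pixel difference as the (assumed) difference
    measure, compared against a preset threshold.
    """
    diff = np.abs(prev_frame.astype(np.float64) -
                  curr_frame.astype(np.float64)).mean()
    return bool(diff > threshold)
```

When `frames_differ` returns False, one of the two frames is discarded, which prevents a stationary scene from contributing many near-identical calibration images.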
  • some calibration methods require only one image, and some require two or more images.
  • The traditional camera calibration method requires at least two images, while the camera self-calibration method in theory requires only one; a certain number of sharp images with feature information can therefore be retained according to different needs.
  • FIG. 2 is a flowchart of Embodiment 2 of an image processing method according to the present invention, which includes the following steps:
  • Step 001 Acquire a video image or capture a series of frame images in real time.
  • Step 002 Perform feature information detection on one of the video image or a series of frame images.
  • Step 003 Perform image cropping on the frame image with the feature information after the detection.
  • Step 004 Perform sharpness detection on the cropped frame image (sharpness processing and comparison with the preset sharpness standard), discarding unclear frame images.
  • Step 005 Determine whether the difference between the current frame image and the previous frame image is greater than a preset threshold. If so, the two frames differ sufficiently, and step 006 is performed; otherwise the two frames are too similar and, to prevent the detected scene from being static and unchanging, step 051 is performed.
  • Step 051 Since the difference between the two frames is too small, one of the two images is discarded, i.e., only one image is retained for subsequent processing, and the flow returns to step 002 to read the next frame.
  • Step 006 Save the current frame image.
  • Step 007 Determine whether the number of images reaches a preset value. For example, camera calibration requires at least three different images for the subsequent intrinsic-parameter computation, so the preset value may be set to 3; depending on the requirement, a margin may also be reserved, for example by setting the value to 4.
  • If so, step 008 is performed; otherwise, the number of images satisfying the conditions has not yet reached the predetermined value, and the flow returns to step 002 to read the next frame image.
  • Step 008 Obtain a clear frame image that satisfies the set number.
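Steps 001 through 008 amount to a single filtering loop over incoming frames. The sketch below illustrates only the control flow; the three predicate functions are placeholders injected as arguments, since their concrete implementations (feature detection, sharpness detection, frame difference) are described elsewhere in the document.

```python
def select_calibration_frames(frames, has_features, is_sharp, differs, needed=3):
    """Filter frames per steps 002-008: feature check, sharpness check,
    frame-difference check, stopping once `needed` images are kept."""
    kept = []
    for frame in frames:
        if not has_features(frame):      # steps 002/003: full feature set required
            continue
        if not is_sharp(frame):          # step 004: discard blurred frames
            continue
        if kept and not differs(kept[-1], frame):
            continue                     # steps 005/051: drop near-duplicates
        kept.append(frame)               # step 006: save the frame
        if len(kept) >= needed:          # steps 007/008: enough images found
            break
    return kept
```

With real images, `frames` would be an iterator over video frames and the predicates would wrap the detection routines of steps 002 to 005.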
  • Feature information detection and cropping are performed separately on each frame of the acquired series of frame images, and whether a frame image is sharp is determined individually, i.e., whether the portion of the image with distinct feature information is sharp; the sharpness determination is based only on the single image itself.
  • An image cropping operation is also added. Its purpose is to maximize the percentage of the image occupied by the feature information: in some images the detected feature information is sparse and occupies a small percentage of the image, which is inconvenient to process. After image cropping is added, the excess peripheral area in the obtained image is removed, yielding an image dominated by the feature information, which facilitates subsequent processing.
  • FIG. 3 is a flowchart of Embodiment 3 of an image processing method according to the present invention. This embodiment is similar to that of FIG. 2, except that after the image is acquired in step 001 and before feature information detection is performed in step 002, the following steps are included:
  • Step 010 Determine whether the difference between the two successive frames is greater than a preset threshold; if so, the two frames differ sufficiently, and step 011 is performed; otherwise the difference is too small and, to prevent the detected scene from being static and unchanging, step 012 is performed.
  • Step 011 Save the two frames before and after, and go to step 002.
  • Step 012 Since the difference between the two frames is too small, one of the two images is discarded, i.e., only one image is retained for subsequent processing, and the flow proceeds to step 002.
  • The frame-difference detection may be performed after the image sharpness detection (step 004 of the FIG. 2 embodiment) or right after the image is acquired (step 001 of the FIG. 3 embodiment). It determines whether two successive images differ sufficiently: if the difference is large, both frame images are retained for subsequent detection and judgment; otherwise one of them is discarded. In the prior art, by contrast, all obtained images are compared with one another to find the clearest image before the subsequent calibration operations are performed.
  • In the above embodiments, when two images with insufficient difference are obtained, one of them is discarded to prevent the acquired images from coming from a static, unchanging scene, ensuring that the images used in the final calibration operation differ sufficiently and meet the sharpness requirements. For example, suppose ten images are captured (or ten frames of a video) and four of them are similar with relatively little change; then only the remaining six images need to be detected and judged, and the detection process stops as soon as the set number of qualifying images has been found.
  • a plurality of unclear images are discarded before the calibration operation is performed, and subsequent calibration processing can be performed by using the obtained images satisfying the set number.
  • In this example, the detection process is therefore executed at most six times and at least three times.
  • Detection and processing can be realized automatically, and a set number of sharp images is obtained after image preprocessing.
  • The calibration operation can then be performed on the obtained images. Since each input image contains feature information and is sharp, the calibration-related parameter information can be computed effectively; compared with the prior art, this is faster, saves time, and allows the subsequent calibration operations to compute accurate parameter information.
  • FIG. 4 is a flowchart of an embodiment of feature information detection according to the present invention.
  • the step of detecting feature information in this embodiment includes:
  • Step 021 Read each frame or one frame of images captured in real time in order.
  • Step 022 Detect corner information in the frame image.
  • Step 023 Determine whether the number of actually detected corner points equals the theoretical number of corner points; if so, perform the subsequent steps; otherwise, discard the currently detected frame image and return to step 021 to read the next frame image.
  • This embodiment illustrates feature information detection using corner points as the feature information, but those skilled in the art should understand that the feature information may also be color feature information of the image, brightness feature information, geometric feature information of the image, and so on. More specifically,
  • the geometric feature information may be edges or contours, corner points, lines, circles, ellipses, rectangles, and other feature information. For example, if the image template is a checkerboard, the detected feature information may be corner points, and the subsequent step 023 determines whether the number of actually detected corner points is consistent with the actual number of corner points in the checkerboard.
  • If the template consists of concentric circles, the detected feature information may be the number of circles; it is then determined whether the number of concentric circles actually detected is consistent with the actual number of concentric circles in the template. If they are consistent, the obtained image has distinct feature information; otherwise the obtained image is incomplete and is discarded.
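The count comparison of step 023 can be sketched as follows. The checkerboard geometry used here (interior corners of a board with `rows` by `cols` squares) is an illustrative assumption; the document only requires that the detected count match the template's theoretical count.

```python
def expected_corner_count(rows, cols):
    # A checkerboard of rows x cols squares has (rows-1)*(cols-1)
    # interior corners, one wherever four squares meet.
    return (rows - 1) * (cols - 1)

def template_complete(detected_points, rows, cols):
    # Step 023: keep the frame only when every theoretical corner was found;
    # a partial detection means the template is occluded or cut off.
    return len(detected_points) == expected_corner_count(rows, cols)
```

For a concentric-circle template, the same check would compare the number of detected circles against the number of circles in the template.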
  • different feature information detection can be performed according to the image.
  • The color feature information and brightness feature information of the image may be embodied as grayscale feature information, binarized image feature information, or color-change feature information; the geometric feature information of the image may refer to common straight lines, circles, arcs, ellipses, hyperbolas, and the like.
  • Corner points are often also referred to as feature points or key points.
  • A corner point here refers to a point, of a certain size, at which the brightness, chromaticity, or gradient changes sharply; for example, the vertices of a black rectangle on a white background are corner points, as are the points in the middle of a black-and-white checkerboard.
  • The most commonly used images are templates, but those skilled in the art should understand that
  • the image need not be a template: it may be a general object, such as an image of a cup placed on a table.
  • In that case the detected feature information is characteristic information corresponding to the shape of the cup.
  • The checkerboard and concentric-circle examples herein are illustrative, and the invention is not limited to these embodiments.
  • image cropping of the frame image having the feature information after the detection includes:
  • the cropped image is obtained based only on the image information inside the boundary formed by the respective points.
  • FIG. 5 is a schematic diagram of Embodiment 1 of image cropping according to the present invention.
  • This embodiment is illustrated with a checkerboard image template.
  • Four corner points are detected, and the image can be cropped according to the detected corner feature information so as to retain the checkerboard to the maximum extent.
  • The cropping operation is as shown in FIG. 5:
  • After step 2, a preliminary cropped image is obtained; the outermost rectangle shown in FIG. 5 is the preliminary cropped image.
  • The image points outside the rectangular boundary formed by the four edges in the preliminary cropped image of step 2 are all set to black.
  • In the original image the checkerboard occupies a relatively small effective percentage; the cropped image (FIG. 8) retains only the checkerboard containing all the feature information, which greatly increases the percentage of feature information in the image.
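A minimal sketch of the rectangular crop described above, assuming the four detected corner points are given as `(x, y)` pixel coordinates. The axis-aligned bounding box used here is a simplification of the document's step 2, which constructs the rectangle from the four edges between the corner points.

```python
import numpy as np

def crop_to_feature_points(img, points):
    """Keep only the bounding box of the detected feature points,
    so that the feature information fills most of the cropped image."""
    xs = [int(x) for x, _ in points]
    ys = [int(y) for _, y in points]
    return img[min(ys):max(ys) + 1, min(xs):max(xs) + 1]
```

For a tilted checkerboard, the document's variant additionally blackens pixels inside the bounding box but outside the quadrilateral formed by the four edges.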
  • FIG. 6 is a schematic diagram of Embodiment 2 of image cropping according to the present invention.
  • This embodiment uses an image template as a concentric circle to illustrate:
  • the maximum and minimum values of the Y coordinate of the outer boundary of the outermost concentric circle are MaxY and MinY. As shown in Fig. 6, the maximum and minimum values are the ordinates of V21 and V23, respectively.
  • After step 2, a preliminary cropped image is obtained; the rectangle shown in FIG. 6 is the preliminary cropped image.
  • The image points that are inside the preliminary cropped image but outside the circle boundary in step 2 are all set to black.
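For the concentric-circle template, the preliminary crop plus the blackening outside the outer circle might look like the sketch below, assuming the outer circle's center `(cx, cy)` and radius `r` have already been recovered from the detected feature information.

```python
import numpy as np

def crop_to_outer_circle(img, cx, cy, r):
    """Crop to the bounding box [cx-r, cx+r] x [cy-r, cy+r] of the outer
    circle, then set every pixel outside the circle boundary to black."""
    crop = img[cy - r:cy + r + 1, cx - r:cx + r + 1].copy()
    yy, xx = np.indices(crop.shape[:2])
    outside = (yy - r) ** 2 + (xx - r) ** 2 > r * r
    crop[outside] = 0  # points outside the circle boundary set to black
    return crop
```

The bounding box corresponds to MinX/MaxX and MinY/MaxY in the description above; only the image information inside the circle survives.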
  • the feature information can be a corner point, a line, a circle, an ellipse, a rectangle, etc., and the corresponding construction boundary and image cropping are also slightly different, and will not be exemplified here.
  • FIG. 7 shows Embodiment 4 of an image processing method according to the present invention.
  • This embodiment is a flowchart of a more detailed embodiment.
  • The checkerboard image template is again taken as an example. As shown in FIG. 7, this embodiment includes:
  • Step 01 Obtain a video file with a checkerboard.
  • This step specifically includes:
  • the acquired image can also be an image detected in real time, as in step 01.
  • Step 01 Capture a series of frame images obtained by shaking the image of the template in real time through the camera.
  • This step specifically includes:
  • The camera captures frame images in real time, which are then detected and processed.
  • Step 01 uses a video file that has already been captured and stored, for example on a computer hard disk in a format similar to avi, where the video file contains multiple frame images; alternatively, step 01 captures continuous frame images in real time (one image at a time), which are held in the computer cache.
  • Images can also be obtained by other means: for example, the camera may be moved while the checkerboard is fixed, or an image may be captured directly without a checkerboard.
  • The image captured by the camera may be unclear, making it unsuitable for use as a calibration image.
  • Step 021 Read each frame of image or a captured image in real time.
  • Step 022 Detect whether there is corner feature information.
  • Step 023 Determine whether the detected number of corner points is equal to the actual number of corner points. If yes, go to step 03; otherwise, go to step 021.
  • Step 03 Crop the image, i.e., step 03 shown in FIG. 7: crop the image according to the four detected vertex positions to obtain an image that has distinct feature information and the same amount of feature information as the template; see the description of the embodiments of FIGS. 5 and 6 for details.
  • Step 041 Calculate the gradient of the cropped image.
  • Step 042 Calculate whether the image is clear. If yes, go to step 05 to perform difference detection. Otherwise, go to step 021 to continue reading the next image.
  • Steps 041 and 042 judge whether the image is sharp, determining whether the frame image is sharp or in focus according to the amount of high-frequency edge-information components in the frame image's feature information.
  • the operation of clear image detection is described in detail below:
  • Obtain the cropped image according to step 03 of this embodiment.
  • The cropped image (as shown in FIG. 8) is first converted to grayscale, and the gradient value of each point of the cropped image is computed (for example by applying a Laplace transform to the image); this outputs a grayscale map containing only edge information, as shown in FIG. 10.
  • D. Threshold amplification. As can be seen from FIG. 10, there are some noise points in each small white square, so the image can be denoised, or the threshold T1 can be increased by a certain factor. The factor may differ between scenes: if denoising works well, essentially no amplification is needed; if denoising works poorly and many noise points remain, a factor of about 2 is required.
  • FIG. 10 is the grayscale map corresponding to a sharp image, and FIG. 11 is the grayscale map corresponding to a blurred image. It can be seen that the lines in the grayscale map of the sharp image are thin, while the lines in FIG. 11 are thick. If the sum of white pixels computed in step F is greater than the preset threshold, the image contains many white pixels and relatively thick lines, as shown in FIG. 11; the corresponding image is blurred and is discarded without further detection.
  • The feature information (such as edge or contour information) corresponding to the sharpest image of the template, expressed as a percentage of the cropped image, may be taken as the preset sharpness standard: if the actually cropped image exceeds the set sharpness standard, the image is blurred. Alternatively, the total number of white feature-information pixels in the actually cropped image can be compared with the set sharpness standard (a white-pixel sum): if the actual white-pixel sum exceeds it, the image is blurred and its lines are thick, as shown in FIG. 11.
  • The frame-difference determination may be performed after the image is acquired or after the image sharpness detection; it is not limited to the specific sequence of this embodiment.
  • For the difference detection, a suitable threshold is set to judge whether the previous image differs too little from the current one. If the difference is large, both frame images are retained for subsequent detection and determination; otherwise one of them is discarded.
  • The main purpose of the frame-difference determination is to prevent the detected images from coming from a scene that is stationary and unchanging.
  • After it is determined whether the difference between successive frame images is greater than the preset threshold, the method proceeds as follows:
  • Step 06 Obtain a clear sequence of calibration images, ie multiple images that satisfy the conditions (feature information, sharp, cropped).
  • Step 07 Determine whether the number of images in the sequence is greater than or equal to 3; if so, perform step 08; otherwise, return to step 01 to continue acquiring images. When a two-dimensional or three-dimensional template is used for camera calibration, it is determined whether the finally obtained number of valid calibration images is sufficient; the number of calibration images set in this embodiment is 3.
  • Step 08 Calibrate the video acquisition device according to the clear three images obtained and the corresponding feature information.
  • Step 09 Obtain the internal and external parameters of the video acquisition device.
  • The image sharpness determination in this embodiment is based on Fourier optics theory: the degree of image sharpness or focus is mainly determined by the amount of high-frequency components in the light intensity distribution. When the high-frequency components are scarce, the image is blurred; when they are rich, the image is sharp. This embodiment mainly uses the high-frequency content of the image light intensity distribution as the basis of the image sharpness or focus evaluation function.
  • Where the image has an edge portion, the image is sharp when fully focused and the high-frequency components carrying the edge information are at their richest; when defocused, the image is blurred and the high-frequency components are fewer. The high-frequency components of the image edge information can therefore be used to determine whether the image is sharp or in focus.
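The Fourier-optics criterion above, that rich high-frequency content means a sharp image, can be sketched with a discrete Laplacian, whose mean absolute response drops when an image is blurred. The 4-neighbour Laplacian and the comparison against a single preset standard are illustrative choices; the document also describes a white-pixel-count variant of the test.

```python
import numpy as np

def laplacian_energy(gray):
    """Mean absolute response of a 4-neighbour Laplacian: a simple
    proxy for the high-frequency content of the image."""
    g = gray.astype(np.float64)
    lap = np.zeros_like(g)
    lap[1:-1, 1:-1] = (g[:-2, 1:-1] + g[2:, 1:-1] +
                       g[1:-1, :-2] + g[1:-1, 2:] - 4.0 * g[1:-1, 1:-1])
    return np.abs(lap).mean()

def is_sharp(gray, standard=1.0):
    # Compare against the preset sharpness standard; keep only sharp images.
    return bool(laplacian_energy(gray) > standard)
```

Blurring an edge spreads it into a near-linear ramp, and the Laplacian of a linear ramp is close to zero, so the energy of a defocused frame falls well below that of a focused one.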
  • The sharpness determination of this embodiment differs from existing image sharpness or autofocus determination methods.
  • Existing image sharpness or autofocus methods generally take a series of images of the same scene and find the clearest among them in order to adjust the camera's focal length automatically.
  • Their basis for judging whether an image is clear, or for autofocusing, is the cross-comparison of a series of images, from which the clearest image is selected as the basis for adjusting the focal length to the ideal position.
  • In this embodiment, determining whether an image is sharp means determining whether the portion of the image with distinct feature information is sharp; whether the other parts of the image are sharp is immaterial.
  • The sharpness determination of this embodiment is thus not based on a series of images, but only on the single image itself.
  • FIG. 12 shows Embodiment 1 of an image processing system according to the present invention. As shown in FIG. 12, this embodiment includes:
  • the image acquisition module 1 is configured to acquire a video image or capture a series of frame images in real time.
  • the feature information detecting module 2 is connected to the image acquiring module 1 and configured to perform feature information detection on the acquired video image or the frame image in the series of frame images to obtain a frame image with obvious feature information.
  • The image sharpness determination module 40 is connected to the feature information detection module 2 and the image acquisition module 1, and performs sharpness processing on the frame images containing feature information, compares the result with the preset sharpness standard, and retains the sharp images.
  • The image difference detection module 5 is connected to the image sharpness determination module 40, and saves the current sharp image when the difference between the current frame image and the previously saved frame image is greater than a preset threshold.
  • Embodiment 2 of an image processing system according to the present invention is shown. As shown in Figure 13, this embodiment includes:
  • the image acquisition module 1 is configured to acquire a video image or capture a series of frame images in real time.
  • the feature information detecting module 2 is connected to the image obtaining module 1 and configured to perform feature information detection on the acquired video image or the frame image in the series of frame images to obtain a frame image with obvious feature information.
  • the image cropping module 3 is connected to the feature information detecting module 2 for performing image cropping on the frame image having the feature information.
  • The image sharpness determination module 4 is connected to the image cropping module 3 and the image acquisition module 1, and performs sharpness processing on the cropped frame images, compares the result with the preset sharpness standard, and retains the sharp images.
  • The image difference detection module 5 is connected to the image sharpness determination module 4, and determines whether the difference between the current frame image and the previously saved frame image is less than a preset threshold; if so, only one of the frames is retained; otherwise, the current frame image is saved.
  • The image number determination module 6 is connected to the image difference detection module 5 and the image acquisition module 1, and counts the saved frame images; when the number of saved frame images has not reached the set number, the image acquisition module selects the next frame image for subsequent operations.
  • This embodiment may further include a calibration module 7 connected to the image number determination module 6, configured to obtain the internal and external parameters of the video acquisition device according to the obtained set number of sharp frame images and the corresponding feature information, thereby calibrating the video acquisition device.
  • FIG. 14 is a schematic diagram of Embodiment 3 of an image processing system according to the present invention. As shown in FIG. 14, this embodiment includes:
  • the image acquisition module 1 is configured to acquire a video image or capture a series of frame images in real time.
  • The image difference detection module 5 is connected to the image acquisition module 1 and is configured to retain both frame images when the difference between two successive frame images is greater than a preset threshold.
  • the feature information detecting module 2 is connected to the image obtaining module 1 and configured to perform feature information detection on the acquired video image or the frame image in the series of frame images to obtain a frame image with obvious feature information.
  • the image sharpness determination module 4 is connected to the feature information detection module 2 and the image acquisition module 1, and is configured to perform sharpness-related processing on the frame image having the feature information and compare the result with the preset sharpness standard; when the current frame image is not sharp, the current image is discarded and the image acquisition module selects the next frame image.
  • FIG. 15 is a schematic diagram of Embodiment 4 of an image processing system according to the present invention. As shown in FIG. 15, this embodiment includes:
  • the image acquisition module 1 is configured to acquire a video image or capture a series of frame images in real time.
  • the image difference detecting module 5 is connected to the image acquisition module 1 and is configured to retain both frame images when the difference between two consecutive frame images is greater than a preset threshold.
  • the feature information detecting module 2 is connected to the image obtaining module 1 and configured to perform feature information detection on the acquired video image or the frame image in the series of frame images to obtain a frame image with obvious feature information.
  • the image cropping module 3 is connected to the feature information detecting module 2 and is configured to crop the frame image that contains the feature information.
  • the image sharpness determination module 4 is connected to the image cropping module 3 and the image acquisition module 1, and is configured to perform sharpness-related processing on the cropped frame image, compare the result with the preset sharpness standard, and retain the sharp image.
  • the image number determining module 6 is connected to the image sharpness determination module 4 and the image acquisition module 1, and is configured to detect the number of saved frame images; when that number has not reached the set number, the image acquisition module selects the next frame image for the subsequent operations.
  • this embodiment may further include a calibration module 7, connected to the image number determining module 6 and configured to obtain the intrinsic and extrinsic parameters of the video acquisition device according to the obtained set number of sharp frame images and the corresponding feature information, and to calibrate the video acquisition device.
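Taken together, the modules of this embodiment form a frame-selection loop: difference check, feature detection, cropping, sharpness check, and count check. A minimal sketch of that loop, with the per-module operations passed in as callables since the patent leaves their concrete form to the individual embodiments:

```python
def select_calibration_frames(frames, detect, crop, is_sharp, diff,
                              needed=3, diff_thresh=1.0):
    """Select up to `needed` sharp, feature-bearing, mutually different frames.

    detect(frame)  -> feature info, or None if the expected features are missing
    crop(frame, f) -> frame cropped around the detected features
    is_sharp(img)  -> True if the cropped image passes the sharpness standard
    diff(f1, f2)   -> scalar difference between two frames' feature info
    """
    saved = []                            # (frame, features) pairs that passed
    for frame in frames:
        features = detect(frame)
        if features is None:              # feature information detecting module
            continue
        cropped = crop(frame, features)
        if not is_sharp(cropped):         # image sharpness determination module
            continue
        # image difference detecting module: skip near-duplicates of the
        # previously saved frame so the scene is not static throughout
        if saved and diff(saved[-1][1], features) < diff_thresh:
            continue
        saved.append((frame, features))
        if len(saved) >= needed:          # image number determining module
            break
    return saved
```

A caller would plug in its own detector, cropper, and sharpness test; the toy stubs below stand in for those modules only to show the control flow.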
  • for the embodiments of FIG. 12 to FIG. 15, reference may be made to the related description of the method embodiments of FIG. 1 to FIG. 7; they provide functions and effects similar to those embodiments and complete the image pre-processing that facilitates the subsequent calibration operation, so the details are not repeated here.
  • the present invention can be embodied in a variety of different forms. The technical solutions of the present invention are illustrated above with reference to FIG. 1 to FIG. 15 by way of example; this does not mean that the specific examples to which the present invention is applied are limited to a particular process or embodiment structure. One of ordinary skill in the art will appreciate that the specific embodiments provided above are only a few examples of the various preferred uses.

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Description

图像处理方法及系统 本申请要求于 2008 年 3 月 5 日提交中国专利局, 申请号为 200810082736.2, 发明名称为 "图像处理方法及系统" 的中国专利申 请的优先权, 其全部内容通过引用结合在本申请中。 技术领域
本发明涉及一种图像处理方法及系统, 属于图像处理技术领域。 背景技术
在计算机视觉系统中,摄像机标定就是获得摄像机内部的几何和 光学特性, 即内部参数, 及摄像机坐标系相对于空间坐标系的三维位 置关系, 即外部参数近似值的过程。
无论采用何种摄像机标定方法, 自动获取到清晰的图像是成功标 定的基础。 图像若不清晰则会对标定结果有很大的影响。
在实现本发明过程中,发明人发现现有技术中目前并没有提到如 何自动的选取清晰图像的方法。现有技术中图像处理需要从不同角度 拍摄图像, 对拿图像的人和拍摄的人员带来了不便, 且大多情况下需 要两个人来实现获得图像; 且实际实验中, 有些不清晰的图像用来做 后续摄像机标定会使得标定结果误差很大, 甚至得不到标定结果; 现 有技术中只能靠拍摄多张图像,从中选取清晰的满足要求的图像来检 测特征信息, 并不能自动选取合适的清晰的图像。
发明内容
本发明实施例提供一种图像处理方法及系统,实现在视频文件或 一系列图像中自动选取合适的清晰的图像, 从而实现标定的自动化。
本发明实施例提出一种图像处理方法, 包括:
获取视频图像或实时捕获一系列帧图像; 对前后两帧图像之间的差异进行检测 ,在两帧图像之间的差异小 于预设阈值时, 保留其中一帧图像;
对获取的帧图像进行特征信息检测;
对检测后具有特征信息的帧图像进行与清晰度相应的处理,并与 预设的清晰度标准进行比较, 保留清晰的图像。
本发明实施例还提出一种图像处理系统, 包括:
图像获取模块, 用于获取视频图像或实时捕获一系列帧图像; 特征信息检测模块, 与所述图像获取模块连接, 用于对获取的视 频图像或一系列帧图像中的帧图像进行特征信息检测,获得具有明显 特征信息的帧图像;
图像清晰判定模块,与所述特征信息检测模块及所述图像获取模 块连接, 用于对具有特征信息的帧图像进行与清晰度相应的处理, 并 与预设的清晰度标准进行比较, 保留清晰的图像;
图像差异检测模块, 与所述图像清晰判定模块连接, 用于在当前 的帧图像与前一帧保存的图像之间的差异大于预设阈值时,保存当前 清晰的帧图像。
本发明实施例还提出一种图像处理系统, 包括:
图像获取模块, 用于获取视频图像或实时捕获一系列帧图像; 图像差异检测模块, 与所述图像获取模块连接, 用于在前后两帧 图像之间的差异大于预设阈值时, 保留前后两帧图像;
特征信息检测模块, 与所述图像获取模块连接, 用于对获取的视 频图像或一系列帧图像中的帧图像进行特征信息检测,获得具有明显 特征信息的帧图像;
图像清晰判定模块,与所述特征信息检测模块及所述图像获取模 块连接, 用于对具有特征信息的帧图像进行与清晰度相应的处理, 并 与预设的清晰度标准进行比较, 在当前帧图像不清晰时, 舍弃当前图 像, 使所述图像获取模块执行选取下一帧图像的操作。
本发明实施例对单幅图像进行特征信息及清晰的判断, 自动找出 满足条件的清晰的图像, 不用对多幅图像进行相互比较, 因此, 处理 的图像少, 可以对视频文件进行图像预处理, 实现后续视频标定的程 序自动化。 附图说明
图 1为本发明图像处理方法实施例一流程图;
图 2为本发明图像处理方法实施例二流程图;
图 3为本发明图像处理方法实施例三流程图;
图 4为本发明特征信息检测实施例流程图;
图 5为本发明图像裁剪实施例一示意图;
图 6为本发明图像裁剪实施例二示意图;
图 7为本发明图像处理方法实施例四流程图;
图 8为本发明图像处理方法中裁剪后的图像实施例示意图; 图 9为本发明图像处理方法中裁剪前的图像实施例示意图; 图 10为本发明图像处理方法中对裁剪后的图像对应的梯度信息 图实施例示意图;
图 11为本发明图像处理方法中裁剪前模糊的图像对应的梯度信 息图实施例示意图;
图 12为本发明图像处理系统实施例一示意图;
图 13为本发明图像处理系统实施例二示意图;
图 14为本发明图像处理系统实施例三示意图;
图 15为本发明图像处理系统实施例四示意图。 具体实施方式
下面通过附图和实施例,对本发明的技术方案做进一步的详细描 述。
参见图 1 , 为本发明图像处理方法实施例一流程图, 包括以下步 骤:
步骤 0001: 获取视频图像或实时捕获一系列帧图像。 步骤 0002: 对前后两帧图像之间的差异进行检测。
步骤 0003: 判断两帧图像之间的差异是否小于预设的阈值, 是 则执行步骤 0004; 否则执行步骤 0005。
步骤 0004:舍弃其中一帧图像, 只保留其中一帧图像。
步骤 0005: 对获取的帧图像进行特征信息检测。
步骤 0006: 对检测后具有特征信息的帧图像进行图像清晰检测, 进行与清晰度相应的处理, 并与预设的清晰度标准进行比较, 保留清 晰的图像。
本实施例对获取的图像进行处理,在前后两幅图像之间的差异小于预设值时, 省去其中一幅图像, 保证获得的图像之间的差异明显, 避免检测的多幅图像一直处于静止状态。本实施例对获得的多幅差异显著的图像中的每幅图像进行清晰检测, 进一步舍弃不清晰的图像, 保证最后剩余的图像具有明显的特征信息、 清晰, 并且相互之间存在差异。 当然, 有些情况下可能获得一幅满足条件的图像即可, 如果在获得了图像后进一步执行与摄像机标定的相关操作,有的标定方法只需要一幅图像、有的需要 2幅或多幅图像, 传统摄像机标定法需要至少两幅或两幅以上图像, 而摄像机自标定法理论上只需一幅图就可以,这可以根据不同的需要保留一定数目的清晰且具有特征信息的图像。
参见图 2, 为本发明图像处理方法实施例二流程图, 包括以下步 骤:
步骤 001: 获取视频图像或实时捕获一系列帧图像。
步骤 002: 对所述视频图像或一系列帧图像中的一帧图像进行特 征信息检测。
步骤 003: 对检测后具有特征信息的帧图像进行图像裁剪。
步骤 004: 对裁剪后的帧图像进行图像清晰检测 (与清晰度相应 的处理, 并与预设的清晰度标准进行比较), 舍弃不清晰的帧图像。
步骤 005: 判断当前帧图像与前一帧图像之间的差异是否大于预 设阈值, 是则表示两帧图像差异大, 执行步骤 006; 否则两帧图像差 异过小, 为防止检测的场景一直静止不变化, 执行步骤 051。
步骤 051 : 由于两帧图像差异过小, 舍弃两帧图像中其中一帧图 像, 即只保留一幅图参与后续处理, 并继续执行步骤 002, 读取下一 帧图像。
步骤 006: 保存当前帧图像。
步骤 007: 判断图像数目是否达到预设值, 如: 在摄像机标定中 至少需要 3幅不同的图像才能进行后续的内部参数运算, 进行标定, 因此, 可设置预设值为 3; 当然, 为不同需求还可以不同的预设值, 如, 设置 4, 以预留一定的余量。 图像数目达到预设值时, 执行步骤 008; 否则满足条件的图像还未达到预定的数目, 继续执行步骤 002, 读取下一帧图像。
步骤 008: 获得满足设定数目的清晰的帧图像。
本实施例对获取的一系列帧图像中每帧图像单独进行特征信息 的检测、 裁剪, 单独判断一帧图像是否清晰, 即判断一幅图像中具有 明显特征信息的部分是否清晰,图像清晰的判定仅仅根据单幅图像自 身来判定是否清晰。 本实施例中还加入了图像裁剪的操作, 图像裁剪 的目的是在于: 把检测到的图像特征信息占据图像的百分比最大化, 因为有的图像检测的特征信息少, 在整个图像中所占百分比较小, 不 方便进行处理, 加入图像裁剪之后, 将获得的图像中多余的外围区域 去掉, 可以获得特征信息占图像最大化的图像, 方便后续处理。
现有技术中是对同一场景拍摄一系列图像,对一系列图像进行相 互参考, 从中找到最清晰的图像, 而本发明上述各实施例是对单幅图 像进行特征信息及清晰的判断, 自动找出满足条件的清晰的图像, 不 用对多幅图像进行相互比较, 因此处理的图像少, 只需选定满足一定 数目和条件的图像即可进行后续标定, 不需要对全部图像进行处理, 因此, 本实施例可以准确地从视频文件中获取具有明显特征信息的、 清晰的有效帧图, 对视频文件进行图像预处理, 实现后续视频标定的 程序自动化, 并且, 由于可以对视频图像或一系列帧图像进行自动处 理, 一个人就能操作实现图像处理, 为后续标定作好准备。 参见图 3 , 为本发明图像处理方法实施例三流程图。 本实施例与 图 2实施例类似, 不同之处在于在步骤 001获取图像之后, 步骤 002 对图像进行特征信息检测之前, 包括以下步骤:
步骤 010: 判断前后两帧图像之间的差异是否大于预设阈值, 是 则表示两帧图像差异大, 执行步骤 011 ; 否则两帧图像差异过小, 为 防止检测的场景一直静止不变化, 执行步骤 012。
步骤 011 : 保存前后两帧图像, 执行步骤 002。
步骤 012: 由于两帧图像差异过小, 舍弃两帧图像中其中一帧图 像, 即只保留一幅图参与后续处理, 并执行后续步骤 002。
通过图 1至图 3实施例可知,对前后两帧图像进行差异检测即可 以在图像清晰检测的操作 (图 2实施例步骤 004 )之后, 也可以在获 得图像(图 3实施例步骤 001 )之后或同时做前后帧图像差异检测, 判断两幅图像是否差异过大,若差异大则保留前后帧图,做后续检测、 判定; 否则舍弃其中一幅帧图。 现有技术中对获得的多幅图像进行判 断, 找出最清晰的图像, 进行后续标定运算, 而上述各实施例如果在 获取到两幅差异不明显的图像时, 舍弃其中一幅, 防止获得的几幅图 像一直静止无变化, 保证最后进行标定运算的图像差异比较大, 且满 足一定的清晰要求。 例如, 拍摄的图像有十幅或视频图像有十帧, 但 其中有四幅图像比较相近, 变化比较小, 则可以只对剩余六幅图像进 行检测、 判断, 只要检测出有满足要求的设定数目的图像, 即停止检 测处理。
本实施例在未进行标定运算之前就舍弃不清晰的若干张图像,利 用获得的满足设定数目的图像可以进行后续标定处理。 如上段的举 例, 如果进行检测、 判断的六幅图像中前三张均满足要求(有特征信 息、 清晰), 则后三幅图像不需进行处理; 如果进行检测、 判断的六 幅图像中前三幅只有两幅满足要求(有特征信息、 清晰), 则还需要 检测后三幅图像, 最终获得满足数目的图像, 由此, 最多处理六次, 最少处理三次即可完成预处理的过程, 与现有技术相比, 可以自动实 现检测和处理,并且由于图像预处理后获得满足设定数目的清晰的图 像, 后续可以根据获得的图像进行标定运算, 由于输入的设定数目的 图像具有特征信息并且清晰, 所以, 可以有效计算出与标定相关的参 数信息, 与现有技术相比, 处理速度快, 节省时间, 并且使后续标定 运算计算出准确的参数信息。
参见图 4, 为本发明特征信息检测实施例流程图。 如图 4所示, 本实施例中特征信息检测的步骤包括:
步骤 021 : 按顺序读取每帧或实时捕获的一帧图像。
步骤 022: 检测该帧图像中的角点信息。
步骤 023: 判断实际检测的角点是否与理论的角点数目相同, 是 则执行后续步骤; 否则舍弃当前检测的帧图像, 继续执行步骤 021 , 读取当前检测图像的下一帧图像。
本实施例以特征信息为角点对特征信息检测的操作进行举例,但 本领域技术人员应当了解, 特征信息可以是图像的颜色特征信息、 亮 度特征信息、 图像的几何特征信息等, 更具体的几何特征信息可以是 边缘或轮廓、 角点、 直线、 圓、 橢圓或矩形等多种特征信息: 如果图 像的模板为棋盘格, 则检测的特征信息可以为角点, 后续步骤 023判 断实际检测的角点是否与棋盘格中实际的角点数一致, 如果一致, 则 说明获得的图像有明显的特征信息, 否则获得的图像不完整, 舍弃; 如果图像的模板为同心圓, 则检测的特征信息可以为圓的个数; 后续 则需要判断实际检测的同心圓个数是否与模板中实际的同心圓数目 一致, 如果一致, 则说明获得的图像有明显的特征信息, 否则获得的 图像不完整, 舍弃。 具体可根据图像进行不同的特征信息检测。 图像 的颜色特征信息和亮度特征信息可以具体化为灰度特征信息、二值化 图像特征信息、 色彩变化特征信息; 图像的几何特征信息可以指常见 的直线、 圓、 圓弧、 橢圓、 双曲线等曲线特征信息、 角点(角点也常 称为特征点或关键点)。 其中所述的角点是指在角点所在的某个大小 的领域内,该角点亮度或色度或梯度敏锐变化,或者是发生剧烈变化, 例如: 白色背景中的一个黑色矩形的顶点就是个角点, 黑白棋盘格中 间的点等。 当然, 最常用的图像是模板, 本领域技术人员应当了解, 图像也可以不是模板, 而是一般的物体, 如杯子放置于桌上所拍摄的 图像, 此时, 检测的特征信息则为杯子的外形对应的一些特征信息, 为方便画图及后续举例说明,本文以棋盘格及同心圓为例进行举例说 明, 但应当理解为并不限于这几种实施方式。
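A minimal sketch of the count check in steps 021 to 023, assuming some detector has already produced a list of candidate feature points (with a checkerboard template this would typically come from a corner detector such as OpenCV's findChessboardCorners, though the text fixes no particular detector):

```python
import numpy as np

def validate_features(detected_pts, expected_count):
    """Step 023: keep a frame only if the number of detected feature points
    equals the theoretical count for the template (e.g. the interior corners
    of the checkerboard, or the number of concentric circles); otherwise the
    captured frame is incomplete and is discarded."""
    if detected_pts is None:          # detector found nothing at all
        return False
    pts = np.asarray(detected_pts, dtype=float)
    # expect an (N, 2) array of (x, y) points with exactly the expected N
    return pts.ndim == 2 and pts.shape[0] == expected_count
```

A frame failing this check is skipped and the next frame is read, exactly as the flow above loops back to step 021.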
上述实施例中,对检测后具有特征信息的帧图像进行图像裁剪包 括:
根据检测后获得的特征信息, 获得该帧图像的特征信息的坐标; 获得特征信息对应的各点横坐标及纵坐标的最大值及最小值,初 步获得裁剪图像;
根据该帧图像的特征信息对应的各点的坐标获得特征信息对应 的各点构成的边界;
将所述各点构成的边界之外、初步获得的裁剪图像之内的区域裁 去;
只根据所述各点构成的边界内部的图像信息, 获得裁剪后的图 像。
对图像进行裁剪的操作可参见图 5及图 6实施例。 参见图 5 , 为 本发明图像裁剪实施例一示意图,本实施例以图像模板为棋盘格举例 说明:
如果图像模板为棋盘格, 则特征信息检测后, 检测到四个角点, 根据检测到的角点特征信息可以对图像进行剪裁,最大程度的保留棋 盘格, 裁剪操作如图 4所示:
①、 获取检测到的棋盘格四个角点 VI至 V4的角点坐标。
②、获得四个角点 X坐标的最大、最小值,分别为 MaxX、 MinX, 如图 4所示, 最大值及最小值分别为 V2和 V4的横坐标。
获得四个角点 γ坐标的最大、 最小值, 分别为 MaxY、 MinY, 如图 4所示, 最大值及最小值分别为 VI和 V3的纵坐标。
③、根据步骤②可初步获得裁剪图像, 如图 4所示最外围的矩形 为初步裁剪图像。
④、 根据四个角点 VI、 V2、 V3、 V4 , 建立四条边 L1,L2,L3,L4 的坐标方程, 四条边构成矩形 V1V2V3V4。
⑤、根据步骤④获得的四条直线边的方程, 把步骤②初步裁剪图 像之内、 四条边构成的矩形边界之外的图像点统统都设置为黑。
⑥、 获得只保留四条边内部图像信息的最终的裁剪图像。
具体可参见图 8及图 9中的裁剪后及裁剪前的图像示意图,从图
9可看出, 裁剪前图像中的棋盘格占图像的有效百分比较小, 裁剪后 的图像图 8只保留了包含所有特征信息的棋盘格,大大增加了特征信 息占图像的百分比。
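The six cropping steps above (bounding box from the four corner coordinates, then blacking out everything inside the box but outside the quadrilateral V1V2V3V4) can be sketched as follows. The corner ordering and the point-in-quadrilateral test via cross products against the centroid are assumptions; the text only states that the four edge equations are built and exterior points set to black.

```python
import numpy as np

def crop_to_quad(img, corners):
    """Crop `img` to the bounding box of four detected corners and set every
    pixel outside the quadrilateral V1V2V3V4 to black (steps 1-6).
    `corners` is a 4x2 array of (x, y) points in boundary order."""
    c = np.asarray(corners, dtype=float)
    x0, x1 = int(np.floor(c[:, 0].min())), int(np.ceil(c[:, 0].max()))
    y0, y1 = int(np.floor(c[:, 1].min())), int(np.ceil(c[:, 1].max()))
    out = np.array(img[y0:y1 + 1, x0:x1 + 1], copy=True)  # steps 2-3: bbox crop
    ys, xs = np.mgrid[y0:y1 + 1, x0:x1 + 1]  # pixel grid in original coords
    cx, cy = c.mean(axis=0)                  # centroid, known to be interior
    inside = np.ones(out.shape[:2], dtype=bool)
    for i in range(4):                       # step 4: edges L1..L4
        ax, ay = c[i]
        bx, by = c[(i + 1) % 4]
        cross = (bx - ax) * (ys - ay) - (by - ay) * (xs - ax)
        ref = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        # keep pixels on the same side of each edge as the centroid
        inside &= (cross * ref) >= 0
    out[~inside] = 0                         # step 5: exterior points to black
    return out                               # step 6: final cropped image
```

For an axis-aligned checkerboard the quadrilateral equals the bounding box and nothing is blacked out; for a tilted template only the interior survives.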
参见图 6, 为本发明图像裁剪实施例二示意图。 本实施例以图像 模板为同心圓举例说明:
如果图像模板为同心圓, 则特征信息检测后,检测到如图 6所示 的四个同心圓, 根据检测到的同心圓特征信息可以对图像进行剪裁, 最大程度的保留图像模板, 裁剪操作如图 6所示:
①、 获取检测到的四个同心圓中最外层同心圓的圓周边界的坐 标。
②、 获得最外层同心圓的圓周边界 X 坐标的最大、 最小值, 分 别为 MaxX、 MinX,如图 6所示,最大值及最小值分别为 V24和 V22 的横坐标。
获得最外层同心圓的圓周边界 Y 坐标的最大、 最小值, 分别为 MaxY、 MinY, 如图 6所示, 最大值及最小值分别为 V21和 V23的 纵坐标。
③、根据步骤②可初步获得裁剪图像, 如图 5所示的矩形为初步 裁剪图像。
④、根据四个点 VI、 V2、 V3、 V4,建立四条圓弧 L21,L22,L23,L24 , 四条圓弧构成一个圓。
⑤、 根据步骤④获得的圓的边界, 把步骤②初步裁剪图像之内、 圓边界之外的图像点统统都设置为黑。
⑥、 获得只保留四个同心圓图像信息的最终的裁剪图像。
本领域技术人员应当了解, 对于不同的特征信息, 对应的图像裁 剪操作也不同, 特征信息可以为角点、 直线、 圓、 橢圓、 矩形等, 对 应的构建边界及图像裁剪也略有不同, 在此不再举例说明。
参见图 7, 为本发明图像处理方法实施例四流程图。 本实施例为 较为详细的实施例流程图, 仍以棋盘格图像模板为例, 如图 7所示, 本实施例包括:
步骤 01,: 获取带棋盘格的视频文件。
本步骤具体包括:
首先, 打开摄像机, 准备拍摄;
其次, 一人拿棋盘格站在摄像机前晃动棋盘格模板;
最后, 获取到带棋盘格的视频文件。
获取图像还可以是实时检测到的图像, 如步骤 01。
步骤 01:通过摄像机实时捕获晃动带模板的图像所得到的一系列 帧图像。
本步骤具体包括:
首先, 打开摄像机, 准备拍摄;
其次, 一人晃动棋盘格站在摄像机前;
再次, 摄像机实时捕获帧图, 检测处理;
最后, 获取到一系列帧图像。
其中, 步骤 01, 是已经拍摄好的视频文件, 作为一个类似于 avi 文件格式存在计算机硬盘中, 其中该视频文件包含多帧图像; 步骤 01是实时拍摄获取到连续的帧图 (一幅图像)并存在计算机緩存中。 当然, 还可以通过其它方式获得图像, 如一个人可以操作摄像机动, 棋盘格固定; 或者不需要棋盘格, 直接拍摄一幅图像。 但由于在摄像 机拍摄过程中会有很多人为因素、摄像机抖动、 不小心干扰摄像机等 因素, 可能导致摄像机拍摄的图像不清晰, 以至于不适合用来作为标 定的图像。
在获得一帧或多帧 (幅) 图像后, 进行后续步骤, 包括: 步骤 021 : 读取每帧图像或实时捕获的图像。
步骤 022: 检测有无角点特征信息。 步骤 023: 判断检测到的角点数是否等于实际的角点数, 是则执 行步骤 03; 否则执行步骤 021。
步骤 03: 对图像进行裁剪, 即如图 7所示的步骤 03, :根据检测 的四个顶点位置的棋盘格裁剪图像, 获得有明显特征信息, 与模板的 特征信息数目相同的图像, 具体可参见图 5及图 6实施例的说明。
步骤 041: 计算裁剪图像的梯度。
步骤 042:计算图像是否清晰,是则执行步骤 05,进行差异检测; 否则执行步骤 021 , 继续读取下一图像。
步骤 041及步骤 042为判断图像是否清晰的步骤,主要根据帧图 像特征信息中边缘信息的高频分量的含量判断该帧图像是否清晰或 聚焦, 下面对图像清晰检测的操作进行详细说明:
A、 根据本实施例步骤 03 获得裁剪后的图像, 在图像清晰检测 时需要把裁剪后的图像(如图 8 )灰度化, 并求裁剪后图像的每个点 的梯度值 (如对图像做 Laplace变换 ),可以输出一个只有边缘信息的 灰度图, 如图 10所示。
B、 求图像梯度值的绝对值; 由于有时候所求的梯度值并不是一 个正数, 而灰度图像的每个象素点的值是 0到 255之间的一个数值, 因此, 需要求梯度值的绝对值, 转换成正数, 进一步转换成灰度图形 式表示。
C、 统计清晰图像对应的灰度图中边缘信息的白色线条的总的像 素和, 或将像素和除以该裁剪图像的大小(就是图像的总象素数目 ), 将白色像素点的总和或占图像的百分比设置为阈值 T1作为清晰度标 准。
D、 阈值放大。 从图 10 可以看出, 每个白色小方格中都有些噪 声点, 因此可以去噪处理, 或者提高阈值 T1到一定倍数, 对于不同 场景应用放大的倍数可以不同。 如果去噪效果很好, 基本不用放大。 如果去噪效果不好, 噪声点多, 需要放大的倍数在 2倍左右。
E、 对裁剪后的帧图像进行二值化, 获得二值化后的图像中所有 边缘信息的白色象素点的总和或边缘信息占图像的百分比。 F、 判断二值化后的图像中所有边缘信息的白色象素点的总和或 边缘信息占图像的百分比与清晰度标准阈值 T1的大小, 如果二值化 后的边缘信息的白色象素点的总和或边缘信息占图像的百分比大于 阈值 T1 , 则图像模糊, 舍弃该图像, 否则判断该图像清晰, 保留。
由图 10及图 11可以看出灰度图中的边缘信息(也就是求出的梯度信息图的显示)。 图 10是清晰图像对应的灰度图, 图 11是模糊的图像对应的灰度图。可以看出,清晰图像对应的灰度图中的线条很细, 而图 11中的线条很粗; 如果步骤 F计算出的白色像素点的总和大于预设的阈值, 则图像中白色像素点较多, 如图 11所示, 线条比较粗, 对应的图像属于模糊的图像, 需要舍弃, 不进行后续检测。
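Steps A to F can be sketched as follows. The 4-neighbour Laplacian kernel and the use of a pixel fraction rather than an absolute pixel count are assumptions within the description's scheme: a blurred image has thick edge bands, so a white-pixel share above the standard T1 marks the frame as blurry.

```python
import numpy as np

def edge_fraction(gray, bin_thresh=20):
    """Steps A-E: Laplacian gradient map, absolute value, binarization, and
    the share of 'white' (edge) pixels in the resulting edge map."""
    g = np.asarray(gray, dtype=float)
    lap = np.zeros_like(g)
    # 4-neighbour Laplacian on interior pixels, a minimal stand-in for the
    # Laplace transform mentioned in the description
    lap[1:-1, 1:-1] = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2]
                       + g[1:-1, 2:] - 4.0 * g[1:-1, 1:-1])
    edges = np.abs(lap) > bin_thresh      # step B abs + step E binarization
    return edges.mean()                   # white pixels as a share of the image

def is_sharp(gray, t1):
    """Step F: keep the image only when its white-pixel share stays within
    the sharpness standard T1; above T1 the edges are thick and blurred."""
    return edge_fraction(gray) <= t1
```

On a flat region the Laplacian vanishes and the share is zero, while a checkerboard produces a nonzero share concentrated at the square boundaries.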
判断图像是否清晰具体也可如图 7中步骤 042, 所示, 将最清晰 图像模板对应的特征信息(如边缘或轮廓信息)占裁剪图像百分比作 为预设的清晰度标准,如果实际获得的裁剪后的图像超过设定的清晰 度标准则表示图像模糊;也可以将裁剪后实际获得的图像中特征信息 的白色像素点总数与设定的清晰度标准比较,如果实际的白色像素点 总和超过设定的清晰度标准(白色像素点之和 )则表示图像模糊, 对 应的线条较粗, 如图 11所示。 在判断图像清晰之后, 进行下列步骤: 步骤 05: 判断前后帧图差异是否大于预设的阈值, 是则执行步 骤 06; 否则继续执行步骤 021。
如图 1至图 3实施例的说明可知,前后帧图差异判断可以在判断 图像是否清晰之后, 也可以在获得图像之后或同时做前后帧差异检 测, 不限于本实施例的具体顺序, 差异检测时一般设定一个合适的阈 值, 判断上一幅图像与当前图像是否差异过大, 若差异大则保留前后 帧图, 做后继检测、 判定; 否则舍弃其中一幅帧图。 前后帧图差异主 要是防止出现检测的图像一直处于静止不动无变化的状态。在本发明 实施例中, 判断前后帧图差异是否大于预设的阈值包括:
1.获得前、 后两帧图像的角点信息;
2.分别计算前、 后帧图对应位置角点坐标的差值;
3.所有对应角点坐标差值求和、求均值,设定该均值的一个阈值, 若超过该阈值则说明前、 后帧图差异大, 两幅图均保留; 否则, 舍弃 其中一幅。
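A minimal sketch of this three-step difference check; averaging the per-corner Euclidean displacement is one concrete reading of "sum and average the coordinate differences" (an assumption, since the text does not fix the distance measure):

```python
import numpy as np

def frames_differ(corners_prev, corners_cur, thresh):
    """Mean displacement of corresponding feature points between the previous
    and current frame. Both frames are kept only when the mean exceeds
    `thresh`; otherwise one of the two is discarded as a near-duplicate."""
    a = np.asarray(corners_prev, dtype=float)
    b = np.asarray(corners_cur, dtype=float)
    mean_disp = np.linalg.norm(a - b, axis=1).mean()  # steps 2-3
    return mean_disp > thresh
```

This guards against a static scene: identical corner sets give zero mean displacement and only one frame is retained.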
步骤 06: 获得清晰的标定图像序列, 即多幅满足条件 (特征信 息、 清晰、 裁剪后) 的图像。
步骤 07: 判断图像序列是否数目大于 3 , 是则执行步骤 08; 否 则执行步骤 01或 01, ,继续获取图像; 如果使用二维或三维模板进行 摄像机标定法,则判断最终获得的有效标定帧图是否满足标定图像数 目的要求, 本实施例设置标定图像数目为 3。
步骤 08: 根据获得的清晰的 3幅图像及相应的特征信息, 对视 频获取设备进行标定。
步骤 09: 获得视频获取设备内外参数。
本实施例图像清晰判定是根据傅里叶光学理论: 图像清晰或聚焦 的程度主要由光强分布中高频分量的多少决定,高频分量少则图像模 糊, 高频分量丰富则图像清晰。 本实施例主要用图像光强分布的高频 分量的含量多少作为图像清晰或聚焦评价函数的主要依据。
由于图像存在边缘部分, 当完全聚焦时, 图像清晰, 包含边缘信 息的高频分量最多; 当离焦时, 图像模糊, 高频分量较少, 进而可通 过图像边缘信息的高频分量的多少来判断图像是否清晰或聚焦。
本实施例清晰度判断与现有的图像清晰或自动聚焦判定方法不 同。 现有的图像清晰或自动聚焦一般是对同一场景拍摄一系列图像, 从中找到最清晰的图像, 来自动调整摄像机焦距, 这种判断图像是否 清晰或自动聚焦的依据是对一系列图像进行相互参考,从中选取最清 晰的图像作为焦距是否调整到理想位置的依据。本实施例在视频标定 前期图像处理过程中是单独判断一幅图像是否清晰,或者判断图像中 具有明显特征信息的部分是否清晰,图像其他部分是否清晰则不需关 心。 同时, 本实施例图像清晰的判定不依据一系列图像, 而仅仅根据 单幅图像自身来判定是否清晰。
参见图 12, 为本发明图像处理系统实施例一示意图。 如图 12所 示, 本实施例包括: 图像获取模块 1 , 用于获取视频图像或实时捕获一系列帧图像。 特征信息检测模块 2, 与图像获取模块 1连接, 用于对获取的视 频图像或一系列帧图像中的帧图像进行特征信息检测,获得具有明显 特征信息的帧图像。
图像清晰判定模块 40,与特征信息检测模块 2及图像获取模块 1 连接, 用于对具有特征信息的帧图像进行与清晰度相应的处理, 并与 预设的清晰度标准进行比较, 保留清晰的图像。
图像差异检测模块 5 , 与图像清晰判定模块 40连接, 用于在当 前的帧图像与前一帧保存的图像之间的差异大于预设阈值时,保存当 前清晰的图像。
参见图 13 , 为本发明图像处理系统实施例二示意图。 如图 13所 示, 本实施例包括:
图像获取模块 1 , 用于获取视频图像或实时捕获一系列帧图像。 特征信息检测模块 2, 与图像获取模块 1连接, 用于对获取的视 频图像或一系列帧图像中的帧图像进行特征信息检测,获得具有明显 特征信息的帧图像。
图像裁剪模块 3 , 与特征信息检测模块 2连接, 用于对具有特征 信息的帧图像进行图像裁剪。
图像清晰判定模块 4,与图像裁剪模块 3及图像获取模 1块连接, 用于对裁剪后的帧图像进行与清晰度相应的处理,并与预设的清晰度 标准进行比较, 保留清晰的图像。
图像差异检测模块 5 , 与图像清晰判定模块 4连接, 用于判断当 前的帧图像与前一帧保存的图像之间的差异是否小于预设阈值,是则 保留其中一帧图像; 否则保存当前帧图像。
图像数目判定模块 6, 与图像差异检测模块 5及图像获取模块 1 连接, 用于对保存的帧图像数目检测, 在保存的帧图像数目未达到设 定数目时, 使图像获取模块执行选取下一帧图像进行后续操作。
本实施例还可以包括:标定模块 7,与图像数目判定模块 6连接, 用于根据获得的设定数目的清晰的帧图像及相应的特征信息,获得视 频获取设备内外参数, 对视频获取设备进行标定。
图 14为本发明图像处理系统实施例三示意图。 如图 14所示, 本 实施例包括:
图像获取模块 1,用于获取视频图像或实时捕获一系列帧图像。 图像差异检测模块 5 , 与图像获取模块 1连接, 用于在前后两帧 图像之间的差异大于预设阈值时, 保留前后两帧图像。
特征信息检测模块 2, 与图像获取模块 1连接, 用于对获取的视 频图像或一系列帧图像中的帧图像进行特征信息检测,获得具有明显 特征信息的帧图像。
图像清晰判定模块 4, 与特征信息检测模块 2及图像获取模块 1 连接, 用于对具有特征信息的帧图像进行与清晰度相应的处理, 并与 预设的清晰度标准进行比较,在当前帧图像不清晰时,舍弃当前图像, 使图像获取模块执行选取下一帧图像的操作。
图 15为本发明图像处理系统实施例四示意图。 如图 15所示, 本 实施例包括:
图像获取模块 1 , 用于获取视频图像或实时捕获一系列帧图像。 图像差异检测模块 5, 与图像获取模块 1连接, 用于在前后两帧 图像之间的差异大于预设阈值时, 保留前后两帧图像。
特征信息检测模块 2, 与图像获取模块 1连接, 用于对获取的视 频图像或一系列帧图像中的帧图像进行特征信息检测,获得具有明显 特征信息的帧图像。
图像裁剪模块 3, 与特征信息检测模块 2连接, 用于对具有特征 信息的帧图像进行图像裁剪。
图像清晰判定模块 4,与图像裁剪模块 3及图像获取模块 1连接, 用于对裁剪后的帧图像进行与清晰度相应的处理,并与预设的清晰度 标准进行比较, 保留清晰的图像。
图像数目判定模块 6 , 与图像清晰模块及图像获取模块 1连接, 用于检测保存的帧图像数目, 在保存的帧图像数目未达到设定数目 时, 使图像获取模块选取下一帧图像进行后续操作。 本实施例还可以包括:标定模块 7,与图像数目判定模块 6连接, 用于根据获得的设定数目的清晰的帧图像及相应的特征信息,获得视 频获取设备内外参数, 对视频获取设备进行标定。
图 12至图 15实施例可参见图 1至图 7方法实施例的相关说明, 具有图 1至图 7实施例相类似的功能及效果, 完成图像预处理,便于 后续标定操作, 具体不再举例说明。
本发明能有多种不同形式的具体实施方式, 上面以图 1至图 15 为例结合附图对本发明的技术方案作举例说明,这并不意味着本发明 所应用的具体实例只能局限在特定的流程或实施例结构中,本领域的 普通技术人员应当了解,上文所提供的具体实施方案只是多种优选用 法中的一些示例。
本领域普通技术人员可以理解: 实现上述方法实施例的全部或部 分步骤可以通过程序指令相关的硬件来完成,前述的程序可以存储于 一计算机可读取存储介质中, 该程序在执行时, 执行包括上述方法实 施例的步骤; 而前述的存储介质包括: ROM、 RAM, 磁碟或者光盘 等各种可以存储程序代码的介质。
最后应说明的是: 以上实施例仅用以说明本发明的技术方案, 而 非对其限制; 尽管参照前述实施例对本发明进行了详细的说明, 本领 域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技 术方案进行修改, 或者对其中部分技术特征进行等同替换; 而这些修 改或者替换,并不使相应技术方案的本质脱离本发明各实施例技术方 案的精神和范围。

Claims

权利要求
1.一种图像处理方法, 其特征在于, 包括:
获取视频图像或实时捕获一系列帧图像;
对前后两帧图像之间的差异进行检测,在两帧图像之间的差异小 于预设阈值时, 保留其中一帧图像;
对获取的帧图像进行特征信息检测;
对检测后具有特征信息的帧图像进行与清晰度相应的处理,并与 预设的清晰度标准进行比较, 保留清晰的图像。
2.根据权利要求 1所述的图像处理方法, 其特征在于, 所述对检 测后具有特征信息的帧图像进行与清晰度相应的处理之前, 还包括: 对检测后具有特征信息的图像进行图像裁剪,增大检测到的特征 信息占所述帧图像的百分比。
3.根据权利要求 1所述的图像处理方法, 其特征在于, 所述保留 清晰的图像之后, 还包括:
在保留的图像未达到设定的数目时,继续执行获取视频图像或实 时捕获一系列帧图像的步骤。
4.根据权利要求 1至 3中任一项所述的图像处理方法,其特征在 于, 还包括:
根据获得的清晰的图像及对应的特征信息, 执行摄像机标定, 获 得摄像机的内外参数。
5.根据权利要求 1所述的图像处理方法, 其特征在于, 所述对获 取的帧图像进行特征信息检测, 包括: 对帧图像的颜色特征信息、 亮度特征信息和 /或几何特征信息进 行检测;
所述颜色特征信息和亮度特征信息包括灰度特征信息、二值化图 像特征信息、 色彩变化特征信息;所述几何特征信息包括边缘、轮廓、 角点、 直线、 圓、 圓弧、 橢圓、 矩形、 或曲线特征信息, 或其上述任 意组合。
6.根据权利要求 5所述的图像处理方法, 其特征在于, 所述对获 取的帧图像进行特征信息检测, 包括:
读取每帧或实时捕获的一帧图像;
检测每帧图像的特征信息;
判断实际检测的特征信息是否与理论的特征信息数目相同,是则 执行后续操作; 否则舍弃当前检测的帧图像,检测当前检测图像的下 一帧图像。
7.根据权利要求 1所述的图像处理方法, 其特征在于, 所述获取 视频图像或实时捕获一系列帧图像, 包括:
通过视频获取设备获取晃动带模板的图像所得到的视频图像; 或通过视频获取设备实时捕获晃动带模板的图像所得到的一系 列帧图像。
8.根据权利要求 2所述的图像处理方法, 其特征在于, 所述对检 测后具有特征信息的图像进行图像裁剪, 包括:
根据检测后获得的特征信息, 获得该帧图像的特征信息的坐标; 获得特征信息对应的各点横坐标及纵坐标的最大值及最小值,初 步获得裁剪图像;
根据该帧图像的特征信息对应的各点的坐标获得特征信息对应 的各点构成的边界;
将所述各点构成的边界之外、初步获得的裁剪图像之内的区域裁 去;
根据所述各点构成的边界内部的图像信息, 获得裁剪后的图像。
9.根据权利要求 1所述的图像处理方法, 其特征在于, 所述对前 后两帧图像之间的差异进行检测, 包括:
获得前后两帧图像的特征点信息;
分别计算所述两帧图像对应位置的特征点坐标差值;
计算所有对应位置的特征点坐标差值, 并求平均值;
判断所述差值之和平均值是否小于预设阈值,是则前后两帧图像 之间的差异小, 舍弃其中一帧图像; 否则进行后续操作。
10. 根据权利要求 1、 2、 3、 5、 6、 7、 8或 9中任一项所述的 图像处理方法, 其特征在于, 所述对检测后具有特征信息的帧图像进 行与清晰度相应的处理, 并与预设的清晰度标准进行比较, 包括: 对检测后具有特征信息的帧图像计算所述特征信息中边缘信息 的高频分量的含量, 当帧图像特征信息中边缘信息的高频分量的含量 大于预设的清晰度标准对应的高频分量的含量时,判断该帧图像清晰 或聚焦。
11. 根据权利要求 4所述的图像处理方法, 其特征在于, 所述 对检测后具有特征信息的帧图像进行与清晰度相应的处理,并与预设 的清晰度标准进行比较, 包括:
对检测后具有特征信息的帧图像计算所述特征信息中边缘信息 的高频分量的含量, 当帧图像特征信息中边缘信息的高频分量的含量 大于预设的清晰度标准对应的高频分量的含量时,判断该帧图像清晰 或聚焦。
12. 根据权利要求 2或 8所述的图像处理方法, 其特征在于, 所述对检测后具有特征信息的帧图像进行与清晰度相应的处理,并与 预设的清晰度标准进行比较, 保留清晰的图像, 包括:
对裁剪后的帧图像灰度化, 并计算灰度化后帧图像的梯度值, 输 出一个只有边缘信息的灰度图;
根据所述灰度图设定对应的灰度图阈值作为清晰度标准; 对裁剪后的帧图像进行二值化,获得二值化后的帧图像对应的二 值化阈值;
当二值化后的帧图像对应的二值化阈值小于所述灰度图阈值时, 判定该图像清晰, 保留该帧图像。
13. 根据权利要求 2或 8所述的图像处理方法, 其特征在于, 所述对检测后具有特征信息的帧图像进行与清晰度相应的处理,并与 预设的清晰度标准进行比较, 保留清晰的图像, 包括:
对裁剪后的帧图像灰度化, 并计算灰度化后帧图像的梯度值, 输 出一个只有边缘信息的灰度图;
将所述灰度图边缘信息的白色像素点的总和或占图像的百分比 作为清晰度标准; 对裁剪后的帧图像进行二值化,获得二值化后的图像中所有边缘 信息的白色象素点的总和或边缘信息占图像的百分比;
当二值化后的图像中所有边缘信息的白色像素点的总和或边缘 信息占图像的百分比小于所述清晰度标准时, 判定该图像清晰, 保留 该帧图像。
14. 一种图像处理系统, 其特征在于, 包括:
图像获取模块, 用于获取视频图像或实时捕获一系列帧图像; 特征信息检测模块, 与所述图像获取模块连接, 用于对获取的视 频图像或一系列帧图像中的帧图像进行特征信息检测,获得具有明显 特征信息的帧图像;
图像清晰判定模块,与所述特征信息检测模块及所述图像获取模 块连接, 用于对具有特征信息的帧图像进行与清晰度相应的处理, 并 与预设的清晰度标准进行比较, 保留清晰的图像;
图像差异检测模块, 与所述图像清晰判定模块连接, 用于在当前 的帧图像与前一帧保存的图像之间的差异大于预设阈值时,保存当前 清晰的帧图像。
15. 根据权利要求 14所述的图像处理系统, 其特征在于, 还包 括:
图像裁剪模块,与所述特征信息检测模块及所述图像清晰判断模 块连接, 用于对具有特征信息的帧图像进行图像裁剪。
16. 根据权利要求 14所述的图像处理系统, 其特征在于, 还包 括: 图像数目判定模块,与所述图像差异检测模块及所述图像获取模 块连接, 用于检测保存的帧图像数目, 在所述保存的帧图像数目未达 到设定数目时, 使所述图像获取模块选取下一帧图像进行后续操作。
17. 根据权利要求 16所述的图像处理系统, 其特征在于, 还包 括:
标定模块,与所述图像数目判定模块或所述图像差异检测模块连 接, 用于根据获得的清晰的帧图像及相应的特征信息, 获得视频获取 设备内外参数, 对视频获取设备进行标定。
18. 一种图像处理系统, 其特征在于, 包括:
图像获取模块, 用于获取视频图像或实时捕获一系列帧图像; 图像差异检测模块, 与所述图像获取模块连接, 用于在前后两帧 图像之间的差异大于预设阈值时, 保留前后两帧图像;
特征信息检测模块, 与所述图像获取模块连接, 用于对获取的视 频图像或一系列帧图像中的帧图像进行特征信息检测,获得具有明显 特征信息的帧图像;
图像清晰判定模块,与所述特征信息检测模块及所述图像获取模 块连接, 用于对具有特征信息的帧图像进行与清晰度相应的处理, 并 与预设的清晰度标准进行比较, 在当前帧图像不清晰时, 舍弃当前图 像, 使所述图像获取模块执行选取下一帧图像的操作。
19. 根据权利要求 18所述的图像处理系统, 其特征在于, 还包 括:
图像裁剪模块,与所述特征信息检测模块及所述图像清晰判定模 块连接, 用于对具有特征信息的帧图像进行图像裁剪。
20. 根据权利要求 18所述的图像处理系统, 其特征在于, 还包 括:
图像数目判定模块,与所述图像清晰判定模块及图像获取模块连 接, 用于检测保存的帧图像数目, 在保存的帧图像数目未达到设定数 目时, 使所述图像获取模块选取下一帧图像进行后续操作。
21. 根据权利要求 20所述的图像处理系统, 其特征在于, 还包 括:
标定模块,与所述图像数目判定模块或所述图像清晰判定模块连 接, 用于根据获得的清晰的帧图像及相应的特征信息, 获得视频获取 设备内外参数, 对视频获取设备进行标定。
PCT/CN2009/070583 2008-03-05 2009-02-27 图像处理方法及系统 WO2009109125A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP09718449A EP2252088A4 (en) 2008-03-05 2009-02-27 IMAGE PROCESSING AND SYSTEM
US12/860,339 US8416314B2 (en) 2008-03-05 2010-08-20 Method and system for processing images

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN200810082736.2A CN101527040B (zh) 2008-03-05 2008-03-05 图像处理方法及系统
CN200810082736.2 2008-03-05

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/860,339 Continuation US8416314B2 (en) 2008-03-05 2010-08-20 Method and system for processing images

Publications (1)

Publication Number Publication Date
WO2009109125A1 true WO2009109125A1 (zh) 2009-09-11

Family

ID=41055550

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2009/070583 WO2009109125A1 (zh) 2008-03-05 2009-02-27 图像处理方法及系统

Country Status (4)

Country Link
US (1) US8416314B2 (zh)
EP (1) EP2252088A4 (zh)
CN (1) CN101527040B (zh)
WO (1) WO2009109125A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109800619A (zh) * 2017-11-16 2019-05-24 湖南生物机电职业技术学院 成熟期柑橘果实图像识别方法

Families Citing this family (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101527040B (zh) 2008-03-05 2012-12-19 华为终端有限公司 图像处理方法及系统
CN102098379A (zh) * 2010-12-17 2011-06-15 惠州Tcl移动通信有限公司 一种终端及其实时视频图像获取方法和装置
CN102098411B (zh) * 2010-12-20 2012-11-14 东莞市金翔电器设备有限公司 可实现复合传感的高速扫描方法
KR101366860B1 (ko) * 2011-09-20 2014-02-21 엘지전자 주식회사 이동 로봇 및 이의 제어 방법
CN103093451A (zh) * 2011-11-03 2013-05-08 北京理工大学 一种棋盘格交叉点识别算法
CN104427333A (zh) * 2013-08-20 2015-03-18 北京市博汇科技股份有限公司 一种高清电视信号检测方法及系统
CN103927749A (zh) * 2014-04-14 2014-07-16 深圳市华星光电技术有限公司 图像处理方法、装置和自动光学检测机
US9646224B2 (en) 2014-04-14 2017-05-09 Shenzhen China Star Optoelectronics Technology Co., Ltd. Image processing method, image processing device and automated optical inspection machine
CN103927750B (zh) * 2014-04-18 2016-09-14 上海理工大学 棋盘格图像角点亚像素的检测方法
CN104936002B (zh) * 2015-06-05 2018-11-06 网易有道信息技术(北京)有限公司 一种屏幕录制的方法和装置
CN107026827B (zh) * 2016-02-02 2020-04-03 上海交通大学 一种用于视频流中静止图像的优化传输方法
CN106231201B (zh) * 2016-08-31 2020-06-05 成都极米科技股份有限公司 自动对焦方法及装置
CN106651908B (zh) * 2016-10-13 2020-03-31 北京科技大学 一种多运动目标跟踪方法
CN106454094A (zh) * 2016-10-19 2017-02-22 广东欧珀移动通信有限公司 拍摄方法、装置以及移动终端
WO2018076370A1 (zh) * 2016-10-31 2018-05-03 华为技术有限公司 一种视频帧的处理方法及设备
CN106789555A (zh) * 2016-11-25 2017-05-31 努比亚技术有限公司 视频数据传输方法及装置
CN106990518A (zh) * 2017-04-17 2017-07-28 深圳大学 一种血涂片自聚焦显微成像方法
WO2019001723A1 (en) * 2017-06-30 2019-01-03 Nokia Solutions And Networks Oy VIDEO IN REAL TIME
CN107506701B (zh) * 2017-08-08 2021-03-05 大连万和海拓文化体育产业有限公司 一种基于视频识别技术的围棋自动记谱方法
CN107358221B (zh) * 2017-08-08 2020-10-09 大连万和海拓文化体育产业有限公司 一种基于视频识别技术的围棋自动记谱的棋盘定位方法
CN107610131B (zh) * 2017-08-25 2020-05-12 百度在线网络技术(北京)有限公司 一种图像裁剪方法和图像裁剪装置
CN107610108B (zh) * 2017-09-04 2019-04-26 腾讯科技(深圳)有限公司 图像处理方法和装置
JP7353747B2 (ja) * 2018-01-12 2023-10-02 キヤノン株式会社 情報処理装置、システム、方法、およびプログラム
CN108596874B (zh) * 2018-03-17 2024-01-05 紫光汇智信息技术有限公司 图像清晰判定方法、装置,以及计算机设备、产品
CN110971811B (zh) * 2018-09-30 2022-11-15 中兴通讯股份有限公司 图像筛选方法、系统、终端及计算机可读存储介质
CN111091526B (zh) * 2018-10-23 2023-06-13 广州弘度信息科技有限公司 一种视频模糊的检测方法和系统
CN109493391A (zh) * 2018-11-30 2019-03-19 Oppo广东移动通信有限公司 摄像头标定方法和装置、电子设备、计算机可读存储介质
CN109598764B (zh) * 2018-11-30 2021-07-09 Oppo广东移动通信有限公司 摄像头标定方法和装置、电子设备、计算机可读存储介质
CN109862390B (zh) * 2019-02-26 2021-06-01 北京融链科技有限公司 媒体流的优化方法和装置、存储介质、处理器
CN110049247B (zh) * 2019-04-28 2021-03-12 Oppo广东移动通信有限公司 图像优选方法、装置、电子设备及可读存储介质
CN110298229B (zh) * 2019-04-29 2022-04-01 星河视效科技(北京)有限公司 视频图像处理方法及装置
CN110517246B (zh) * 2019-08-23 2022-04-08 腾讯科技(深圳)有限公司 一种图像处理方法、装置、电子设备及存储介质
CN110782647A (zh) * 2019-11-06 2020-02-11 重庆神缘智能科技有限公司 基于图像识别的智能抄表系统
CN111161332A (zh) * 2019-12-30 2020-05-15 上海研境医疗科技有限公司 同源病理影像配准预处理方法、装置、设备及存储介质
EP4071713B1 (en) * 2019-12-30 2024-06-19 Huawei Technologies Co., Ltd. Parameter calibration method and apapratus
US11043007B1 (en) * 2020-05-26 2021-06-22 Black Sesame International Holding Limited Dual camera calibration
CN111639708B (zh) * 2020-05-29 2023-05-09 深圳市燕麦科技股份有限公司 图像处理方法、装置、存储介质及设备
CN112351196B (zh) * 2020-09-22 2022-03-11 北京迈格威科技有限公司 图像清晰度的确定方法、图像对焦方法及装置
CN112989113B (zh) * 2021-04-21 2021-08-31 北京金和网络股份有限公司 图片的筛查方法、装置和设备
CN113286079B (zh) * 2021-05-10 2023-04-28 迈克医疗电子有限公司 图像对焦方法、装置、电子设备及可读存储介质
CN114140777A (zh) * 2021-12-06 2022-03-04 南威软件股份有限公司 一种历史建筑风貌保护预警方法、装置及介质

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1645899A (zh) * 2004-01-23 2005-07-27 三洋电机株式会社 图像信号处理装置
CN1949271A (zh) * 2005-10-11 2007-04-18 索尼株式会社 图像处理装置

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2195214B (en) * 1980-12-10 1988-10-26 Emi Ltd Automatic focussing system for an optical system
US6956779B2 (en) * 1999-01-14 2005-10-18 Silicon Storage Technology, Inc. Multistage autozero sensing for a multilevel non-volatile memory integrated circuit system
US6768509B1 (en) * 2000-06-12 2004-07-27 Intel Corporation Method and apparatus for determining points of interest on an image of a camera calibration object
GB2370438A (en) * 2000-12-22 2002-06-26 Hewlett Packard Co Automated image cropping using selected compositional rules.
KR20040065928A (ko) * 2003-01-16 2004-07-23 삼성전자주식회사 동영상 출력기능을 갖는 프린터 및 그 제어방법
CN1567370A (zh) * 2003-06-23 2005-01-19 威视科技股份有限公司 影像处理方法及装置
US8896725B2 (en) * 2007-06-21 2014-11-25 Fotonation Limited Image capture device with contemporaneous reference image capture mechanism
US7454040B2 (en) * 2003-08-29 2008-11-18 Hewlett-Packard Development Company, L.P. Systems and methods of detecting and correcting redeye in an image suitable for embedded applications
JP4324038B2 (ja) * 2004-07-07 2009-09-02 Okiセミコンダクタ株式会社 Yc分離回路
US20060023077A1 (en) * 2004-07-30 2006-02-02 Microsoft Corporation System and method for photo editing
US20060114327A1 (en) * 2004-11-26 2006-06-01 Fuji Photo Film, Co., Ltd. Photo movie creating apparatus and program
CN101419705B (zh) 2007-10-24 2011-01-05 华为终端有限公司 摄像机标定的方法及装置
CN101527040B (zh) 2008-03-05 2012-12-19 华为终端有限公司 图像处理方法及系统
US8300929B2 (en) * 2009-10-07 2012-10-30 Seiko Epson Corporation Automatic red-eye object classification in digital photographic images

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1645899A (zh) * 2004-01-23 2005-07-27 三洋电机株式会社 图像信号处理装置
CN1949271A (zh) * 2005-10-11 2007-04-18 索尼株式会社 图像处理装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2252088A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109800619A (zh) * 2017-11-16 2019-05-24 湖南生物机电职业技术学院 成熟期柑橘果实图像识别方法
CN109800619B (zh) * 2017-11-16 2022-07-01 湖南生物机电职业技术学院 成熟期柑橘果实图像识别方法

Also Published As

Publication number Publication date
CN101527040A (zh) 2009-09-09
US20100315512A1 (en) 2010-12-16
EP2252088A1 (en) 2010-11-17
EP2252088A4 (en) 2011-04-27
CN101527040B (zh) 2012-12-19
US8416314B2 (en) 2013-04-09

Similar Documents

Publication Publication Date Title
WO2009109125A1 (zh) 图像处理方法及系统
KR102291081B1 (ko) 이미지 처리 방법 및 장치, 전자 장치 및 컴퓨터-판독 가능 저장 매체
CN107370958B (zh) 图像虚化处理方法、装置及拍摄终端
KR102474041B1 (ko) 이미지 데이터에서의 키포인트들의 검출
CN108932698B (zh) 图像畸变的校正方法、装置、电子设备和存储介质
WO2019105262A1 (zh) 背景虚化处理方法、装置及设备
WO2019233264A1 (zh) 图像处理方法、计算机可读存储介质和电子设备
US8698916B2 (en) Red-eye filter method and apparatus
WO2019011147A1 (zh) 逆光场景的人脸区域处理方法和装置
US10939035B2 (en) Photograph-capture method, apparatus, terminal, and storage medium
TWI462054B (zh) Estimation Method of Image Vagueness and Evaluation Method of Image Quality
JP2009506688A (ja) 画像分割方法および画像分割システム
US8773731B2 (en) Method for capturing high-quality document images
WO2017206444A1 (zh) 一种成像差异检测方法、装置及计算机存储介质
JP2005309559A (ja) 画像処理方法および装置並びにプログラム
US11836900B2 (en) Image processing apparatus
WO2022089386A1 (zh) 激光图案提取方法、装置、激光测量设备和系统
JP6755787B2 (ja) 画像処理装置、画像処理方法およびプログラム
CN111242074B (zh) 一种基于图像处理的证件照背景替换方法
JP2005309560A (ja) 画像処理方法および装置並びにプログラム
WO2019011110A1 (zh) 逆光场景的人脸区域处理方法和装置
TW201720131A (zh) 影像擷取中進行輔助光源調節的方法及裝置
CN112261292B (zh) 图像获取方法、终端、芯片及存储介质
CN108289170B (zh) 能够检测计量区域的拍照装置、方法及计算机可读介质
US8885971B2 (en) Image processing apparatus, image processing method, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09718449

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2009718449

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE