
CN111354047B - Computer vision-based camera module positioning method and system - Google Patents

Computer vision-based camera module positioning method and system

Info

Publication number
CN111354047B
Authority
CN
China
Prior art keywords
image
module
roi
fitting
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811577406.0A
Other languages
Chinese (zh)
Other versions
CN111354047A (en)
Inventor
孔庆杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingrui Vision Intelligent Technology Shanghai Co ltd
Original Assignee
Jingrui Vision Intelligent Technology Shanghai Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingrui Vision Intelligent Technology Shanghai Co ltd
Priority to CN201811577406.0A
Publication of CN111354047A
Application granted
Publication of CN111354047B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras
    • G06T 7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20036 Morphological image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30204 Marker
    • G06T 2207/30208 Marker matrix

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a computer-vision-based camera module positioning method and system. The method comprises the following steps: capturing an image of the camera module with a high-definition camera; preprocessing the image; segmenting the preprocessed image according to the module template parameters to obtain the ROI of each lens; performing a preliminary fit on each lens ROI to obtain the respective circle centers, and mapping them back into the original image; determining the coordinates of the fitted center points, and determining the point error according to the neighboring-point rule; determining a loss function; and traversing in a certain direction according to the loss function, adjusting each coordinate point to the position with the currently obtainable minimum loss. The invention processes the original image of the camera module by computer vision, fits using the shape features of the image, and, in combination with the detection accuracy requirement, adjusts the positions of the fitted points within a certain range, thereby reducing the error.

Description

Computer vision-based camera module positioning method and system
Technical Field
The invention relates to the field of detection and positioning of camera modules, in particular to a camera module positioning method and system based on computer vision.
Background
In the prior art, geometric positioning is performed with traditional computer vision techniques. The specific steps are as follows: first the acquired image is segmented, the ROI of each lens is obtained, and the ROI is fitted to obtain a fitted center point. This approach has the following problem: uncertain noise points and limited pixel resolution can introduce a certain deviation during shape fitting, and this influence is significant in the inspection of high-precision camera modules.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a computer-vision-based camera module positioning method and system for more accurately fitting and positioning a camera module, solving the technical problem of deviation caused by uncertain noise points and limited pixel resolution in the prior art.
The technical scheme for solving this technical problem is as follows: a computer-vision-based camera module positioning method comprises the following steps: capturing an image of the camera module with an acquisition device; preprocessing the image; segmenting the preprocessed image according to the module template parameters to obtain the ROI of each lens; performing a preliminary fit on each lens ROI to obtain the respective circle centers, and mapping the circle centers back into the original image; determining the coordinates of the fitted center points, and determining the point error according to the neighboring-point rule; determining a loss function according to the standard position of the camera module; and traversing according to the loss function and the error determined by the neighboring-point rule, adjusting each coordinate point to the position with the currently obtainable minimum loss.
Wherein the preprocessing of the image includes: graying the acquired image to simplify the information, and then removing noise with Gaussian filtering; performing a threshold binarization operation on the image, achieving pixel-level segmentation of object and background through the thresholding; and filling holes in the image and suppressing noise through morphological opening and closing operations.
The segmenting of the preprocessed image to obtain the ROI of each lens comprises the steps of: performing straight-line fitting on the edges of the camera module in the preprocessed image to obtain the ROI of the camera module; and dividing the obtained module ROI in proportion according to the template parameters of the camera module, obtaining the ROI position of each lens.
The performing of a preliminary fit on each lens ROI to obtain the respective circle centers and mapping them back into the original image includes: extracting the contour of each ROI; performing an ellipse-fitting operation on each extracted contour set to obtain the center, width and height of each ellipse; screening all the ellipses obtained by fitting; and mapping each center coordinate back into the original image to obtain preliminary center coordinates in the global image coordinate system.
The determining of the coordinates of the fitted center points and of the point error according to the neighboring-point rule includes: obtaining the proportion corresponding to the actual lens distance in image pixels according to the resolution of the current image; obtaining the neighboring-point distance according to the obtained image proportion and the detection accuracy requirement; and expanding each original circle center into a set according to the obtained preliminary center coordinates and the neighboring-point distance.
Wherein the loss function expression is Loss = m(1 − abs(sin(A))) + n(L(ABC) − L(standard)), where m and n are freely adjustable constants set according to the currently acquired imaging, L denotes a perimeter, and L(standard) is the perimeter of the figure under the standard template.
In another aspect, an embodiment of the present invention provides a computer-vision-based camera module positioning system, the system including: an image acquisition module for capturing an image of the camera module with an acquisition device; an image preprocessing module for preprocessing the image; an image segmentation module for segmenting the preprocessed image according to the module template parameters to obtain the ROI of each lens; a preliminary fitting module for performing a preliminary fit on each lens ROI to obtain the respective circle centers and mapping them back into the original image; a neighboring-point determining module for determining the coordinates of the fitted center points and determining the point error according to the neighboring-point rule; and a loss optimizing module for determining a loss function, traversing according to the loss function and the error determined by the neighboring-point rule, and adjusting each coordinate point to the position with the currently obtainable minimum loss.
Wherein the image preprocessing module includes: a denoising module for graying the acquired image to simplify the information and then removing noise with Gaussian filtering; a binarization module for performing a threshold binarization operation on the image, achieving pixel-level segmentation of object and background through the thresholding; and a hole-filling and noise-suppression module for filling holes in the image and suppressing noise through morphological opening and closing operations.
Wherein the image segmentation module includes: a straight-line fitting module for performing straight-line fitting on the edges of the camera module in the preprocessed image to obtain the ROI of the camera module; and a position determining module for dividing the obtained module ROI in proportion according to the template parameters of the camera module and obtaining the ROI position of each lens.
Wherein the preliminary fitting module includes: a contour extraction module for extracting the contour of each lens ROI; an ellipse fitting module for performing an ellipse-fitting operation on each extracted contour set to obtain the center, width and height of each ellipse; a screening module for screening all the ellipses obtained by fitting; and a mapping module for mapping each center coordinate back into the original image to obtain preliminary center coordinates in the global image coordinate system.
Wherein the neighboring-point determining module includes: a pixel determining module for obtaining the proportion corresponding to the actual lens distance in image pixels according to the resolution of the current image; and a neighboring-point distance determining module for obtaining the neighboring-point distance according to the obtained image proportion and the detection accuracy requirement, and expanding each original circle center into a set according to the obtained preliminary center coordinates and the neighboring-point distance.
Wherein the loss function expression of the loss optimizing module is Loss = m(1 − abs(sin(A))) + n(L(ABC) − L(standard)), where m and n are freely adjustable constants set according to the currently acquired imaging, L denotes a perimeter, and L(standard) is the perimeter of the figure under the standard template.
The technical scheme provided by the invention has the following beneficial effects. Aiming at the technical problem of deviation caused by uncertain noise points and limited pixel resolution in the prior art, the invention provides a computer-vision-based camera module positioning method and system. Gaussian filtering smooths the image and removes noise; binarization frees the image from multi-level pixel values, making processing simple and keeping the amount of data to process and compress small; morphological opening and closing operations remove the few white and black spots that arise from slight differences in the lighting conditions of each image, reducing the influence of noise; and by fitting with the shape features of the image, setting a loss function according to the detection accuracy requirement, and adjusting the positions of the fitted points within a certain range to the coordinates of the minimum-loss position, the error is reduced.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flowchart of a method for positioning a camera module based on computer vision according to an embodiment of the present invention;
FIG. 2 is a diagram of a three-shot module according to an embodiment of the present invention;
FIG. 3 is a flow chart of step S200;
FIG. 4 is a flow chart of step S300;
FIG. 5 is a flow chart of step S400;
fig. 6 is a schematic diagram of a center position after rough fitting and positions of three lenses to be measured according to the first embodiment of the present invention;
FIG. 7 is a flow chart of step S500;
FIG. 8 is a schematic diagram illustrating the definition of the location of a neighboring point according to the first embodiment of the present invention;
FIG. 9 is a schematic diagram illustrating a loss function definition according to a first embodiment of the present invention;
FIG. 10 is a schematic view of a traversal range of coordinate point optimization provided by an embodiment of the invention;
fig. 11 is a schematic structural diagram of a camera module positioning system based on computer vision according to a second embodiment of the present invention;
fig. 12 is a schematic diagram of a preprocessing module structure of a camera module positioning system based on computer vision according to a second embodiment of the present invention;
fig. 13 is a schematic diagram of an image segmentation module structure of a camera module positioning system based on computer vision according to a second embodiment of the present invention;
fig. 14 is a schematic diagram of a rough fitting module structure of a camera module positioning system based on computer vision according to a second embodiment of the present invention;
fig. 15 is a schematic diagram of determining a module structure of adjacent points by a camera module positioning system based on computer vision according to a second embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present invention more apparent, the embodiments of the present invention will be described in further detail with reference to the accompanying drawings.
Example 1
Fig. 1 is a flowchart of a method for positioning a camera module based on computer vision according to an embodiment of the present invention, referring to fig. 1, the method includes the following steps:
s100, acquiring an image of a camera module through acquisition equipment; fig. 2 is a three-shot module image acquired by an acquisition device, such as a high-definition camera, according to an embodiment of the present invention;
s200, preprocessing the image;
s300, dividing the preprocessed image according to module template parameters to obtain the ROI of each lens;
s400, performing preliminary fitting on each lens ROI to obtain respective circle centers, and mapping the circle centers back to the original image;
s500, determining the coordinates of the acquired fitted central point, and determining the point error according to the rule of the adjacent points;
s600, determining the composition of a loss function formula according to the standard position of the camera module, traversing according to the loss function and the error determined according to the rule of the adjacent points, and adjusting the coordinate point to be the position with the minimum loss which can be obtained currently.
Wherein the loss function expression is Loss = m(1 − abs(sin(A))) + n(L(ABC) − L(standard)), where m and n are freely adjustable constants set according to the currently acquired imaging, L denotes a perimeter, and L(standard) is the perimeter of the figure under the standard template, whose constraints are: the three lenses form a triangle with fixed angles and fixed distances between the three lens center points.
Referring to fig. 3, step S200 further includes:
s201, performing gray processing on the acquired image, simplifying information, and eliminating noise by using Gaussian filtering;
s202, performing binarization operation on a threshold value of the image, and performing pixel level segmentation on an object and a background in the image through the threshold value operation;
s203, refilling and noise-preventing the image through morphological opening and closing operation.
The graying process converts a color image into a grayscale image. The color of each pixel in a color image is determined by its R, G, B components, each of which can take 255 values, so one pixel can have roughly 16 million (255 × 255 × 255) color variations. A grayscale image is a special color image whose R, G, B components are equal, so one pixel has a variation range of only 255 levels; in digital image processing, images of various formats are therefore generally converted to grayscale first to reduce the amount of subsequent computation. Like a color image, a grayscale image still reflects the global and local distribution and characteristics of the chromaticity and brightness levels of the whole image. The Gaussian filter is a signal filter whose purpose is to smooth the signal: a mathematical model is built and the image data undergo an energy transformation through it; noise belongs to the high-frequency part, and its influence is reduced after Gaussian smoothing.
The binarization operation sets the gray level of each point on the image to 0 or 255, so that the whole image presents an obvious black-and-white effect: a suitable threshold is chosen on the 256-level grayscale image to obtain a binarized image that still reflects the global and local characteristics of the image. Binarizing the grayscale image facilitates further processing: the set properties of the image are then related only to the positions of points whose pixel value is 0 or 255, multi-level pixel values are no longer involved, processing becomes simple, and the amount of data to process and compress is small. To obtain an ideal binary image, a closed, connected boundary is generally used to delimit non-overlapping regions. All pixels whose gray level is greater than or equal to the threshold are judged to belong to the specific object and are given the gray value 255; otherwise the pixel is excluded from the object region and given the gray value 0, indicating the background or an exceptional object region. If a specific object has uniform gray values inside it and lies on a uniform background with other gray levels, thresholding yields a comparatively good segmentation.
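To make the preprocessing concrete, the following is a minimal Python/OpenCV sketch of steps S201–S202 (graying, Gaussian filtering, threshold binarization). The 5×5 kernel and the use of Otsu's method to pick the threshold are illustrative assumptions; the embodiment specifies only Gaussian filtering and a threshold operation.

```python
import cv2

def preprocess(image_bgr):
    """Graying -> Gaussian smoothing -> threshold binarization (S201-S202)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # simplify information
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)          # suppress high-frequency noise
    # Pixel-level object/background segmentation by thresholding; Otsu's
    # automatic threshold is an assumption, the text leaves the choice open.
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return binary
```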
Wherein the morphological opening operation is erosion followed by dilation; it removes fine objects, separates objects at thin connections and smooths the boundaries of larger objects. Its expression is:

A ∘ B = (A ⊖ B) ⊕ B

The morphological closing operation is dilation followed by erosion; it fills tiny holes inside objects, connects adjacent objects and smooths boundaries. Its expression is:

A • B = (A ⊕ B) ⊖ B

where ⊖ is the erosion operation, whose result is the set of displacement elements z such that B displaced by z is still contained in A; ⊕ is the dilation operation, whose result is likewise a set of displacement elements z, such that B̂ displaced by z overlaps A in at least one element; A and B are two sets on the image, and B̂ is the reflection of B about its origin.
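Under the same assumptions, step S203 can be sketched with OpenCV's morphological operators; the 5×5 elliptical structuring element is an illustrative choice.

```python
import cv2

def fill_and_denoise(binary):
    """Morphological opening then closing (S203)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    # Opening, (A erode B) dilate B: removes small white specks and
    # separates objects at thin connections.
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Closing, (A dilate B) erode B: fills tiny holes and connects
    # adjacent objects while smoothing boundaries.
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
    return closed
```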
Referring to fig. 4, step S300 further includes:
s301, performing straight line fitting on the edges of the camera module in the preprocessed image to obtain the ROI of the camera module;
s302, dividing the acquired die set ROI according to the proportion of the die set ROI and the die set according to the die set parameters of the camera die set, and acquiring the ROI position of each lens;
the ROI (region of interest) refers to a region of interest, and the region to be processed is outlined from the processed image in the modes of a square frame, a circle, an ellipse, an irregular polygon and the like, which is called the region of interest, and is the focus of image analysis, and the ROI is used for defining the target, so that the processing time can be reduced, and the accuracy can be increased. An important way of image segmentation is by edge detection, i.e. detecting where the grey level or structure has abrupt changes, indicating the ending of one region and also where another region starts, where the line fit is the least squares method. Such discontinuities are referred to as edges. The different images differ in gray scale, with generally sharp edges at the boundaries, with which the image can be segmented. And fitting the local gray value of the image by using the parameter model of the edge, and then carrying out edge detection on the fitted parameter model.
Referring to fig. 5, step S400 further includes:
s401, extracting the contour of each ROI; because the pretreatment is carried out before, the three ROIs are binary images at the moment, and the contour extraction can be directly carried out on each ROI respectively;
s402, performing ellipse fitting operation on the extracted contour set to obtain the circle center, width and height of each ellipse;
s403, screening all ellipse sets obtained by fitting; the specific method comprises the following steps: if one of the width or height of the ellipse is smaller than a certain range, for example, if the ellipse is too small, filtering the ellipse; if the circle center of the ellipse deviates from the center point of the respective ROI too far, for example, the ellipse fitted by the noise which cannot be completely eliminated in the pretreatment stage is filtered;
s404, mapping each circle center coordinate back to the original image respectively to obtain a circle center preliminary coordinate under the global image coordinate system; because the three circle center coordinates obtained by the preliminary fitting are relative to the respective ROI coordinate system, in order to perform the subsequent accurate fitting operation, the three coordinates need to be mapped back to the original image respectively, and the circle center preliminary coordinates under the three global image coordinate systems are obtained, and the specific mapping method is as follows: and respectively adding the abscissa of each circle center with the abscissa of the origin of the coordinate of the ROI of each circle center relative to the abscissa of the global image, thereby obtaining three global circle center coordinates for subsequent processing.
Referring to fig. 6, fig. 6 is a diagram showing a circle center position and positions of three lenses to be measured after preliminary fitting in step S400 according to the first embodiment of the present invention;
referring to fig. 7, step S500 further includes:
s501, obtaining the corresponding proportion of the actual distance of the lens on the image pixels according to the resolution of the current image;
s502, obtaining the distance between adjacent points according to the acquired image proportion and the detection precision requirement;
s503, expanding the original circle center into a set according to the obtained rough circle center coordinates and the distance between the adjacent points.
The distance between neighboring points is measured by the Euclidean distance, whereby a point is expanded into a point set. The Euclidean distance is the real distance between two points in m-dimensional space, or the natural length of a vector (i.e. the distance from the point to the origin). In two-dimensional space it is expressed as:

d = √((x₁ − x₂)² + (y₁ − y₂)²)

where (x₁, y₁) and (x₂, y₂) are the coordinates of the two points.
FIG. 8 is a schematic diagram illustrating the definition of neighboring-point positions according to the first embodiment of the present invention. As can be seen from fig. 8, when the fitted center point is at the central pixel position in the figure and the neighboring-point distance is 1, the fitted center point becomes a point set of distance 1, growing from 1 pixel to 9 pixels; when the neighboring-point distance is 2, it becomes a point set of distance 2, growing from 1 pixel to 25 pixels.
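A minimal sketch of the expansion described in S503 and Fig. 8. A square neighborhood is assumed, which reproduces the counts in the figure (distance 1 gives 9 points, distance 2 gives 25); d is the neighboring-point distance derived from the pixel scale and the detection accuracy requirement.

```python
def expand_center(center, d):
    """Expand a fitted center into the candidate point set of Fig. 8."""
    cx, cy = center
    return [(cx + dx, cy + dy)
            for dx in range(-d, d + 1)
            for dy in range(-d, d + 1)]
```

For d = 1 this returns 9 candidates and for d = 2 it returns 25, matching the figure.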
FIG. 9 is a schematic diagram illustrating the loss function definition according to the first embodiment of the present invention. Referring to fig. 9, the loss function expression is Loss = m(1 − abs(sin(A))) + n(L(ABC) − L(standard)), where m and n are freely adjustable constants set according to the currently acquired imaging, L denotes a perimeter, and L(standard) is the perimeter of the figure under the standard template, whose constraints are: the three lenses form a triangle with fixed angles and fixed distances between the three lens center points. Because the angle of the figure has a large influence, and the distances between the three lens center points are in turn affected by the triangle's angles, the weight of the angle term in the loss is set to roughly 5 times that of the distance term; the exact formula is adjusted according to the sample under test. The loss function expression involves two judgment conditions: condition 1 judges the angle formed by the two short sides of the triangle formed by the three lenses, and condition 2 judges the lengths of the three sides of that triangle; different function expressions are obtained according to the judgment results.
FIG. 10 is a schematic diagram of the traversal range of coordinate point optimization provided by an embodiment of the invention. Referring to fig. 10, the points within a distance X (a user-defined constant) of each pixel position are scanned and traversed from top to bottom while the change of the loss function is observed; the position with the smallest loss function value finally obtained is the target position to be determined.
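Putting the loss of Fig. 9 and the traversal of Fig. 10 together, a brute-force sketch follows. Several points are assumptions not fixed by the text: the angle A is taken at the vertex between the two short sides via the law of cosines, the perimeter term is taken as an absolute deviation so the loss is bounded below, the standard angle implied by the sin(A) term is 90°, and m = 5n reflects the roughly 5:1 angle-to-distance weighting mentioned above.

```python
import itertools
import math

def triangle_loss(p_a, p_b, p_c, standard_perimeter, m=5.0, n=1.0):
    """Loss = m*(1 - abs(sin(A))) + n*|L(ABC) - L(standard)| (see Fig. 9)."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    ab, ac, bc = dist(p_a, p_b), dist(p_a, p_c), dist(p_b, p_c)
    # Angle at vertex A between the two short sides, via the law of cosines.
    cos_a = (ab ** 2 + ac ** 2 - bc ** 2) / (2 * ab * ac)
    angle_a = math.acos(max(-1.0, min(1.0, cos_a)))
    perimeter = ab + ac + bc
    return m * (1 - abs(math.sin(angle_a))) + n * abs(perimeter - standard_perimeter)

def best_centers(set_a, set_b, set_c, standard_perimeter):
    """Traverse the three candidate sets and keep the minimum-loss triple (S600)."""
    return min(itertools.product(set_a, set_b, set_c),
               key=lambda t: triangle_loss(t[0], t[1], t[2], standard_perimeter))
```

The exhaustive product over three (2d+1)²-point candidate sets stays small for the distances of Fig. 8, and the returned triple is the target position of minimum loss.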
The embodiment of the invention achieves the goal of reducing image error with the computer-vision-based camera module positioning method. Specifically, graying and Gaussian filtering smooth the image and remove noise; binarization frees the image from multi-level pixel values, making processing simple and keeping the amount of data to process and compress small; morphological opening and closing operations remove the few white and black spots that arise from slight differences in the lighting conditions of each image, reducing the influence of noise; and by fitting with the shape features of the image, setting a loss function according to the detection accuracy requirement, and adjusting the positions of the fitted points within a certain range to the coordinates of the minimum-loss position, the error is reduced.
Example two
The embodiment of the invention provides a camera module positioning system based on computer vision, which is suitable for a camera module positioning method based on computer vision, and referring to fig. 11, the system comprises: the image acquisition module 100 is used for acquiring an image of the camera module through acquisition equipment; the image preprocessing module 200 is connected with the image acquisition module 100 and is used for preprocessing the image; the image segmentation module 300 is connected with the image preprocessing module 200 and is used for segmenting the preprocessed image according to the module template parameters to obtain the ROI of each lens; the preliminary fitting module 400 is connected with the image segmentation module 300 and is used for respectively performing preliminary fitting on each lens ROI to obtain respective circle centers and mapping the circle centers back into the original image; the adjacent point determining module 500 is connected with the preliminary fitting module 400 and is used for determining the coordinates of the obtained fitted central point and determining the point error according to the rule of the adjacent points; the optimizing loss module 600 is connected to the adjacent point determining module 500, and is configured to determine a loss function, traverse the error determined according to the rule of the adjacent point according to the loss function, and adjust the coordinate point to be the position where the loss is the smallest.
Fig. 12 is a schematic diagram of the preprocessing module structure of a computer-vision-based camera module positioning system according to the second embodiment of the present invention. Referring to fig. 12, the image preprocessing module 200 includes: a denoising module 201 for graying the acquired image to simplify the information and then removing noise with Gaussian filtering; a binarization module 202 for performing a threshold binarization operation on the image, achieving pixel-level segmentation of object and background through the thresholding; and a hole-filling and noise-suppression module 203 for filling holes in the image and suppressing noise through morphological opening and closing operations.
Fig. 13 is a schematic diagram of the image segmentation module structure of a computer-vision-based camera module positioning system according to the second embodiment of the present invention. Referring to fig. 13, the image segmentation module 300 includes: a straight-line fitting module 301 for performing straight-line fitting on the edges of the camera module in the preprocessed image to obtain the ROI of the camera module; and a position determining module 302 for dividing the obtained module ROI in proportion according to the template parameters of the camera module and obtaining the ROI position of each lens.
Fig. 14 is a schematic diagram of the preliminary fitting module structure of a computer-vision-based camera module positioning system according to the second embodiment of the present invention. As can be seen from fig. 14, the preliminary fitting module 400 includes: a contour extraction module 401 for extracting the contour of each ROI; an ellipse fitting module 402 for performing an ellipse-fitting operation on each extracted contour set to obtain the center, width and height of each ellipse; a screening module 403 for screening all the ellipses obtained by fitting, the specific method being: if the width or height of an ellipse is below a certain range, i.e. the ellipse is too small, it is filtered out, and if the center of an ellipse deviates too far from the center point of its ROI, for example an ellipse fitted to noise that could not be completely removed in the preprocessing stage, it is also filtered out; and a mapping module 404 for mapping each center coordinate back into the original image to obtain the preliminary center coordinates in the global image coordinate system, the specific mapping method being: add to the abscissa and ordinate of each center the abscissa and ordinate of the origin of its ROI in the global image, thereby obtaining three global center coordinates for subsequent processing.
Fig. 15 is a schematic diagram of the neighboring-point determining module structure of a computer-vision-based camera module positioning system according to the second embodiment of the present invention. As can be seen from fig. 15, the neighboring-point determining module 500 includes: a pixel determining module 501 for obtaining the proportion corresponding to the actual lens distance in image pixels according to the resolution of the current image; and a neighboring-point distance determining module 502 for obtaining the neighboring-point distance according to the obtained image proportion and the detection accuracy requirement, and expanding each original circle center into a set according to the obtained preliminary center coordinates and the neighboring-point distance.
Wherein the loss function expression of the loss optimizing module is Loss = m(1 − abs(sin(A))) + n(L(ABC) − L(standard)), where m and n are freely adjustable constants set according to the currently acquired imaging, L denotes a perimeter, and L(standard) is the perimeter of the figure under the standard template.
According to the embodiment of the invention, the computer-vision-based camera module positioning system achieves the goal of reducing image error and further improves the accuracy of the module position. Specifically, the denoising module smooths and filters the image to remove noise; the binarization module frees the image from multi-level pixel values, simplifying processing and keeping the amount of data to process and compress small; the hole-filling and noise-suppression module removes the few white and black spots that arise from slight differences in the lighting conditions of each image, reducing the influence of noise; and the straight-line fitting module fits using the shape features of the image while the loss optimizing module sets a loss function according to the detection accuracy requirement and adjusts the positions of the fitted points within a certain range to the coordinates of the minimum-loss position, thereby reducing the error.
It should be noted that the division into the above functional modules in the system provided by the above embodiment is only an example of how the positioning method may be implemented; in practical applications, the above functions may be allocated to different functional modules as needed, i.e. the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the system and method embodiments provided above belong to the same concept; the specific implementation process is detailed in the method embodiments and is not repeated here.
The foregoing description of the preferred embodiments of the invention is not intended to limit the invention to the precise form disclosed, and any such modifications, equivalents, and alternatives falling within the spirit and scope of the invention are intended to be included within the scope of the invention.

Claims (8)

1. The camera module positioning method based on computer vision is characterized by comprising the following steps:
collecting a camera module image through collecting equipment, wherein the camera module image is a three-camera module image;
preprocessing the image;
dividing the preprocessed image according to the module template parameters to obtain each lens ROI;
performing preliminary fitting on each lens ROI to obtain respective circle centers, and mapping the circle centers back into an original image;
determining the coordinates of the acquired fitted central points, and determining the central point errors according to the rule of adjacent points; and determining the coordinates of the acquired fitted central point, and determining the central point error according to the rule of the adjacent points comprises the following steps: obtaining the corresponding proportion of the actual distance of the lens on the image pixels according to the resolution of the current image; obtaining the distance between adjacent points according to the acquired image proportion and the detection precision requirement; expanding the original circle center into a set according to the obtained circle center preliminary coordinates and the distance between adjacent points;
determining a loss function according to the standard position of the camera module, traversing according to the error according to the loss function, and adjusting a coordinate point to be the position with the minimum loss which can be obtained currently;
wherein the loss function expression is Loss = m(1 − abs(sin(A))) + n(L(ABC) − L(standard)), where m and n are freely adjustable constants set according to the currently acquired imaging; L denotes a perimeter; L(standard) is the perimeter of the figure under the standard template, whose constraints are: the three lenses form a triangle with fixed angles and fixed distances between the three lens center points.
2. The method of claim 1, wherein the preprocessing the image comprises:
the acquired image is subjected to graying treatment, information is simplified, and Gaussian filtering is used for eliminating noise;
performing binarization operation on the threshold value of the image, and performing pixel level segmentation of objects and backgrounds in the image through the threshold value operation;
the image is refilled and noise-protected by morphological opening and closing operations.
3. The method of claim 1, wherein segmenting the preprocessed image according to the module template parameters, the obtaining the ROI for each shot comprises:
performing linear fitting on the edges of the camera module in the preprocessed image to obtain the ROI of the camera module;
and dividing the acquired module ROI in proportion according to the template parameters of the camera module, and acquiring the approximate ROI position of each lens.
4. The method of claim 1, wherein the performing preliminary fitting on each lens ROI to obtain respective circle centers, and mapping back to the original image comprises:
contour extraction is carried out on each ROI;
performing ellipse fitting operation on each extracted contour set to obtain the circle center, width and height of each ellipse;
screening all ellipse sets obtained by fitting;
and mapping each circle center coordinate back to the original image respectively to obtain a circle center preliminary coordinate under the global image coordinate system.
5. A computer vision-based camera module positioning system, the system comprising:
the image acquisition module is used for acquiring an image pickup module image through acquisition equipment, wherein the image pickup module image is a three-camera module image;
the image preprocessing module is used for preprocessing the image;
the image segmentation module is used for segmenting the preprocessed image according to the module template parameters to obtain the ROI of each lens;
the preliminary fitting module is used for respectively carrying out preliminary fitting on each lens ROI to obtain respective circle centers and mapping the circle centers back to the original image;
the adjacent point determining module is used for determining the coordinates of the acquired fitted center points and determining center point errors according to rules of the adjacent points; the determining neighboring point module includes: the pixel determining module is used for obtaining the corresponding proportion of the actual distance of the lens on the image pixels according to the resolution of the current image; the adjacent point distance determining module is used for obtaining the distance of the adjacent point according to the acquired image proportion and the detected precision requirement, and expanding the original circle center into a set according to the acquired circle center preliminary coordinates and the distance of the adjacent point;
the optimizing loss module is used for determining a loss function, traversing according to the error and adjusting a coordinate point to be the position with the minimum loss which can be obtained currently;
wherein the loss function expression is Loss = m(1 − abs(sin(A))) + n(L(ABC) − L(standard)), where m and n are freely adjustable constants set according to the currently acquired imaging; L denotes a perimeter; L(standard) is the perimeter of the figure under the standard template, whose constraints are: the three lenses form a triangle with fixed angles and fixed distances between the three lens center points.
6. The system of claim 5, wherein the image preprocessing module comprises:
the denoising module is used for carrying out graying treatment on the acquired image, simplifying information and then eliminating noise by using Gaussian filtering;
the binarization module is used for carrying out binarization operation on the threshold value of the image, and carrying out pixel level segmentation on objects and backgrounds in the image through the threshold value operation;
and the filling noise-proof module is used for carrying out refilling noise-proof on the image through morphological opening and closing operation.
7. The system of claim 6, wherein the image segmentation module comprises:
the straight line fitting module is used for carrying out straight line fitting on the edges of the image pickup module in the preprocessed image to obtain the ROI of the image pickup module;
the position determining module is used for dividing the acquired die set ROI according to the proportion of the die set ROI and the die set according to the die set parameters of the camera die set, and acquiring the position of each lens ROI.
8. The system of claim 6, wherein the preliminary fitting module comprises:
the contour extraction module is used for extracting the contour of each lens ROI;
the ellipse fitting module is used for performing ellipse fitting operation on each extracted contour set to obtain the circle center, the width and the height of each ellipse;
the screening module is used for screening all the ellipse sets obtained by fitting;
and the mapping module is used for mapping each circle center coordinate back to the original image respectively to obtain circle center preliminary coordinates under the global image coordinate system.
CN201811577406.0A 2018-12-20 2018-12-20 Computer vision-based camera module positioning method and system Active CN111354047B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811577406.0A CN111354047B (en) 2018-12-20 2018-12-20 Computer vision-based camera module positioning method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811577406.0A CN111354047B (en) 2018-12-20 2018-12-20 Computer vision-based camera module positioning method and system

Publications (2)

Publication Number Publication Date
CN111354047A CN111354047A (en) 2020-06-30
CN111354047B (en) 2023-11-07

Family

ID=71195132

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811577406.0A Active CN111354047B (en) 2018-12-20 2018-12-20 Computer vision-based camera module positioning method and system

Country Status (1)

Country Link
CN (1) CN111354047B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111882530B (en) * 2020-07-15 2024-05-14 苏州佳智彩光电科技有限公司 Sub-pixel positioning map generation method, positioning method and device
CN113267502B (en) * 2021-05-11 2022-07-22 江苏大学 Micro-motor friction plate defect detection system and detection method based on machine vision
CN116309799A (en) * 2023-02-10 2023-06-23 四川戎胜兴邦科技股份有限公司 Target visual positioning method, device and system
CN116258838B (en) * 2023-05-15 2023-09-19 青岛环球重工科技有限公司 Intelligent visual guiding method for duct piece mold clamping system
CN117808770B (en) * 2023-12-29 2024-10-08 布劳宁(上海)液压气动有限公司 Check valve surface quality detecting system based on machine vision
CN118275435B (en) * 2024-06-04 2024-09-13 国鲸科技(广东横琴粤澳深度合作区)有限公司 CCD fixed-point positioning micro-channel organic film material deposition system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103033126A (en) * 2011-09-29 2013-04-10 鸿富锦精密工业(深圳)有限公司 Annular object location method and system
CN106204544A * 2016-06-29 2016-12-07 南京中观软件技术有限公司 Method and system for automatically extracting marker point positions and contours in images
CN108332681A * 2018-01-03 2018-07-27 东北大学 Method for determining the cross-sectional profile curve of thin-walled pipes under large plastic bending

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103827917B * 2011-07-25 2017-06-09 科英布拉大学 Method and apparatus for automatic camera calibration using one or more images of a checkerboard pattern

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103033126A (en) * 2011-09-29 2013-04-10 鸿富锦精密工业(深圳)有限公司 Annular object location method and system
CN106204544A * 2016-06-29 2016-12-07 南京中观软件技术有限公司 Method and system for automatically extracting marker point positions and contours in images
CN108332681A * 2018-01-03 2018-07-27 东北大学 Method for determining the cross-sectional profile curve of thin-walled pipes under large plastic bending

Also Published As

Publication number Publication date
CN111354047A (en) 2020-06-30

Similar Documents

Publication Publication Date Title
CN111354047B (en) Computer vision-based camera module positioning method and system
CN110866924B (en) Line structured light center line extraction method and storage medium
CN109978839B (en) Method for detecting wafer low-texture defects
CN107463918B (en) Lane line extraction method based on fusion of laser point cloud and image data
CN111415363B (en) Image edge identification method
CN108629775B (en) Thermal state high-speed wire rod surface image processing method
CN106228161B Automatic reading method for pointer-type dial plates
CN112651968B (en) Wood board deformation and pit detection method based on depth information
CN115170669B (en) Identification and positioning method and system based on edge feature point set registration and storage medium
CN107784669A Method for light spot extraction and centroid determination
CN110569857B (en) Image contour corner detection method based on centroid distance calculation
CN111462066B (en) Thread parameter detection method based on machine vision
CN109580630A Visual inspection method for machine component defects
CN115096206B (en) High-precision part size measurement method based on machine vision
CN112734761B (en) Industrial product image boundary contour extraction method
CN114331986A (en) Dam crack identification and measurement method based on unmanned aerial vehicle vision
CN110544235A (en) Flexible circuit board image area identification method based on differential geometry
CN111968079B (en) Three-dimensional pavement crack extraction method based on local extremum of section and segmentation sparsity
CN116503462A (en) Method and system for quickly extracting circle center of circular spot
CN106815851B Automatic reading method for grid circular oil level indicators based on vision measurement
CN111178210B (en) Image identification and alignment method for cross mark
CN113807238A (en) Visual measurement method for area of river surface floater
CN117911419A (en) Method and device for detecting steel rotation angle enhancement of medium plate, medium and equipment
CN117893550A (en) Moving object segmentation method under complex background based on scene simulation
CN117635615A (en) Defect detection method and system for realizing punching die based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20210305

Address after: 200333 room 808, 8th floor, No.6 Lane 600, Yunling West Road, Putuo District, Shanghai

Applicant after: Jingrui vision intelligent technology (Shanghai) Co.,Ltd.

Address before: 409-410, building A1, Fuhai information port, Fuyong street, Bao'an District, Shenzhen, Guangdong 518000

Applicant before: RISEYE INTELLIGENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

GR01 Patent grant
GR01 Patent grant