
CN114090453A - Machine vision system testing method, system and storage medium - Google Patents

Machine vision system testing method, system and storage medium

Info

Publication number
CN114090453A
CN114090453A (application CN202111423140.6A)
Authority
CN
China
Prior art keywords
image
degraded
machine vision
vision system
degradation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111423140.6A
Other languages
Chinese (zh)
Inventor
徐啸顺
朱烨添
任建
林立
干徐淳
刘清华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Saic General Power Technology Shanghai Co ltd
SAIC General Motors Corp Ltd
Original Assignee
SAIC General Motors Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SAIC General Motors Corp Ltd filed Critical SAIC General Motors Corp Ltd
Priority to CN202111423140.6A priority Critical patent/CN114090453A/en
Publication of CN114090453A publication Critical patent/CN114090453A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 11/00: Error detection; Error correction; Monitoring
    • G06F 11/36: Preventing errors by testing or debugging software
    • G06F 11/3668: Software testing
    • G06F 11/3672: Test management
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00: Geometric image transformations in the plane of the image
    • G06T 3/02: Affine transformations
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Quality & Reliability (AREA)
  • Computer Hardware Design (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses a machine vision system testing method, system and storage medium for testing the robustness of a machine vision system. The method comprises the following steps: acquiring a target image whose quality meets a preset condition; generating a plurality of degraded images corresponding to the target image; selecting, from the plurality of degraded images, a degraded image sample set that meets a specific scoring criterion; inputting the degraded image sample set into the machine vision system to be tested; acquiring a first recognition result for each image in the degraded image sample set; and determining a test result for the machine vision system according to the comparison between the first recognition results and a second recognition result, the second recognition result being the recognition result of the original target image. The scheme provided by the application thus realizes a test of the robustness of the machine vision system; in addition, because a plurality of degraded images are generated from the target image by degradation, the sample size input to the machine vision system is increased, realizing data augmentation.

Description

Machine vision system testing method, system and storage medium
Technical Field
The present disclosure relates to the field of image recognition technologies, and in particular, to a method and a system for testing a machine vision system, and a storage medium.
Background
A machine vision system uses a machine in place of the human eye to perform measurements and judgments; it is an important branch of computer science. Machine vision systems are widely applied in modern production and daily life. Machine vision equipment can be applied on mass-production lines to identify whether workpieces are qualified; in security-check settings such as subways and airports to monitor whether passengers carry dangerous goods; and in road traffic monitoring to detect whether vehicles commit violations.
If the quality of the recognized image is good, most machine vision systems can give an accurate judgment. In actual production, however, many interference factors exist: camera lenses polluted by oil and dust, ambient light that is too dim or too bright, day/night changes in ambient light, defocus caused by displacement of the camera or of the photographed object, overheating or aging of the camera, and so on. These problems can introduce blur, noise, and gray-level/contrast variation into the image captured by the camera. Once the captured image quality degrades, some machine vision systems can no longer produce an accurate judgment while others still can. Whether an accurate judgment is obtained depends mainly on the robustness of the machine vision system: a more robust system retains higher judgment accuracy when facing lower-quality images.
A machine vision system that still produces accurate judgments when the image quality is poor is clearly the more robust system. Testing a machine vision system is necessary both to characterize its robustness and to guide the manufacturer in optimizing it; how to provide a testing method that evaluates this robustness is therefore the technical problem to be solved.
Disclosure of Invention
The application provides a machine vision system testing method, a machine vision system testing system and a storage medium, which are used for testing the robustness of a machine vision system.
The application provides a machine vision system testing method, which comprises the following steps:
acquiring a target image with quality meeting a preset condition;
generating a plurality of degraded images corresponding to the target image;
selecting a degraded image sample set which meets a specific scoring standard from the plurality of degraded images;
inputting the degraded image sample set into a machine vision system to be tested;
acquiring a first identification result of each image in the degraded image sample set, wherein each image in the degraded image sample set corresponds to one first identification result;
and determining a test result of the machine vision system according to a comparison result of the first identification result and a second identification result, wherein the second identification result is an identification result of the target image.
The beneficial effects of the application are as follows: degraded images for testing the machine vision system can be generated automatically, providing sufficient samples for the test; screening the degraded images against a specific scoring criterion controls the degree of degradation, avoiding images that are insufficiently or excessively degraded and improving how faithfully the degraded images simulate real conditions; the recognition results of the degraded images are then compared with the recognition result of the target image, and the robustness of the machine vision system is tested on the basis of that comparison. In addition, because a plurality of degraded images are generated from the target image by degradation, the sample size input to the machine vision system is increased, realizing data augmentation.
In one embodiment, the generating a plurality of degraded images corresponding to the target image includes:
obtaining different types of degradation strategies corresponding to various specific working conditions;
and generating a degraded image corresponding to the target image according to different types of degradation strategies.
In one embodiment, the generating of the degraded image corresponding to the target image according to different types of degradation strategies includes:
generating a degraded image corresponding to the target image according to the following formula:
g(x,y)=h(x,y)*f(x,y)+η(x,y);
wherein g(x, y) is the degraded image generated according to the degradation strategy; f(x, y) is the target image; h(x, y) is the spatial-domain expression of the degradation function corresponding to the degradation strategy; η(x, y) is additive noise; and * denotes convolution.
In one embodiment, the generating a plurality of degraded images corresponding to the target image according to different types of degradation strategies includes:
generating a degraded image corresponding to the target image according to the following formula:
Gn = Tn·Bn·Cn·Dn·F + ηn;
wherein F is the target image; Gn is the degraded image generated according to the plurality of degradation strategies; Tn is the displacement/deformation matrix; Bn is the blur matrix; Cn is the gray/contrast variation matrix; Dn is the down-sampling coefficient; and ηn is the noise.
In one embodiment, the particular operating condition includes at least one of:
the system comprises a vision sensor overheating and aging working condition, a signal interference working condition, a transmission channel and decoding processing error working condition, a camera lens polluted working condition, a defocusing working condition, a working condition that the ambient light brightness of a shooting environment does not fall into a preset brightness interval, a working condition that a target object in an image generates spatial displacement and a working condition that the resolution of vision equipment is lower than a preset value.
In one embodiment, the degradation policy includes at least one of:
the method comprises the following steps of adding noise to a target image, blurring the target image, performing gray-scale transformation on the image, performing affine transformation on the image and converting the target into a low-resolution image by an interval sampling method.
In one embodiment, the selecting a sample set of degraded images from the plurality of degraded images that meet a certain scoring criterion includes:
determining scores for the plurality of degraded images according to their degrees of degradation;
determining the degraded images whose scores are greater than a preset score to be the degraded images meeting the specific criterion;
and generating the image sample set from all degraded images meeting the specific criterion.
In one embodiment, the determining a test result of the machine vision system based on the comparison of the first recognition result and the second recognition result comprises:
comparing each first recognition result with the second recognition result;
and when the ratio of the number of first recognition results consistent with the second recognition result to the total number of first recognition results is greater than a preset ratio, determining that the machine vision system passes the test.
The present application further provides a machine vision system testing system, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to implement the machine vision system testing method of any of the above embodiments.
The present application further provides a computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor corresponding to a machine vision system testing system, the machine vision system testing system is enabled to implement the machine vision system testing method described in any of the above embodiments.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Drawings
The accompanying drawings are included to provide a further understanding of the application and are incorporated in and constitute a part of this specification, illustrate embodiment(s) of the application and together with the description serve to explain the application and not limit the application. In the drawings:
FIG. 1 is a flow chart of a method for testing a machine vision system according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a method for testing a machine vision system according to another embodiment of the present disclosure;
FIG. 3 is a flow chart of a method for testing a machine vision system according to yet another embodiment of the present disclosure;
FIG. 4 is a schematic diagram of generating a degraded image from a normal quality image according to an embodiment of the present application;
FIG. 5 is a schematic diagram illustrating an implementation of a method for testing a machine vision system according to an embodiment of the present disclosure;
fig. 6 is a schematic diagram of a hardware structure of a machine vision system testing system according to the present application.
Detailed Description
The preferred embodiments of the present application will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein only to illustrate and explain the present application and not to limit the present application.
Fig. 1 is a flowchart of a machine vision system testing method according to an embodiment of the present application, and as shown in fig. 1, the method can be implemented as the following steps S11-S16:
in step S11, a target image whose quality meets a preset condition is acquired;
in step S12, a plurality of degraded images corresponding to the target image are generated;
in step S13, selecting a degraded image sample set satisfying a specific scoring criterion from the plurality of degraded images;
in step S14, inputting the degraded image sample set into the machine vision system to be tested;
in step S15, obtaining first recognition results of the images in the degraded image sample set, where each image in the degraded image sample set corresponds to a first recognition result;
in step S16, a test result of the machine vision system is determined according to a comparison result of the first recognition result and a second recognition result, wherein the second recognition result is a recognition result of a target image.
Taking an industrial production scene as an example: in an industrial production process, an image acquisition device is usually arranged to capture images of workpieces on the production line, and image processing then determines whether the parts on each workpiece are installed in place or missing. In this embodiment, a target image whose quality meets a preset condition is acquired. For example, the preset condition may require that specified parameters of the image (resolution, noise, color, sharpness, degree of distortion, and the like) all fall within preset intervals. As another example, the condition may require that the degree of distortion be smaller than a preset value: a standard image is first selected, the mean square value of the pixel-wise difference between the standard image and the distorted image is computed, and the degree of distortion is determined from the magnitude of that mean square value. As yet another example, a model for image quality evaluation may be provided, and the model determines whether the image quality meets the preset condition. It can be understood that, in an industrial production scene, acquiring the target image may be realized by receiving images captured by the image acquisition device arranged at the industrial production site.
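The distortion-based example above, the mean square of the pixel difference against a standard image, can be sketched in NumPy. The function names and the threshold value are illustrative assumptions, not values from the patent:

```python
import numpy as np

def distortion_degree(standard: np.ndarray, candidate: np.ndarray) -> float:
    """Mean-square pixel difference between a standard image and a candidate,
    used as a simple distortion measure (lower = closer to the standard)."""
    s = standard.astype(np.float64)
    c = candidate.astype(np.float64)
    return float(np.mean((s - c) ** 2))

def quality_meets_condition(standard, candidate, max_mse: float = 100.0) -> bool:
    # max_mse is an illustrative threshold, not a value from the patent
    return distortion_degree(standard, candidate) < max_mse
```

A model-based quality evaluator could replace `distortion_degree` without changing the surrounding test flow.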
After a target image with quality meeting a preset condition is acquired, generating a plurality of degraded images corresponding to the target image;
The plurality of degraded images corresponding to the target image are generated specifically by:
obtaining different types of degradation strategies corresponding to various specific working conditions; and generating the degraded images corresponding to the target image according to the different types of degradation strategies.
Wherein the specific working condition comprises at least one of the following working conditions:
the method comprises the following steps that a visual sensor is overheated and aged, a signal interference working condition, a transmission channel and decoding processing error working condition, a camera lens polluted working condition, a defocusing working condition, a working condition that the ambient light brightness of a shooting environment does not fall into a preset brightness interval, a working condition that a target object in an image is subjected to spatial displacement and a working condition that the resolution of visual equipment is lower than a preset value;
the degradation strategy comprises at least one of the following strategies:
the method comprises the following steps of adding noise to a target image, blurring the target image, performing gray-scale transformation on the image, performing affine transformation on the image and converting the target into a low-resolution image by an interval sampling method.
It should be noted that the strategy of adding noise to the target image can simulate overheating/aging of the vision sensor, signal interference, and transmission-channel and decoding errors in the machine vision system; the added noise is usually Gaussian noise or salt-and-pepper noise. The strategy of blurring the target image can simulate a polluted camera lens (oil or dust) as well as defocus, which is usually caused by a change in the position of the camera or the photographed object. The strategy of gray-scale transformation can simulate ambient light brightness falling outside the preset brightness interval. The strategy of affine transformation can simulate spatial displacement of the target object in the image; such displacement generally includes translation, rotation, and tilt. The strategy of converting the target image into a low-resolution image by interval sampling can simulate vision-equipment resolution below the preset value, which typically arises when the camera is switched from a high-resolution model to a low-resolution one.
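The mapping between degradation strategies and working conditions described above can be sketched with minimal pure-NumPy implementations. These are illustrative stand-ins (simple operators, fixed random seed), not the patent's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma=10.0):
    """Simulates sensor overheating/aging, signal interference, channel errors."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)

def box_blur(img, k=3):
    """Simulates a polluted or defocused lens with a simple k-by-k mean filter."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def gray_transform(img, gain=0.7, bias=-20):
    """Simulates ambient brightness outside the preset interval."""
    return np.clip(gain * img + bias, 0, 255)

def translate(img, dy=2, dx=3):
    """Simulates spatial displacement of the target object (translation only)."""
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

def downsample(img, d=2):
    """Simulates a lower-resolution camera via interval sampling."""
    return img[::d, ::d]
```

A production implementation would likely use a library such as OpenCV for proper Gaussian blur and full affine transforms.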
When the degraded images corresponding to the target images are generated according to different types of degradation strategies, one degradation strategy can be applied to the target images to generate the degraded images corresponding to the target images, and multiple degradation strategies can also be applied to the target images to generate the degraded images corresponding to the target images.
Applied to an industrial production scene, the method degrades images according to the various degradation strategies to simulate different degraded working conditions of the camera. Thus, while verifying the performance of the vision system, the method also augments the image data through these degradation strategies, in contrast to conventional augmentation (rotation, translation, flipping).
The specific implementation manner of applying a degradation strategy to the target image to generate a degraded image corresponding to the target image is as follows:
generating a degraded image corresponding to the target image according to the following formula:
g(x,y)=h(x,y)*f(x,y)+η(x,y);
wherein g(x, y) is the degraded image generated according to the degradation strategy; f(x, y) is the target image; h(x, y) is the spatial-domain expression of the degradation function corresponding to the degradation strategy; η(x, y) is additive noise; and * denotes convolution.
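A minimal NumPy sketch of this single-strategy formula, interpreting * as 2-D convolution of f with a degradation kernel h plus additive Gaussian noise. The kernel choice and noise model are assumptions for illustration:

```python
import numpy as np

def degrade(f, h, noise_sigma=5.0, seed=0):
    """g(x,y) = h(x,y) * f(x,y) + eta(x,y): convolve the target image f with
    the degradation kernel h, then add zero-mean Gaussian noise eta."""
    kh, kw = h.shape
    ph, pw = kh // 2, kw // 2
    p = np.pad(f.astype(np.float64), ((ph, ph), (pw, pw)), mode="edge")
    g = np.zeros_like(f, dtype=np.float64)
    for i in range(kh):       # direct 2-D convolution (small kernels only)
        for j in range(kw):
            g += h[i, j] * p[i:i + f.shape[0], j:j + f.shape[1]]
    eta = np.random.default_rng(seed).normal(0.0, noise_sigma, f.shape)
    return g + eta
```

With h set to an averaging kernel this reproduces the blur strategy; with h set to the identity kernel and sigma > 0 it reproduces the pure-noise strategy.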
The specific implementation manner of applying multiple degradation strategies to the target image to generate a degraded image corresponding to the target image is as follows:
generating a degraded image corresponding to the target image according to the following formula:
Gn = Tn·Bn·Cn·Dn·F + ηn;
wherein F is the target image; Gn is the degraded image generated according to the plurality of degradation strategies; Tn is the displacement/deformation matrix; Bn is the blur matrix; Cn is the gray/contrast variation matrix; Dn is the down-sampling coefficient; and ηn is the noise.
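The multi-strategy formula can be sketched as a composition of operators applied to F, with any unused operator treated as the identity (and missing noise treated as zero), in the spirit of assigning the parameter 1 (or 0 for noise) to strategies that are not applied. Representing each matrix as a function is an illustrative assumption:

```python
import numpy as np

def compose_degradations(F, T=None, B=None, C=None, D=None, eta=None):
    """G_n = T_n B_n C_n D_n F + eta_n, applying the operators right to left
    as written: down-sampling D_n first, then gray/contrast C_n, blur B_n,
    displacement/deformation T_n, and finally additive noise eta_n.
    Operators left as None act as the identity."""
    G = F.astype(np.float64)
    for op in (D, C, B, T):          # right-to-left order of the formula
        if op is not None:
            G = op(G)
    if eta is not None:
        G = G + eta(G.shape)
    return G
```

Varying which operators are supplied yields degraded images under one strategy or several, matching the two cases in the description.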
After the degraded images are generated, a degraded image sample set meeting a specific scoring criterion is selected from them. Specifically, scores for the degraded images are determined according to their degrees of degradation, which can be done with an image quality evaluation method. Image quality evaluation is commonly used to assess image and video compression quality, compare the performance of image processing algorithms, and monitor the quality of video received by a terminal. Such methods divide into subjective and objective evaluation; the main difference is whether images are scored manually by a viewer or automatically by a programmed algorithm. Automatic equipment generally uses an objective evaluation method. Objective algorithms divide into three types: full-reference, reduced-reference, and no-reference image quality algorithms. Full-reference means the quality of the degraded image is evaluated in the presence of the original image; since a reference standard image can be obtained during debugging, the application uses a full-reference image quality algorithm.
Full-reference scoring may use algorithms such as PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index), and CW-SSIM (Complex Wavelet Structural Similarity Index). SSIM mainly evaluates differences in brightness, contrast, and structure between an image and a reference image; CW-SSIM reduces SSIM's sensitivity to slight translation and rotation, since a small displacement has little effect on an imaging condition and should not be scored too low, which better matches practical application requirements. The earliest full-reference algorithm is PSNR, but its scores often diverge from subjective human evaluation; a more applicable algorithm is SSIM. SSIM scores generally fall in (0, 1), and can be negative in certain cases; a result of 1 means the two images are identical, and a result close to 0 means severe degradation. SSIM reflects image quality objectively while agreeing well with subjective perception, comparing three statistical characteristics of the two images: brightness (gray-level mean), contrast (variance), and structure (covariance). It thus comprehensively and objectively reflects the difference between the image before and after degradation. In most degraded scenes the SSIM algorithm accurately measures the degree of degradation; it is only over-sensitive to affine degradations involving slight translation and rotation, which it scores too low. In practice, slight translation or rotation of the whole workpiece draws no subjective attention and has minimal influence on commercial industrial machine vision algorithms, so such images should not be scored too low.
To better handle such cases, CW-SSIM is introduced to overcome this deficiency. CW-SSIM performs a complex wavelet pyramid decomposition of the image, then computes the complex wavelet SSIM value of each sub-band and takes their weighted average.
After the scores of the degraded images are obtained, the degraded images whose scores are greater than a preset score are determined to be degraded images meeting the specific criterion, and the image sample set is generated from all degraded images meeting that criterion.
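Scoring and screening can be sketched as follows, using a simplified whole-image (global) SSIM rather than the windowed or complex-wavelet variants; the global simplification and the `preset_score` value are illustrative assumptions:

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Global SSIM: compares luminance (mean), contrast (variance) and
    structure (covariance) over the whole image, a simplified form of
    the windowed SSIM index."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x = x.astype(np.float64); y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

def select_sample_set(scored_images, preset_score=0.5):
    """scored_images: list of (image, score) pairs. Returns the sample set
    of images whose score exceeds the preset score."""
    return [img for img, s in scored_images if s > preset_score]
```

For production use, a library implementation such as scikit-image's windowed `structural_similarity` would be preferable to this global form.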
Inputting the degraded image sample set into a machine vision system to be tested; acquiring a first identification result of each image in the degraded image sample set, wherein each image in the degraded image sample set corresponds to one first identification result;
A test result for the machine vision system is then determined according to the comparison between the first recognition results and a second recognition result, wherein the second recognition result is the recognition result of the target image. Specifically, each first recognition result is compared with the second recognition result; when the ratio of the number of first recognition results consistent with the second recognition result to the total number of first recognition results is greater than a preset ratio, the machine vision system is determined to pass the test.
Fig. 5 is a schematic diagram of an implementation process corresponding to the machine vision system testing method of the application. Taking a machine vision system that monitors whether workpieces on a production line are qualified as an example, the process is described in detail below with reference to fig. 5:
First, a clear image containing a workpiece to be tested on the production line is acquired by the machine vision system. Suppose the machine vision system monitors whether part a on a workpiece is installed in place: when it detects that part a is installed in place it outputs the prompt "OK", and when it detects that part a is not installed in place it outputs the prompt "NG". If the target image is a clear image of a workpiece with part a installed in place, the recognition result of the target image in the machine vision system is "OK". Suppose 10 degraded images are generated by applying the degradation strategies to the target image; some of the 10 may have only one degradation strategy applied and others several. The 10 degraded images are input into the machine vision system. With a preset proportion of 80%, if 8 or more of the 10 recognition results are "OK", the machine vision system is determined to pass the test.
Similarly, if the target image is a clear image of a workpiece whose part a is not installed in place, its recognition result in the machine vision system is "NG". Ten degraded images are generated by applying the degradation strategies to the target image (each image may have one or several strategies applied) and input into the machine vision system; with a preset proportion of 80%, if 8 or more of the 10 recognition results are "NG", the machine vision system is determined to pass the test.
Of course, the preset proportion can be set as desired; where the requirements on the machine vision system are high, it can be raised to 100%, i.e., every recognition result of a degraded image must be consistent with the recognition result of the target image.
The beneficial effect of this application lies in: the degradation image used for the test machine vision system can be automatically generated, so that a sufficient sample is provided for the test machine vision system, the degradation image is screened according to a specific scoring standard, the degradation degree of the image can be controlled, the situation that the degradation degree of the image is insufficient or excessively degraded is avoided, the simulation effect of the degradation image is further improved, then the recognition result of the degradation image is compared with the recognition result of a target image, and the test on the robustness of the test machine vision system is realized according to the comparison result; and secondly, a plurality of degraded images corresponding to the target image are generated in a degradation mode, so that the sample size input to a machine vision system is increased, and data augmentation is realized.
In one embodiment, as shown in FIG. 2, the above step S12 can be implemented as the following steps S21-S22:
in step S21, different types of degradation strategies corresponding to various specific operating conditions are obtained;
in step S22, a degraded image corresponding to the target image is generated according to different types of degradation strategies.
In one embodiment, the above step S22 can be implemented as the following steps:
generating a degraded image corresponding to the target image according to the following formula:
g(x,y)=h(x,y)*f(x,y)+η(x,y);
wherein g(x, y) is the degraded image generated according to the degradation strategy; f(x, y) is the target image; h(x, y) is the spatial-domain expression of the degradation function corresponding to the degradation strategy; η(x, y) is additive noise; and * denotes convolution.
In one embodiment, the step S22 can be further implemented as the following steps:
generating a degraded image corresponding to the target image according to the following formula:
Gn = Tn Bn Cn Dn * F + ηn;
wherein F is the target image; Gn is the degraded image generated according to a plurality of degradation strategies; Tn is a displacement deformation matrix; Bn is a blur matrix; Cn is a gray/contrast variation matrix; Dn is a down-sampling coefficient; and ηn is noise.
In this embodiment, as shown in fig. 4, the algorithm may be integrated into an image degradation processing module. An image of normal quality is input into the image degradation processing module, and operations such as displacement deformation, blurring, gray/contrast conversion, down-sampling and noise addition are performed on it to obtain a degraded image. When the image degradation module processes the image F, any degradation strategy that does not need to be applied can be disabled by assigning its parameter an identity value: for displacement deformation, blurring, gray/contrast conversion and down-sampling the corresponding parameter is assigned 1, and if noise addition is not needed the corresponding parameter is assigned 0.
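The pipeline Gn = Tn Bn Cn Dn * F + ηn can be sketched as sequential array operators. The concrete operator choices here (circular shift for Tn, box blur for Bn, a gain for Cn, stride slicing for Dn) and all parameter names are illustrative assumptions; as described above, each step's identity value (shift 0, blur 1, gain 1, factor 1, noise_std 0) disables that strategy.

```python
import numpy as np

def degrade_pipeline(F, shift=(0, 0), blur=1, gain=1.0, factor=1,
                     noise_std=0.0, seed=0):
    g = np.asarray(F, dtype=float)
    g = np.roll(g, shift, axis=(0, 1))            # Tn: displacement (circular shift)
    if blur > 1:                                  # Bn: box blur of size blur x blur
        pad = blur // 2
        gp = np.pad(g, pad, mode="edge")
        g = np.mean([gp[i:i + g.shape[0], j:j + g.shape[1]]
                     for i in range(blur) for j in range(blur)], axis=0)
    g = np.clip(gain * g, 0, 255)                 # Cn: gray/contrast variation
    g = g[::factor, ::factor]                     # Dn: interval down-sampling
    eta = np.random.default_rng(seed).normal(0.0, noise_std, g.shape)
    return np.clip(g + eta, 0, 255)               # + eta_n: additive noise

F = np.full((16, 16), 100.0)
same = degrade_pipeline(F)             # all identity parameters: F passes through unchanged
small = degrade_pipeline(F, factor=2)  # down-sampled degraded image
```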
In one embodiment, the particular operating condition includes at least one of:
the system comprises a vision sensor overheating and aging working condition, a signal interference working condition, a transmission channel and decoding processing error working condition, a camera lens polluted working condition, a defocusing working condition, a working condition that the ambient light brightness of a shooting environment does not fall into a preset brightness interval, a working condition that a target object in an image generates spatial displacement and a working condition that the resolution of vision equipment is lower than a preset value.
In one embodiment, the degradation policy includes at least one of:
the method comprises the following steps of adding noise to a target image, blurring the target image, performing gray-scale transformation on the image, performing affine transformation on the image and converting the target into a low-resolution image by an interval sampling method.
In this embodiment, the strategy of adding noise to the target image can simulate overheating and aging of the vision sensor in the machine vision system, signal interference, and transmission channel or decoding processing errors; the added noise typically includes Gaussian noise and salt-and-pepper noise. The strategy of blurring the target image can simulate a contaminated camera lens (oil or dust contamination) as well as defocusing, which is usually caused by a change in the position of the camera or of the photographed object. The strategy of applying a gray-level transformation to the image can simulate the working condition in which the ambient light brightness of the shooting environment does not fall within the preset brightness interval. The strategy of applying an affine transformation to the image can simulate spatial displacement of the target object in the image, which generally includes translation, rotation and tilt. The strategy of converting the target image into a low-resolution image by interval sampling can simulate the working condition in which the resolution of the vision equipment is lower than the preset value, which is usually caused by switching the camera from a high-resolution model to a low-resolution model.
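Two of the strategies above can be sketched directly: salt-and-pepper (impulse) noise for the sensor/transmission error conditions, and interval sampling for the low-resolution condition. Function names and parameters are assumptions for illustration.

```python
import numpy as np

def salt_pepper(img, amount=0.05, seed=0):
    """Impulse noise: a sketch of the sensor-aging / transmission-error strategy."""
    rng = np.random.default_rng(seed)
    out = np.asarray(img, dtype=float).copy()
    mask = rng.random(out.shape)
    out[mask < amount / 2] = 0.0        # pepper: dark impulses
    out[mask > 1 - amount / 2] = 255.0  # salt: bright impulses
    return out

def interval_sample(img, step=2):
    """Keep every step-th pixel in both directions: a low-resolution image."""
    return np.asarray(img)[::step, ::step]

img = np.full((20, 20), 128.0)
noisy = salt_pepper(img, amount=0.2)
low_res = interval_sample(img, step=2)
```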
In one embodiment, the above step S13 can be implemented as the following steps S31-S33:
in step S31, determining scores of the plurality of degraded images according to the degradation degrees of the plurality of degraded images;
in step S32, determining a degraded image having a score greater than a preset score as a degraded image satisfying a certain criterion;
in step S33, a sample set of images is generated from all degraded images that meet certain criteria.
In this embodiment, the scores of the plurality of degraded images are determined according to the degradation degrees of the plurality of degraded images; determining the degraded image with the score larger than the preset score as the degraded image meeting the specific standard; a sample set of images is generated from all degraded images that meet certain criteria.
It should be noted that, when determining the scores of the degraded images according to their degradation degrees, an image quality evaluation method may be used. Image quality evaluation methods are generally used to evaluate image and video compression quality, compare the performance of image processing algorithms, monitor the quality of video received by a terminal, and so on. They are divided into subjective evaluation and objective evaluation; the main difference is whether the score is given manually by observers or computed automatically by an algorithm. Automatic equipment generally uses an objective evaluation method. Objective evaluation algorithms are divided into three types: full-reference, reduced-reference and no-reference image quality algorithms. Full-reference means evaluating the quality of the degraded image with the original image available; since a reference standard image can be obtained during debugging, the present application applies a full-reference image quality algorithm.
Full-reference image quality algorithms commonly used for scoring include PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index) and CW-SSIM (Complex Wavelet Structural Similarity Index). The earliest full-reference algorithm is PSNR, but its scores often differ considerably from subjective human evaluation; a more applicable algorithm is SSIM. SSIM evaluates the difference between an image and a reference image in terms of three statistical characteristics: luminance (gray-level mean), contrast (variance) and structure (covariance), so it comprehensively and objectively reflects the difference between the images before and after degradation. SSIM scores generally fall within (0, 1), with negative values possible in special cases; a score of 1 means the two images are identical, and a score close to 0 indicates severe degradation. While objectively reflecting image quality, SSIM also agrees well with subjective perception. In most degraded scenes the SSIM algorithm calculates the degradation degree accurately; it is only over-sensitive to affine-transformed images with slight translation or rotation, which it scores too low. In practice, a slight translation or rotation of the whole workpiece does not attract subjective attention and has very little influence on commercial industrial machine vision algorithms, so such images should not receive excessively low scores.
To better cope with such situations, CW-SSIM is introduced to overcome this deficiency: it reduces the sensitivity to slight translation and rotation on the basis of SSIM. CW-SSIM is obtained by performing a complex-wavelet pyramid decomposition of the image, computing the complex-wavelet SSIM value of each sub-band, and taking their weighted average.
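The scoring-and-screening step can be sketched with a single-window SSIM computed directly from the statistics named above (mean for luminance, variance for contrast, covariance for structure). Production code would normally use a windowed SSIM from an image library; the constants C1 and C2 follow the usual convention for 8-bit images, and the threshold value is an illustrative assumption.

```python
import numpy as np

def ssim_global(x, y, L=255.0):
    """Single-window SSIM from luminance (mean), contrast (variance) and
    structure (covariance); C1, C2 are the usual stabilizing constants."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / \
           ((mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

def screen(reference, candidates, preset_score=0.4):
    """Keep degraded images whose score exceeds the preset score (step S32)."""
    return [g for g in candidates if ssim_global(reference, g) > preset_score]

ref = np.tile(np.arange(64, dtype=float), (64, 1))
identical = ssim_global(ref, ref)   # two identical images score 1.0
```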
In one embodiment, the above step S16 may be implemented as the following steps A1-A2:
in step a1, comparing the first recognition result and the second recognition result;
in step a2, when the ratio of the number of first recognition results that are consistent with the second recognition result to the total number of first recognition results is greater than a preset proportion, determining that the machine vision system passes the test.
For example, the target image is a clear image of a workpiece to be tested with part A mounted in place, and the recognition result of the target image in the machine vision system is "OK". Ten degraded images are generated by applying various degradation strategies to the target image; some of the ten may have only one degradation strategy applied, others several. The ten degraded images are input into the machine vision system. Assuming the preset proportion is 80%, if 8 or more of the 10 recognition results are "OK", the machine vision system is determined to pass the test.
For another example, the target image is a clear image of a workpiece to be tested whose part A is not mounted in place, and the recognition result of the target image in the machine vision system is "NG". Ten degraded images are generated by applying various degradation strategies to the target image, some with a single strategy and some with several. The ten degraded images are input into the machine vision system; assuming the preset proportion is 80%, if 8 or more of the 10 recognition results are "NG", the machine vision system is determined to pass the test.
Fig. 6 is a schematic diagram of a hardware structure of a machine vision system testing system according to the present application, including:
at least one processor 620; and,
a memory 604 communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to implement the machine vision system testing method of any of the above embodiments.
Referring to fig. 6, the machine vision system testing system 600 may include one or more of the following components: processing component 602, memory 604, power component 606, input/output (I/O) interface 612, and communication component 616.
The processing component 602 generally controls the overall operation of the machine vision system testing system 600. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components.
The memory 604 is configured to store various types of data to support operation of the machine vision system test system 600. Examples of such data include instructions for any application or method operating on the machine vision system testing system 600. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 606 provides power to the various components of the machine vision system test system 600. The power components 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power supplies for the machine vision system testing system 600.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules.
The communication component 616 is configured to enable the machine vision system testing system 600 to communicate with other devices and cloud platforms in a wired or wireless manner. The machine vision system testing system 600 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The processor 620 can be connected with the image capture device 608 in the industrial scene through the communication component 616, so as to obtain, through the communication component 616, the target image sent by the image capture device 608. In addition, the processor 620 can be connected with the machine vision system 610 under test through the communication component 616, so that the processor 620 can input the plurality of degraded images generated from the target image into the machine vision system 610 to be tested.
In an exemplary embodiment, the machine vision system testing system 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described machine vision system testing methods.
The present application further provides a computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor corresponding to a machine vision system testing system, the machine vision system testing system is enabled to implement the machine vision system testing method described in any of the above embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.

Claims (10)

1. A machine vision system testing method, comprising:
acquiring a target image with quality meeting a preset condition;
generating a plurality of degraded images corresponding to the target image;
selecting a degraded image sample set which meets a specific scoring standard from the plurality of degraded images;
inputting the degraded image sample set into a machine vision system to be tested;
acquiring a first identification result of each image in the degraded image sample set, wherein each image in the degraded image sample set corresponds to one first identification result;
and determining a test result of the machine vision system according to a comparison result of the first identification result and a second identification result, wherein the second identification result is an identification result of the target image.
2. The method of claim 1, wherein the generating a plurality of degraded images corresponding to the target image comprises:
obtaining different types of degradation strategies corresponding to various specific working conditions;
and generating a degraded image corresponding to the target image according to different types of degradation strategies.
3. The method of claim 1, wherein generating degraded images corresponding to the target image according to different types of degradation strategies comprises:
generating a degraded image corresponding to the target image according to the following formula:
g(x,y)=h(x,y)*f(x,y)+η(x,y);
wherein g (x, y) is a degraded image generated according to a degradation strategy; f (x, y) is a target image; h (x, y) is a spatial expression of a degradation function corresponding to the degradation strategy; η (x, y) is additive noise.
4. The method of claim 1, wherein the generating a plurality of degraded images corresponding to the image according to different types of degradation strategies comprises:
generating a degraded image corresponding to the target image according to the following formula:
Gn = Tn Bn Cn Dn * F + ηn;
wherein F is the target image; Gn is the degraded image generated according to a plurality of degradation strategies; Tn is a displacement deformation matrix; Bn is a blur matrix; Cn is a gray/contrast variation matrix; Dn is a down-sampling coefficient; and ηn is noise.
5. The method of claim 2, wherein the particular operating condition comprises at least one of:
the system comprises a vision sensor overheating and aging working condition, a signal interference working condition, a transmission channel and decoding processing error working condition, a camera lens polluted working condition, a defocusing working condition, a working condition that the ambient light brightness of a shooting environment does not fall into a preset brightness interval, a working condition that a target object in an image generates spatial displacement and a working condition that the resolution of vision equipment is lower than a preset value.
6. The method of claim 2, wherein the degradation policy comprises at least one of:
the method comprises the following steps of adding noise to a target image, blurring the target image, performing gray-scale transformation on the image, performing affine transformation on the image and converting the target into a low-resolution image by an interval sampling method.
7. The method of claim 1, wherein said selecting a sample set of degraded images from said plurality of degraded images that meet a particular scoring criterion comprises:
determining scores of the plurality of degraded images according to the degradation degrees of the plurality of degraded images;
determining the degraded image with the score larger than the preset score as the degraded image meeting the specific standard;
a sample set of images is generated from all degraded images that meet certain criteria.
8. The method of claim 1, wherein determining a test result for the machine vision system based on the comparison of the first recognition result and the second recognition result comprises:
comparing the first recognition result and the second recognition result;
and when the ratio of the number of first recognition results that are consistent with the second recognition result to the total number of first recognition results is greater than a preset proportion, determining that the machine vision system passes the test.
9. A machine vision system testing system, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to implement the machine vision system testing method of any one of claims 1-8.
10. A computer-readable storage medium, wherein instructions in the storage medium, when executed by a corresponding processor of a machine vision system testing system, enable the machine vision system testing system to implement the machine vision system testing method of any one of claims 1-8.
CN202111423140.6A 2021-11-26 2021-11-26 Machine vision system testing method, system and storage medium Pending CN114090453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111423140.6A CN114090453A (en) 2021-11-26 2021-11-26 Machine vision system testing method, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111423140.6A CN114090453A (en) 2021-11-26 2021-11-26 Machine vision system testing method, system and storage medium

Publications (1)

Publication Number Publication Date
CN114090453A true CN114090453A (en) 2022-02-25

Family

ID=80305022

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111423140.6A Pending CN114090453A (en) 2021-11-26 2021-11-26 Machine vision system testing method, system and storage medium

Country Status (1)

Country Link
CN (1) CN114090453A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115002320A (en) * 2022-05-27 2022-09-02 北京理工大学 Light intensity adjusting method, device and system based on visual detection and processing equipment
CN115002320B (en) * 2022-05-27 2023-04-18 北京理工大学 Light intensity adjusting method, device and system based on visual detection and processing equipment
CN115147700A (en) * 2022-06-23 2022-10-04 中国电子技术标准化研究院 Method and device for calibrating target recognition rate parameters of image recognition system

Similar Documents

Publication Publication Date Title
USRE48595E1 (en) Method and system for determining optimal exposure of structured light based 3D camera
EP3777122B1 (en) Image processing method and apparatus
Abdelhamed et al. A high-quality denoising dataset for smartphone cameras
CN111381579B (en) Cloud deck fault detection method and device, computer equipment and storage medium
CN114090453A (en) Machine vision system testing method, system and storage medium
CN108664839B (en) Image processing method and device
CN101819024A (en) Machine vision-based two-dimensional displacement detection method
CN116626052B (en) Battery cover plate surface detection method, device, equipment and storage medium
CN114189671A (en) Verification of camera cleaning system
Hertel et al. Image quality standards in automotive vision applications
CN111199536B (en) Focusing evaluation method and device
CN112149707B (en) Image acquisition control method, device, medium and equipment
CN112070762A (en) Mura defect detection method and device for liquid crystal panel, storage medium and terminal
LU501813B1 (en) Clear image screening method based on fourier transform
CN114897885B (en) Infrared image quality comprehensive evaluation system and method
CN111491103A (en) Image brightness adjusting method, monitoring equipment and storage medium
US10241000B2 (en) Method for checking the position of characteristic points in light distributions
CN113658169A (en) Image speckle detection method, apparatus, medium, and computer program product
CN114022367A (en) Image quality adjusting method, device, electronic equipment and medium
US9942542B2 (en) Method for recognizing a band-limiting malfunction of a camera, camera system, and motor vehicle
CN113840134B (en) Camera tuning method and device
JP2023130329A (en) Method and device for inspecting black box camera focusing
CN118505693B (en) Holographic printing quality detection method and system based on computer vision
CN117221736B (en) Automatic regulating AI camera system for low-illumination clear collection
Wischow et al. Monitoring and Adapting the Physical State of a Camera for Autonomous Vehicles

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20240717

Address after: No.1500, Shenjiang Road, Pudong New Area pilot Free Trade Zone, Shanghai, 201206

Applicant after: SAIC GENERAL MOTORS Corp.,Ltd.

Country or region after: China

Applicant after: SAIC General Power Technology (Shanghai) Co.,Ltd.

Address before: No.1500, Shenjiang Road, Pudong New Area pilot Free Trade Zone, Shanghai, 201206

Applicant before: SAIC GENERAL MOTORS Corp.,Ltd.

Country or region before: China

TA01 Transfer of patent application right