Disclosure of Invention
The application provides a machine vision system testing method, a machine vision system testing system and a storage medium, which are used for testing the robustness of a machine vision system.
The application provides a machine vision system testing method, which comprises the following steps:
acquiring a target image with quality meeting a preset condition;
generating a plurality of degraded images corresponding to the target image;
selecting a degraded image sample set which meets a specific scoring standard from the plurality of degraded images;
inputting the degraded image sample set into a machine vision system to be tested;
acquiring a first recognition result for each image in the degraded image sample set, wherein each image in the degraded image sample set corresponds to one first recognition result;
and determining a test result of the machine vision system according to a comparison result of the first recognition results and a second recognition result, wherein the second recognition result is the recognition result of the target image.
The beneficial effects of the present application are as follows: degraded images for testing a machine vision system can be generated automatically, which provides sufficient samples for the test; the degraded images are screened according to a specific scoring criterion, so the degree of degradation can be controlled and images that are insufficiently or excessively degraded are avoided, which further improves the simulation effect of the degraded images; the recognition results of the degraded images are then compared with the recognition result of the target image, and the robustness of the machine vision system under test is evaluated according to the comparison result. Secondly, because a plurality of degraded images corresponding to the target image are generated by degradation, the sample size input into the machine vision system is increased, realizing data augmentation.
In one embodiment, the generating a plurality of degraded images corresponding to the target image includes:
obtaining different types of degradation strategies corresponding to various specific working conditions;
and generating a degraded image corresponding to the target image according to different types of degradation strategies.
In one embodiment, the generating of the degraded image corresponding to the target image according to different types of degradation strategies includes:
generating a degraded image corresponding to the target image according to the following formula:
g(x,y)=h(x,y)*f(x,y)+η(x,y);
wherein g(x, y) is the degraded image generated according to the degradation strategy; f(x, y) is the target image; h(x, y) is the spatial-domain expression of the degradation function corresponding to the degradation strategy, with * denoting convolution; and η(x, y) is additive noise.
In one embodiment, the generating a plurality of degraded images corresponding to the target image according to different types of degradation strategies includes:
generating a degraded image corresponding to the target image according to the following formula:
G_n = T_n B_n C_n D_n * F + η_n;
wherein F is the target image; G_n is the degraded image generated according to a plurality of degradation strategies; T_n is a displacement/deformation matrix; B_n is a blur matrix; C_n is a gray/contrast variation matrix; D_n is a down-sampling coefficient; and η_n is noise.
In one embodiment, the particular operating condition includes at least one of:
the system comprises a vision sensor overheating and aging working condition, a signal interference working condition, a transmission channel and decoding processing error working condition, a camera lens polluted working condition, a defocusing working condition, a working condition that the ambient light brightness of a shooting environment does not fall into a preset brightness interval, a working condition that a target object in an image generates spatial displacement and a working condition that the resolution of vision equipment is lower than a preset value.
In one embodiment, the degradation policy includes at least one of:
adding noise to the target image; blurring the target image; performing gray-scale transformation on the image; performing affine transformation on the image; and converting the target image into a low-resolution image by interval sampling.
In one embodiment, the selecting a sample set of degraded images from the plurality of degraded images that meet a certain scoring criterion includes:
determining scores of the plurality of degraded images according to the degradation degrees of the plurality of degraded images;
determining a degraded image whose score is greater than a preset score as a degraded image satisfying the specific scoring criterion;
and generating an image sample set from all degraded images that satisfy the specific scoring criterion.
In one embodiment, the determining a test result of the machine vision system based on the comparison of the first recognition result and the second recognition result comprises:
comparing the first recognition result and the second recognition result;
and when the ratio of the number of first recognition results that are consistent with the second recognition result to the total number of first recognition results is greater than a preset ratio, determining that the machine vision system passes the test.
The present application further provides a machine vision system testing system, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to implement the machine vision system testing method of any of the above embodiments.
The present application further provides a computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor corresponding to a machine vision system testing system, the machine vision system testing system is enabled to implement the machine vision system testing method described in any of the above embodiments.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
The technical solution of the present application is further described in detail by the accompanying drawings and examples.
Detailed Description
The preferred embodiments of the present application will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein only to illustrate and explain the present application and not to limit the present application.
Fig. 1 is a flowchart of a machine vision system testing method according to an embodiment of the present application, and as shown in fig. 1, the method can be implemented as the following steps S11-S16:
in step S11, a target image whose quality meets a preset condition is acquired;
in step S12, a plurality of degraded images corresponding to the target image are generated;
in step S13, selecting a degraded image sample set satisfying a specific scoring criterion from the plurality of degraded images;
in step S14, inputting the degraded image sample set into a machine vision system to be tested;
in step S15, obtaining first recognition results of the images in the degraded image sample set, where each image in the degraded image sample set corresponds to a first recognition result;
in step S16, a test result of the machine vision system is determined according to a comparison result of the first recognition result and a second recognition result, wherein the second recognition result is a recognition result of a target image.
Taking an industrial production scene as an example: in an industrial production process, an image acquisition device is usually arranged to capture images of workpieces on a production line, and image processing is then used to judge whether the parts on a workpiece are installed in place or whether any part is missing. In this embodiment, a target image whose quality meets a preset condition is therefore acquired. For example, such a target image may be one whose specified parameters (resolution, noise, color, sharpness, degree of distortion, and the like) all fall within preset intervals. As another example, meeting the preset condition may mean that the degree of distortion of the image is smaller than a preset value; to compute the degree of distortion, a standard image is first selected, the mean-square value of the pixel differences between the original image and the distorted image is calculated, and the degree of distortion is determined from the magnitude of that mean-square value. As yet another example, a model for image quality evaluation may be provided, and the model determines whether the image quality meets the preset condition. It can be understood that, in an industrial production scene, the target image can be acquired by receiving images captured by an image acquisition device arranged at the industrial production site.
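The mean-square distortion check described above can be sketched in a few lines of NumPy; the `max_distortion` threshold and the function names are illustrative assumptions, not values prescribed by this application:

```python
import numpy as np

def distortion_mse(standard: np.ndarray, candidate: np.ndarray) -> float:
    """Mean-square value of the pixel-wise difference between a standard
    (reference) image and a candidate image."""
    diff = standard.astype(np.float64) - candidate.astype(np.float64)
    return float(np.mean(diff ** 2))

def quality_meets_condition(standard: np.ndarray, candidate: np.ndarray,
                            max_distortion: float = 100.0) -> bool:
    """The candidate qualifies as a target image when its degree of
    distortion is smaller than the preset value (assumed threshold)."""
    return distortion_mse(standard, candidate) < max_distortion
```

An image identical to the standard scores a distortion of 0 and therefore always qualifies.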
After a target image with quality meeting a preset condition is acquired, generating a plurality of degraded images corresponding to the target image;
The plurality of degraded images corresponding to the target image are generated specifically as follows:
obtaining different types of degradation strategies corresponding to various specific working conditions; and generating degraded images corresponding to the target images according to different types of degradation strategies.
Wherein the specific working condition comprises at least one of the following working conditions:
a working condition in which the vision sensor is overheated or aged; a signal-interference working condition; a working condition of transmission-channel or decoding errors; a working condition in which the camera lens is contaminated; a defocus working condition; a working condition in which the ambient light brightness of the shooting environment does not fall within a preset brightness interval; a working condition in which the target object in the image undergoes spatial displacement; and a working condition in which the resolution of the vision device is lower than a preset value;
the degradation strategy comprises at least one of the following strategies:
adding noise to the target image; blurring the target image; performing gray-scale transformation on the image; performing affine transformation on the image; and converting the target image into a low-resolution image by interval sampling.
It should be noted that the strategy of adding noise to the target image can simulate the overheating/aging of the vision sensor, signal interference, and transmission-channel and decoding errors in the machine vision system; the added noise usually includes Gaussian noise and salt-and-pepper noise. The strategy of blurring the target image can simulate a contaminated camera lens (oil or dust contamination) and a defocused lens, defocus usually being caused by a change in the position of the camera or of the photographed object. The strategy of gray-scale transformation can simulate the working condition in which the ambient light brightness of the shooting environment does not fall within the preset brightness interval. The strategy of affine transformation can simulate spatial displacement of the target object in the image; such displacement generally includes translation, rotation, and tilt. The strategy of converting the target image into a low-resolution image by interval sampling can simulate the working condition in which the resolution of the vision device is lower than the preset value, which is usually caused by switching the camera from a high-resolution model to a low-resolution model.
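The condition-to-strategy mapping above can be sketched with minimal NumPy routines; the kernel size, noise levels, gain, and sampling step below are illustrative assumptions, not parameters prescribed by this application:

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

def add_gaussian_noise(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Sensor overheating/aging, signal interference: additive Gaussian noise."""
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0, 255)

def add_salt_pepper(img: np.ndarray, p: float = 0.02) -> np.ndarray:
    """Transmission/decoding errors: flip a fraction p of pixels to 0 or 255."""
    out = img.copy()
    mask = rng.random(img.shape)
    out[mask < p / 2] = 0
    out[mask > 1 - p / 2] = 255
    return out

def mean_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Contaminated or defocused lens: k x k mean-filter blur (odd k)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    acc = np.zeros(img.shape, dtype=np.float64)
    for dy in range(k):
        for dx in range(k):
            acc += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return acc / (k * k)

def gray_transform(img: np.ndarray, alpha: float = 0.6, beta: float = 20.0) -> np.ndarray:
    """Ambient brightness outside the preset interval: linear gray mapping."""
    return np.clip(alpha * img + beta, 0, 255)

def translate(img: np.ndarray, dy: int = 2, dx: int = 3) -> np.ndarray:
    """Spatial displacement of the target object: a simple wrap-around shift."""
    return np.roll(img, (dy, dx), axis=(0, 1))

def interval_sample(img: np.ndarray, step: int = 2) -> np.ndarray:
    """Low-resolution device: keep every step-th pixel in each direction."""
    return img[::step, ::step]
```

Each routine corresponds to one degradation strategy listed above; rotation and tilt would require a full affine warp, which `translate` deliberately omits for brevity.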
When degraded images corresponding to the target image are generated according to different types of degradation strategies, a single degradation strategy may be applied to the target image to generate a degraded image, or multiple degradation strategies may be applied in combination.
Applied to an industrial production scene, the method can degrade images according to various image degradation strategies to simulate different degradation conditions of the camera. Thus, while verifying the performance of the vision system, the method also augments the image data through the degradation strategies, which differs from conventional augmentation (rotation, translation, flipping).
The specific implementation manner of applying a degradation strategy to the target image to generate a degraded image corresponding to the target image is as follows:
generating a degraded image corresponding to the target image according to the following formula:
g(x,y)=h(x,y)*f(x,y)+η(x,y);
wherein g(x, y) is the degraded image generated according to the degradation strategy; f(x, y) is the target image; h(x, y) is the spatial-domain expression of the degradation function corresponding to the degradation strategy, with * denoting convolution; and η(x, y) is additive noise.
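As a sketch, the spatial-domain model g(x, y) = h(x, y) * f(x, y) + η(x, y) can be implemented directly in NumPy; the edge padding, the odd-sized kernel, and the Gaussian form of η(x, y) are assumptions made for illustration:

```python
import numpy as np

def degrade_single(f: np.ndarray, h: np.ndarray, sigma: float = 0.0,
                   seed: int = 0) -> np.ndarray:
    """g = h * f + eta: spatial filtering of f with an odd-sized degradation
    kernel h (edge-padded, so the output keeps the input size) plus additive
    Gaussian noise of standard deviation sigma.

    Note: this loop computes correlation; for the symmetric kernels typical
    of blur models it coincides with convolution."""
    rng = np.random.default_rng(seed)
    H, W = f.shape
    kh, kw = h.shape
    padded = np.pad(f.astype(np.float64),
                    ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    g = np.zeros((H, W))
    for i in range(kh):
        for j in range(kw):
            g += h[i, j] * padded[i:i + H, j:j + W]
    return g + rng.normal(0.0, sigma, (H, W))
```

With an identity kernel and sigma = 0 the output equals the input, which is a convenient sanity check on the model.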
The specific implementation manner of applying multiple degradation strategies to the target image to generate a degraded image corresponding to the target image is as follows:
generating a degraded image corresponding to the target image according to the following formula:
G_n = T_n B_n C_n D_n * F + η_n;
wherein F is the target image; G_n is the degraded image generated according to a plurality of degradation strategies; T_n is a displacement/deformation matrix; B_n is a blur matrix; C_n is a gray/contrast variation matrix; D_n is a down-sampling coefficient; and η_n is noise.
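The composed model G_n = T_n B_n C_n D_n * F + η_n can be sketched as a chain of operators applied in sequence; the parameter names and default values are illustrative assumptions, and each factor defaults to the identity so that unused strategies leave the image unchanged:

```python
import numpy as np

def degrade_combined(F: np.ndarray, shift=(0, 0), blur_k: int = 1,
                     gain: float = 1.0, step: int = 1,
                     noise_sigma: float = 0.0, seed: int = 0) -> np.ndarray:
    """Sequentially applies: T_n (displacement), B_n (blur), C_n (gray/contrast
    gain), D_n (interval down-sampling), then adds eta_n (Gaussian noise).
    Identity settings (shift=(0,0), blur_k=1, gain=1.0, step=1,
    noise_sigma=0) return F unchanged."""
    rng = np.random.default_rng(seed)
    G = np.roll(F.astype(np.float64), shift, axis=(0, 1))      # T_n
    if blur_k > 1:                                             # B_n: mean blur
        pad = blur_k // 2
        padded = np.pad(G, pad, mode="edge")
        acc = np.zeros_like(G)
        for dy in range(blur_k):
            for dx in range(blur_k):
                acc += padded[dy:dy + G.shape[0], dx:dx + G.shape[1]]
        G = acc / (blur_k * blur_k)
    G = gain * G                                               # C_n
    G = G[::step, ::step]                                      # D_n
    return G + rng.normal(0.0, noise_sigma, G.shape)           # eta_n
```

Calling it with only some parameters set away from the identity applies just those strategies, matching the per-parameter switching described for the degradation module later in this description.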
After the degraded images are generated, a degraded image sample set satisfying a specific scoring criterion is selected from them. Specifically, the scores of the degraded images are determined according to their degrees of degradation. This can be done with an image quality assessment (IQA) method; IQA is generally used to evaluate image and video compression quality, compare the performance of image processing algorithms, monitor the quality of video received by a terminal, and so on. IQA methods are divided into subjective and objective evaluation, the main difference being whether images are scored manually by viewers or automatically by a programmed algorithm. Automated equipment generally uses objective evaluation. Objective algorithms fall into three categories: full-reference, reduced-reference, and no-reference. Full-reference means evaluating the quality of a degraded image when the original image is available; since the present application can obtain reference standard images during debugging, a full-reference image quality algorithm is applied.
A full-reference algorithm may score images with PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index), CW-SSIM (Complex Wavelet Structural Similarity Index), or similar. SSIM mainly evaluates the differences in brightness, contrast, and structure between an image and a reference image; CW-SSIM reduces the sensitivity to slight translation and rotation on top of SSIM (a small displacement has little influence on the imaging condition and should not be scored too low), which better meets practical requirements. The earliest full-reference algorithm is PSNR, but PSNR scores often differ too much from subjective human evaluation, so SSIM is more applicable. SSIM scores generally lie in (0, 1), with negative values in certain cases: the result is 1 when the two images are identical and close to 0 under severe degradation. SSIM matches subjective perception while objectively reflecting image quality, and it compares three statistical characteristics of the two images, namely brightness (gray-level mean), contrast (variance), and structure (covariance), thereby comprehensively and objectively reflecting the difference between the image before and after degradation. In most degraded scenes the SSIM algorithm accurately measures the degree of degradation; it is only overly sensitive to affine-transformed (slightly translated or rotated) degraded images, scoring them too low. In practice, a slight translation or rotation of the whole workpiece does not demand subjective attention and has an extremely small influence on commercial industrial machine vision algorithms, so such images should not be scored too low.
To handle such situations, CW-SSIM is introduced to overcome this deficiency. CW-SSIM performs a complex-wavelet pyramid decomposition of the image, computes the complex-wavelet SSIM value of each sub-band, and takes the weighted average of these values.
After the scores of the degraded images are obtained, the degraded images whose scores are greater than a preset score are determined to be degraded images satisfying the specific scoring criterion, and an image sample set is generated from all such degraded images.
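The screening step can be illustrated with a minimal single-window SSIM, computed from whole-image statistics rather than the usual sliding window; the 0.4 threshold and the helper names are assumed example values, and production code would use a windowed SSIM such as the one in scikit-image:

```python
import numpy as np

def ssim_global(x: np.ndarray, y: np.ndarray, L: float = 255.0) -> float:
    """Single-window SSIM over the whole image: compares brightness (mean),
    contrast (variance) and structure (covariance), with the standard
    stabilizing constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + C1) * (2 * cov + C2))
                 / ((mx * mx + my * my + C1) * (vx + vy + C2)))

def select_sample_set(reference: np.ndarray, degraded_images,
                      preset_score: float = 0.4):
    """Keep degraded images whose score exceeds the preset score, i.e. images
    that are degraded but not excessively so (threshold assumed)."""
    return [g for g in degraded_images if ssim_global(reference, g) > preset_score]
```

An identical image scores 1.0 and is kept; a grossly altered image (for example an inverted one) scores near or below 0 and is rejected.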
The degraded image sample set is then input into the machine vision system to be tested, and a first recognition result is acquired for each image in the set, each image corresponding to one first recognition result.
A test result of the machine vision system is determined according to a comparison result of the first recognition results and a second recognition result, wherein the second recognition result is the recognition result of the target image. Specifically, each first recognition result is compared with the second recognition result; when the ratio of the number of first recognition results consistent with the second recognition result to the total number of first recognition results is greater than a preset ratio, the machine vision system is determined to pass the test.
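The pass criterion can be sketched in plain Python; `vision_system_passes` is a hypothetical helper name, and `>=` is used so that the "8 or more out of 10 at an 80% preset ratio" example below passes:

```python
def vision_system_passes(first_results, second_result, preset_ratio=0.8):
    """Pass when the fraction of degraded-image recognition results that
    match the target-image recognition result reaches the preset ratio."""
    matches = sum(r == second_result for r in first_results)
    return matches / len(first_results) >= preset_ratio
```

For example, with 8 "OK" results out of 10 and a preset ratio of 80%, the system passes; with only 7 it fails.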
Fig. 5 is a schematic view of the implementation process corresponding to the machine vision system testing method of the present application. The following describes it in detail with reference to fig. 5, taking as an example a machine vision system that monitors whether workpieces on a production line are qualified:
First, a clear image containing a workpiece to be tested on the production line is acquired by the machine vision system. Suppose the machine vision system monitors whether part a on each workpiece is installed in place: when it detects that part a on the workpiece under test is installed in place it outputs an "OK" prompt, and when it detects that part a is not installed in place it outputs an "NG" prompt. Suppose the target image is a clear image of a workpiece with part a installed in place, so the recognition result of the target image in the machine vision system is "OK". For example, 10 degraded images are generated by applying degradation strategies to the target image; some of the 10 may have only one strategy applied and some may have several. The 10 degraded images are input into the machine vision system. Assuming the preset ratio is 80%, if 8 or more of the 10 recognition results are "OK", the machine vision system is determined to pass the test.
Similarly, if the target image is a clear image of a workpiece whose part a is not installed in place, its recognition result in the machine vision system is "NG". Ten degraded images are generated by applying degradation strategies to the target image (each may have one or several strategies applied) and input into the machine vision system. Assuming the preset ratio is 80%, if 8 or more of the 10 recognition results are "NG", the machine vision system is determined to pass the test.
Of course, the preset ratio can be set as desired; when the requirements on the machine vision system are high, the preset ratio can be raised to 100%, that is, every recognition result of the degraded images must be consistent with the recognition result of the target image.
In one embodiment, as shown in FIG. 2, the above step S12 can be implemented as the following steps S21-S22:
in step S21, different types of degradation strategies corresponding to various specific operating conditions are obtained;
in step S22, a degraded image corresponding to the target image is generated according to different types of degradation strategies.
In one embodiment, the above step S22 can be implemented as the following steps:
generating a degraded image corresponding to the target image according to the following formula:
g(x,y)=h(x,y)*f(x,y)+η(x,y);
wherein g(x, y) is the degraded image generated according to the degradation strategy; f(x, y) is the target image; h(x, y) is the spatial-domain expression of the degradation function corresponding to the degradation strategy, with * denoting convolution; and η(x, y) is additive noise.
In one embodiment, the step S22 can be further implemented as the following steps:
generating a degraded image corresponding to the target image according to the following formula:
G_n = T_n B_n C_n D_n * F + η_n;
wherein F is the target image; G_n is the degraded image generated according to a plurality of degradation strategies; T_n is a displacement/deformation matrix; B_n is a blur matrix; C_n is a gray/contrast variation matrix; D_n is a down-sampling coefficient; and η_n is noise.
In this embodiment, as shown in fig. 4, the algorithm may be integrated into an image degradation processing module. An image of normal quality is input into the module, and operations such as displacement/deformation, blurring, gray/contrast transformation, down-sampling, and noise addition are performed on it to obtain a degraded image. When the image degradation module processes the image F, degradation strategies that do not need to be applied are made inert: for displacement/deformation, blurring, gray/contrast transformation, and down-sampling, the corresponding parameter is set to 1 (the identity), and if no noise needs to be added, the corresponding noise parameter is set to 0.
In one embodiment, the particular operating condition includes at least one of:
a working condition in which the vision sensor is overheated or aged; a signal-interference working condition; a working condition of transmission-channel or decoding errors; a working condition in which the camera lens is contaminated; a defocus working condition; a working condition in which the ambient light brightness of the shooting environment does not fall within a preset brightness interval; a working condition in which the target object in the image undergoes spatial displacement; and a working condition in which the resolution of the vision device is lower than a preset value.
In one embodiment, the degradation policy includes at least one of:
adding noise to the target image; blurring the target image; performing gray-scale transformation on the image; performing affine transformation on the image; and converting the target image into a low-resolution image by interval sampling.
In this embodiment, the strategy of adding noise to the target image can simulate the overheating/aging of the vision sensor, signal interference, and transmission-channel and decoding errors in the machine vision system; the added noise usually includes Gaussian noise and salt-and-pepper noise. The strategy of blurring the target image can simulate a contaminated camera lens (oil or dust contamination) and a defocused lens, defocus usually being caused by a change in the position of the camera or of the photographed object. The strategy of gray-scale transformation can simulate the working condition in which the ambient light brightness of the shooting environment does not fall within the preset brightness interval. The strategy of affine transformation can simulate spatial displacement of the target object in the image; such displacement generally includes translation, rotation, and tilt. The strategy of converting the target image into a low-resolution image by interval sampling can simulate the working condition in which the resolution of the vision device is lower than the preset value, which is usually caused by switching the camera from a high-resolution model to a low-resolution model.
In one embodiment, the above step S13 can be implemented as the following steps S31-S33:
in step S31, determining scores of the plurality of degraded images according to the degradation degrees of the plurality of degraded images;
in step S32, determining a degraded image whose score is greater than a preset score as a degraded image satisfying the specific scoring criterion;
in step S33, generating an image sample set from all degraded images that satisfy the specific scoring criterion.
In this embodiment, the scores of the plurality of degraded images are determined according to the degradation degrees of the plurality of degraded images; determining the degraded image with the score larger than the preset score as the degraded image meeting the specific standard; a sample set of images is generated from all degraded images that meet certain criteria.
It should be noted that, when the scores of the plurality of degraded images are determined according to their degrees of degradation, the evaluation may be implemented by an image quality assessment (IQA) method. IQA is generally used to evaluate image and video compression quality, compare the performance of image processing algorithms, monitor the quality of video received by a terminal, and so on. IQA methods are divided into subjective and objective evaluation; the main difference is whether images are scored manually by viewers or automatically by a programmed algorithm. Automated equipment generally uses objective evaluation. Objective algorithms fall into three categories: full-reference, reduced-reference, and no-reference. Full-reference means evaluating the quality of a degraded image when the original image is available; since the present application can obtain reference standard images during debugging, a full-reference image quality algorithm is applied.
A full-reference algorithm may score images with PSNR (Peak Signal-to-Noise Ratio), SSIM (Structural Similarity Index), CW-SSIM (Complex Wavelet Structural Similarity Index), or similar. SSIM mainly evaluates the differences in brightness, contrast, and structure between an image and a reference image; CW-SSIM reduces the sensitivity to slight translation and rotation on top of SSIM (a small displacement has little influence on the imaging condition and should not be scored too low), which better meets practical requirements. The earliest full-reference algorithm is PSNR, but PSNR scores often differ too much from subjective human evaluation, so SSIM is more applicable. SSIM scores generally lie in (0, 1), with negative values in certain cases: the result is 1 when the two images are identical and close to 0 under severe degradation. SSIM matches subjective perception while objectively reflecting image quality, and it compares three statistical characteristics of the two images, namely brightness (gray-level mean), contrast (variance), and structure (covariance), thereby comprehensively and objectively reflecting the difference between the image before and after degradation. In most degraded scenes the SSIM algorithm accurately measures the degree of degradation; it is only overly sensitive to affine-transformed (slightly translated or rotated) degraded images, scoring them too low. In practice, a slight translation or rotation of the whole workpiece does not demand subjective attention and has an extremely small influence on commercial industrial machine vision algorithms, so such images should not be scored too low.
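As a small concrete reference for the scores discussed here, PSNR can be computed in a few lines (a sketch; SSIM and CW-SSIM require considerably more machinery, such as windowed statistics and wavelet decompositions):

```python
import numpy as np

def psnr(reference: np.ndarray, degraded: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means less degradation.
    Identical images give infinity."""
    mse = np.mean((reference.astype(np.float64) - degraded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return float(10.0 * np.log10(peak * peak / mse))
```

A uniform pixel offset of 10 gray levels, for example, yields roughly 28 dB, which illustrates why PSNR tracks pixel error rather than perceived structure.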
To handle such situations, CW-SSIM is introduced to overcome this deficiency. CW-SSIM performs a complex-wavelet pyramid decomposition of the image, computes the complex-wavelet SSIM value of each sub-band, and takes the weighted average of these values.
In one embodiment, the above step S16 may be implemented as the following steps A1-A2:
in step A1, comparing the first recognition result and the second recognition result;
in step A2, when the ratio of the number of first recognition results consistent with the second recognition result to the total number of first recognition results reaches or exceeds a preset proportion, determining that the machine vision system test passes.
For example, the target image is a clear image of a workpiece to be tested with part A mounted in place, and the recognition result of the target image in the machine vision system is "OK". Suppose 10 degraded images are generated by applying various degradation strategies to the target image; some of the 10 degraded images may have only one degradation strategy applied, and others may have multiple degradation strategies applied. The 10 degraded images are input to the machine vision system. Assuming the preset proportion is 80%, if 8 or more of the 10 recognition results are "OK", the machine vision system is determined to pass the test.
For another example, the target image is a clear image of a workpiece to be tested whose part A is not mounted in place, and the recognition result of the target image in the machine vision system is "NG". Again, 10 degraded images are generated by applying various degradation strategies to the target image; some may have only one degradation strategy applied and others multiple. The 10 degraded images are input to the machine vision system. Assuming the preset proportion is 80%, if 8 or more of the 10 recognition results are "NG", the machine vision system is determined to pass the test.
Fig. 6 is a schematic diagram of a hardware structure of a machine vision system testing system according to the present application, including:
at least one processor 620; and
a memory 604 communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to implement the machine vision system testing method of any of the above embodiments.
Referring to fig. 6, the machine vision system testing system 600 may include one or more of the following components: processing component 602, memory 604, power component 606, input/output (I/O) interface 612, and communication component 616.
The processing component 602 generally controls the overall operation of the machine vision system testing system 600. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components.
The memory 604 is configured to store various types of data to support operation of the machine vision system test system 600. Examples of such data include instructions for any application or method operating on the machine vision system testing system 600. The memory 604 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power component 606 provides power to the various components of the machine vision system testing system 600. The power component 606 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the machine vision system testing system 600.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules.
The communication component 616 is configured to enable the machine vision system testing system 600 to communicate with other devices and cloud platforms in a wired or wireless manner. The machine vision system testing system 600 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
The processor 620 can be connected, through the communication component 616, to an image capture device 608 in the industrial scene, so as to obtain the target image sent by the image capture device 608. The processor 620 can also be connected to the machine vision system 610 through the communication component 616, so that the processor 620 can input the plurality of degraded images generated from the target image to the machine vision system 610 to be tested.
In an exemplary embodiment, the machine vision system testing system 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the above-described machine vision system testing methods.
The present application further provides a computer-readable storage medium, wherein when instructions in the storage medium are executed by a processor corresponding to a machine vision system testing system, the machine vision system testing system is enabled to implement the machine vision system testing method described in any of the above embodiments.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.