Disclosure of Invention
In view of the above, the main objective of the present invention is to provide an image self-generating system based on a virtual pixel array. The system does not directly acquire image information of a target; instead, it indirectly acquires spectrum information of the light reflected by the target, converts that spectrum information into pixel data corresponding to an image, and then generates a configuration matrix to correct the generated image data and produce a final image. Compared with the prior art, the system thereby avoids the problems that a physical pixel array introduces into image acquisition and image quality: although the image is generated somewhat more slowly, the cost of image acquisition is reduced and the image quality is improved. The invention also applies image denoising and image enhancement to the final image, further improving its quality.
In order to achieve the above purpose, the technical scheme of the invention is realized as follows:
an image self-generating system based on virtual pixel array generation, the system comprising:
an optical pixel acquisition module configured to acquire photosensitive pixel data of a target; the photosensitive pixel data is pixel value data calculated from the spectrum data of the light reflected by the target under an illumination condition;
a virtual pixel array construction module configured to construct a virtual pixel array; the virtual pixel array is a matrix in which the elements of each row, from left to right, and the elements of each column, from top to bottom, are arranged cyclically in the order R value, G value, B value; the R value, G value and B value are the values of the respective RGB color channels;
a virtual pixel array marking module configured to mark a numerical value on each element in the virtual pixel array, based on the photosensitive pixel data of the target acquired by the optical pixel acquisition module, so as to generate an image virtual pixel array;
a configuration matrix generation module configured to generate a configuration matrix based on the pixel value data acquired by the optical pixel acquisition module and on the parameters of the optical pixel acquisition module; and
an image synthesis unit configured to scan the image virtual pixel array line by line, convert each numerical value in the image virtual pixel array into the corresponding value of a digital image matrix by using a preset pixel value conversion model to obtain an intermediate image matrix, and then correct the intermediate image matrix by using the generated configuration matrix to generate a final image.
Further, the optical pixel acquisition module is a spectrum sensor capable of sensing spectrum data of the light reflected by a target under an illumination condition, the spectrum data being the wavelength of the light reflected by the target under the illumination condition; the spectrum data is converted into corresponding pixel value data by the following formula: wherein R, G and B are respectively the R value, G value and B value of the pixel under the RGB channels, and λ is the wavelength of the light reflected at a given point of the target.
Further, the virtual pixel array marking module marks the value of each element in the virtual pixel array to generate the image virtual pixel array; the conversion relation between the element values in the image virtual pixel array and the calculated pixel value data is: P = S × γ; wherein P is the element value in the converted image pixel array, γ is a conversion coefficient whose value ranges from 0.1 to 0.2, and S is the R value, G value or B value under the RGB channels in the calculated pixel value data.
Further, the method by which the configuration matrix generation module generates the configuration matrix based on the pixel value data acquired by the optical pixel acquisition module and the parameters of the optical pixel acquisition module comprises the following steps: calculating the average value of the pixel value data; if the calculated average value exceeds a set threshold value, obtaining a first coefficient, denoted D; if the calculated average value is below the set threshold value, obtaining a second coefficient, denoted S; the parameters of the optical pixel acquisition module include sensitivity, impulse response characteristics, frequency response characteristics, illumination characteristics, spectral characteristics and temperature characteristics; based on the parameters of the optical pixel acquisition module, obtaining a normalized adjustment coefficient value, denoted O; the generated configuration matrix is expressed using the following formula: where n is the number of entries of the matrix.
Further, the image synthesis unit scans the image virtual pixel array line by line and converts each numerical value in the image virtual pixel array into the corresponding value of the digital image matrix by using the preset pixel value conversion model: L = B(A); wherein L is the converted digital image matrix, A is the image virtual pixel array, and B() denotes the operation of converting decimal data into binary data.
Further, the image synthesis unit obtains the intermediate image matrix and then corrects it by using the generated configuration matrix; the final image is generated according to: End = L + A; wherein End is the image array corresponding to the final image, L is the intermediate image matrix and A is the configuration matrix.
Further, the system further comprises an image denoising module configured to denoise the generated final image; the method by which the image denoising module denoises the final image comprises: selecting a suitable filter, setting the highest decomposition level N, and calculating the filter coefficients of the final image at each level; thresholding the filter coefficients, in which a suitable threshold is determined in each direction of each decomposition level and the detail filter coefficients of each level are processed with a suitable threshold function, so that the filter coefficients of the image signal are retained as far as possible while the filter coefficients of the noise are set to zero; and reconstructing the image signal using the approximation filter coefficients of the Nth level and the processed detail filter coefficients of levels 1 to N.
Further, the method for determining the threshold value specifically comprises: determining the threshold suitable for each level, where i = 1, …, N and f ∈ (0, 1); r denotes the neighborhood average, σ is the noise standard deviation, and f is the adjustment factor; σ is estimated as shown in the following formula: σ = median(|W_HH1|) / 0.6745, where W_HH1 denotes the first-level HH subband coefficients.
Further, the system further comprises an image enhancement module; the image enhancement module performs image enhancement on the final image denoised by the image denoising module; the method by which the image enhancement module enhances the final image comprises: synthesizing the features of the final image to obtain a first illumination map corresponding to the final image, the resolution of the first illumination map being lower than that of the final image; obtaining, based on the first illumination map, a mapping relation for mapping the image into a second illumination map; mapping the final image based on the mapping relation to obtain the second illumination map, whose resolution is the same as that of the final image; and performing image enhancement processing on the final image according to the second illumination map.
Further, synthesizing the features of the original image to obtain the first illumination map corresponding to the original image includes: extracting local features and global features of the original image by means of a convolutional network; and synthesizing the local features and the global features to obtain the first illumination map corresponding to the original image.
The image self-generating system based on virtual pixel array generation has the following beneficial effects:
1. High image quality: the invention is entirely different from the traditional scheme and the prior art in that it does not acquire image information directly through a physical pixel array; instead, it acquires spectrum data of the target and indirectly generates an image using a preset virtual pixel array. Strictly speaking, the generated image is not directly acquired but artificially synthesized, and the synthesis process completely avoids the image problems caused by a physical pixel array. Because the virtual pixel array is an algorithm that simulates a pixel array to generate images, it is very convenient to maintain and improve: on the one hand, the cost can be greatly reduced; on the other hand, image acquisition is freed from the constraints of physical hardware, which further reduces the difficulty of image processing. Moreover, a defect in a single element of a physical pixel array easily degrades the whole image, and image quality also deteriorates after heavy use; the virtual pixel array completely avoids these problems.
2. Low algorithm complexity: although the invention guarantees image quality by artificially synthesizing images, such schemes are often difficult to apply in practice because artificial synthesis typically consumes a great amount of computing resources. In realizing the method, however, the invention reduces the algorithm complexity in a targeted way, so that the occupied system resources are reduced to a certain extent. Specifically, the invention does not generate images by full progressive scanning but by a spectrum-data conversion method; that is, it performs digital image processing rather than analog image processing, emphasizing digital handling of the image instead of conversion based on analog signals, which greatly reduces the complexity and is also well suited to the computer systems in use today.
Detailed Description
The method of the present invention will be described in further detail with reference to the accompanying drawings.
Example 1
As shown in fig. 1, an image self-generating system based on virtual pixel array generation, the system comprising:
an optical pixel acquisition module configured to acquire photosensitive pixel data of a target; the photosensitive pixel data is pixel value data calculated from the spectrum data of the light reflected by the target under an illumination condition;
a virtual pixel array construction module configured to construct a virtual pixel array; the virtual pixel array is a matrix in which the elements of each row, from left to right, and the elements of each column, from top to bottom, are arranged cyclically in the order R value, G value, B value; the R value, G value and B value are the values of the respective RGB color channels;
a virtual pixel array marking module configured to mark a numerical value on each element in the virtual pixel array, based on the photosensitive pixel data of the target acquired by the optical pixel acquisition module, so as to generate an image virtual pixel array;
a configuration matrix generation module configured to generate a configuration matrix based on the pixel value data acquired by the optical pixel acquisition module and on the parameters of the optical pixel acquisition module; and
an image synthesis unit configured to scan the image virtual pixel array line by line, convert each numerical value in the image virtual pixel array into the corresponding value of a digital image matrix by using a preset pixel value conversion model to obtain an intermediate image matrix, and then correct the intermediate image matrix by using the generated configuration matrix to generate a final image.
Specifically, in the field of photography, reflected light means light that does not travel directly from the light source onto the scene, but is first reflected by a prop and then falls on the subject. The props used for reflection are not plain flat surfaces but reflector boards produced by special processes. The reflected light thus takes on the quality of scattered light, that is, it is softened. Reflected light is generally weaker than direct light but stronger than naturally scattered light, so the lit surface of the subject appears softer. It is most commonly used when shooting in natural light: the subject faces away from the light source, and a reflector board bounces light back to fill in the subject's face. Reflected light is also often used when photographing merchandise or still lifes.
Example 2
On the basis of the above embodiment, the optical pixel acquisition module is a spectrum sensor capable of sensing spectrum data of the light reflected by a target under an illumination condition, the spectrum data being the wavelength of the light reflected by the target under the illumination condition; the spectrum data is converted into corresponding pixel value data by the following formula: wherein R, G and B are respectively the R value, G value and B value of the pixel under the RGB channels, and λ is the wavelength of the light reflected at a given point of the target.
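Because the conversion formula itself is not reproduced in this text, the following sketch substitutes a well-known piecewise-linear approximation of the visible spectrum (after Dan Bruton) purely for illustration; the band boundaries, the function name and the 8-bit scaling are assumptions, not the patent's formula:

```python
def wavelength_to_rgb(lam):
    """Approximate 8-bit RGB values for a visible wavelength lam (nanometres).

    Illustrative piecewise-linear approximation only; this is NOT the
    patent's (unreproduced) conversion formula.
    """
    if 380 <= lam < 440:
        r, g, b = -(lam - 440) / (440 - 380), 0.0, 1.0
    elif 440 <= lam < 490:
        r, g, b = 0.0, (lam - 440) / (490 - 440), 1.0
    elif 490 <= lam < 510:
        r, g, b = 0.0, 1.0, -(lam - 510) / (510 - 490)
    elif 510 <= lam < 580:
        r, g, b = (lam - 510) / (580 - 510), 1.0, 0.0
    elif 580 <= lam < 645:
        r, g, b = 1.0, -(lam - 645) / (645 - 580), 0.0
    elif 645 <= lam <= 780:
        r, g, b = 1.0, 0.0, 0.0
    else:
        # Outside the visible range: no response.
        r, g, b = 0.0, 0.0, 0.0
    # Scale each channel to an 8-bit value.
    return tuple(round(255 * c) for c in (r, g, b))
```

For example, a 700 nm reflection maps to pure red, while an ultraviolet or infrared wavelength maps to black.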
Rich knowledge has accumulated about the structure, generation mechanism, properties and applications of spectra in scientific research and production practice, forming an important discipline: spectroscopy. Spectroscopy is very widely applied. Each atom has its own unique spectrum, as distinctive as a human fingerprint, and the properties of atomic spectral line systems are closely connected with atomic structure, so spectra are an important basis for studying atomic structure. Applying the principles and experimental methods of spectroscopy, and since each element has specific identifying spectral lines, comparing the line spectrum produced by a substance with the identifying lines of known elements reveals which elements the substance consists of; spectra can therefore be used to qualitatively analyze the chemical composition of substances. Spectral analysis has extremely high sensitivity and accuracy: in geological exploration it can detect trace noble metals, rare elements or radioactive elements contained in ores. It is also fast, greatly improving working efficiency, and can further be used to study the chemical composition of celestial bodies and to calibrate length standards.
Example 3
Based on the above embodiment, the virtual pixel array marking module marks the value of each element in the virtual pixel array to generate the image virtual pixel array; the conversion relation between the element values in the image virtual pixel array and the calculated pixel value data is: P = S × γ; wherein P is the element value in the converted image pixel array, γ is a conversion coefficient whose value ranges from 0.1 to 0.2, and S is the R value, G value or B value under the RGB channels in the calculated pixel value data.
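A minimal sketch of the cyclic R/G/B arrangement and of the marking step P = S × γ; the choice of γ = 0.15 within the stated 0.1 to 0.2 range, the (r + c) mod 3 placement rule and the per-site (R, G, B) sample layout are assumptions for illustration:

```python
GAMMA = 0.15  # conversion coefficient gamma; the patent allows 0.1 to 0.2

def build_virtual_pixel_array(rows, cols):
    """Channel labels cycle R, G, B both left-to-right along each row and
    top-to-bottom along each column: element (r, c) gets channel (r + c) mod 3."""
    return [["RGB"[(r + c) % 3] for c in range(cols)] for r in range(rows)]

def mark_array(channel_array, samples, gamma=GAMMA):
    """Mark each element with P = S * gamma, where S is the measured channel
    value for that site (samples[r][c] is an (R, G, B) triple)."""
    return [[samples[r][c]["RGB".index(ch)] * gamma
             for c, ch in enumerate(row)]
            for r, row in enumerate(channel_array)]
```

A 2-by-3 array built this way reads R, G, B along the first row and G, B, R along the second, so both rows and columns cycle through the three channels.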
Specifically, among the RGB colors, the G color has the largest relative luminosity, followed in order by R and B. Therefore, the spatial distribution of relative luminosity within a pixel is unbalanced (shifted) depending on the arrangement of the R, G and B sub-pixels within the pixel. For example, in the RGB vertical-stripe system, the G sub-pixel is arranged in the center of the pixel. This configuration places the spatial maximum of the sum of the relative luminosities of the RGB colors at the pixel's center of gravity and reduces the imbalance of relative luminosity within the pixel. In the PenTile method, by contrast, the G sub-pixels are arranged in columns adjacent to the pixel edges, and in the S-stripe method the G sub-pixels are arranged at the pixel corners. The PenTile and S-stripe configurations shift the spatial maximum of the summed relative luminosities away from the pixel center and increase the imbalance of relative luminosity within the pixel.
The imbalance of relative luminosity is hardly noticeable inside a displayed object, but where an image edge runs along the row or column direction of the pixels, the imbalance becomes conspicuous and causes the edge to appear colored (so-called "color streaks"). In particular, in the S-stripe method, when viewed along the pixel diagonal that does not pass through the G sub-pixel, the G sub-pixel lies at the position farthest from the pixel's center of gravity. This structure makes the imbalance of relative luminosity noticeable and leads to a significant degradation of display quality caused by color streaks. The present invention addresses these issues.
Example 4
On the basis of the above embodiment, the method by which the configuration matrix generation module generates the configuration matrix based on the pixel value data acquired by the optical pixel acquisition module and the parameters of the optical pixel acquisition module specifically comprises: calculating the average value of the pixel value data; if the calculated average value exceeds a set threshold value, obtaining a first coefficient, denoted D; if the calculated average value is below the set threshold value, obtaining a second coefficient, denoted S; the parameters of the optical pixel acquisition module include sensitivity, impulse response characteristics, frequency response characteristics, illumination characteristics, spectral characteristics and temperature characteristics; based on the parameters of the optical pixel acquisition module, obtaining a normalized adjustment coefficient value, denoted O; the generated configuration matrix is expressed using the following formula, where n is the number of entries of the matrix.
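Since the matrix expression itself is not reproduced in this text, the following sketch only illustrates the described selection steps; the numeric threshold, the coefficient values D and S, and the flat n-entry layout are all assumed for illustration:

```python
import statistics

THRESHOLD = 128.0   # assumed threshold; not specified numerically in the text
D_COEFF = 1.1       # first coefficient D (illustrative value)
S_COEFF = 0.9       # second coefficient S (illustrative value)

def configuration_matrix(pixel_values, o, n):
    """Sketch of the configuration-matrix step.

    pixel_values: flat sequence of acquired pixel values.
    o: normalized adjustment coefficient derived from module parameters.
    n: number of entries of the matrix.
    Picks D above the threshold, S below it, and fills n entries with coeff * o.
    """
    mean = statistics.fmean(pixel_values)
    coeff = D_COEFF if mean > THRESHOLD else S_COEFF
    return [coeff * o] * n
```

With bright input data (mean above the threshold) the matrix entries scale by D × O; with dim data they scale by S × O.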
Specifically, in the disciplines of signals and systems or circuit theory, the impulse response generally refers to the output (response) of a system when the input is a unit impulse function. For a continuous-time system, the impulse response is generally represented by a function h(t). For a deterministic linear system without random noise, when the input signal is the impulse function δ(t), the output response h(t) of the system is called the impulse response function.
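The definition can be illustrated with a discrete-time sketch: feeding a unit impulse into a linear time-invariant system reproduces the impulse response itself (the coefficient values below are arbitrary):

```python
def fir_output(h, x):
    """Output of a discrete LTI system with impulse response h for input x,
    computed by direct convolution y[n] = sum_k h[k] * x[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

# A unit impulse as input reproduces h itself, illustrating the definition.
h = [0.5, 0.25, 0.125]
delta = [1.0, 0.0, 0.0]
response = fir_output(h, delta)
```

Here `response` begins with exactly the coefficients of `h`, which is the discrete analogue of h(t) being the response to δ(t).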
Example 5
On the basis of the above embodiment, the image synthesis unit scans the image virtual pixel array line by line and converts each numerical value in the image virtual pixel array into the corresponding value of the digital image matrix by using the preset pixel value conversion model: L = B(A); wherein L is the converted digital image matrix, A is the image virtual pixel array, and B() denotes the operation of converting decimal data into binary data.
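A minimal sketch of L = B(A), assuming each decimal element is rendered as a fixed-width binary string (the 8-bit width is an assumption):

```python
def to_binary_matrix(A, width=8):
    """Sketch of L = B(A): convert each decimal element of the matrix A
    into a fixed-width binary string."""
    return [[format(int(v), "0{}b".format(width)) for v in row] for row in A]
```

For instance, the row [5, 255] becomes ["00000101", "11111111"].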
In particular, digital image data may be represented by a matrix, so a digital image can be analyzed and processed using matrix theory and matrix algorithms. The most typical example is a grayscale image, as shown in the following figure: its pixel data form a matrix whose rows correspond to the image height (in pixels), whose columns correspond to the image width (in pixels), whose elements correspond to the pixels of the image, and whose element values are the gray values of those pixels.
Because a digital image can be represented in matrix form, in a computer digital image processing program the image data is typically stored in a two-dimensional array, as shown below. The rows of the two-dimensional array correspond to the image height, the columns to the image width, the elements to the pixels of the image, and the element values to the gray values of the pixels. Storing the digital image in a two-dimensional array matches the row-and-column character of a two-dimensional image and also makes addressing convenient for a program, so computer image programming is very convenient.
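For illustration, a small grayscale image stored and addressed as a two-dimensional array might look like this (the particular gray values are arbitrary):

```python
# A 2-by-3 grayscale image stored as a two-dimensional array:
# rows correspond to the image height, columns to the image width,
# and element values are the gray values of the pixels.
image = [
    [ 12,  80, 255],
    [  0, 130,  64],
]
height, width = len(image), len(image[0])
gray = image[1][2]  # addressing the pixel at row 1, column 2
```

Row-then-column indexing maps directly onto the matrix convention described above.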
Example 6
On the basis of the above embodiment, the image synthesis unit obtains the intermediate image matrix and then corrects it by using the generated configuration matrix; the final image is generated according to: End = L + A; wherein End is the image array corresponding to the final image, L is the intermediate image matrix and A is the configuration matrix.
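A minimal sketch of End = L + A as elementwise addition; the clipping to the 8-bit range is an added precaution, not stated in the text:

```python
def correct_image(intermediate, config):
    """Sketch of End = L + A: elementwise addition of the intermediate image
    matrix L and the configuration matrix A, clipped to the 8-bit range."""
    return [[min(255, max(0, l + a)) for l, a in zip(row_l, row_a)]
            for row_l, row_a in zip(intermediate, config)]
```

A value that would exceed 255 after correction is saturated rather than wrapped.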
Example 7
On the basis of the above embodiment, the system further comprises an image denoising module configured to denoise the generated final image; the method by which the image denoising module denoises the final image comprises: selecting a suitable filter, setting the highest decomposition level N, and calculating the filter coefficients of the final image at each level; thresholding the filter coefficients, in which a suitable threshold is determined in each direction of each decomposition level and the detail filter coefficients of each level are processed with a suitable threshold function, so that the filter coefficients of the image signal are retained as far as possible while the filter coefficients of the noise are set to zero; and reconstructing the image signal using the approximation filter coefficients of the Nth level and the processed detail filter coefficients of levels 1 to N.
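The three steps above can be sketched in one dimension with a single-level Haar decomposition and soft thresholding; the specific filter (Haar) and threshold function (soft) are assumptions chosen for illustration, not the patent's prescribed choices:

```python
def haar_step(x):
    """One decomposition level: approximation and detail coefficient halves."""
    a = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return a, d

def soft(c, t):
    """Soft threshold: zero small coefficients, shrink the rest toward zero."""
    return 0.0 if abs(c) <= t else (c - t if c > 0 else c + t)

def denoise(x, t):
    """Decompose, threshold the detail coefficients, reconstruct."""
    a, d = haar_step(x)
    d = [soft(c, t) for c in d]
    # Reconstruction: x[2i] = a[i] + d[i], x[2i+1] = a[i] - d[i].
    out = []
    for ai, di in zip(a, d):
        out += [ai + di, ai - di]
    return out
```

Small detail coefficients (attributed to noise) are zeroed, so a flat signal passes through unchanged while small oscillations are damped.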
In particular, a mean filter using the neighborhood averaging method is very suitable for removing grain noise from scanned images. Neighborhood averaging strongly suppresses noise but, being an average, also causes blurring, the degree of blurring being proportional to the radius of the neighborhood.
The smoothing achieved by a geometric mean filter is comparable to that of an arithmetic mean filter, but less image detail is lost in the filtering process.
A harmonic mean filter works better for "salt" noise but is not suitable for "pepper" noise; it also handles other noise, such as Gaussian noise, well.
A contraharmonic mean filter is better suited to impulse noise, but it has the disadvantage that one must know in advance whether the noise is dark or bright in order to choose the correct sign of the filter order; a wrong sign can have disastrous consequences.
Example 8
On the basis of the above embodiment, the method for determining the threshold value specifically comprises: determining the threshold suitable for each level, where i = 1, …, N and f ∈ (0, 1); r denotes the neighborhood average, σ is the noise standard deviation, and f is the adjustment factor; σ is estimated as shown in the following formula: σ = median(|W_HH1|) / 0.6745, where W_HH1 denotes the first-level HH subband coefficients.
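A minimal sketch of the standard median-based noise estimate σ = median(|W_HH1|) / 0.6745:

```python
import statistics

def estimate_sigma(hh1):
    """Robust noise estimate from the first-level HH subband coefficients:
    sigma = median(|W_HH1|) / 0.6745."""
    return statistics.median(abs(c) for c in hh1) / 0.6745
```

If every HH coefficient had magnitude 0.6745, the estimate would be exactly 1, matching the scaling constant's role of converting a median absolute deviation into a standard deviation for Gaussian noise.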
Example 9
On the basis of the above embodiment, the system further comprises an image enhancement module; the image enhancement module performs image enhancement on the final image denoised by the image denoising module; the method by which the image enhancement module enhances the final image comprises: synthesizing the features of the final image to obtain a first illumination map corresponding to the final image, the resolution of the first illumination map being lower than that of the final image; obtaining, based on the first illumination map, a mapping relation for mapping the image into a second illumination map; mapping the final image based on the mapping relation to obtain the second illumination map, whose resolution is the same as that of the final image; and performing image enhancement processing on the final image according to the second illumination map.
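A heavily simplified, Retinex-style sketch of the four steps: a 2x2 block mean stands in for the feature-synthesized first (low-resolution) illumination map, nearest-neighbour upsampling stands in for the mapping relation that produces the second (full-resolution) map, and division by the second map stands in for the enhancement; all three stand-ins are assumptions, not the patent's convolutional-network method:

```python
def enhance(img, eps=1e-6):
    """Illumination-map enhancement sketch for an even-sized grayscale image.

    1. First illumination map: 2x2 block means (lower resolution).
    2/3. Second illumination map: nearest-neighbour upsample to full size.
    4. Enhancement: divide the image by the second map (reflectance).
    """
    h, w = len(img), len(img[0])
    # Step 1: low-resolution illumination map from 2x2 block means.
    low = [[(img[2 * r][2 * c] + img[2 * r][2 * c + 1] +
             img[2 * r + 1][2 * c] + img[2 * r + 1][2 * c + 1]) / 4
            for c in range(w // 2)] for r in range(h // 2)]
    # Steps 2-3: full-resolution illumination map by nearest-neighbour mapping.
    full = [[low[r // 2][c // 2] for c in range(w)] for r in range(h)]
    # Step 4: enhancement as division by the illumination map.
    return [[img[r][c] / (full[r][c] + eps) for c in range(w)]
            for r in range(h)]
```

A uniformly lit region maps to reflectance close to 1, while pixels brighter than their local illumination estimate are boosted above 1.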
Specifically, image enhancement can be divided into two main categories: a frequency domain method and a spatial domain method.
The former regards the image as a two-dimensional signal and enhances it via a two-dimensional Fourier transform. Noise in the image can be removed by low-pass filtering (passing only low-frequency signals), while high-pass filtering enhances high-frequency signals such as edges, making a blurred image clear.
Representative algorithms of the latter, spatial-domain, class are the local averaging method and the median filtering method (taking the middle pixel value in a local neighborhood), which can be used to remove or attenuate noise.
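A minimal one-dimensional median filter, illustrating the "middle pixel value in the local neighborhood" idea (edge padding by replication is an assumption):

```python
import statistics

def median_filter_1d(x, k=3):
    """Median filter: replace each sample by the median of its k-neighborhood,
    replicating the edge samples so the output has the same length."""
    r = k // 2
    padded = [x[0]] * r + list(x) + [x[-1]] * r
    return [statistics.median(padded[i:i + k]) for i in range(len(x))]
```

An isolated impulse (a single 99 amid 1s) is removed entirely, which is exactly why the median filter suppresses salt-and-pepper noise while preserving edges better than averaging.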
Example 10
On the basis of the above embodiment, synthesizing the features of the original image to obtain the first illumination map corresponding to the original image includes: extracting local features and global features of the original image by means of a convolutional network; and synthesizing the local features and the global features to obtain the first illumination map corresponding to the original image.
It will be clear to those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above and the related description may refer to the corresponding process in the foregoing method embodiment, which is not repeated here.
It should be noted that, in the system provided in the foregoing embodiments, only the division into the foregoing functional units is illustrated; in practical application, the foregoing functions may be allocated to different functional units, that is, the units or steps in the embodiments of the present invention may be further decomposed or combined. For example, the units of the foregoing embodiments may be combined into one unit or further split into multiple sub-units to accomplish all or part of the functions described above. The names of the units and steps in the embodiments of the present invention are only used for distinguishing the units or steps and are not to be construed as an undue limitation of the invention.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the storage device and the processing device described above and the related description may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
Those of skill in the art will appreciate that the various illustrative units and method steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the programs corresponding to the software units and method steps may be stored in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. To clearly illustrate this interchangeability of electronic hardware and software, various illustrative components and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as electronic hardware or software depends upon the particular application and the design constraints imposed on the solution. Those skilled in the art may implement the described functionality in different ways for each particular application, but such implementation is not to be regarded as limiting.
The terms "first," "second," and the like are used for distinguishing between similar objects and not for describing a particular sequential or chronological order.
The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or unit/apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or unit/apparatus.
Thus far, the technical solution of the present invention has been described in connection with the preferred embodiments shown in the drawings, but it will be readily understood by those skilled in the art that the scope of protection of the present invention is not limited to these specific embodiments. Equivalent modifications and substitutions of the related technical features may be made by those skilled in the art without departing from the principles of the present invention, and such modifications and substitutions fall within the scope of the present invention.
The foregoing description is only of the preferred embodiments of the present invention and is not intended to limit the scope of the present invention.