
CN114002931A - Large-view-field holographic projection method and system based on deep learning accelerated calculation - Google Patents

Large-view-field holographic projection method and system based on deep learning accelerated calculation

Info

Publication number
CN114002931A
CN114002931A
Authority
CN
China
Prior art keywords
diffraction
plane
neural network
projection
field
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111171388.8A
Other languages
Chinese (zh)
Inventor
苏萍
蔡超
汪郡容
马建设
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen International Graduate School of Tsinghua University
Original Assignee
Shenzhen International Graduate School of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen International Graduate School of Tsinghua University filed Critical Shenzhen International Graduate School of Tsinghua University
Priority to CN202111171388.8A priority Critical patent/CN114002931A/en
Publication of CN114002931A publication Critical patent/CN114002931A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H 1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H 1/22 Processes or apparatus for obtaining an optical image from holograms
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H 1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H 1/0005 Adaptation of holography to specific applications
    • G03H 1/0011 Adaptation of holography to specific applications for security or authentication
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H 1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H 1/04 Processes or apparatus for producing holograms
    • G03H 1/08 Synthesising holograms, i.e. holograms synthesized from objects or objects from holograms
    • G03H 1/0808 Methods of numerical synthesis, e.g. coherent ray tracing [CRT], diffraction specific
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H04N 13/363 Image reproducers using image projection screens
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H 1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H 1/0005 Adaptation of holography to specific applications
    • G03H 2001/0088 Adaptation of holography to specific applications for video-holography, i.e. integrating hologram acquisition, transmission and display
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03H HOLOGRAPHIC PROCESSES OR APPARATUS
    • G03H 1/00 Holographic processes or apparatus using light, infrared or ultraviolet waves for obtaining holograms or for obtaining an image from them; Details peculiar thereto
    • G03H 1/04 Processes or apparatus for producing holograms
    • G03H 1/08 Synthesising holograms, i.e. holograms synthesized from objects or objects from holograms
    • G03H 1/0808 Methods of numerical synthesis, e.g. coherent ray tracing [CRT], diffraction specific
    • G03H 2001/0816 Iterative algorithms

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Holography (AREA)

Abstract

The invention provides a large-view-field holographic projection method and system based on deep learning accelerated calculation, comprising the following steps: S1, calculating the holograms generated in a lens-free projection system by the Gerchberg-Saxton algorithm and the angular spectrum method, and producing a data set for training a U-shaped neural network; S2, constructing a convolutional neural network structure based on the U-shaped neural network; and S3, inputting the data set into the U-shaped neural network for training, and saving the trained U-shaped neural network model. The invention overcomes the slow convergence of traditional iterative algorithms, improves the imaging quality of the neural network, produces computed holograms of good quality, realizes dynamic real-time projection, and has a degree of generality.

Description

Large-view-field holographic projection method and system based on deep learning accelerated calculation
Technical Field
The invention relates to the technical field of computer generated holography, in particular to a large-view-field holographic projection method and system based on deep learning accelerated computation.
Background
Holographic projection requires no lens in principle, which reduces the size of the optical system; it can reconstruct high-contrast images, and it can also reconstruct color images using a spatial light modulator (SLM). A diffractive optical element (DOE) is a transmissive optical phase modulator: a diffraction-field image of arbitrary shape can be obtained by changing the phase distribution across the DOE plane (i.e., the step profile of the element). In conventional computer-generated holography, a computed hologram is loaded onto a spatial light modulator to obtain a pattern on the projection surface. Since the diffraction process of a hologram can be described by Fresnel diffraction, the Fresnel diffraction integral is used as a paraxial-approximation solution. A common numerical diffraction algorithm obtains the hologram with an inverse single fast Fourier transform (S-FFT) algorithm, which is suitable for large diffraction distances. To improve accuracy, the Gerchberg-Saxton (G-S) algorithm is often used: each iteration applies one forward and one inverse S-FFT, and a hologram of better quality is obtained after many iterations. However, the Gerchberg-Saxton algorithm has two significant disadvantages: 1. the calculation is time-consuming; 2. in the single-FFT algorithm, the sampling interval on the projection plane is subject to the following limitation:
Δx = λd/(NΔx0) (1)
where Δx is the sampling interval of the projection surface, λ is the wavelength, d is the diffraction distance, N is the number of samples, and Δx0 is the sampling interval of the hologram. It is therefore difficult to obtain a large projected image in a practical experimental system, which limits the application range of holographic projection.
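For a sense of scale, the following sketch evaluates constraint (1) numerically. The 660 nm wavelength and 400 mm diffraction distance match the embodiments described later; the sample count N and hologram pitch Δx0 are illustrative assumptions, not values from the patent.

wavelength = 660e-9  # m; laser wavelength used in the embodiments below
d = 0.4              # m; DMD-to-DOE diffraction distance from the embodiments
N = 1024             # number of samples (assumed)
dx0 = 7.6e-6         # m; hologram sampling interval (assumed)

dx = wavelength * d / (N * dx0)  # equation (1)
print(f"projection sampling interval: {dx * 1e6:.1f} um")  # ~33.9 um
print(f"projected image width: {N * dx * 1e3:.1f} mm")     # ~34.7 mm

Under these assumed parameters the whole projected image spans only a few centimeters, which illustrates why the S-FFT sampling constraint limits the projection size.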
To further increase the viewing angle of the projection system, an effective approach is to illuminate the spatial light modulator with divergent spherical waves and use a corresponding double-sampling algorithm (DSF). However, the double-sampling algorithm is time-consuming and cannot be used in real-time projection systems.
Existing large-view-field holographic projection systems respond slowly and have low imaging quality. Current lens-free projection systems are based on computer-generated holography and compute the hologram of the modulation surface with an iterative algorithm, which is time-consuming and makes real-time transformation of the projected pattern difficult.
Disclosure of Invention
The invention provides a large-view-field holographic projection method and system based on deep learning accelerated calculation, which address the long computation time of traditional methods and realize real-time transformation of the projected pattern.
The technical scheme provided by the invention for achieving the purpose is as follows:
the large-view-field holographic projection method based on deep learning acceleration calculation comprises the following steps of:
s1, calculating a hologram generated in the lens-free projection system through a Gausherberg-Saxon algorithm, and making a data set for U-shaped neural network training;
s2, constructing a convolution neural network structure based on the U-shaped neural network;
and S3, inputting the data set into the U-shaped neural network for training and storing the trained U-shaped neural network model.
Further:
in step S1, the lens-free projection system includes a laser, a microscope objective, a pinhole, a digital micromirror device, a TIR prism, a diffractive optical element, and a projection surface;
the laser, the microscope objective and the pinhole are mechanically connected in sequence with a coaxial optical path; the TIR prism receives the divergent spherical wave emitted from the pinhole and directs it onto the digital micromirror device; the minimum distance between the digital micromirror device and the point light source at the pinhole is 80 mm; the distance between the plane of the digital micromirror device and the plane of the diffractive optical element is 400 mm; and the projection surface receives the spherical wave emitted from the diffractive optical element and displays the image;
laser emitted by the laser passes through the microscope objective and the pinhole and becomes divergent spherical waves;
the spherical wave is subjected to amplitude modulation of the digital micro-mirror device and phase modulation of the diffractive optical element in sequence, and an image is generated at a projection position in real time.
Further:
the lens-free projection system involves a forward diffraction process passing sequentially through the plane of the digital micromirror device, the plane of the diffractive optical element, and the projection plane:
the diffraction process is Fresnel diffraction, and the exact solution of the diffraction field can be calculated according to the following formulas:
U(x, y) = (exp(ikd)/(iλd)) ∫∫ U1(x1, y1) exp{ik[(x − x1)^2 + (y − y1)^2]/(2d)} dx1 dy1
U(x, y) = SASM[d2, Δx2, Δx] × exp(iφDOE) × SASM[d1, Δx1, Δx2] × U1(x1, y1)
where x and y respectively denote the two perpendicular coordinates in the imaging plane, U(x, y) is the complex amplitude distribution on the projection plane, U1(x1, y1) is the complex amplitude distribution on the plane of the digital micromirror device, exp is the exponential function with the natural constant e as its base, i is the imaginary unit, d (with d1 and d2) is the diffraction distance, A denotes the amplitude, φDOE is the phase on the diffractive optical element, Δx is the sampling interval on the projection plane, Δx1 is the sampling interval on the plane of the digital micromirror device, Δx2 is the sampling interval on the plane of the diffractive optical element, SASM denotes the scaled angular spectrum method, and k is the wave number.
Further:
the lens-free projection system further comprises an inverse diffraction process passing sequentially through the projection plane, the plane of the diffractive optical element, and the plane of the digital micromirror device:
the inverse diffraction process is Fraunhofer diffraction, and the exact solution of the diffraction field is calculated according to the following formula:
U1(x1, y1) = SASM^-1[d2, Δx, Δx2] × At exp(iφU) × SASM^-1[d1, Δx2, Δx1] × exp(−iφDOE)
the sampling interval Δx of the projection plane should be calculated according to the following formula:
Δx = λd/(NΔx0)
where λ is the wavelength, d is the diffraction distance, N is the number of samples, and Δx0 is the sampling interval of the hologram.
Further:
the specific process of the Geiger-Sakstone algorithm comprises the following steps:
s1-1, carrying out the forward diffraction transformation on the initial phase and the preset incident light field distribution to obtain the target plane light field distribution;
s1-2, introducing constraint on the target plane, namely replacing the amplitude distribution of the light field of the target plane after the forward diffraction calculation with the amplitude distribution of the light field of the required target plane while keeping the phase distribution unchanged;
s1-3, performing the transformation of the inverse diffraction to obtain the light field distribution of the diffraction surface;
s1-4, introducing constraint on a diffraction surface, namely replacing the light field amplitude distribution obtained by the inverse diffraction with the given incident light field amplitude distribution, and keeping the phase unchanged, thereby obtaining a value after one iteration and taking the value as the initial distribution of the next iteration;
s1-5, carrying out the transformation of the forward diffraction again, and continuing to stop the iteration loop until the iteration precision reaches the preset iteration exit condition: and the iteration result reaches the set precision or the maximum iteration times, and finally the amplitude hologram on the holographic surface is obtained.
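The following NumPy sketch illustrates this loop. The forward and inverse operators are passed in as functions (for example, S-FFT or scaled angular spectrum propagators); their implementations, the zero initial phase, and the error threshold are assumptions for illustration, not the patent's exact choices.

import numpy as np

def gerchberg_saxton(target_amp, incident_amp, forward, inverse,
                     max_iter=100, tol=1e-4):
    # S1-1: start from an initial phase and the preset incident amplitude
    phase = np.zeros_like(incident_amp)
    for _ in range(max_iter):
        field = forward(incident_amp * np.exp(1j * phase))
        # S1-2: target-plane constraint - keep phase, impose target amplitude
        field = target_amp * np.exp(1j * np.angle(field))
        # S1-3: inverse diffraction back to the diffraction (hologram) plane
        field = inverse(field)
        # S1-4: diffraction-plane constraint - keep phase, impose incident amplitude
        phase = np.angle(field)
        # S1-5: forward diffraction again; exit once the accuracy is reached
        recon = np.abs(forward(incident_amp * np.exp(1j * phase)))
        if np.mean((recon - target_amp) ** 2) < tol:
            break
    # the converged hologram-plane field, from which the amplitude hologram
    # loaded on the modulator is derived
    return incident_amp * np.exp(1j * phase)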
Further:
the data set in step S1 comprises a training set and a test set.
Further:
in step S2, the principle of the U-shaped neural network is as follows: each down-sampling step halves the length and width of the image, and four down-sampling steps extract the geometric features of the input image; after four up-sampling steps, the reconstructed image restored by decoding to the original size is output; to avoid vanishing gradients during network training, residual connections are added to pass gradients across layers; batch normalization is performed after each convolution to avoid overfitting.
Further:
the U-shaped neural network is implemented by programming a convolutional network by using TensorFlow, and the specific process comprises the following steps:
s2-1, defining a convolutional Layer, inheriting Python class of Layer class in Keras, comprising:
s2-1-1, initializing parameters with the initialization function __init__: number of convolution kernels, convolution kernel size, training state, sampling state (up-sampling or down-sampling), and whether the BN algorithm is used;
s2-1-2, using a building function build to represent specific convolution operation (synchronous convolution, downsampling convolution and deconvolution) and BN algorithm;
s2-1-3, describing an execution relation among a plurality of convolution operations in one convolution layer by using a call function;
s2-2, defining a Model class, inheriting a Model class in Keras, and including:
s2-2-1, completing the construction of the U-shaped neural network model with the __init__ function;
s2-2-2, the picture tensor flow process in the U-shaped neural network model is described by using the call function.
Further:
the U-shaped neural network training process in the step 3 is as follows:
s3-1, taking the picture of the training set as an input image, and taking an amplitude hologram calculated on a holographic surface by a Gaster-Sakstone algorithm as an output image;
s3-2, using root mean square error as a loss function, wherein the initial learning rate is L, and 16 images are used for each training;
and S3-3, when the root mean square error does not continuously decrease any more, reducing the learning rate, and when the learning rate becomes L/100, stopping the training.
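A rough TensorFlow sketch of steps S3-1 to S3-3 follows. The patent fixes only the RMSE loss, the batch size of 16, and the L/100 stopping rule; the optimizer, the value of L, the epoch budget, the plateau patience, and the stand-in model and data below are assumptions.

import tensorflow as tf

L = 1e-3  # initial learning rate; the patent names it L without giving a value

def rmse(y_true, y_pred):
    # S3-2: root mean square error loss
    return tf.sqrt(tf.reduce_mean(tf.square(y_true - y_pred)))

class StopAtMinLR(tf.keras.callbacks.Callback):
    # S3-3: stop training once the learning rate has decayed to L/100
    def on_epoch_end(self, epoch, logs=None):
        lr = float(tf.keras.backend.get_value(self.model.optimizer.learning_rate))
        if lr <= L / 100:
            self.model.stop_training = True

# stand-in model and data; the actual U-shaped network is sketched in part
# two below, and the image/hologram pairs come from step S1
model = tf.keras.Sequential(
    [tf.keras.layers.Conv2D(1, 3, padding='same', input_shape=(256, 256, 1))])
train_images = tf.zeros((16, 256, 256, 1))
train_holograms = tf.zeros((16, 256, 256, 1))

model.compile(optimizer=tf.keras.optimizers.Adam(L), loss=rmse)
model.fit(train_images, train_holograms, batch_size=16, epochs=200,
          callbacks=[tf.keras.callbacks.ReduceLROnPlateau(monitor='loss',
                                                          factor=0.1,
                                                          patience=5),
                     StopAtMinLR()])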
The invention also proposes a large field of view holographic projection system based on deep learning accelerated computing, comprising a processor, a memory and a computer program stored on said memory and executable on said processor, said computer program realizing the method according to any of claims 1 to 9 when executed by said processor.
The invention has the following beneficial effects:
the invention provides a large-view-field holographic projection method and system based on deep learning accelerated computation, which apply deep learning to the field of computing holography and realize the rapid recovery of object phase and amplitude.
Drawings
FIG. 1 is a schematic optical path diagram of a lensless projection system according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a simple geometry for projection in a lens-less projection system according to an embodiment of the present invention;
FIG. 3 is a detailed flow chart of the Gerchberg-Saxton algorithm in an embodiment of the present invention;
FIG. 4 is a schematic diagram of a U-shaped neural network in an embodiment of the present invention;
FIG. 5 is a scatter plot of correlation coefficients in an embodiment of the present invention;
FIGS. 6A-6F are schematic diagrams of target images obtained on a projection surface according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of an optical path of a lens-free projection experiment system according to an embodiment of the present invention;
FIG. 8 is a flowchart of a large-field-of-view holographic projection method based on deep learning acceleration calculation according to an embodiment of the present invention.
Detailed Description
The embodiments of the present invention will be described in detail below. It should be emphasized that the following description is merely exemplary in nature and is not intended to limit the scope of the invention or its application.
It should be noted that the terms "first," "second," "left," "right," "front," and "back" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features referred to. Thus, features defined as "first," "second," "left," "right," "front," or "back" may explicitly or implicitly include one or more of those features.
In the invention, Fresnel diffraction, the angular spectrum method and a neural network are the key technologies. To address the long computation time of the iterative algorithm in traditional holographic projection, the efficiency of a neural network is exploited with the help of parallel computation on the graphics card. The invention adds deep learning on top of the original forward-and-inverse Gerchberg-Saxton algorithm, achieves a light-homogenizing effect through suitable network training, and improves the imaging performance.
The invention uses the Gerchberg-Saxton algorithm to build a data set for training the U-shaped neural network, and trains the network with this data set, thereby realizing a large-field-of-view projection system whose projected pattern can be changed quickly. After training, the output of the U-shaped neural network is evaluated with the correlation coefficient R; the results show that the network's output is very similar to that of the Gerchberg-Saxton algorithm. Loading ynet and yGS onto a digital micromirror device (DMD), simulations show that the hologram obtained from the U-shaped neural network makes the projection surface present the target image, and that computing the hologram with the network takes little time.
The embodiment of the invention provides a large-view-field holographic projection method based on deep learning acceleration calculation, which mainly comprises the following three parts:
one, U-shaped neural network training data set preparation
The main content of this subsection is the preparation of the training and test sets for the U-shaped neural network.
The optical path of the lens-free projection system is shown in Fig. 1: the laser beam emitted by the red laser 1 is converged by the microscope objective 2 and then spatially filtered by a pinhole to obtain a divergent spherical wave; the spherical wave passes in turn through the amplitude modulation of the digital micromirror device 4 and the phase modulation of the diffractive optical element 5, and finally reaches the projection surface 6, where the desired projection pattern is imaged. In this optical system, the diffractive optical element 5 homogenizes the projection pattern, while the digital micromirror device 4, acting as the hologram plane, is the main imaging device.
In the first step, from the plane of the digital micromirror device 4 to the projection plane 6, the exact solutions of the forward and inverse diffraction fields can be calculated according to equations (2) and (3):
U(x, y) = SASM[d2, Δx2, Δx] × exp(iφDOE) × SASM[d1, Δx1, Δx2] × U1(x1, y1) (2)
U1(x1, y1) = SASM^-1[d2, Δx, Δx2] × At exp(iφU) × SASM^-1[d1, Δx2, Δx1] × exp(−iφDOE) (3)
(a further formula image, equation (4), is not reproduced in the source)
where equation (2) is the forward diffraction process and equation (3) the inverse diffraction process; x and y respectively denote the two perpendicular coordinates in the imaging plane, U(x, y) is the complex amplitude distribution on the projection plane 6, U1(x1, y1) is the complex amplitude distribution on the plane of the digital micromirror device 4, exp is the exponential function with the natural constant e as its base, i is the imaginary unit, k is the wave number, d (with d1 and d2) is the diffraction distance, A denotes the amplitude (At the amplitude on the target plane), φDOE is the phase on the diffractive optical element 5, Δx is the sampling interval on the projection plane 6, Δx1 is the sampling interval on the plane of the digital micromirror device 4, Δx2 is the sampling interval on the plane of the diffractive optical element 5, and SASM denotes the scaled angular spectrum method; the phase of the spherical wave changes after it passes through the plane of the diffractive optical element 5.
The process from the plane of the point light source 3 to the projection surface 6 is two times of Fresnel diffraction, and the digital micro-mirror device 4 modulates the amplitude of the scattered spherical wave, so that a target pattern is obtained on the projection surface 6. The diffractive optical element 5 modulates the phase of the spherical wave to make the phase of the light field at the projection surface 6 the same, thereby achieving the effect of light uniformization.
In the second step, returning from the projection plane 6 to the plane of the digital micromirror device 4, the process is inverse Fraunhofer diffraction, and the exact solution of the diffraction field can be calculated according to equation (3). Each step here requires a single fast Fourier transform for the calculation, so the sampling interval should be calculated according to equation (1). Since the size of the hologram plane and the sampling interval on the projection plane are positively correlated, the forward diffraction process follows equation (2) (the corresponding formula images are not reproduced in the source).
before training of the U-network, a data set needs to be prepared, and for a lens-free projection system, a simple geometry of the projection is considered. To obtain a data set trained by a convolutional neural network, an image on the projection plane 6 can be generated by the rules of the convolutional neural network, as shown in fig. 2, a square with a resolution of 256 × 256 is divided into four small square areas: AEOH, EBFO, OFCG and HOGD. Each small square may be further divided into two triangles along the diagonal. The triangle can be black or white, so that there are 6 types of small squares, and 1296 images can be obtained on the projection surface 6 by the principle of permutation and combination.
The amplitude on the plane of the digital micromirror device 4 and the phase on the projection plane 6 are unknown, so the hologram on the digital micromirror device 4 is calculated with the forward-and-inverse Gerchberg-Saxton algorithm. Since a convolutional neural network model is essentially an image-feature extractor, the amplitude hologram must be loaded on the hologram plane so that the image features of the patterns on the hologram plane and the projection plane 6 are comparable. The digital micromirror device 4 is selected as the spatial light modulator, and the specific flow of the Gerchberg-Saxton algorithm is shown in Fig. 3. First, the forward diffraction transform is applied to the initial phase and the preset incident light field distribution to obtain the target-plane light field distribution. Then a constraint is introduced on the target plane: the amplitude distribution computed by forward diffraction is replaced with the required target-plane amplitude distribution while the phase distribution is kept unchanged; the inverse diffraction transform is then applied to obtain the light field distribution on the diffraction plane. A constraint is introduced on the diffraction plane: the amplitude distribution obtained by inverse diffraction is replaced with the given incident-field amplitude distribution while the phase is kept unchanged; this yields the value after one iteration, which serves as the initial distribution for the next iteration. The forward diffraction transform is then applied again, and the loop repeats until a preset exit condition is met: the iteration result reaches the set accuracy or the maximum number of iterations; finally, the amplitude hologram on the hologram plane is obtained. Following these steps of the Gerchberg-Saxton algorithm, a data set of 1296 groups of pictures is obtained, of which 1024 groups are used as the training set and 272 groups as the test set.
For the same computational problem, the angular spectrum transfer function is generally more efficient than other transfer functions. In terms of the number of samples and the computation time, it can be verified that the results of the angular spectrum transfer function are essentially consistent with those of Fresnel diffraction, and in theory an exact solution of the diffraction problem can be obtained. Therefore, the angular spectrum diffraction formula can be used in practical applications to obtain more reliable results.
Based on angular spectrum diffraction theory, the exact solution of the diffraction field is expressed in Fourier-transform form as:
U(r2) = F^-1[r2, f1] H(f1) F[f1, r1] {U(r1)} (5)
where r1 is the source-plane coordinate, r2 is the observation-plane coordinate, and H(f1) is the transfer function of free-space propagation:
H(f1) = exp[ikΔz(1 − (λfx1)^2 − (λfy1)^2)^(1/2)] (6)
where i is the imaginary unit, fx1 is the sampling frequency in the x direction on the plane of the digital micromirror device 4, fy1 is the sampling frequency in the y direction on the plane of the digital micromirror device 4, and Δz is the distance between the plane of the digital micromirror device 4 and the plane of the diffractive optical element 5. Equation (5) is the angular spectrum form of the Fresnel diffraction integral and is used for the numerical solution of the Fresnel integral.
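As a minimal sketch of equations (5) and (6), the following function propagates a field by the plain (unscaled) angular spectrum method; the square N × N grid, uniform sample pitch, and suppression of evanescent components are assumptions for illustration.

import numpy as np

def angular_spectrum_propagate(u1, wavelength, pitch, dz):
    # FFT to the angular spectrum, multiply by the free-space transfer
    # function H of equation (6), inverse FFT back to the spatial domain
    n = u1.shape[0]
    k = 2 * np.pi / wavelength
    fx = np.fft.fftfreq(n, d=pitch)  # spatial frequencies fx1, fy1
    fxx, fyy = np.meshgrid(fx, fx, indexing='ij')
    arg = 1 - (wavelength * fxx) ** 2 - (wavelength * fyy) ** 2
    h = np.exp(1j * k * dz * np.sqrt(np.maximum(arg, 0)))
    h[arg < 0] = 0  # drop evanescent components (an implementation choice)
    return np.fft.ifft2(np.fft.fft2(u1) * h)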
A scaling parameter m is introduced from the source plane to the observation plane, and scaled coordinates and a scaled propagation distance are defined; substituting these back into equation (7) brings the diffraction integral into the form of a convolution. (The formula images for this derivation, equation (7) and the following equations, are not reproduced in the source.)
the diffraction of light is the superposition of the diffraction of wave field angle spectrums, the range of the diffraction field is linearly expanded along with the increase of the diffraction distance, and when the diffraction distance is larger, the convolution algorithm cannot completely obtain the diffraction field. Therefore, the angular spectrum scaling algorithm is mainly used when the high-frequency angular spectrum of the object wave light field is small and the diffraction distance is small. The simulation parameters for calculating the amplitude hologram on the holographic surface are shown in table 1.
TABLE 1
(The table image listing the simulation parameters is not reproduced in the source.)
Two, designing an end-to-end convolutional neural network structure based on a residual U-shaped neural network
The main content of this subsection is the construction of U-shaped neural networks using TensorFlow.
The principle of the U-shaped neural network is shown in Fig. 4, where the rectangular bars represent the convolved images and the number of channels is marked next to each bar. The thin right-pointing arrows denote convolutions with stride 1 and kernel size 3 × 3; the downward arrows denote convolutions with stride 2 and kernel size 3 × 3 (the down-sampling part); the upward arrows denote deconvolutions (the up-sampling part); and the bold right-pointing arrows denote residual connections. Each down-sampling step halves the length and width of the image, and four down-sampling steps extract the geometric features of the input image; down-sampling also reduces the risk of overfitting and the amount of computation. After four up-sampling steps, the reconstructed image restored by decoding to the original size is output. To avoid vanishing gradients during network training, residual connections are added to pass gradients across layers. Batch normalization is performed after each convolution to avoid overfitting.
When programming the convolutional network with TensorFlow, we first define the convolutional layer:
class down_OR_up(tf.keras.layers.Layer):
    def __init__(self, filter_num, kernel_len, istraining, downORup, USEBN):
        ......
    def build(self, input_shape):
        ......
    def call(self, inputs):
        ......
This Python class inherits from the Layer class in Keras. The initialization function __init__ initializes the parameters: number of convolution kernels, convolution kernel size, training state, sampling state (up-sampling or down-sampling), and whether the BN algorithm is used. The build function then defines the specific convolution operations (synchronous convolution, down-sampling convolution, deconvolution) and the BN algorithm. Finally, the call function describes the execution order of the convolution operations within one layer.
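A fleshed-out version of this layer might look as follows; the bodies are assumptions reconstructed from the description above, since the patent elides them, and the ReLU activation is an illustrative choice.

import tensorflow as tf

class down_OR_up(tf.keras.layers.Layer):
    def __init__(self, filter_num, kernel_len, istraining=True,
                 downORup='same', USEBN=True, **kwargs):
        super().__init__(**kwargs)
        self.filter_num = filter_num
        self.kernel_len = kernel_len
        self.istraining = istraining
        self.downORup = downORup  # 'same', 'down' or 'up'
        self.USEBN = USEBN

    def build(self, input_shape):
        if self.downORup == 'up':
            # deconvolution: stride-2 transposed convolution (up-sampling)
            self.conv = tf.keras.layers.Conv2DTranspose(
                self.filter_num, self.kernel_len, strides=2, padding='same')
        elif self.downORup == 'down':
            # down-sampling convolution: stride 2 halves height and width
            self.conv = tf.keras.layers.Conv2D(
                self.filter_num, self.kernel_len, strides=2, padding='same')
        else:
            # synchronous convolution: stride 1 preserves the image size
            self.conv = tf.keras.layers.Conv2D(
                self.filter_num, self.kernel_len, strides=1, padding='same')
        if self.USEBN:
            self.bn = tf.keras.layers.BatchNormalization()
        super().build(input_shape)

    def call(self, inputs):
        x = self.conv(inputs)
        if self.USEBN:
            x = self.bn(x, training=self.istraining)
        return tf.nn.relu(x)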
Next, the model class is defined; it inherits from the Model class in Keras:
class Unet(tf.keras.Model):
    def __init__(self):
        ......
    def call(self, inputs):
        ......
The __init__ function completes the construction of the U-shaped neural network model, and the call function describes how the picture tensor flows through the model.
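A minimal sketch of the model, using the down_OR_up layer fleshed out above: four down-sampling stages, four up-sampling stages, and skip connections that pass encoder features across to the decoder. The channel counts and the sigmoid output are assumptions; Fig. 4 defines the actual widths.

class Unet(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.downs = [down_OR_up(f, 3, downORup='down')
                      for f in (32, 64, 128, 256)]
        self.ups = [down_OR_up(f, 3, downORup='up')
                    for f in (128, 64, 32, 16)]
        self.out_conv = tf.keras.layers.Conv2D(1, 3, padding='same',
                                               activation='sigmoid')

    def call(self, inputs):
        x, skips = inputs, []
        for down in self.downs:        # encoder: halve H and W four times
            skips.append(x)
            x = down(x)
        for up, skip in zip(self.ups, reversed(skips)):
            x = up(x)                  # decoder: restore the original size
            x = tf.concat([x, skip], axis=-1)  # cross-layer connection
        return self.out_conv(x)

For a 256 × 256 × 1 input this returns a 256 × 256 × 1 hologram; the concatenations realize the cross-layer gradient paths described above.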
Three, training the U-shaped neural network and saving the U-shaped neural network model
The main content of this subsection is to train the U-shaped neural network and evaluate the fitting effect after the training of the U-shaped neural network.
The specific process of training the U-shaped neural network is as follows:
The images on the projection plane 6 are used as input images, and the amplitude holograms calculated on the hologram plane by the Gerchberg-Saxton algorithm are used as output images. The root mean square error (RMSE) is used as the loss function; the smaller the loss, the more robust the network model. The initial learning rate is L, and 16 images are used per training step. When the RMSE no longer decreases, the learning rate is reduced; when the learning rate reaches L/100, training stops. At the end of training, the RMSE is sufficiently small compared with 256 (the value range of a grayscale image).
Different training-set pictures are fed into the U-shaped neural network for repeated training, and the fitting quality is evaluated on the test-set pictures by computing the correlation coefficient R between the network output ynet and the ideal test-set output ylabel; R quantitatively evaluates the network's results.
R = Σj (ynet,j − ȳnet)(ylabel,j − ȳlabel) / (N σ(ynet) σ(ylabel))
where ynet is the amplitude hologram calculated by the U-shaped neural network, yGS (the label ylabel) is the amplitude hologram obtained by the Gerchberg-Saxton algorithm, σ(ynet) and σ(ylabel) are their standard deviations, ȳnet and ȳlabel are their mean values, and N is the total number of pixels in one picture. The closer |R| is to 1, the more similar ynet and yGS are. Fig. 5 is a scatter plot of the correlation coefficients, with the picture index on the abscissa and the absolute value of R on the ordinate; most |R| values are greater than 0.9, showing that the results of the U-shaped neural network are highly similar to those calculated by the Gerchberg-Saxton algorithm and that using the network to accelerate hologram computation is feasible.
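A small NumPy helper matching the reconstructed formula above (the Pearson form is an assumption consistent with the stated definitions):

import numpy as np

def correlation_coefficient(y_net, y_label):
    # Pearson correlation R between a network hologram and its G-S label
    yn, yl = y_net.ravel(), y_label.ravel()
    n = yn.size
    return np.sum((yn - yn.mean()) * (yl - yl.mean())) / (n * yn.std() * yl.std())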
After yGS and ynet are loaded onto the digital micromirror device 4, the target image is obtained on the projection surface 6. The experiment was carried out in two groups, loading yGS and ynet onto the digital micromirror device 4 in turn. Fig. 6A shows target image one, Fig. 6B the projection obtained experimentally from yGS, and Fig. 6C the projection obtained experimentally from ynet; Fig. 6D shows target image two, Fig. 6E the projection obtained from yGS, and Fig. 6F the projection obtained from ynet. Figs. 6B and 6C, and Figs. 6E and 6F, are very close to their respective target images.
The correlation coefficients between the IGS experimental results and the IUnet experimental results for the two images are 0.9244 and 0.9142, respectively. Therefore, a U-shaped neural network trained on a suitable data set can well replace the combination of the Gerchberg-Saxton algorithm and the scaled angular spectrum method to calculate the amplitude distribution on the plane of the digital micromirror device 4, with a computation time only about one tenth that of the iterative algorithm.
An embodiment of the invention also provides a lens-free projection experimental system, whose optical path is shown in Fig. 7. It comprises a red laser 1, a microscope objective 2, a pinhole 7, a digital micromirror device 4, a TIR prism 8, a diffractive optical element 5 and a projection surface 6. The laser beam emitted by the red laser 1 is converged by the microscope objective 2 and spatially filtered by the pinhole to obtain a divergent spherical wave; the spherical wave passes in turn through the amplitude modulation of the digital micromirror device 4 and the phase modulation of the fabricated diffractive optical element 5, and generates an image at the projection position in real time. To obtain as large a viewing angle as possible without mechanical interference between components, the minimum distance between the digital micromirror device 4 and the point light source 3 at the pinhole 7 is 80 mm; to minimize the interference of secondary diffraction at the plane of the diffractive optical element 5, the distance between the plane of the digital micromirror device 4 and the plane of the diffractive optical element 5 is chosen as 400 mm. Preferably, a laser with a wavelength of 660 nm is used, and the digital micromirror device 4 is a DLP6500.
Specifically, the lens-free projection experimental system also uses the following software environment: Windows 10 Professional 64-bit; PyCharm (integrated development environment, IDE) 2019.2.3 (Community Edition); Python 3.6.9; tensorflow-gpu 1.14; Keras 2.2.5; Matlab 2017a.
The background of the present invention may contain background information related to the problem or environment of the present invention and does not necessarily describe the prior art. Accordingly, the inclusion in the background section is not an admission of prior art by the applicant.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments, and the invention is not to be considered limited to these details. For those skilled in the art to which the invention pertains, several equivalent substitutions or obvious modifications can be made without departing from the spirit of the invention, and these shall all be considered as falling within the scope of the invention.

Claims (10)

1. The large-view-field holographic projection method based on deep learning acceleration calculation is characterized by comprising the following steps of:
s1, calculating the holograms generated in a lens-free projection system by the Gerchberg-Saxton algorithm, and producing a data set for training a U-shaped neural network;
s2, constructing a convolution neural network structure based on the U-shaped neural network;
and S3, inputting the data set into the U-shaped neural network for training and storing the trained U-shaped neural network model.
2. The deep learning acceleration calculation-based large-field holographic projection method according to claim 1, characterized in that in step S1 the lens-free projection system comprises a laser, a microscope objective (2), a pinhole (7), a digital micromirror device (4), a TIR prism (8), a diffractive optical element (5) and a projection surface (6);
the laser, the microscope objective (2) and the pinhole (7) are sequentially and mechanically connected, the light path is coaxial, the TIR prism (8) is used for receiving divergent spherical waves emitted by the pinhole (7) and emitting the spherical waves to the digital micromirror device (4), the minimum distance between the digital micromirror device (4) and the point light source (3) at the pinhole (7) is 80mm, the distance between the plane of the digital micromirror device (4) and the plane of the diffractive optical element (5) is 400mm, and the projection surface (6) is used for receiving the spherical waves emitted by the diffractive optical element (5) and displaying images;
laser emitted by the laser device passes through the microscope objective (2) and the pinhole (7) and then becomes divergent spherical waves;
the spherical wave is subjected to amplitude modulation of the digital micro-mirror device (4) and phase modulation of the diffractive optical element (5) in sequence, and an image is generated at a projection position in real time.
3. The deep learning acceleration calculation-based large-field holographic projection method according to claim 2, wherein the lens-free projection system comprises a forward diffraction process in which the spherical wave passes through the plane of the digital micromirror device (4) and the plane of the diffractive optical element (5) in sequence and then reaches the projection surface (6);
the forward diffraction process is Fresnel diffraction, and the exact solution of the diffraction field can be calculated according to the following formulas:
U(x, y) = (exp(ikd)/(iλd)) ∫∫ U1(x1, y1) exp{ik[(x − x1)^2 + (y − y1)^2]/(2d)} dx1 dy1
U(x, y) = SASM[d2, Δx2, Δx] × exp(iφDOE) × SASM[d1, Δx1, Δx2] × U1(x1, y1)
where x and y respectively denote the two perpendicular coordinates in the imaging plane, U(x, y) is the complex amplitude distribution of the projection plane (6), U1(x1, y1) is the complex amplitude distribution of the plane of the digital micromirror device (4), exp is an exponential function with the natural constant e as the base, i is the imaginary unit, d (with d1 and d2) is the diffraction distance, A denotes the amplitude, φDOE is the phase on the diffractive optical element (5), Δx is the sampling interval on the projection plane, Δx1 is the sampling interval on the plane of the digital micromirror device (4), Δx2 is the sampling interval on the plane of the diffractive optical element (5), SASM denotes the scaled angular spectrum method, and k is the wave number.
4. The deep learning acceleration calculation-based large-field holographic projection method according to claim 3, characterized in that said lens-free projection system further comprises a reverse diffraction process of said spherical wave sequentially passing through said projection surface (6), said plane of said diffractive optical element (5) and said plane of said digital micromirror device (4);
the inverse diffraction process is Fraunhofer diffraction, and the exact solution of the diffraction field is calculated according to the following formula:
U1(x1, y1) = SASM^-1[d2, Δx, Δx2] × At exp(iφU) × SASM^-1[d1, Δx2, Δx1] × exp(−iφDOE)
the sampling interval Δx of the projection plane (6) should be calculated according to the following formula:
Δx = λd/(NΔx0)
where λ is the wavelength, d is the diffraction distance, N is the number of samples, and Δx0 is the sampling interval of the hologram.
5. The deep learning acceleration computing-based large-field-of-view holographic projection method according to claim 4, wherein the specific process of the Gerchberg-Saxton algorithm comprises:
s1-1, carrying out the forward diffraction transformation on the initial phase and the preset incident light field distribution to obtain the target plane light field distribution;
s1-2, introducing constraint on the target plane, namely replacing the amplitude distribution of the target plane light field after the forward diffraction calculation with the amplitude distribution of the required target plane light field while keeping the phase distribution unchanged;
s1-3, performing the transformation of the inverse diffraction to obtain the light field distribution of the diffraction surface;
s1-4, introducing constraint on a diffraction surface, namely replacing the light field amplitude distribution obtained by the inverse diffraction with the given incident light field amplitude distribution, and keeping the phase unchanged, thereby obtaining a value after one iteration and taking the value as the initial distribution of the next iteration;
s1-5, applying the forward diffraction transform again and repeating the loop until a preset exit condition is met: the iteration result reaches the set accuracy or the maximum number of iterations; finally, the amplitude hologram on the hologram plane is obtained.
6. The deep learning acceleration computing-based large-field holographic projection method according to claim 1, wherein the data set in step S1 comprises a training set and a test set.
7. The deep learning acceleration calculation-based large-field holographic projection method according to claim 1, wherein in step S2, the principle of the U-shaped neural network comprises: after one-time downsampling, the length and the width of the image are reduced by half, and the geometric feature extraction of the input image can be realized through four times of downsampling; after the up-sampling process is carried out for four times, the reconstructed original size image obtained by reduction decoding is output; in order to avoid the disappearance of the gradient in the network training process, residual error connection is added to realize cross-layer transmission of the gradient; batch normalization is performed after each convolution to avoid overfitting.
8. The deep learning acceleration calculation-based large-field holographic projection method according to claim 7, wherein the U-shaped neural network is a convolutional network programmed with TensorFlow, and the specific process includes:
s2-1, defining the convolutional layer as a Python class inheriting from the Layer class in Keras, including:
s2-1-1, initializing parameters with the initialization function __init__: number of convolution kernels, convolution kernel size, training state, sampling state (up-sampling or down-sampling), and whether the BN algorithm is used;
s2-1-2, using a building function build to represent specific convolution operation (synchronous convolution, downsampling convolution and deconvolution) and BN algorithm;
s2-1-3, describing an execution relation among a plurality of convolution operations in one convolution layer by using a call function;
s2-2, defining a Model class, inheriting a Model class in Keras, and including:
s2-2-1, completing the construction of the U-shaped neural network model with the __init__ function;
s2-2-2, the picture tensor flow process in the U-shaped neural network model is described by using the call function.
9. The deep learning acceleration computing-based large-field holographic projection method according to claim 8, wherein the U-shaped neural network training in step S3 is as follows:
s3-1, taking the pictures of the training set as input images, and taking the amplitude holograms calculated on the hologram plane by the Gerchberg-Saxton algorithm as output images;
s3-2, using root mean square error as a loss function, wherein the initial learning rate is L, and 16 images are used for each training;
and S3-3, when the root mean square error does not continuously decrease any more, reducing the learning rate, and when the learning rate becomes L/100, stopping the training.
10. A large field of view holographic projection system based on deep learning accelerated computing, comprising a processor, a memory and a computer program stored on the memory and executable on the processor, which computer program, when executed by the processor, implements the method according to any of claims 1-9.
CN202111171388.8A 2021-10-08 2021-10-08 Large-view-field holographic projection method and system based on deep learning accelerated calculation Pending CN114002931A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111171388.8A CN114002931A (en) 2021-10-08 2021-10-08 Large-view-field holographic projection method and system based on deep learning accelerated calculation

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111171388.8A CN114002931A (en) 2021-10-08 2021-10-08 Large-view-field holographic projection method and system based on deep learning accelerated calculation

Publications (1)

Publication Number Publication Date
CN114002931A true CN114002931A (en) 2022-02-01

Family

ID=79922352

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111171388.8A Pending CN114002931A (en) 2021-10-08 2021-10-08 Large-view-field holographic projection method and system based on deep learning accelerated calculation

Country Status (1)

Country Link
CN (1) CN114002931A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114519403A (en) * 2022-04-19 2022-05-20 清华大学 Optical diagram neural classification network and method based on-chip diffraction neural network
CN114964524A (en) * 2022-06-06 2022-08-30 中国科学院光电技术研究所 Target imaging wavefront phase restoration method based on defocused grating and neural network expansion
CN114967398A (en) * 2022-05-13 2022-08-30 安徽大学 Large-size two-dimensional calculation hologram real-time generation method based on deep learning
CN117850726A (en) * 2024-03-08 2024-04-09 深圳市光科全息技术有限公司 Method, system and equipment for carrying out convolution calculation by utilizing DOE photonic crystal film

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010004563A1 (en) * 2008-07-10 2010-01-14 Real View Imaging Ltd. Broad viewing angle displays and user interfaces
CN102591123A (en) * 2012-03-13 2012-07-18 苏州大学 Real-time three-dimensional display device and display method
CN102749793A (en) * 2012-07-24 2012-10-24 东南大学 Holographic projection method
CN108008615A (en) * 2017-11-14 2018-05-08 清华大学 Wide visual field compresses holographic imaging systems and method
CN109459923A (en) * 2019-01-02 2019-03-12 西北工业大学 A kind of holographic reconstruction algorithm based on deep learning
CN110308547A (en) * 2019-08-12 2019-10-08 青岛联合创智科技有限公司 A kind of dense sample based on deep learning is without lens microscopic imaging device and method
CN110570431A (en) * 2019-09-18 2019-12-13 东北大学 Medical image segmentation method based on improved convolutional neural network
CN111260562A (en) * 2020-03-05 2020-06-09 张立珠 Digital holographic image reconstruction method based on deep learning
GB202105628D0 (en) * 2020-04-20 2021-06-02 Univ North Carolina Chapel Hill High-speed computer generated holography using convolutional neural networks
CN113223106A (en) * 2021-05-07 2021-08-06 西北工业大学 Few-angle digital holographic tomographic reconstruction algorithm based on deep learning

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010004563A1 (en) * 2008-07-10 2010-01-14 Real View Imaging Ltd. Broad viewing angle displays and user interfaces
CN102591123A (en) * 2012-03-13 2012-07-18 苏州大学 Real-time three-dimensional display device and display method
CN102749793A (en) * 2012-07-24 2012-10-24 东南大学 Holographic projection method
CN108008615A (en) * 2017-11-14 2018-05-08 清华大学 Wide visual field compresses holographic imaging systems and method
CN109459923A (en) * 2019-01-02 2019-03-12 西北工业大学 A kind of holographic reconstruction algorithm based on deep learning
CN110308547A (en) * 2019-08-12 2019-10-08 青岛联合创智科技有限公司 A kind of dense sample based on deep learning is without lens microscopic imaging device and method
CN110570431A (en) * 2019-09-18 2019-12-13 东北大学 Medical image segmentation method based on improved convolutional neural network
CN111260562A (en) * 2020-03-05 2020-06-09 张立珠 Digital holographic image reconstruction method based on deep learning
GB202105628D0 (en) * 2020-04-20 2021-06-02 Univ North Carolina Chapel Hill High-speed computer generated holography using convolutional neural networks
CN113223106A (en) * 2021-05-07 2021-08-06 西北工业大学 Few-angle digital holographic tomographic reconstruction algorithm based on deep learning

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
李菊, 李军: "基于U_net网络的单幅全息图重建方法研究" [Research on single-hologram reconstruction based on the U_net network], 激光杂志 (Laser Journal), vol. 41, no. 1, pp. 96-99.
涂铮铮, 汤进, 史东: "基于DMD和分数傅里叶的动态全息体视图显示" [Dynamic holographic stereogram display based on DMD and the fractional Fourier transform], 计算机技术与发展 (Computer Technology and Development), vol. 19, no. 8, pp. 247-250.

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114519403A (en) * 2022-04-19 2022-05-20 清华大学 Optical diagram neural classification network and method based on-chip diffraction neural network
CN114519403B (en) * 2022-04-19 2022-09-02 清华大学 Optical diagram neural classification network and method based on-chip diffraction neural network
CN114967398A (en) * 2022-05-13 2022-08-30 安徽大学 Large-size two-dimensional calculation hologram real-time generation method based on deep learning
CN114967398B (en) * 2022-05-13 2024-05-31 安徽大学 Large-size two-dimensional calculation hologram real-time generation method based on deep learning
CN114964524A (en) * 2022-06-06 2022-08-30 中国科学院光电技术研究所 Target imaging wavefront phase restoration method based on defocused grating and neural network expansion
CN117850726A (en) * 2024-03-08 2024-04-09 深圳市光科全息技术有限公司 Method, system and equipment for carrying out convolution calculation by utilizing DOE photonic crystal film
CN117850726B (en) * 2024-03-08 2024-06-21 深圳市光科全息技术有限公司 Method, system and equipment for carrying out convolution calculation by utilizing DOE photonic crystal film

Similar Documents

Publication Publication Date Title
CN114002931A (en) Large-view-field holographic projection method and system based on deep learning accelerated calculation
Shajkofci et al. Spatially-variant CNN-based point spread function estimation for blind deconvolution and depth estimation in optical microscopy
Wu et al. Online regularization by denoising with applications to phase retrieval
Chianese et al. Differentiable strong lensing: uniting gravity and neural nets through differentiable probabilistic programming
CN113281979B (en) Lensless laminated diffraction image reconstruction method, system, device and storage medium
Cheremkhin et al. Iterative synthesis of binary inline Fresnel holograms for high-quality reconstruction in divergent beams with DMD
Finckh et al. Geometry construction from caustic images
CN102636882A (en) Method for analyzing space images of high numerical aperture imaging system
Brewer et al. Strong gravitational lens inversion: a bayesian approach
Hazineh et al. D-flat: A differentiable flat-optics framework for end-to-end metasurface visual sensor design
Yoshizawa et al. Fast gauss bilateral filtering
Peyvan et al. RiemannONets: Interpretable neural operators for Riemann problems
Su et al. Large field-of-view lensless holographic dynamic projection system with uniform illumination and U-net acceleration
Kaza Differentiable volume rendering using signed distance functions
Wang et al. Conditional diffusion model-based generation of speckle patterns for digital image correlation
CN111340902A (en) Optical phase modulation method and spatial light modulation method for arbitrary position and shape illumination
Wu et al. Depth acquisition from dual-frequency fringes based on end-to-end learning
WO2022083735A1 (en) Method for calculating ronchi shear interference image in photolithography projection objective
CN112613142B (en) Method for obtaining safety margin of sheet forming process parameters based on images
Cviljušac et al. Computer generated holograms of 3D points cloud
Cai et al. Development of learning-based noise reduction and image reconstruction algorithm in two dimensional Rayleigh thermometry
Subr Q‐NET: A Network for Low‐dimensional Integrals of Neural Proxies
Pérez et al. Enhancing Neural Rendering Methods with Image Augmentations
Shevkunov et al. Deep convolutional neural network-based lensless quantitative phase retrieval
Gladrow Digital phase-only holography using deep conditional generative models

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination