
WO2020244952A1 - System and method for projecting images through scattering media - Google Patents


Info

Publication number
WO2020244952A1
WO2020244952A1 (PCT/EP2020/064445)
Authority
WO
WIPO (PCT)
Prior art keywords
networks
screen
images
slm
light
Prior art date
Application number
PCT/EP2020/064445
Other languages
French (fr)
Inventor
Christophe Moser
Babak RAHMANI
Demetri Psaltis
Original Assignee
Ecole Polytechnique Federale De Lausanne (Epfl)
Priority date
Filing date
Publication date
Application filed by Ecole Polytechnique Federale De Lausanne (Epfl)
Publication of WO2020244952A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3191 Testing thereof
    • H04N9/3194 Testing thereof including sensor feedback
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B6/00 Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
    • G02B6/02 Optical fibres with cladding with or without a coating
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/047 Probabilistic or stochastic networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 General purpose image data processing
    • G06T1/20 Processor architectures; Processor configuration, e.g. pipelining
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/12 Picture reproducers
    • H04N9/31 Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179 Video signal processing therefor
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B6/00 Light guides; Structural details of arrangements comprising light guides and other optical elements, e.g. couplings
    • G02B6/02 Optical fibres with cladding with or without a coating
    • G02B6/028 Optical fibres with cladding with or without a coating with core or cladding having graded refractive index
    • G02B6/0288 Multimode fibre, e.g. graded index core for compensating modal dispersion


Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Probability & Statistics with Applications (AREA)
  • Optics & Photonics (AREA)
  • Holography (AREA)

Abstract

The invention discloses a system (1) and a method to guide light through a scattering medium (4) and project images on a screen (5) placed after the medium (4). Many shortcomings of previous approaches for guiding light through a scattering medium (4) have been overcome.

Description

SYSTEM AND METHOD FOR PROJECTING IMAGES THROUGH
SCATTERING MEDIA
BACKGROUND OF THE INVENTION
1. FIELD OF THE INVENTION
The present invention relates to a system for guiding light through scattering media with the aim of projecting perceivable images of objects and natural scenes on a screen placed after the scattering medium. The projection is achieved with intensity-only detection. No interferometric detection is needed, which is a significant improvement over the prior art. Despite this partial measurement, the fidelity of the projected images is on a par with that of gold-standard full-measurement (intensity and phase) systems.
2. BACKGROUND ART
Transmission of light through a scattering medium is a challenging task. The propagating light entering such a medium can undergo strong distortion depending on the medium, for example but not limited to a multimode fiber (MMF), because of energy splitting among the thousands of modes excited in the medium.
Several approaches have been proposed to compensate the modal dispersion in MMFs. Analog phase conjugation via holographic materials, initially suggested by Spitz and Werts in 1966 (Spitz, E. & Werts, A. Transmission des images a travers une fibre optique. Comptes Rendus Hebd. Des Seances De L'Acad. Des Sci. Ser. B 264, 1015, 1967.) and later via a third-order nonlinear crystal by Yariv (Gover, A., Lee, C. P. & Yariv, A. Direct transmission of pictorial information in multimode optical fibers. JOSA 66, 306-311, 1976.), was the first method proposed for recovering the distorted fields. Many decades later, digital holography and phase conjugation were carried out digitally on the computer (Yamaguchi, I. & Zhang, T. Phase-shifting digital holography. Optics Letters 22, 1268-1270, 1997. / Papadopoulos, I. N., Farahi, S., Moser, C. & Psaltis, D. Focusing and scanning light through a multimode optical fiber using digital phase conjugation. Optics Express 20, 10583-10590, 2012.). Nevertheless, this approach still requires an extremely accurate calibration and is significantly prone to perturbation.
Alternatively, digital iterative algorithms implemented with spatial light modulators (SLMs) are used to shape the beam entering the MMF to form a focused spot at the output of the fiber (Di Leonardo, R. & Bianchi, S. Hologram transmission through multi-mode optical fibers. Optics Express 19, 247-254, 2011. / Cizmar, T. & Dholakia, K. Shaping the light transmission through a multimode optical fibre: complex transformation analysis and applications in biophotonics. Optics Express 19, 18871-18884, 2011.). By scanning the focused spot in the field of view of the fiber, an image can be produced. Other methods use interferometry to measure the complex field at the output of the fiber and then construct a transmission matrix between the light modulator component (an SLM, for example) and the object on both sides of the MMF (Choi, Y. et al. Scanner-free and wide-field endoscopic imaging by using a single multimode optical fiber. Physical Review Letters 109, 203901, 2012. / Caravaca-Aguirre, A. M., Niv, E., Conkey, D. B. & Piestun, R. Real-time resilient focusing through a bending multimode fiber. Optics Express 21, 12881-12887, 2013. / Gu, R. Y., Mahalati, R. N. & Kahn, J. M. Design of flexible multi-mode fiber endoscope. Optics Express 23, 26905-26918, 2015.). This method entails measuring the phase with holography. Additionally, after any small perturbation of the system the matrix must be re-measured, which can be a challenging task in terms of time and computational resources. A machine learning approach which utilizes conventional, simplistic training procedures has also been used to focus spots through MMFs. The ability of this approach to project images is limited to focusing spots or simple shapes, without the ability to generalize to other classes never seen by the network (Turpin, A., Vishniakou, I. & Seelig, J. Light scattering control in transmission and reflection with neural networks. Optics Express 26, 30911-30929, 2018.).
It should be noted that, of the two major approaches for light transmission inside the MMF, i.e. the transmission-matrix method and the digital iterative method, the former can project any arbitrary pattern at the output of the fiber but requires a nontrivial experimental setup, while the latter is limited to projecting simple, basic shapes such as focused spots.
Hence there is a need for a system that, while able to project a wide range of images through the scattering medium, is simple enough to implement.
SUMMARY OF THE INVENTION
The present invention circumvents all of the previous shortcomings of guiding light through scattering media.
In detail, the present invention relates to a method for projecting images through, or by reflection off, a scattering medium, the method comprising the steps of:
Illuminating a light modulator of a multimode optical fiber projector, preferably selected from the group consisting of a spatial light modulator (SLM) and a digital micromirror device (DMD), with one or more light source(s);
Providing a modulation pattern, selected from the group consisting of amplitude-only modulation, phase-only modulation, and amplitude-and-phase modulation, to the light modulator to modulate spatially the light from said light source;
Sending said modulated light through the scattering medium, or by reflection off the scattering medium, onto a screen;
Detecting the amplitude of the modulated light with a camera system; and
Computing a spatial modulation pattern by using a system of two or more neural networks in series.
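The serial arrangement in the last step can be sketched with a minimal numpy stand-in: a generator maps a desired screen image to an SLM pattern, and a forward emulator predicts the resulting screen intensity as a sanity check. The layer shapes, the single linear layers and the sigmoid squashing are illustrative assumptions, not the patent's actual network architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in networks: one linear layer each (hypothetical sizes).
N_SLM, N_SCREEN = 64, 64                         # flattened pattern/image sizes (assumed)
W_g = rng.normal(size=(N_SLM, N_SCREEN)) * 0.1   # generator: desired image -> SLM pattern
W_d = rng.normal(size=(N_SCREEN, N_SLM)) * 0.1   # forward emulator: SLM pattern -> screen

def generator(target_image):
    """Map a desired screen image to a spatial modulation pattern in [0, 1]."""
    pattern = W_g @ target_image
    return 1.0 / (1.0 + np.exp(-pattern))        # squash to a displayable amplitude range

def forward_emulator(slm_pattern):
    """Emulate light propagation SLM -> scattering medium -> screen (intensity only)."""
    return (W_d @ slm_pattern) ** 2              # non-negative, like a camera intensity

target = rng.random(N_SCREEN)
slm = generator(target)            # step 1: compute the modulation pattern
predicted = forward_emulator(slm)  # step 2: sanity-check via the forward emulator
```

The two networks run in series: the output of the generator is the input to the forward emulator, mirroring the optical chain SLM-medium-camera.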
The present invention furthermore relates to a multimode optical fiber projector for performing the method of the present invention, comprising:
One or more light source(s), preferably a laser source or an LED;
A light modulator, preferably selected from the group consisting of a spatial light modulator (SLM) and a digital micromirror device (DMD);
A device comprising a scattering medium, such as a multimode fiber (MMF);
A camera system comprising a screen; and
A system of two or more neural networks in series for computing a spatial modulation pattern.
DETAILED DESCRIPTION
A first embodiment of the present invention comprises training one of the neural networks on the path of light propagation in the forward direction, i.e. from the light modulator, such as but not limited to a spatial light modulator (SLM) or digital micromirror device (DMD), through the scattering medium to the screen, i.e. an intensity detector. The detection with a CMOS camera or a CCD camera is performed non-interferometrically, meaning that only the amplitude is detected.
This can be done by collecting examples displayed on the light modulator, such as the SLM or DMD, which can modulate an incident light beam by amplitude-only modulation, phase-only modulation or a combination of both, and collecting the corresponding output fields on the screen. The digital neural network (in the computer) can then be trained with this experimentally collected dataset.
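Fitting a forward model to such collected pairs can be sketched as follows. Since the patent does not disclose the network internals, a ridge-regression surrogate with quadratic features stands in for network D (intensity is quadratic in the field, so squared features are a natural choice); the dataset sizes and the hidden medium matrix T are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pairs, n_slm, n_screen = 200, 32, 32

# Hypothetical "experimentally collected" dataset: random SLM patterns and the
# intensities a hidden medium (here a fixed random matrix T) produced on the screen.
T = rng.normal(size=(n_screen, n_slm))
X = rng.random((n_pairs, n_slm))        # displayed SLM patterns
Y = (X @ T.T) ** 2                      # amplitude-only detection: |field|^2

def quad_features(x):
    """Linear + squared terms + bias, since intensity is quadratic in the field."""
    return np.concatenate([x, x ** 2, np.ones(1)])

# Ridge regression: a crude stand-in for training the forward network D.
Phi = np.stack([quad_features(x) for x in X])
W = np.linalg.solve(Phi.T @ Phi + 1e-3 * np.eye(Phi.shape[1]), Phi.T @ Y)

def d_forward(x):
    """Predict the screen intensity image for a given SLM pattern."""
    return quad_features(x) @ W

err = np.mean((np.stack([d_forward(x) for x in X]) - Y) ** 2)
```

The surrogate cannot capture the cross-terms of a true quadratic map, but it illustrates the data flow: displayed patterns in, measured intensities out, one model fitted in between.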
Another embodiment of the present invention comprises training one of the neural networks on the reverse path, i.e. from the screen to the light modulator, for example an SLM, through propagation inside the scattering medium. This screen and SLM are referred to as the original screen and original SLM. The training of the neural network on the reverse path can be done in several ways: for example, but not limited to, by adding an extra screen and an extra light modulator (for example an SLM) to the setup, so that in total the setup consists of two SLMs and two screens. The extra SLM added to the system must be placed in a position with the same optical path difference as the original screen. Similarly, the extra screen must be placed in a position with the same optical path difference as the original SLM. In this way, by generating SLM patterns on the extra SLM device and capturing the output patterns on the extra screen, and then training the neural network on these pairs of inputs and outputs from the extra SLM and extra screen, the reverse path of the original setup is learned. So, when the goal is to project a perceivable image on the original screen, a training set consisting of images from, but not limited to, the same class is first uploaded to the extra SLM, and the corresponding outputs transmitted through the fiber in the reverse path are captured on the extra screen. By training a neural network with these two sets of input-output images, it is possible to project perceivable patterns in the forward direction to be captured on the original screen.
In another embodiment according to the present invention, the training on the reverse optical path can be carried out using an implementation akin to Generative Adversarial Networks (GANs). In more detail, two sets of neural networks are used in this embodiment. One set of neural networks (designated here as set of networks D) emulates the forward path of light propagation through said scattering medium. The second set of neural networks (designated here as set of networks G) learns the reverse optical path of light propagation through said scattering medium. The training of the first set (i.e. the neural networks emulating the forward path) is performed as explained in the first embodiment. The second set of neural networks is trained synergistically with the first set, so that the discriminator (i.e. the first set of networks) forces the generator (i.e. the second set of networks) to output SLM patterns that, upon sending through the scattering medium (e.g. a fiber), produce images on the screen similar to the desired set. The training of the two networks, depicted schematically in Fig. 3b, may be carried out in the following way: initially neither of the networks is trained; hence the training may start by collecting examples for training network D. These examples may consist of SLM images and their corresponding output speckle patterns on the screen. It should be noted that the goal here is to find a subset domain of SLM images that produces a certain class of images at the output of the scattering medium (e.g. a fiber). In the beginning, there is no information about such a domain.
Hence, in a first iteration, random SLM images are sent through the scattering medium (e.g. a fiber) and the images are measured on the screen so as to produce a dataset, and the first set of networks D is then trained with this dataset. Once training of the first set of networks D is complete, training of the second set of networks G begins by feeding it with a class of images that should be projected through the scattering medium (e.g. a fiber). The second set of networks G learns the mapping from the output amplitude images on the screen to the proper SLM images, constrained by the physical propagation rules enforced by the first set of networks D. By the end of the first iteration, the domain of the output SLM patterns generated by the second set of networks G must have moved closer, in some metric (Euclidean distance, for example), to the domain of SLM images that produce perfect desired images at the output of the scattering medium (e.g. a fiber).
In a second iteration, training of the first set of networks D is carried out using the SLM images that were produced by the second set of networks G in the previous (first) iteration, rather than random images as in the first iteration. Once complete, training of the second set of networks G is carried out in the same way as in the first iteration.
Depending on the quality of the images that are projected through the scattering medium (e.g. a fiber), this process can be repeated multiple times. It is expected that after each iteration, the domain of the SLM patterns generated by the second set of networks G gets closer to the domain of SLM images that produce perfect target images at the output of the scattering medium (e.g. a fiber). It should be noted that the similarity of the proposed networks to the GAN architecture stems from the fact that training of the second set of networks G, i.e. the mapping from amplitude patterns to SLM images, cannot be straightforwardly carried out because no labels (ground-truth SLM images) for the target output images exist a priori (training cannot be performed in a supervised manner, in which ground-truth labels are available beforehand). Therefore, the performance of the second set of networks G improves by competing with the first set of networks D, as it tries to generate SLM images that result in output amplitude images with higher fidelities; hence the name generative adversarial network.
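The iterative alternation between D and G described above can be sketched as a loop. Here tiny least-squares fits stand in for both networks (G is fit by direct regression from screen images to SLM patterns rather than by adversarial backpropagation through D, a simplification), and a fixed random matrix stands in for the physical medium; sizes and iteration counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
n_slm = n_screen = 16
T = rng.normal(size=(n_screen, n_slm)) / np.sqrt(n_slm)  # hidden medium (stand-in)

def medium(x):
    """The 'experiment': send an SLM pattern, record the screen intensity."""
    return (T @ x) ** 2

def fit_linear(X, Y, lam=1e-3):
    """Tiny regularized least-squares 'network' trainer (stand-in for D or G)."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

targets = rng.random((50, n_screen))     # class of images to project
slm_pool = rng.random((50, n_slm))       # iteration 1: random SLM images

for iteration in range(3):
    screen = np.stack([medium(x) for x in slm_pool])  # measure outputs on the screen
    W_d = fit_linear(slm_pool, screen)                # retrain D on (SLM, screen) pairs
    W_g = fit_linear(screen, slm_pool)                # retrain G: screen -> SLM images
    slm_pool = targets @ W_g                          # G's patterns seed the next iteration

slm_for_target = targets[0] @ W_g   # after the loop: pattern for one desired image
```

Each pass regenerates the training pool from G's current output, so the pool drifts from random patterns toward the domain of SLM images relevant to the target class, as the text describes.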
In another embodiment of the present invention, the first set of networks proposed in the previous embodiment, which emulates the forward path of light propagation through said scattering medium, can itself comprise two sub-sets: one sub-set emulating the real part and the other emulating the imaginary part of the complex input patterns coming from the light modulator and propagating to the screen. Similarly, the second set of networks, which emulates the reverse path of light propagation through said scattering medium, can also comprise two sub-sets, one for emulating the real part and one for emulating the imaginary part of light propagation from the screen to the light modulator. The training of the sets of networks on the forward and reverse paths, which includes the training of the sub-networks, is carried out in exactly the same fashion as described in the previous embodiment.
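How the two sub-sets combine into the measurable quantity can be shown in a few lines: the real-part and imaginary-part sub-networks (linear stand-ins here, with assumed sizes) each produce a field component, and the camera intensity is the sum of their squares.

```python
import numpy as np

rng = np.random.default_rng(3)
n_slm, n_screen = 24, 24

# Two sub-networks (linear stand-ins): one for the real part and one for the
# imaginary part of the field arriving at the screen.
W_re = rng.normal(size=(n_screen, n_slm)) * 0.2
W_im = rng.normal(size=(n_screen, n_slm)) * 0.2

def forward_intensity(slm_pattern):
    """Combine the two sub-networks into the camera-measurable intensity."""
    re = W_re @ slm_pattern          # sub-set emulating the real part
    im = W_im @ slm_pattern         # sub-set emulating the imaginary part
    return re ** 2 + im ** 2        # camera sees |field|^2 = Re^2 + Im^2

I = forward_intensity(rng.random(n_slm))
```

Splitting the complex field this way lets each sub-network remain real-valued while the pair still represents the full complex propagation.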
In another embodiment of the present invention, a colored RGB image can be projected through the scattering medium by using three different light sources, one for each color (corresponding to a specific wavelength), with an approach similar to the previous embodiments. For example, but not limited to, in the case of the neural network method similar to the GAN architecture, the architecture of the network could be changed so that it consists of three different training units, one for each color. The procedure for projecting an image with each color is then carried out in the same fashion as in the previous embodiments. Once trained, each unit generates its own SLM pattern. Then, for display on the screen, a rapid cycle through the three SLM patterns transmitted through the scattering medium and observed on the screen gives the impression of a colored image to the human eye. The three light sources could be, for example but not limited to, three different laser sources with red, green and blue colors, or one tunable laser, or three or more LEDs with appropriate emitting wavelengths. The incoherence of the latter source, i.e. LEDs, is also beneficial for suppressing the unwanted background of the projected images due to shortcomings of the neural network in generating the appropriate SLM pattern or to incomplete light modulation by the SLM (amplitude-only or phase-only modulation instead of amplitude-and-phase (complex) modulation).
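The per-color assembly can be sketched as follows: one independently "trained" unit per wavelength produces a gray-scale projection, and the three results are either stacked into an RGB image or summed incoherently. The `project_channel` helper, its seeds and the image size are all illustrative stand-ins for the three trained units.

```python
import numpy as np

rng = np.random.default_rng(4)
h = w = 8

def project_channel(seed, target):
    """Stand-in for one per-wavelength unit: returns that channel's projection."""
    T = np.random.default_rng(seed).normal(size=(h * w, h * w)) / (h * w)
    return np.clip((T @ target.ravel()).reshape(h, w) ** 2, 0.0, 1.0)

target = rng.random((h, w))
red   = project_channel(10, target)   # e.g. the 633 nm unit
green = project_channel(11, target)   # e.g. the 532 nm unit
blue  = project_channel(12, target)   # e.g. the 488 nm unit

rgb  = np.stack([red, green, blue], axis=-1)   # colored RGB image, shape (H, W, 3)
gray = (red + green + blue) / 3.0              # incoherent superposition (sum)
```

Rapidly cycling the three SLM patterns in hardware plays the role of the channel stack here: the eye performs the recombination that `np.stack` does in software.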
BRIEF DESCRIPTION OF THE DRAWINGS
These and other features, aspects and advantages of the present invention will become better understood with regard to the following non-limiting examples and accompanying drawings, where:
Figure 1: is a schematic illustration of an image projector device that modulates the incoming light with the proper modulation to project images of certain shapes on the screen.
Figure 2: is a schematic illustration of an embodiment of obtaining input-output data to train the neural networks in both the forward way and the reverse way.
Figure 3: is a schematic illustration of the neural networks and their training procedure, in which the network learns to generate the proper modulation that, when uploaded on the light modulator, projects a certain pattern on the screen.
Figure 4: shows examples of the projected images on the screen using amplitude-only modulation for input SLM patterns. The examples are Latin alphabet characters.
Figure 5: shows examples of projected images on the screen using amplitude-only modulation for input SLM patterns. The examples are digit images.
Figure 6: shows examples of projected images on the screen using amplitude-only modulation for input SLM patterns. The examples are drawings of a running person.
Figure 7: shows examples of projected images on the screen using amplitude-only modulation for input SLM patterns. The examples are drawings of a skull shape, a heart, a smiley and a circle.
Figure 8: shows examples of projected images on the screen for three different wavelengths (488 nm, 532 nm, 633 nm), RGB colored images and gray-scale images of the incoherent superposition (sum) of the three wavelengths, projected on the screen using amplitude-and-phase (complex) modulation for input SLM patterns. The examples are Latin alphabet characters.
Figure 9: shows examples of the projected RGB images on the screen, as well as the incoherent superposition (sum) of three different wavelengths (488 nm, 532 nm, 633 nm), projected on the screen using amplitude-and-phase (complex) modulation for input SLM patterns generated using the proposed neural network approach as well as the full-measurement transmission-matrix approach (provided for the sake of comparison). The examples comprise various drawings.
Figure 10: shows examples of the projected images on the screen for three different wavelengths (488 nm, 532 nm, 633 nm), RGB colored images and gray-scale images of the incoherent superposition (sum) of the three wavelengths, projected on the screen using amplitude-and-phase (complex) modulation for input SLM patterns. The examples are portraits of people and cartoon characters.
EXEMPLARY EMBODIMENTS
In figure 1, a schematic illustration of an embodiment of a multimode optical fiber projector 1 according to the present invention is shown. The projector comprises a laser source 2, such as a superluminescent diode (SLED) or a light-emitting diode (LED); an optional lens 2a provided between the laser source 2 and the subsequent light modulator 3; a light modulator 3 selected from the group consisting of a spatial light modulator (SLM) and a digital micromirror device (DMD); a device 4 comprising a scattering medium, such as a multimode fiber (MMF); and a camera system comprising a screen 5. Wavefront shaping is required to undo the scrambling of the scattering medium (e.g. the MMF). This wavefront shaping is carried out by a system of two or more neural networks (not shown in Fig. 1). The system of networks is given the desired image that is to be projected on the screen 5, and the appropriate wavefront (i.e. spatial modulation pattern) is computed by the system of networks.
In figure 2, a schematic illustration of an embodiment of the optical setup for acquiring examples for forward and reverse training of the system of networks is presented. For training the system of networks on the forward path of light transmission inside the scattering medium 4, a set of many examples from the original SLM 3 to the original screen 5 is provided. For this, light from a laser source 2 is led through a beam splitter 6 to the original SLM 3, where it is modulated and sent through the light scattering medium 4 (here an MMF) to another beam splitter 6', from where it is led via mirrors 7, 7' to the original screen 5. The system of networks is then trained with this set of examples. For training the networks that are intended to emulate the reverse path of light propagation, examples relating the pattern on the secondary SLM 3' to the secondary screen 5' are obtained and the networks are trained with this set of examples. For this, the light from the laser source 2 is led through the beam splitter 6, from where it is sent through the light scattering medium 4 (here an MMF) to the other beam splitter 6', from where it is led via mirror 7'' to a secondary (extra) SLM 3'. There, the light is modulated and sent back in the reverse direction via mirror 7'', beam splitter 6', through the scattering medium (MMF) 4 to the beam splitter 6, from where it is led via mirror 7''' to a secondary (extra) screen 5'. In figure 3, the overall schematic of the projector network system comprising two sub-networks is shown: a discriminator (D) (first set of networks) and a generator (G) (second set of networks). Once trained, the sub-network G accepts the target pattern desired to be projected at the output of the MMF (i.e. the desired image) and accordingly generates an SLM image corresponding to that pattern.
The role of the sub-network D, which is trained to emulate the optical forward path SLM-MMF-camera, is to help the network G during the training step to come up with SLM patterns which result in projected images on the camera with higher fidelities. In this diagram, the fiber, the discriminator and the generator sub-networks are denoted by F, D and G, respectively. The training procedure is carried out in three steps:
A- A number of experimental examples from the SLM images to the amplitude-only images on the screen are obtained.
B- The sub-network D is trained on these images to learn to map the SLM images to screen images; hence D essentially learns the optical forward path SLM-MMF-camera.
C- The network weights of D are fixed, and the sub-network G is trained with target images as input, producing an SLM image corresponding to each target image. The SLM image is then passed to the fixed sub-network D, which produces an estimate of the fiber output (NN output) based on the SLM image it received. The error between the output of D and the target image (desired image, the input of G) is back-propagated to G to update its trainable weights and biases. The validation procedure is carried out by feeding the target image (desired image) to the trained sub-network G and acquiring the appropriate SLM image corresponding to that target image.
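Step C above can be sketched with hand-derived gradients for the simplest case: both D and G reduced to single linear layers. D's weights stay fixed while the error between D's estimate and the target image is back-propagated through D to update only G. The sizes, learning rate and iteration count are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 12
W_d = rng.normal(size=(n, n)) / np.sqrt(n)   # step B result: fixed forward emulator D
W_g = np.zeros((n, n))                       # step C: trainable generator G

targets = rng.random((100, n))               # desired images (toy vectors)
initial_err = np.mean((W_d @ (W_g @ targets.T) - targets.T) ** 2)

lr = 0.02
for _ in range(1000):
    t = targets[rng.integers(len(targets))]
    slm = W_g @ t                  # G proposes an SLM image for the target
    nn_out = W_d @ slm             # fixed D estimates the fiber output (NN output)
    err = nn_out - t               # error between D's output and the desired image
    # Back-propagate through the frozen D: only G's weights are updated.
    W_g -= lr * np.outer(W_d.T @ err, t)

final_err = np.mean((W_d @ (W_g @ targets.T) - targets.T) ** 2)
```

The gradient `outer(W_d.T @ err, t)` is exactly the chain rule through the frozen linear D; in the real system the same error signal would flow backward through a deep D into a deep G.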
Figure 4 shows examples of the projected images from the Latin alphabet; (a) the train and (b) the test dataset captured on the camera are shown. The target image, shown in the bottom right corner of each image, is fed to the neural network to generate the appropriate SLM pattern (amplitude-modulated) that produces that desired target image in the central part of the fiber facet's FOV (field of view). This area is shown as a dashed box on the top left image. The fidelity of the projected image with respect to its corresponding grayscale target image is shown as an inset in each image. In figure 5, examples of the projected images from the MNIST handwritten digits dataset are provided. Target images, shown next to the projected images, are blindly fed to the neural network system, which was trained only on alphabet characters. The SLM patterns generated by the neural network system (amplitude-modulated) are sent through the fiber and the outputs of the fiber are captured on the screen and by a camera. The fidelity of the projected image with respect to its corresponding grayscale target image is shown as an inset in each image.
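The fidelity score shown as an inset in each figure can be computed, for example, as the Pearson correlation between the projected image and its gray-scale target. The patent does not spell out its exact formula, so this correlation-based definition is an assumption, shown here as one common choice.

```python
import numpy as np

def fidelity(projected, target):
    """Hypothetical fidelity score: Pearson correlation between a projected
    image and its gray-scale target (one common choice; the patent's exact
    metric is not specified)."""
    p = projected.ravel() - projected.mean()
    t = target.ravel() - target.mean()
    return float(p @ t / (np.linalg.norm(p) * np.linalg.norm(t) + 1e-12))

rng = np.random.default_rng(6)
img = rng.random((16, 16))                       # stand-in target image
noisy = img + 0.05 * rng.normal(size=img.shape)  # stand-in projected image
```

A perfect projection scores 1.0; speckle background and modulation shortcomings pull the score down, which is what the insets in figures 4-10 quantify.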
Figure 6 shows the result of the neural network system trained on Latin alphabet characters when used to generate the required SLM pattern (amplitude-modulated) that produces a drawing of a running person in an area of 200x200 pixels (1.6x1.6 mm2) corresponding to the central part of the fiber facet's FOV. The desired target image (left), the corresponding SLM image (middle) and the projected image (right) are depicted. The fidelity of the projected image with respect to its desired image is shown as an inset.
In figure 7, examples of projected random drawings (a skull, a heart, a smiley, a circle) are provided. Target images, shown in the bottom right corner, are blindly fed to the neural network system, which was trained only on alphabet characters. The SLM patterns generated by the neural network system (amplitude-modulated) are sent through the fiber and the outputs of the fiber are captured on the camera. The fidelity of the projected image with respect to its corresponding grayscale target image is shown as an inset in each image.
In figure 8, examples of the projected colored images from the EMNIST Latin characters dataset for three different wavelengths (488 nm, 532 nm, 633 nm) are provided. Target images are separately fed to the neural network system, which was trained only on alphabet characters for a single color. The neural network system-generated SLM patterns (amplitude and phase modulated) are sent through the fiber, and the outputs of the fiber corresponding to the central part of the fiber's facet FOV (shown as a dashed box in one of the images) are captured on the screen and by the camera. The three colored images can be recombined as a stack of images comprised of three channels (one channel for each color) to create a single colored RGB image. Similarly, the three images can be superimposed to create a single incoherent gray-scale image. The fidelity of the projected images with respect to their corresponding grayscale target images is shown as an inset in each image.
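The recombination described above amounts to stacking the three single-wavelength captures as channels and, for the incoherent case, summing their intensities. A minimal NumPy sketch, where the array names and the synthetic data are assumptions and not the authors' code:

```python
import numpy as np

# Synthetic stand-ins for the three captures at 633 nm (red),
# 532 nm (green) and 488 nm (blue): grayscale intensities in [0, 1].
red = np.linspace(0.0, 1.0, 200 * 200).reshape(200, 200)
green = np.full((200, 200), 0.25)
blue = np.zeros((200, 200))
blue[50:150, 50:150] = 0.5

# Stack the three channels (one per color) into a single RGB image.
rgb = np.stack([red, green, blue], axis=-1)  # shape (200, 200, 3)

# Incoherent superposition: intensities add channel by channel;
# dividing by 3 keeps the grayscale result in [0, 1].
gray = rgb.sum(axis=-1) / 3.0

print(rgb.shape, gray.shape)
```

The channel ordering (R, G, B) is the usual display convention; only the per-wavelength intensities, not phases, enter the incoherent sum.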
In figure 9, various examples of the projected colored RGB images comprising three wavelengths (488 nm, 532 nm, 633 nm), as well as the incoherent superposition thereof, are provided. Target images are fed to the system of neural networks trained only on alphabet characters. The neural network system-generated SLM patterns (amplitude and phase modulated) are sent through the fiber, and the outputs of the fiber are captured on the screen and by the camera. For the sake of comparison, similar images are projected through the fiber using the conventional transmission matrix approach.
In figure 10, various examples of the projected colored RGB images comprising three wavelengths (488 nm, 532 nm, 633 nm), as well as the incoherent superposition thereof, are provided. Target images comprising pictures of human portraits are fed to the system of neural networks trained on these images. The neural network system-generated SLM patterns (amplitude and phase modulated) are sent through the fiber, and the outputs of the fiber are captured on the screen and by the camera.

Claims

1. A method for projecting images through or by reflection off a scattering medium, the method comprising the steps of:
Illuminating a light modulator (3), preferably selected from the group consisting of a spatial light modulator (SLM) and a digital micromirror device (DMD), of a multimode optical fiber projector (1) with one or more light source(s) (2);
Providing a modulation pattern selected from the group consisting of amplitude-only modulation, phase-only modulation, and amplitude and phase modulation, to the light modulator (3) to modulate spatially the light from said light source (2);
Sending the said modulated light through a scattering medium (4) or by reflection off a scattering medium (4) onto a screen (5);
Detecting the amplitude of the light with a camera system; and
Computing the spatial modulation pattern by using a system of two or more neural networks in series.
2. The method according to claim 1, characterized in that one of the neural networks is trained in a path of propagation of light in the forward direction.
3. The method according to claim 2, characterized in that said training in the forward direction is performed by collecting examples displayed on the light modulator (3) and the corresponding output field on the screen (5).
4. The method according to claim 1, characterized in that one of the neural networks is trained in the reverse path, from the screen (5) to the light modulator (3) through propagation inside the scattering medium (4).
5. The method according to claim 4, characterized in that training in the reverse path is performed by adding an extra screen (5') and an extra light modulator (3') to the projector (1), wherein the extra light modulator (3') added to the projector (1) is placed in a position with the same optical path difference as the original screen (5), and the extra screen (5') is placed in a position with the same optical path difference as the original light modulator (3), generating SLM patterns on the extra light modulator (3') and capturing the output patterns on the extra screen (5'), and then training the system of neural networks based on these pairs of inputs and outputs.
6. The method according to claim 4, characterized in that training in the reverse path is performed by using two sets of neural networks, wherein one set of networks D emulates the forward path of light propagation through the scattering medium (4), and the second set of networks G learns the reverse optical path through said scattering medium (4), wherein the second set of networks is trained synergistically with the first set so that the first set of networks D forces the second set of networks G to output SLM patterns that, upon sending through the scattering medium (4), produce images on the screen (5) similar to a desired set.
7. The method according to claim 6, characterized in that in a first iteration of training, random SLM images are sent through the scattering medium (4), the images on the screen (5) are measured so as to produce a dataset, the first set of networks D are trained with this dataset, and after completion of the training of the first set of networks D the second set of networks G is trained by feeding said second set of networks G with a class of images to be projected through the scattering medium (4).
8. The method according to claim 6 or 7, characterized in that in a second iteration of the training the first set of networks D are trained with the SLM images that were produced by the second set of networks G in the first iteration, and after completion of the training of the first set of networks D the second set of networks G is trained as in the first iteration.
9. The method according to any of claims 6 to 8, characterized in that additional iterations are carried out, wherein the first set of networks D are trained with the SLM images that were produced by the second set of networks G in the previous iteration.
10. The method according to any of claims 2 to 9, characterized in that the training of the neural networks is carried out for beams emitted from light sources of three different colors, red (R), green (G) and blue (B), to project an RGB image on the screen (5) after the scattering medium (4).
11. Multimode optical fiber projector (1) for performing the method according to any of claims 1 to 10, comprising
One or more light source(s) (2), preferably a laser source or an LED,
a light modulator (3), preferably selected from the group consisting of a spatial light modulator (SLM) and a digital micromirror device (DMD),
a device comprising a scattering medium (4), such as a multimode fiber (MMF) ,
a camera system comprising a screen (5), and
a system of two or more neural networks in series for computing a spatial modulation pattern.
12. Multimode optical fiber projector according to claim 11, additionally comprising an extra screen (5') and an extra light modulator (3'), wherein the extra light modulator (3') added to the projector (1) is in a position with the same optical path difference as the original screen (5), and the extra screen (5') is in a position with the same optical path difference as the original light modulator (3).
13. Multimode optical fiber projector according to claim 11 or 12, characterized in that the camera is designed to detect only the amplitude of the light.
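Claims 6 to 9 describe an alternating scheme: a forward set of networks D is fitted to measured (SLM pattern, screen image) pairs, a generator set G is then trained through the frozen D, and in later iterations D is refitted on the SLM patterns that G produced. A minimal numerical sketch of that loop, using linear least-squares models as stand-ins for the two network sets and a random matrix as a toy scattering medium; all names and the toy medium are assumptions, and the claimed system uses trained neural networks and a real fiber:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32  # number of "SLM" pixels (toy scale)

# Unknown scattering medium; used only to generate measurements and
# never handed to the training procedure directly.
T_true = rng.normal(size=(n, n)) / np.sqrt(n)

def medium(x):
    """Propagate SLM pattern(s) x (columns) through the toy medium."""
    return T_true @ x

# Iteration 1, step 1 (cf. claim 7): send random SLM images through
# the medium, record the screen images, and fit the forward model D.
X = rng.normal(size=(n, 500))   # random SLM patterns, one per column
Y = medium(X)                   # measured screen patterns
D = Y @ np.linalg.pinv(X)       # linear stand-in for network set D

# Iteration 1, step 2: with D frozen, train G so that D(G(target))
# reproduces the target; for a linear D this is its pseudo-inverse.
G = np.linalg.pinv(D)

# Iteration 2 (cf. claim 8): refit D on the SLM images produced by G
# for the class of images to be projected, then retrain G as before.
targets = rng.normal(size=(n, 200))
X2 = G @ targets
D2 = medium(X2) @ np.linalg.pinv(X2)
G2 = np.linalg.pinv(D2)

# Sending G2's SLM patterns through the real medium should now
# reproduce the desired targets on the screen.
error = np.abs(medium(G2 @ targets) - targets).max()
print(error)
```

In the linear toy case the second iteration changes little, because the least-squares fit is already exact; in the claimed nonlinear setting, retraining D on G's own outputs is what steers the pair toward the image class of interest.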
PCT/EP2020/064445 2019-06-07 2020-05-25 System and method for projecting images through scattering media WO2020244952A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
EP19178922 2019-06-07
EP19178922.1 2019-06-07
EP19179311.6 2019-06-11
EP19179311 2019-06-11

Publications (1)

Publication Number Publication Date
WO2020244952A1

Family

ID=70740680

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2020/064445 WO2020244952A1 (en) 2019-06-07 2020-05-25 System and method for projecting images through scattering media

Country Status (1)

Country Link
WO (1) WO2020244952A1 (en)


Non-Patent Citations (13)

* Cited by examiner, † Cited by third party
Title
ALEX TURPIN ET AL: "Light scattering control in transmission and reflection with neural networks", OPTICS EXPRESS, vol. 26, no. 23, 9 November 2018 (2018-11-09), pages 30911 - 30929, XP055715694, DOI: 10.1364/OE.26.030911 *
BABAK RAHMANI ET AL: "Competing Neural Networks for Robust Control of Nonlinear Systems", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 29 June 2019 (2019-06-29), XP081590961 *
CARAVACA-AGUIRRE, A. M.; NIV, E.; CONKEY, D. B.; PIESTUN, R.: "Real-time resilient focusing through a bending multimode fiber", OPTICS EXPRESS, vol. 21, 2013, pages 12881 - 12887
CHOI, Y. ET AL.: "Scanner-free and wide-field endoscopic imaging by using a single multimode optical fiber", PHYSICAL REVIEW LETTERS, vol. 109, 2012, pages 203901, XP055423438, DOI: 10.1103/PhysRevLett.109.203901
CIZMAR, T.; DHOLAKIA, K.: "Shaping the light transmission through a multimode optical fibre: complex transformation analysis and applications in biophotonics", OPTICS EXPRESS, vol. 19, 2011, pages 18871 - 18884, XP002705793, DOI: 10.1364/OE.19.018871
DI LEONARDO, R.; BIANCHI, S.: "Hologram transmission through multi-mode optical fibers", OPTICS EXPRESS, vol. 19, 2011, pages 247 - 254, XP002705794, DOI: 10.1364/OE.19.000247
GOVER, A.; LEE, C. P.; YARIV, A.: "Direct transmission of pictorial information in multimode optical fibers", JOSA, vol. 66, 1976, pages 306 - 311
GU, R. Y.; MAHALATI, R. N.; KAHN, J. M.: "Design of flexible multi-mode fiber endoscope", OPTICS EXPRESS, vol. 23, 2015, pages 26905 - 26918
PAPADOPOULOS, I. N.; FARAHI, S.; MOSER, C.; PSALTIS, D.: "Focusing and scanning light through a multimode optical fiber using digital phase conjugation", OPTICS EXPRESS, vol. 20, 2012, pages 10583 - 10590
PHILLIP ISOLA ET AL: "Image-to-Image Translation with Conditional Adversarial Networks", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 21 November 2016 (2016-11-21), XP080733474, DOI: 10.1109/CVPR.2017.632 *
SPITZ, E.; WERTS, A.: "Transmission des images a travers une fibre optique" [Transmission of images through an optical fibre], COMPTES RENDUS HEBD. DES SEANCES DE L'ACAD. DES SCI. SER. B, vol. 264, 1967, pages 1015
TURPIN, A.; VISHNIAKOU, I.; SEELIG, J. D.: "Light scattering control in transmission and reflection with neural networks", OPTICS EXPRESS, vol. 26, 2018, pages 30911 - 30929
YAMAGUCHI, I.; ZHANG, T.: "Phase-shifting digital holography", OPTICS LETTERS, vol. 22, 1997, pages 1268 - 1270, XP000699486

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113567396A (en) * 2021-06-02 2021-10-29 西安电子科技大学 Scattering imaging system and method based on speckle field polarization common mode rejection
CN113567396B (en) * 2021-06-02 2022-11-11 西安电子科技大学 Scattering imaging system and method based on speckle field polarization common mode rejection
CN113985566A (en) * 2021-09-10 2022-01-28 西南科技大学 Scattered light focusing method based on spatial light modulation and neural network
CN113985566B (en) * 2021-09-10 2023-09-12 西南科技大学 Scattered light focusing method based on spatial light modulation and neural network
CN114839789A (en) * 2022-05-20 2022-08-02 西南科技大学 Diffraction focusing method and device based on binary space modulation
CN114839789B (en) * 2022-05-20 2023-09-12 西南科技大学 Diffraction focusing method and device based on binarization spatial modulation

Similar Documents

Publication Publication Date Title
US20240255762A1 (en) Near-eye sequential light-field projector with correct monocular depth cues
WO2020244952A1 (en) System and method for projecting images through scattering media
Kozacki et al. Color holographic display with white light LED source and single phase only SLM
KR101428819B1 (en) Projection device for the holographic reconstruction of scenes
CN109489583B (en) Projection device, acquisition device and three-dimensional scanning system with same
US9568886B2 (en) Method of printing holographic 3D image
US9148658B2 (en) Light-based caustic surface calibration
CN107850867A (en) Dynamic holographic depth of focus printing equipment
US20160378062A1 (en) Hologram data generating method, hologram image reproduction method, and hologram image reproduction device
WO2004102542A1 (en) Optical information recording/reproduction device and method
JPH11126012A (en) Device for producting individual hologram for keeping document secrecy
CN101449214A (en) Holographic projection device for the reconstruction of scenes
WO2019204479A1 (en) Image reconstruction from image sensor output
ES2937132T3 (en) System and method for projecting digital content including hair color changes on a user's head
CN108762033B (en) Imaging method and optical system, and storage medium, chip and assembly thereof
US11567451B2 (en) Holographic display apparatus and method for providing expanded viewing window
US6062693A (en) Three-dimensional image projecting device
CN110263848B (en) Multimode fiber imaging method and device based on generating type countermeasure network
EP4280589A1 (en) Imaging method and imaging device
CN108965838B (en) High dynamic range scene generation method, device and system
US11347057B2 (en) Image display device and method of displaying image using multiplex holographic optical element
CN107797435A (en) A kind of color holographic display system based on optical attenuation principle
Cao et al. Quaternary pulse width modulation based ultra-high frame rate scene projector used for hardware-in-the-loop testing
Rahmani et al. Competing neural networks for robust control of nonlinear systems
CN102722095B (en) A method and a system for generating holographic interference fringes

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20726495

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20726495

Country of ref document: EP

Kind code of ref document: A1