
WO2014203038A1 - System and method for implementing reservoir computing in magnetic resonance imaging device using elastography techniques - Google Patents


Info

Publication number
WO2014203038A1
Authority
WO
WIPO (PCT)
Prior art keywords
gel
computing
reservoir
recurrent neural
physical
Prior art date
Application number
PCT/IB2013/055041
Other languages
French (fr)
Inventor
Ozgur Yilmaz
Volkan ACIKEL
Original Assignee
Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi
Priority date
Filing date
Publication date
Application filed by Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi filed Critical Aselsan Elektronik Sanayi Ve Ticaret Anonim Sirketi
Priority to PCT/IB2013/055041 priority Critical patent/WO2014203038A1/en
Publication of WO2014203038A1 publication Critical patent/WO2014203038A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/067 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using optical means
    • G06N3/0675 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using optical means using electro-optical, acousto-optical or opto-electronic means
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01R MEASURING ELECTRIC VARIABLES; MEASURING MAGNETIC VARIABLES
    • G01R33/00 Arrangements or instruments for measuring magnetic variables
    • G01R33/20 Arrangements or instruments for measuring magnetic variables involving magnetic resonance
    • G01R33/44 Arrangements or instruments for measuring magnetic variables involving magnetic resonance using nuclear magnetic resonance [NMR]
    • G01R33/48 NMR imaging systems
    • G01R33/54 Signal processing systems, e.g. using pulse sequences; Generation or control of pulse sequences; Operator console
    • G01R33/56 Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution
    • G01R33/563 Image enhancement or correction, e.g. subtraction or averaging techniques, e.g. improvement of signal-to-noise ratio and resolution of moving material, e.g. flow contrast angiography
    • G01R33/56358 Elastography

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Signal Processing (AREA)
  • Data Mining & Analysis (AREA)
  • High Energy & Nuclear Physics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Neurology (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Condensed Matter Physics & Semiconductors (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Vascular Medicine (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)

Abstract

A physical implementation of a recurrent neural network is disclosed, which comprises a reservoir computing medium, a magnetic resonance imaging device and a computing/storage device. An inhomogeneous gel substance located in a magnetic resonance imaging device, acoustic wave generators connected to the gel, and the magnetic gradients in the magnetic resonance imaging device act as the physical reservoir of computation. The input is encoded in the generated acoustic and magnetic waves, and wave interactions provide the nonlinear operations for computation. The nonlinear output of the reservoir is read out by the MR imaging device. The inhomogeneous gel is prepared via an unsupervised annealing procedure that optimizes the representation of the reservoir medium for a specific dataset.

Description

DESCRIPTION
SYSTEM AND METHOD FOR IMPLEMENTING RESERVOIR COMPUTING IN MAGNETIC RESONANCE IMAGING DEVICE USING ELASTOGRAPHY TECHNIQUES
Field of the invention
The present invention relates to a system and method for implementing a specific class of recurrent neural network algorithm, called reservoir computing, using a magnetic resonance imaging device and its principles in elastography.
Background of the invention
Recurrent Neural Networks (RNNs) are connectionist computational models that utilize distributed representations and the nonlinear dynamics of their units. Information in RNNs is propagated and processed in time through the states of their hidden units, which makes them appropriate tools for sequential information processing. There are two broad types of RNNs: stochastic, energy-based networks with symmetric connections, and deterministic networks with directed connections.
RNNs are known to be Turing-complete computational models (Siegelmann and Sontag, 1995) and universal approximators of dynamical systems (Funahashi and Nakamura, 1993). They are especially appealing for problems that require remembering long-range statistical relationships, such as speech, natural language processing, video processing, financial data analysis etc. Additionally, RNNs have been shown to be very successful generative models for data completion tasks (Salakhutdinov and Hinton, 2012).
Despite their immense potential as universal computers, difficulties in training RNNs arise due to the inherent difficulty of learning long-term dependencies (Hochreiter, 1991; Bengio et al., 1994; and see Hochreiter and Schmidhuber, 1997) and convergence issues (Doya, 1992). However, recent advances suggest promising approaches to overcoming these issues, such as utilizing a reservoir of coupled oscillators (Maass et al., 2002; Jaeger, 2001). Reservoir computing (echo state networks or liquid state machines) alleviates the problem of training a recurrent network by using a static dynamical reservoir of coupled oscillators operating at the edge of chaos. It is claimed that many dynamical systems of this type possess high computational power (Bertschinger and Natschlager, 2004; Legenstein and Maass, 2007). In this approach, due to the rich dynamics already provided by the reservoir, there is no need to train many recurrent layers, and learning takes place only at the output (readout) layer. This simplification enables the use of recurrent neural networks in complicated tasks that require memory for long-range (both spatially and temporally) statistical relationships.
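The readout-only training idea can be illustrated with a minimal software echo state network. Everything below (sizes, weight scaling, the ridge-regression readout, the toy data) is an illustrative sketch of the general reservoir computing recipe, not part of the claimed hardware system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir: the recurrent weights are scaled so the spectral
# radius is below 1, a common heuristic for staying in the echo state regime.
n_in, n_res = 3, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Collect the reservoir state after each input vector in a sequence."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ u + W @ x)   # fixed, untrained nonlinear update
        states.append(x.copy())
    return np.array(states)

# Only the readout is trained, here with ridge regression on toy data.
U = rng.standard_normal((500, n_in))
y = np.sin(U[:, 0]) + 0.1 * U[:, 1]
X = run_reservoir(U)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("training MSE:", np.mean((X @ W_out - y) ** 2))
```

Only W_out is fitted; W_in and W stay fixed, which is the property the physical gel reservoir of the invention exploits.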
Previously, Fernando and Sojakka (2003) implemented reservoir computing in a system consisting of a bucket of water and a camera mounted on top of the water surface. In their approach, the water waves act as the reservoir of dynamical activity that maps the input onto a high dimensional nonlinear space. Diffusive wave front interactions on the water surface produce the necessary nonlinearity in the reservoir. The water reservoir is vibrated by multiple mechanical actuators mounted on the bucket surfaces, which act as the input device. It is necessary to encode the actual inputs (speech signals, images, financial data etc.) into mechanical actuator commands. The camera captures the water wave evolution caused by the actuator vibration, and the image properties (edge strength in an N*M grid) are used as the output of the reservoir. The output vector of the reservoir is then used to train the network using classification methods for a speech recognition task ("one" vs. "zero" speech recognition). Adamatzky (2001 and 2002) previously analyzed the usage of various nonlinear media for computing. Walmsley (2001), Duport et al. (2012) and Paquot et al. (2012) proposed optical devices, and Adamatzky (2004) proposed chemical reactions for implementing the nonlinear medium. US patent 7,392,230 B2 proposed a method for implementing a reservoir system using nanotechnology, i.e. molecular interactions modulated by electrical input. In addition to these physical implementation studies, there are patents that implement reservoir computing in software for specific purposes. Patents US 20130060772 A1 and US 8301628 B2 suggest using an echo state network for ontology generation, and patent EP 2389669 A1 proposes using a reservoir computing based neural network for geodatabase information processing.
Muthupillai et al. (1996) proposed a method for magnetic resonance (MR) elastography, in which the tissue stiffness properties are measured using a setup in an MR machine. In this setup, the tissue is mechanically vibrated using actuators, and the acoustic strain waves that propagate in the tissue are captured in the MR image using a special type of magnetic gradient called a "motion sensitizing gradient". The method's output is a 3D MR image volume of the acoustic waves that vibrate the tissue. In their experiment, they used a homogeneous phantom gel to emulate tissue.
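As background for why the wave pattern shows up in the phase-contrast image: the phase accumulated by spins moving in a motion-sensitizing gradient is φ = γ ∫ G(t) · u(r, t) dt. The numeric check below is a sketch assuming a sinusoidal gradient synchronized with the mechanical vibration and illustrative parameter values; it is not taken from the patent text.

```python
import numpy as np

gamma = 2.675e8            # gyromagnetic ratio of 1H, rad/(s*T)
omega = 2 * np.pi * 100    # mechanical/gradient angular frequency, rad/s
G0, xi0 = 20e-3, 50e-6     # gradient amplitude (T/m), peak displacement (m)
k, alpha = 2 * np.pi / 0.05, 0.3   # wavenumber (rad/m), initial phase (rad)
N = 4                      # number of gradient cycles
T = 2 * np.pi / omega

t = np.linspace(0, N * T, 20001)
dt = t[1] - t[0]
for r in (0.00, 0.01, 0.02):                         # sample locations (m)
    G = G0 * np.cos(omega * t)                       # motion-sensitizing gradient
    u = xi0 * np.cos(k * r - omega * t + alpha)      # local tissue displacement
    f = G * u
    phi = gamma * np.sum((f[:-1] + f[1:]) / 2) * dt  # trapezoidal integral of G*u
    closed = gamma * G0 * xi0 * N * T / 2 * np.cos(k * r + alpha)
    print(f"r={r:.2f} m  numeric phi={phi:.3f} rad  closed form={closed:.3f} rad")
```

The accumulated phase tracks cos(k · r + α), so a snapshot of the propagating strain wave is written directly into the image phase.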
Objects of the invention
The object of the invention is to provide a system for implementing reservoir computing in a magnetic resonance imaging device, utilizing both acoustic and magnetic wavefront interactions and the inhomogeneity in the phantom gel as the nonlinearity of the medium.
Another object of the invention is to provide a method for preparing the physical medium of an inhomogeneous gel volume that is specifically tailored for the task of the reservoir computer. This method is very similar to the unsupervised pre-training of neural networks with unlabeled data (Hinton et al., 2006).
Detailed description of the invention
In Figure 1, the physical construct (100) of the reservoir computing device is shown; it is composed of a block of gel (101), inhomogeneity introduced in the gel (102), mechanical actuators that vibrate the gel (103), an MR imaging machine (104) and a computing device (105) for processing and communicating reservoir data. In a preferred implementation of the system, the gel is an agarose compound; however, any semisolid gel that can be vibrated by the actuators can be chosen. Any shape can be used for the container of the gel, however using a cube enables systematic placement of the mechanical actuators (103). As described in method (600), the inhomogeneity in the gel can be a combination of a non-metal object of any shape that distorts the acoustic wave propagation, and stiffness inhomogeneity due to a varying concentration of the gel ingredient. An object can be inserted into the gel during preparation, or the inhomogeneity can be introduced in a region by using a different concentration of gel ingredient, which creates a different stiffness in that region, or both. In a preferred implementation, the actuators are located on 5 sides of the cube (the bottom side is omitted). The actuators are able to create point-source or planar acoustic waves, depending on the input code generated in the encoding stage (202). Actuators can be of any type available in elastography methods: electromechanical driver, piezoelectric stack driver, focused-ultrasound based or acoustic-speaker based. The MR imaging machine (104) is both one of the input devices (via the motion sensitizing gradient) and the output device of the system, and can be used for phase-contrast MRI (Muthupillai et al., 1996). This technique provides 3D volume images of the propagating acoustic waves in the gel. A computing device (105) is connected to the system to receive input from the outside world (201), to provide the processing medium for the encoding (202) and decoding (205) stages, to process (206) the reservoir output for a specific task (classification etc.) and to communicate the output (207) of the system to the outside world.

In Figure 2, the algorithmic flow of the system (200) is given. The reservoir computing system receives the input data (201). The encoding stage (202) translates the input into a code that drives the mechanical actuators and modulates the motion sensitizing magnetic gradient. At the end of this stage the input data is transformed into a set of instructions that drives the physical system. Then, the physical system is excited (203) according to the instructions generated in the previous stage, and phase-contrast MR imaging (204) is performed, which gives an image volume of the gel. The MR image volume of the gel is decoded (205). In the decoding stage, the MR image volume is converted into a data vector that represents the state of the complex acoustic wave patterns in the gel. The decoded data vector is then further processed (206) according to the task at hand (e.g. recognition, data compression, clustering etc.). Stages (203) and (204) take place in the physical construct of the system, whereas the rest of the stages are part of the software implemented in the computing device (105). This computer accesses the input data (201), fetches the MR image volume (204) for decoding (205), and then applies processing algorithms (206) for the assigned task.
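The flow of stages (201)-(207) can be summarized as a thin software skeleton running on the computing device (105). The function names and the stand-in excitation/imaging step below are placeholders assumed for illustration; they are not an actual device API.

```python
import numpy as np

def encode(input_data):                       # stage (202)
    """Translate the input into actuator/gradient instructions (placeholder)."""
    return {"actuators": np.atleast_1d(input_data), "gradient": "sinusoidal"}

def excite_and_image(instructions):           # stages (203) and (204), hardware
    """Placeholder for exciting the gel and acquiring a phase-contrast volume."""
    rng = np.random.default_rng(0)
    return rng.standard_normal((32, 32, 32))  # stand-in MR image volume

def decode(volume):                           # stage (205)
    """Collapse the MR volume into a reservoir state vector (placeholder)."""
    return volume.reshape(8, 4, 8, 4, 8, 4).std(axis=(1, 3, 5)).ravel()

def process(state_vector):                    # stage (206)
    """Task-specific readout, e.g. a trained classifier (placeholder)."""
    return float(state_vector.mean() > 0)

def run(input_data):                          # stages (201) through (207)
    instructions = encode(input_data)
    volume = excite_and_image(instructions)
    state = decode(volume)
    return process(state)

print(run([0.2, 0.7, 0.1]))
```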
The design of the encoding and decoding stages is essential for efficient use of the reservoir computing system. In order to understand the encoding stage of the system, the principles of MR elastography need to be revisited. In elastography, the phase shift of the MR signal is acquired by the machine, and the image intensity at a voxel is a function of this shift. The shift at location r in the gel has the form:
φ(r) ∝ γ (G · ξ) cos(k · r + α)

The MR image is dependent on the gyromagnetic ratio (γ) of the material at location r, the angular frequency (ω), the wavenumber (k), the initial phase offset (α) and the peak displacement (ξ) of the mechanical excitation, and the magnetic gradient (G). G itself can be a temporally periodic signal. By looking at the formula that converts the acoustic wave vibration into the MR image, and inspecting the physical architecture of the proposed system (Figure 1), we can list the different variables that can be used for encoding the input signal into instructions for the physical system:
Mechanical wave type (point source or planar)
Mechanical wave location (x,y,z) on the gel volume, see (103) in Figure 1
Mechanical wave parameters (ω, k, α, ξ)
Magnetic gradient temporal frequency ωx, ωy, ωz in each dimension
Magnetic gradient strength G0x, G0y, G0z in each dimension
There are in total at least 14 (V) variables available for encoding. Using these variables, the input data (201) needs to be converted into a specific set of system instructions, and the system is excited (203) with these instructions. In a similar but simpler system proposed by Fernando and Sojakka (2003), actuator locations represented different frequency channels of the speech data, and the time-varying peak displacement of the waves represented the magnitude of the specific frequency channel.
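One way to picture a system instruction is as a record holding the fourteen variables listed above for a single excitation epoch. The field names, types and example values below are illustrative assumptions, not prescribed by the invention.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class EpochInstruction:
    """System instruction for one excitation epoch (illustrative fields)."""
    wave_type: str                              # "point" or "planar"
    location: Tuple[float, float, float]        # actuator position (x, y, z), m
    omega: float                                # mechanical angular frequency, rad/s
    k: float                                    # wavenumber, rad/m
    alpha: float                                # initial phase offset, rad
    xi: float                                   # peak displacement, m
    grad_freq: Tuple[float, float, float]       # gradient temporal frequency per axis
    grad_strength: Tuple[float, float, float]   # gradient strength per axis, T/m

example = EpochInstruction(
    wave_type="point",
    location=(0.05, 0.10, 0.02),
    omega=2 * 3.14159 * 100, k=125.0, alpha=0.0, xi=50e-6,
    grad_freq=(100.0, 100.0, 0.0),
    grad_strength=(20e-3, 20e-3, 0.0),
)
print(example)
```

Counting the fields (1 wave type + 3 location + 4 wave parameters + 3 gradient frequencies + 3 gradient strengths) recovers the 14 encoding variables.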
The steps of the encoding stage (202) are shown in Figure 3. The input data is first pre-processed (301); this can be a sequence of operations (filtering, whitening, transformations such as Fourier or wavelet, dimensionality reduction etc.) that modifies and transforms the data to make it more appropriate for the subsequent stages. At the end of pre-processing, the data have an inherent number of dimensions, K. In the mapping stage (302), these dimensions are mapped onto the system variables given above. K can be much larger than the number of system variables (V >= 14); in that case the mapping algorithm exploits time. Making the system variables time-varying and presenting different dimensions of the pre-processed input in different excitation epochs (e = 1, 2, ..., T) allows the encoding stage to map as many dimensions as needed. As the dimensionality of the input data (201) increases, it takes more time to encode the input, and the complexity of the mapping algorithm, as well as the need for dimensionality-reducing pre-processing steps (e.g. principal component analysis, wavelet transform), also increases. Suppose we have a K-dimensional pre-processed data vector and X_k is the k-th dimension of the vector. Then, in the mapping stage, a function maps each component of the input (which can be real, integer or binary) onto the value of the m-th reservoir variable at epoch e, R(m,e):
R(m,e) = f(X_k)    (302)
For all k = 1, 2, ..., K there is a combination of m and e in the set m = 1, 2, ..., V and e = 1, 2, ..., T.
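A minimal sketch of the mapping (302), under the assumption that input components are assigned to reservoir variables round-robin across epochs; this is only one of many possible choices for the mapping function.

```python
import numpy as np

def map_input_to_epochs(x, V=14):
    """Map a K-dimensional pre-processed vector onto V reservoir variables over
    T = ceil(K / V) excitation epochs: component k drives variable m = k % V
    at epoch e = k // V (an illustrative choice of the mapping function)."""
    K = len(x)
    T = int(np.ceil(K / V))
    R = np.zeros((T, V))          # R[e, m] = value driven on variable m at epoch e
    for k, value in enumerate(x):
        e, m = divmod(k, V)
        R[e, m] = value
    return R

x = np.linspace(0.0, 1.0, 30)     # K = 30 pre-processed input dimensions
R = map_input_to_epochs(x)
print(R.shape)                    # (3, 14): 3 epochs of 14 variable settings
```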
The steps of the decoding stage (205) are shown in Figure 4. The MR image volume passed from the imaging stage (204) is first post-processed (401) with a sequence of algorithms such as high-pass filtering and edge extraction. This stage filters out low spatial frequency information and enhances the wave propagation information in the image. Then, in the subsequent step (402), for dimensionality reduction, the image volume can be divided into a coarser grid (M by N by P) than the MR image volume (many voxels fall inside a grid cell). A single feature or a set of features (of length Q) is computed for each grid cell. The features can be the average, the standard deviation, or the output of any spatial transformation that can be applied to a neighborhood of voxels inside a grid cell. The feature values for each grid cell are concatenated to form a data vector (403) of size M*N*P*Q. This vector is the reservoir output for a given input.
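The grid-and-feature steps (402)-(403) can be sketched as below, assuming the post-processed volume dimensions divide evenly by the grid and using Q = 2 illustrative features (mean and standard deviation) per cell.

```python
import numpy as np

def reservoir_output(volume, grid=(8, 8, 4)):
    """Divide an MR image volume into an M x N x P grid and concatenate
    Q = 2 features (mean, std) per grid cell into one reservoir vector."""
    M, N, P = grid
    X, Y, Z = volume.shape
    cells = volume.reshape(M, X // M, N, Y // N, P, Z // P)
    mean = cells.mean(axis=(1, 3, 5))
    std = cells.std(axis=(1, 3, 5))
    return np.concatenate([mean.ravel(), std.ravel()])   # length M*N*P*Q

vol = np.random.default_rng(0).standard_normal((64, 64, 32))
vec = reservoir_output(vol)
print(vec.shape)   # (512,) = 8 * 8 * 4 * 2
```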
In (206), the output vector of the reservoir is used as the input to another algorithm designed for a specific task such as classification, clustering, dimensionality reduction, data completion etc. The output of the overall system is computed in this stage, but the specifics are not relevant to this patent.
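As an illustration of stage (206), the readout can be as simple as a nearest-centroid classifier over reservoir output vectors; the synthetic vectors below merely stand in for decoded MR volumes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for reservoir output vectors (size M*N*P*Q), two classes.
n_features = 512
class0 = rng.standard_normal((100, n_features)) + 0.3
class1 = rng.standard_normal((100, n_features)) - 0.3

# Minimal readout: nearest-centroid classification of a reservoir vector.
c0, c1 = class0.mean(axis=0), class1.mean(axis=0)

def classify(v):
    return 0 if np.linalg.norm(v - c0) < np.linalg.norm(v - c1) else 1

test = rng.standard_normal(n_features) + 0.3     # drawn near class 0
print("predicted class:", classify(test))
```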
The acoustic waves that are captured by the MR imaging device need to harness the nonlinearity provided by diffusive wavefront interactions and by the inhomogeneity in the gel. The nonlinearity in the reservoir maps the input onto a high dimensional nonlinear manifold. In order to illustrate the complexity of the acoustic waves of the proposed system in a controlled manner, a series of simulations of 2D acoustic waves was performed. Figure 5 gives an instance of the waveforms generated by 3 different scenarios. In (501), 4 point acoustic sources with different phases are used in a homogeneous medium, and the waves exhibit very complex patterns. A single point source and a very stiff line object are used in (502), and this scenario shows complex diffraction patterns. In (503), a single point source and many small inhomogeneities are used. The small inhomogeneous objects also distort the waves, creating complex wave patterns. Both simulations and experiments show that the proposed system is able to exhibit the nonlinear operations essential to reservoir computing, through reflection, refraction and dispersion of acoustic waves. The suggested system is a complicated combination of acoustic waves, magnetic waves and spatial nonlinearities, and is able to provide a very rich nonlinear projection of the input.

Recent advances (Hinton et al., 2006) in recurrent neural networks have shown the importance of unsupervised pre-training of the network for robustness and high performance. Traditional pre-training in software recurrent neural networks is executed using a large unlabeled dataset and unsupervised learning principles, tailoring the network connections for a specific task and data. The pre-training stage iteratively minimizes an energy function defined on the connections of the nodes.

We propose a method for performing a hardware pre-training stage (600) in the reservoir of our system (Figure 6). This pre-training phase prepares a specific gel for a specific dataset. In (601), a homogeneous and hot (still in its liquid form) gel is prepared in its container (101). Then a number of high density objects (plastic, wood, etc.) are inserted into the hot gel (602), which is stirred for a homogeneous distribution of the objects. These objects act as nonlinear operators on the acoustic waves. In addition to these objects, stiffness inhomogeneity can be used. Using a different concentration of gel ingredient creates a different stiffness in a region, and inhomogeneity in stiffness can be achieved by mixing in gel ingredient (603) without stirring the hot solution. The data is composed of many instances (e.g. images), and the system instructions (303) for each one of the instances can be computed offline and stored (605) in the computing device. In the annealing phase (604), the hot gel is excited with acoustic waves (203) according to the system instructions of the whole dataset (605), one instance at a time, continuously, repeatedly and in random instance order, until the gel becomes solid. During this annealing phase, the acoustic waves applied to the hot gel stir and mix it, guiding the diffusion of the gel ingredient and moving the inserted objects. The inhomogeneities (object locations and stiffness gradients) are thus created according to the applied excitation. The diffusion/motion process minimizes the total energy of the gel, optimizing it for the data. In preferred implementations, object insertion (602) and gel ingredient mixing (603) can be selected as the only source of inhomogeneity, or they can be applied together.
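The hardware pre-training loop (600) can be paraphrased as the control loop below; excite and gel_is_solid stand in for the actuator interface and a solidification check, both assumptions made for illustration.

```python
import random
import time

def pretrain_gel(instruction_store, excite, gel_is_solid, pause_s=1.0):
    """Annealing phase (604): replay the stored system instructions (605)
    one instance at a time, continuously and in random order, until the
    hot gel has hardened."""
    while not gel_is_solid():
        instance = random.choice(instruction_store)   # random instance order
        excite(instance)                              # stage (203) on the hot gel
        time.sleep(pause_s)                           # pacing between excitations

# Illustrative usage with dummy stand-ins for the hardware interface:
instructions = [f"instructions for data instance {i}" for i in range(10)]
state = {"count": 0}
def fake_excite(instr): state["count"] += 1
def fake_solid_check(): return state["count"] >= 25
pretrain_gel(instructions, fake_excite, fake_solid_check, pause_s=0.0)
print("excitations applied before the gel solidified:", state["count"])
```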

Claims

1. A physical embodiment (100) of a recurrent neural network that exploits reservoir computing principles and utilizes magnetic resonance elastography techniques, comprising:
- a semi-solid gel medium in which acoustic waves can propagate (101)
- inhomogeneities in the gel medium that distort the acoustic waves to create complex wave patterns (102)
- electromechanical actuators connected to the gel medium that create acoustic waves in the medium (103)
- magnetic resonance imaging device that creates motion sensitizing gradients and captures phase-contrast based images of the gel medium according to MR elastography techniques (104).
- a computing device that receives the input to be processed from a user or memory device, encodes the input into system commands, decodes the MR image volume into a data representation, further processes it for a specific task and transmits a system output to a receiver (105).
2. A method (200) for implementing a reservoir computing based recurrent neural network algorithm in the said physical embodiment (100), that comprises the steps of:
- receiving the input using a computing device (201)
- encoding the input into a set of instructions that drive the physical reservoir system (202)
- exciting the physical system with electromechanical actuators and motion sensitizing MR gradients using the encoded set of instructions (203)
- capturing phase-contrast based magnetic resonance 3D image volume using the magnetic resonance imaging device (204)
- decoding the MR image volume using data processing techniques to get physical reservoir output (205)
- processing the reservoir output for achieving the assigned task (206)
- transmitting the output of the whole system (207).
3. A method for implementing a reservoir computing based recurrent neural network algorithm in the said physical embodiment (100), characterized in that the step 202 further comprises the steps of:
- pre-processing the received input to denoise, whiten, normalize, reduce dimensionality and generally transform the data in order to make it more appropriate for the upcoming stages of processing (301)
- mapping the pre-processed input dimensions onto the set of time varying variables of the physical reservoir system, such as mechanical wave type (point source or planar), mechanical wave location (x,y,z) on the gel volume, mechanical wave parameters (ω, k, α, ξ), magnetic gradient temporal frequency ωx, ωy, ωz in each dimension, magnetic gradient strength G0x, G0y, G0z in each dimension (302)
- generating the set of system instructions to drive the physical system according to the given input (303).
4. A method for implementing a reservoir computing based recurrent neural network algorithm in the said physical embodiment (100), characterized in that the step 205 further comprises the steps of:
- post-processing the MR image volume with data processing algorithms such as high pass filtering and edge extraction, to enhance the wave propagation information in the MR image (401)
- analyzing and processing the 3D image grid (which can be sub-sampled for dimensionality reduction) to extract spatial features such as average and standard deviation edge values in a grid cell, that capture statistical information about the wave properties generated by the input (402)
- generating a data vector that is a concatenation of feature values in each grid cell, and is the reservoir output for the given input (403).
5. A method (600) for implementing a reservoir computing based recurrent neural network algorithm in the said physical embodiment (100), characterized in that the preparation of the gel 101 and the inhomogeneities 102 further comprises the steps of:
- preparing a hot gel solution that is still in its fluid form, not hardened (601)
- inserting a number of solid objects into the hot gel and stirring it (602)
- adding extra gel ingredient and mixing it in without stirring the hot solution (603)
- computing the system instructions for each data instance in the database and storing them in the computing device (605)
- exciting the hot liquid gel with the system instructions of the whole data one instance at a time, continuously, repeatedly and in random instance order until the gel becomes solid (604)
- preparing the hardened gel by attaching the mechanical actuators and placing it in the phase-contrast based magnetic resonance imaging device (606).
6. A system and method for processing data using gel as the physical medium of computing, that is placed in a magnetic resonance imaging device.
7. A system and method for implementing a recurrent neural network by using both acoustic and magnetic wave interactions as the nonlinear computations.
8. A system and method for implementing a recurrent neural network by using a gel as the physical medium of computing.
9. A system and method for implementing a recurrent neural network by introducing inhomogeneities into the said gel medium for creating nonlinearities in said wave interactions.
10. A system and method for implementing a recurrent neural network by using magnetic resonance imaging techniques for reading out from the computing physical medium.
11. A system and method for preparing a gel computing medium in implementing a recurrent neural network, by preparing a specific gel inhomogeneity for a specific dataset, via unsupervised annealing methods.
12. A system and method for introducing inhomogeneities during preparing a gel computing medium in implementing a recurrent neural network, by mixing non-gel objects into the gel medium.
13. A system and method for introducing inhomogeneities during preparing a gel computing medium in implementing a recurrent neural network, by introducing gel ingredient concentration inhomogeneities.
References:
Siegelmann, H., and Sontag, E. (1995). On the computational power of neural nets. J. Comput. Systems Sci., 50, 132-150.
Funahashi, K., and Nakamura, Y. (1993) Approximation of dynamical systems by continuous time recurrent neural networks. Neural Networks, 6, 801-806.
Salakhutdinov, R. and Hinton, G. E. (2012). An efficient learning procedure for deep Boltzmann machines. Neural Computation, 24, 1967-2006.
Hochreiter, S. (1991). Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, T.U. Munich.
Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2).
Hochreiter, S., and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9, 1735-1780.
Doya, K. (1992). Bifurcations in the learning of recurrent neural networks. Proceedings of the IEEE International Symposium on Circuits and Systems, 6, 2777-2780.
Maass, W., Natschlager, T., and Markram, H. (2002). Real-time computing without stable states: a new framework for neural computation based on perturbations. Neural Computation, 14(11), 2531-2560.
Jaeger, H. (2001). The echo state approach to analysing and training recurrent neural networks. Technical Report GMD Report 148, German National Research Center for Information Technology.
Lukosevicius, M., Jaeger, H. (2009). Reservoir computing approaches to recurrent neural network training. Computer Science Review, 3(3), 127-149.
Maass, W. (2010). Liquid state machines: motivation, theory, and applications. In Computability and Context: Computation and Logic in the Real World, B. Cooper and A. Sorbi, Eds. Imperial College Press.
Bertschinger, N., and Natschlager, T. (2004). Real-time computation at the edge of chaos in recurrent neural networks. Neural Computation, 16(7).
Legenstein, R. A., and Maass, W. (2007). Edge of chaos and prediction of computational performance for neural circuit models. Neural Networks, 20(3), 323-334.
Fernando, C., and Sojakka, S. (2003). Pattern recognition in a bucket. In Proceedings of the 7th European Conference on Advances in Artificial Life (ECAL 2003), volume 2801 of LNCS, pages 588-597.
Adamatzky A. (2001). Computing in nonlinear media: make waves, study collisions. Lecture Notes in Artificial Intelligence. 2159, 1-11.
Adamatzky A. (2002). Experimental logical gates in a reaction-diffusion medium: The XOR gate and beyond. Physical Review E. 66, 046112.
Walmsley, I. (2001). Computing with interference: All-optical single-query 50-element database search. Conference on Lasers and Electro-Optics/Quantum Electronics and Laser Science.
Duport, F., Schneider, B., Smerieri, A., Haelterman, M., Massar, S. (2012). All optical reservoir computing. Opt. Express, 20, 22783.
Paquot, Y., Duport, F., Smerieri, A.,Dambre, J., Schrauwen, B., Haelterman, M., Massar, S. (2012). Optoelectronic Reservoir Computing. Scientific Reports, 2, 287.
Adamatzky, A. (2004). Computing with Waves in Chemical Media: Massively Parallel Reaction-Diffusion Processors.
Hinton, G. E., Osindero, S., and Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527-1554.
Muthupillai et al. (1996). Magnetic Resonance Imaging of Transverse Acoustic Strain Waves.
Nugent, A. (2008). Physical Neural Network Liquid State Machine Utilizing Nanotechnology. US Patent No. US 7,392,230 B2.
Clark, D., Pieslak, B., Gipson, B., and Walton, Z. (2013). US Patent No. US 20130060772 A1.
Clark, D., Gipson, B., Pieslak, B., and Walton, Z. (2012). US Patent No. US 8301628 B2.
Bellens, R., and Gautama, S. (2011). European Patent No. EP 2389669 A1.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2013/055041 WO2014203038A1 (en) 2013-06-19 2013-06-19 System and method for implementing reservoir computing in magnetic resonance imaging device using elastography techniques

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2013/055041 WO2014203038A1 (en) 2013-06-19 2013-06-19 System and method for implementing reservoir computing in magnetic resonance imaging device using elastography techniques

Publications (1)

Publication Number Publication Date
WO2014203038A1 (en) 2014-12-24

Family

ID=49080921

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2013/055041 WO2014203038A1 (en) 2013-06-19 2013-06-19 System and method for implementing reservoir computing in magnetic resonance imaging device using elastography techniques

Country Status (1)

Country Link
WO (1) WO2014203038A1 (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5666467A (en) * 1993-03-03 1997-09-09 U.S. Philips Corporation Neural network using inhomogeneities in a medium as neurons and transmitting input signals as an unchannelled wave pattern through the medium
US7392230B2 (en) 2002-03-12 2008-06-24 Knowmtech, Llc Physical neural network liquid state machine utilizing nanotechnology
US8301628B2 (en) 2005-01-12 2012-10-30 Metier, Ltd. Predictive analytic method and apparatus
US20130060772A1 (en) 2005-01-12 2013-03-07 Metier, Ltd. Predictive analytic method and apparatus
EP2389669A1 (en) 2009-01-21 2011-11-30 Universiteit Gent Geodatabase information processing

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Field Programmable Logic and Application", vol. 2801, 1 January 2003, SPRINGER BERLIN HEIDELBERG, Berlin, Heidelberg, ISBN: 978-3-54-045234-8, ISSN: 0302-9743, article CHRISANTHA FERNANDO ET AL: "Pattern Recognition in a Bucket", pages: 588 - 597, XP055105623, DOI: 10.1007/978-3-540-39432-7_63 *
L. LARGER ET AL: "Photonic information processing beyond Turing: an optoelectronic implementation of reservoir computing", OPT. EXPRESS, Retrieved from the Internet <URL:http://digital.csic.es/bitstream/10261/48571/1/Larger_OE12.pdf> [retrieved on 20140305] *
LUKOSEVICIUS M ET AL: "Reservoir computing approaches to recurrent neural network training", COMPUTER SCIENCE REVIEW, ELSEVIER, AMSTERDAM, NL, vol. 3, no. 3, 1 August 2009 (2009-08-01), pages 127 - 149, XP026470818, ISSN: 1574-0137, [retrieved on 20090513], DOI: 10.1016/J.COSREV.2009.03.005 *
NICHOLAS G RAMBIDI ET AL: "Towards a biomolecular computer. Information processing capabilities of biomolecular nonlinear dynamic media", BIOSYSTEMS, vol. 41, no. 3, 1 February 1997 (1997-02-01), pages 195 - 211, XP055105676, ISSN: 0303-2647, DOI: 10.1016/S0303-2647(96)01678-4 *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018524711A (en) * 2015-06-19 2018-08-30 株式会社Preferred Networks Cross-domain time-series data conversion apparatus, method, and system
WO2018212201A1 (en) * 2017-05-15 2018-11-22 国立大学法人大阪大学 Information processing device and information processing method
JPWO2018212201A1 (en) * 2017-05-15 2020-03-12 国立大学法人大阪大学 Information processing apparatus and information processing method
JP7108987B2 (en) 2017-05-15 2022-07-29 国立大学法人大阪大学 Information processing device and information processing method
CN110892421A (en) * 2017-05-29 2020-03-17 根特大学 Mixed wave based computation
US11295198B2 (en) 2017-10-26 2022-04-05 International Business Machines Corporation Implementation model of self-organizing reservoir based on lorentzian nonlinearity
JP2019134100A (en) * 2018-01-31 2019-08-08 国立大学法人 東京大学 Information processing device
WO2019151254A1 (en) * 2018-01-31 2019-08-08 国立大学法人東京大学 Information processing device
JP7109046B2 (en) 2018-01-31 2022-07-29 国立大学法人 東京大学 Information processing device
US11397895B2 (en) 2019-04-24 2022-07-26 X Development Llc Neural network inference within physical domain via inverse design tool
WO2021067358A1 (en) * 2019-10-01 2021-04-08 Ohio State Innovation Foundation Optimizing reservoir computers for hardware implementation
WO2021084768A1 (en) * 2019-10-29 2021-05-06 Tdk株式会社 Reservoir element and neuromorphic device
CN116441554A (en) * 2023-04-19 2023-07-18 珠海凤泽信息科技有限公司 Gold nanorod AuNRs synthesis method and system based on reinforcement learning

Similar Documents

Publication Publication Date Title
WO2014203038A1 (en) System and method for implementing reservoir computing in magnetic resonance imaging device using elastography techniques
US20200410384A1 (en) Hybrid quantum-classical generative models for learning data distributions
Glaws et al. Deep learning for in situ data compression of large turbulent flow simulations
Gregor et al. Deep autoregressive networks
Elouard et al. Thermodynamics of optical Bloch equations
Horton et al. Layer-wise data-free cnn compression
Ṣahin Conformal Riemannian maps between Riemannian manifolds, their harmonicity and decomposition theorems
Ong et al. Integral autoencoder network for discretization-invariant learning
Zhao et al. Qksan: A quantum kernel self-attention network
Ong et al. Iae-net: Integral autoencoders for discretization-invariant learning
Miao et al. Neural-network-encoded variational quantum algorithms
Raasakka Spacetime-free approach to quantum theory and effective spacetime structure
Antonini et al. Can one hear the shape of a wormhole?
Wang et al. Performance of training sparse deep neural networks on GPUs
Nista et al. Influence of adversarial training on super-resolution turbulence reconstruction
Çarpınlıoğlu et al. Genetically programmable optical random neural networks
Taufik et al. LatentPINNs: Generative physics-informed neural networks via a latent representation learning
Momenifar et al. Emulating spatio-temporal realizations of three-dimensional isotropic turbulence via deep sequence learning models
Delgado-Granados et al. Quantum Algorithms and Applications for Open Quantum Systems
Münzer et al. A curriculum-training-based strategy for distributing collocation points during physics-informed neural network training
Zhang et al. Sequential quantum simulation of spin chains with a single circuit QED device
Huggett A philosopher looks at non-commutative geometry
Olin-Ammentorp et al. Bridge networks: Relating inputs through vector-symbolic manipulations
Price et al. Fast emulation of anisotropies induced in the cosmic microwave background by cosmic strings
Hallam Tensor network descriptions of quantum entanglement in path integrals, thermalisation and machine learning

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13753685

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: P1694/2015

Country of ref document: AE

NENP Non-entry into the national phase

Ref country code: DE

WWE Wipo information: entry into national phase

Ref document number: 2015/1463.1

Country of ref document: KZ

122 Ep: pct application non-entry in european phase

Ref document number: 13753685

Country of ref document: EP

Kind code of ref document: A1