WO2016020671A1 - Methods and apparatus for determining image data - Google Patents
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N 23/20—Investigating or analysing materials by using diffraction, scattering or reflection of wave or particle radiation (e.g. X-rays or neutrons) by the materials, e.g. for investigating crystal structure or non-crystalline materials
Definitions
- the present invention relates to methods and apparatus for determining one or more object characteristics.
- the present invention relates to methods and apparatus for determining image data from which an image of an object may be generated.
- Some embodiments of the present invention relate to methods and apparatus for determining image data from which images of three dimensional objects may be generated.
- WO 2005/106531, which is herein incorporated by reference for all purposes, discloses a method for providing image data which may be used to construct an image of an object based on a measured intensity of several diffraction patterns.
- the several diffraction patterns are formed with an object at each of a plurality of positions with respect to incident radiation.
- This method is known as a ptychographical iterative engine (PIE).
- an iterative phase-retrieval method is used to determine an estimate of the absorption and phase-change caused by the object to a wave field as it passes through or is reflected by the object.
- This method uses redundancy in the plurality of diffraction patterns to determine the estimate.
- WO 2010/064051 which is incorporated herein by reference for all purposes, discloses an enhanced PIE (ePIE) method wherein it is not necessary to know or estimate a probe function. Instead a process is disclosed in which the probe function is iteratively calculated step by step with a running estimate of the probe function being utilised to determine running estimates of an object function associated with a target object.
- Figure 1 shows an apparatus according to an embodiment of the present invention
- Figure 2 shows a method according to an embodiment of the invention
- Figure 3 shows two recorded low resolution images from different tilt angles according to an embodiment of the invention
- Figure 4 shows image data produced by an embodiment of the invention.
- FIG. 1 illustrates an apparatus according to an embodiment of the invention generally denoted with reference 100.
- the apparatus 100 comprises a radiation source 110 for providing incident radiation, a focussing element 130, a filtering element 140 and a detector 150 for detecting an intensity of radiation.
- the radiation source 110 is a source of radiation 111, 112, 113 for directing to a target object 120.
- the radiation source 110 may comprise one or more radiation sources such as, where the radiation is light, one or more illumination devices.
- the radiation source 110 comprises a plurality of illumination devices each configured to provide respective illumination directed to the target object 120.
- each illumination device may be an LED although it will be realised that other sources of illumination may be used.
- the radiation source may comprise one or more devices for directing the radiation toward the target object at a desired incidence angle.
- a radiation source may be used to provide radiation at more than one incidence angle by appropriate reconfiguration of the devices for directing the radiation toward the target object 120.
- a radiation source or apparatus for directing radiation such as a fibre optic cable, may be moveably mounted with respect to the target object 120.
- the radiation source 110 is arranged to direct plane wave radiation toward the target object 120.
- the radiation source 110 is arranged to selectively direct the plane wave radiation toward the target object from each of a plurality of incidence angles 111, 112, 113. There may be K incidence angles of radiation, where k = 1, 2, ..., K.
- An incidence angle is an angle between the plane wave and the target object 120.
- the radiation directed toward the target object 120 may be referred to as tilted radiation indicative of radiation which is not generally directed perpendicular to the target object 120, although one of the incidence angles may be perpendicular to the target object 120.
- the radiation source 110 may be operated responsive to a control signal provided from a control unit (not shown in Figure 1) to provide radiation directed toward the target object from a desired or selected incidence angle such that radiation detected by a detector 150 may be recorded corresponding to that incidence angle.
- the term radiation is to be broadly construed.
- the term radiation includes various wave fronts. Radiation includes energy from a radiation source. This includes electromagnetic radiation such as X-rays, and emitted particles such as electrons. Other types of radiation include acoustic radiation, such as sound waves. Such radiation may be represented by a wavefront function. This wave function includes a real part and an imaginary part as will be understood by those skilled in the art.
- the radiation 111, 112, 113 provided at each incidence angle may be considered to have the same structure which may be that of a perfect plane wave. However in other embodiments the radiation provided at each incidence angle may be considered to have a respective form which allows for deviations of the radiation provided from the perfect plane wave.
- a probe function indicative of one or more characteristics of the provided radiation is updated during the method.
- a probe function Pk, indicative of the radiation is used where k is indicative of the incidence angle, as noted above.
- where the probe function indicative of the incident radiation is updated during the method, the probe function may be defined as Pk,n, where n is indicative of an iteration number.
- a probe function may be a complex numbered function representing one or more characteristics of the radiation, which may be an illumination wavefront.
- the focussing element 130 is arranged post (downstream of) the target object 120 to receive radiation exiting from the target object 120.
- the focussing element 130 may comprise one or a plurality of devices for focussing the radiation.
- the focussing element 130 comprises a lens as shown in Figure 1, although it will be realised that other elements may be used, particularly depending upon the radiation type. Embodiments may be envisaged including a plurality of lenses, such as two lenses, and more than two lenses may be used in some embodiments.
- the focussing element 130 is arranged to focus received radiation at a back focal plane (BFP).
- BFP back focal plane
- the filtering element 140 is arranged to filter the radiation.
- the filter may be arranged within the focussing element 130.
- the filter 140 may be arranged within the lens 130, or between first and second lenses forming the focussing element 130.
- the filtering element 140 may be arranged post (downstream of) the focussing element 130, such as at or near the BFP.
- the filtering element 140 is arranged to filter radiation falling thereon to form a filtered diffraction pattern at the BFP.
- the filtering element 140 may comprise an aperture such that radiation passes through the filtering element 140 only within the aperture.
- the filtering element 140 may be located at the BFP of the focussing element 130.
- the filtering element 140 is associated with a filtering function.
- the filtering function defines a transmission of radiation through the filtering element 140.
- the filtering function may be updated in at least some iterations of a method according to an embodiment of the invention.
- the detector 150 is a suitable recording device for recording an intensity of radiation falling thereon, such as a CCD or the like. The detector 150 allows the detection of a low resolution image in a detector plane, i.e. a plane of the detector 150.
- the detector 150 may be located at an image plane, or may be offset from the image plane, for example to allow for a large dynamic range of the intensity. It will be realised that where the detector is offset from the image plane an appropriate propagation operator is required to determine an estimate of the intensity at the detector 150.
- the detector 150 may comprise an array of detector elements, such as in a CCD. Intensity data output by the detector 150 may be received by the control unit comprising a processing device for carrying out a method according to an embodiment of the invention, as will be explained with reference to Figure 2.
- Figure 2 illustrates a method 200 according to an embodiment of the invention.
- the method 200 is a method of providing image data associated with the target object 120.
- the image data may be used to determine one or more characteristics of the target object 120.
- the image data may be used to determine one or more images associated with the target object 120.
- the method 200 may be performed in association with the apparatus 100 described with reference to Figure 1.
- the method 200 is a method of iteratively determining the image data of the target object 120.
- the method 200 involves iteratively updating one or more estimates of the image data associated with the target object 120 in a real space or domain, as will be explained.
- initial estimates of one or more object functions indicative of characteristics of the target object 120 are provided.
- the object function estimate(s) may be stored as an array of appropriate size.
- the matrices may store an array of random or pseudo-random values, or may contain predetermined values, such as all being 1's.
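The initialisation described above might be set up as follows; the array size, dtype and random seed are illustrative choices of our own, not taken from the patent:

```python
import numpy as np

N = 256  # illustrative array size; the patent does not specify one

# Free-space initial guess: every pixel transmits fully with zero phase shift.
O_init_ones = np.ones((N, N), dtype=complex)

# Alternative: a random complex-valued starting estimate.
rng = np.random.default_rng(0)
O_init_random = rng.random((N, N)) * np.exp(1j * 2 * np.pi * rng.random((N, N)))
```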
- image data at two respective "slices" or planes through the target object 120 is determined, where each slice is associated with a respective object function.
- in other embodiments, image data for only a single slice through the target object 120 is determined, or image data at more than two slices through the target object 120 is determined.
- by three-dimensional image data it is meant image data relating to a plurality of slices which intersect the target object 120.
- Each slice through the target object is associated with a respective object function Os, where s is indicative of a number of the slice.
- the method illustrated in Figure 2 iteratively determines two object functions O1 and O2, each associated with a respective one of the slices 121, 122.
- the slices 121, 122 are separated by a distance Δz. Therefore, at a start of the method, there are provided two initial object functions O0,1 and O0,2 where, as above, n is indicative of the iteration number, which for the initial functions is 0, and the second subscript is indicative of the slice concerned.
- each probe function has a planar phase gradient representative of the incidence angle of the radiation 111, 112, 113.
- the probe functions are used throughout the iterations of the method 200 without change or updating. In other embodiments, some or all of the probe functions may be updated during iterations of the method. For ease of explanation, an embodiment where the probe functions remain constant will be explained.
- each probe function may be denoted as Pk where k is indicative of the incidence angle.
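A probe with the planar phase gradient described above can be sketched as follows; the function name, pixel size and wavelength are illustrative assumptions of our own, not values from the patent:

```python
import numpy as np

def tilted_plane_wave(n, pixel_size, wavelength, theta_x, theta_y=0.0):
    """Unit-amplitude plane wave whose planar phase gradient corresponds
    to incidence angles (theta_x, theta_y) in radians. All parameter
    names and values are illustrative; the patent does not fix them."""
    y, x = np.indices((n, n)) * pixel_size
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * (x * np.sin(theta_x) + y * np.sin(theta_y)))

# Normal incidence (theta = 0) gives a constant-phase wave.
P0 = tilted_plane_wave(64, 1e-6, 633e-9, 0.0)
```

Each incidence angle k then simply corresponds to one such array Pk.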
- step 210 an exit wave from the target object 120 is determined.
- the exit wave ψ emanating from the target object 120 is determined by multiplying an object function indicative of the target object 120, where there is only a single object function, by a probe function indicative of the radiation incident on the target object 120.
- the determination of the exit wave comprises propagating intermediate exit waves between slices or planes 121, 122 associated with the object functions.
- an exit wave from each slice 121, 122 is determined.
- a first exit wave ψk,1 is determined from the first slice 121.
- the exit wave may be determined by multiplying the probe function Pk incident on the target object 120 by the current object function On,1 for the first slice 121.
- the exit wave ψ1,1 = On,1P1 for the first angle of incident radiation 111 is determined.
- the exit wave of radiation from the first slice 121 is then propagated to a next adjacent slice, which in Figure 1 is slice 122.
- the propagation of the exit wave may be made by use of a suitable propagation operator.
- an angular spectrum propagator is used to propagate the exit wave over a distance between the slices 121, 122 which in the example of Figure 1 is the distance Δz.
- Pk,s+1 = PΔz{ψk,s}
- PΔz is a suitable propagator over the distance Δz, where the propagation operator may be the angular spectrum operator.
- the propagated wave at the subsequent slice is then used as a probe function indicative of one or more characteristics of the radiation incident upon the subsequent slice 122.
- An exit wave from the last or most downstream slice of the target object 120 is used as the exit wave from the object. For example the exit wave from the last slice 122 of the object 120 shown in Figure 1.
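The multi-slice forward calculation of step 210 (multiply by the object function, propagate over Δz, repeat, return the last exit wave) can be sketched as below. This is a standard angular-spectrum implementation under our own assumptions about sampling and evanescent-wave handling; the function names and parameters are illustrative, not from the patent:

```python
import numpy as np

def angular_spectrum(field, dz, wavelength, pixel_size):
    """Angular-spectrum propagation of a sampled field over distance dz.
    A standard textbook implementation; evanescent components are simply
    suppressed, which is our own choice."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)          # spatial frequencies (cycles/m)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))  # zero out evanescent waves
    H = np.exp(1j * kz * dz)                       # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

def multislice_exit_wave(P_k, slices, dz, wavelength, pixel_size):
    """Step 210: psi = O_s * probe at each slice; the propagated wave
    serves as the probe for the next slice; the exit wave from the last
    (most downstream) slice is the exit wave from the object."""
    probe = P_k
    for s, O_s in enumerate(slices):
        psi = O_s * probe
        if s < len(slices) - 1:
            probe = angular_spectrum(psi, dz, wavelength, pixel_size)
    return psi
```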
- step 220 a diffraction pattern at the BFP 140 is determined.
- Step 220 comprises determining the diffraction pattern based upon the exit wave ψk,2 from the target object 120 and a suitable propagation operator.
- the propagation operator may be a Fourier transform, although it will be realised that other propagation operators may be used.
- Step 230 comprises determining an exit wave from the filter 140.
- Step 230 comprises applying the filtering function associated with the filtering element 140 to the incident wave at the BFP calculated in step 220.
- the filtering function may be defined as A which may be a finite aperture function. In some embodiments the filtering function is updated in at least some iterations of the method 200. Therefore the filtering function may be defined as An where n is indicative of the iteration number.
- an intensity of radiation at a plane of the detector 150 is determined. The intensity of radiation may form an image at the plane of the detector, particularly where the detector is located at the image plane. The intensity is determined by propagating the exit wave from the filtering element 140 using a suitable propagation operator. The propagation operator may be a Fourier transform.
- Step 250 comprises updating the determined intensity from step 240 based on data measured by the detector 150.
- step 250 may comprise updating an intensity of the image determined in step 240 based upon intensity measured by the detector 150.
- This step may be referred to as applying a modulus constraint on the determined intensity from step 240.
- the modulus constraint may be applied by:
- ψ'k,c = √(Ik,m / Ik,c) · ψk,c, where Ik,m is the intensity measured by the detector 150 and Ik,c is the intensity calculated in step 240.
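A minimal sketch of the modulus constraint of step 250, assuming the standard form in which the calculated phase is retained and the modulus is replaced by the square root of the measured intensity (the `eps` guard against division by zero is our own addition):

```python
import numpy as np

def apply_modulus_constraint(psi_calc, I_meas, eps=1e-12):
    """Step 250: keep the phase of the calculated wave at the detector
    plane, but force its intensity to match the measured data."""
    I_calc = np.abs(psi_calc) ** 2
    return psi_calc * np.sqrt(I_meas / (I_calc + eps))
```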
- Step 260 comprises determining an updated exit wave at the BFP.
- an updated exit wave from the filter 140 is determined.
- Step 260 comprises back propagating the updated intensity data from the plane of the detector 150 to the plane of the filter 140 or the BFP.
- the updated image data may be back-propagated to the BFP by Ψe = F−1{ψ'k,c}.
- Ψe is the updated exit wave from the filter 140.
- Step 270 comprises determining an updated diffraction pattern at the BFP.
- step 270 may optionally comprise updating the filtering function associated with the filter 140.
- the updated diffraction pattern may be determined by:
- An+1 is the updated filtering function which may be used in an n+1 iteration of the method 200.
- the filtering function may only be updated after a predetermined number of iterations of the method 200.
- the updated filtering function is determined in parallel form based upon diffraction patterns from a plurality of incidence angles of radiation. In the above equation the summation Σk sums across all K incidence angles.
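The parallel-form filter update might be realised along the following lines. This is a sketch of a generic parallel ePIE-style update, not the patent's exact equation: contributions from all K incidence angles are summed before dividing, so that no single spectrum's large DC term dominates the step size. All names and the `eps` guard are our own:

```python
import numpy as np

def update_filter_parallel(A, Psi_in, Psi_out, alpha=1.0, eps=1e-12):
    """Parallel-form update of the filter function at the BFP (step 270).

    A       : current filter estimate, shape (N, N)
    Psi_in  : incident waves at the BFP, shape (K, N, N), one per angle
    Psi_out : corrected exit waves from the filter, shape (K, N, N)
    """
    num = np.sum(np.conj(Psi_in) * (Psi_out - A * Psi_in), axis=0)
    den = np.sum(np.abs(Psi_in) ** 2, axis=0)
    return A + alpha * num / (den + eps)
```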
- step 280 an updated exit wave from the object 120 is determined.
- step 280 comprises updating, during each iteration of the method 200, a calculated wave in the real domain or space based upon the updated diffraction pattern or input wave at the filter 140.
- the updated exit wave from the target object 120 is determined by back- propagating the updated diffraction pattern from the plane of the filter 140 or BFP to the plane of the object 120.
- the updated exit wave is determined by using an inverse propagation operator to that used in step 220.
- Step 280 may be based upon an inverse Fourier transform.
- the updated exit wave may be calculated by applying the inverse Fourier transform to the updated diffraction pattern, where ψ'k,2, the updated exit wave in the example shown in Figure 1, is at the second plane 122, although it will be realised that this is merely exemplary.
- step 290 one or more object functions associated with the object 120 are updated.
- step 290 comprises determining updated image data for the object 120 at one or more planes in the real domain.
- step 290 may comprise determining each updated object function in turn, thereby sequentially determining each updated object function toward the source of radiation 1 10.
- intermediate probe functions are also determined and propagated between the planes 121, 122 associated with each object function.
- an updated object function associated with the second plane 122 may be determined by an ePIE-style update of the form On+1,2 = On,2 + α·P*k,2(ψ'k,2 − ψk,2)/|Pk,2|²max, where On+1,2 is an object function associated with the second plane 122 for an n+1 iteration of the method 200.
- an updated probe function at the second plane 122 may be determined by an ePIE-style update of the form P'k,2 = Pk,2 + β·O*n,2(ψ'k,2 − ψk,2)/|On,2|²max.
- P'k,2 is the updated probe function indicative of incident radiation at the second plane 122 within the object 120.
- step 290 comprises calculating corresponding updated object and probe functions for the remaining plane(s) in turn.
- α is a value used in step 290 to alter a strength of feedback.
- step 290 may be altered from the above described embodiment depending upon a number of planes associated with object functions.
- the updated one or more object functions are provided for use in a next iteration of step 210.
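The per-slice update of step 290 can be illustrated with the standard ePIE update rule, on which the method builds; α and β are the feedback-strength values mentioned above. This is the well-known generic form, not a reproduction of the patent's exact equations:

```python
import numpy as np

def epie_update(O, P, psi_old, psi_new, alpha=1.0, beta=1.0):
    """Standard ePIE-style update of an object function and probe at one
    slice. psi_old = O * P is the exit wave before correction; psi_new is
    the updated exit wave back-propagated to this slice. alpha and beta
    alter the strength of feedback."""
    diff = psi_new - psi_old
    O_new = O + alpha * np.conj(P) * diff / np.max(np.abs(P)) ** 2
    P_new = P + beta * np.conj(O) * diff / np.max(np.abs(O)) ** 2
    return O_new, P_new
```

The updated probe at one slice would then be propagated back toward the source to update the next slice in turn, as the text describes.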
- step 295 it is determined whether there remain further incidence angles of radiation to consider in iterations of the method 200.
- step 298 it is determined whether to terminate the method 200.
- the method 200 may be terminated when a predetermined number of iterations have been completed or when a predetermined condition occurs.
- the predetermined condition may be when an error metric associated with the method 200 meets a predetermined value.
- the error metric is based upon the difference between the measured and calculated intensities at the detector 150.
- the error metric may be calculated as the normalized deviation between the measured and calculated intensities, for example E = Σk Σ|Ik,m − Ik,c| / Σk ΣIk,m.
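One plausible realisation of such a normalized error metric (the patent does not reproduce the exact normalisation, so this form is an assumption):

```python
import numpy as np

def normalized_error(I_meas, I_calc):
    """Normalized deviation between measured and calculated detector
    intensities: total absolute deviation divided by total measured
    intensity. Zero means a perfect match."""
    return np.sum(np.abs(I_meas - I_calc)) / np.sum(I_meas)
```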
- in step 270, when updating the filtering function to determine An+1 associated with the filtering element 140, a ptychographic reconstruction still applies since the filtering element 140 is consistent in the BFP.
- the reason for using a parallel form of the equation to update the filtering function is that the DC component of the spectrum usually has a very high value; in a serial version of the update equation the corresponding maximum-modulus denominator would dramatically slow the convergence.
- the diffraction pattern is updated at the BFP. However, this update rather takes out the effect of the filter function from the corrected diffraction pattern than tries to reconstruct the spectrum. More importantly, the updated diffraction pattern is back-propagated to the object plane to perform the object reconstruction slice by slice in real space.
- An exit end of the fiber, which produces a divergent point source, is mounted onto a stepper motor driven x-y translation stage.
- the object 120 is placed approximately 70mm downstream of the source.
- a doublet lens 130 with a focal length of 30mm is positioned at a distance of 45mm from the object 120.
- a diaphragm with the aperture diameter set to 2mm is used as the filter 140.
- a CCD detector 150 is placed in the image plane (about 215mm from the diaphragm 140) to record the low resolution image intensities.
- a dataset is collected from 225 source positions arranged on a 15 x 15 raster grid.
- a nominal step size of the stage is 1mm, with ±20% random offset to avoid a "raster grid pathology" as is known to those skilled in the art.
- the change in position of the radiation corresponds to a change of incident angle of the radiation 111, 112, 113 at the target object 120 of about 0.5 degrees, as is explained in more detail below.
- the object 120 is composed of two microscope slides arranged coverslip-to-slide (see the inset of Fig. 2). A measured separation between the two specimen layers is 1.025mm.
- 'Dark-field' is used here in the sense as used in conventional microscopy, namely that the unscattered beam associated with the incident radiation 111, 112, 113 does not pass through the aperture in the lens for this particular angle of illumination, but is blocked by the aperture, thus giving an image which shows only scattered intensity from the object 120 on an otherwise black, or dark, background.
- Both the conventional ePIE algorithm and a method according to an embodiment of the invention are applied to recorded intensity data for 100 iterations.
- the initial guess for the object function is free space - namely a value of 1 for all positions or matrix cells, and the initial guess for the filter function associated with the filtering element 140 is a Gaussian filtering function.
- a tilt angle for all scan positions is calculated to generate a set of tilted plane waves.
- during initial iterations, the filtering function associated with the filter 140 is not updated. After that, both a part of the diffraction pattern corresponding to an area within the filter 140 and the filter function are updated in each iteration of the method 200. The final results are shown in Fig. 4.
- Figure 4 shows reconstructions of the dataset from the illumination tilting strategy using both the ePIE algorithm and a method 200 according to an embodiment of the invention.
- Figure 4(a)(b)(c)(d) are the results of the conventional ePIE algorithm and respectively represent the object modulus, the object phase, the filter modulus and the filter phase.
- Figure 4(e)(f)(g)(h)(i)(j) are the results of a method according to an embodiment of the invention and, respectively, represent the first slice 121 modulus, the first slice 121 phase, the second slice 122 modulus, the second slice 122 phase, the filtering function modulus and the filtering function phase.
- the object 120 and the filter 140 are in different spaces and that the scale bars are also different.
- the scale of Fig. 4(a) is also the same for 4(b,e,f,g,h), and the scale of 4(c) is the same as for 4(d,i,j).
- the ePIE algorithm manages to reconstruct image data indicative of a blurred representation of the target object 120, which means a projection approximation has not completely broken down.
- the reconstruction is an artifice from which it is not possible to extract accurate images of either layer or slice through the object 120.
- the method 200 according to an embodiment of the invention successfully separates the two slices 121, 122 and produces much better reconstructions.
- Cross-talk between the two slices 121, 122 is hardly visible in either reconstructed slice 121, 122. Since both layers 121, 122 are out of focus, during the scanning process the images shift in the plane of the detector 150. Given the Field Of View (FOV) reconstructed, some features move out of, and new features move into, the FOV at the edges.
- Phase curvature from the reconstructed filter element 140 implies that the exit wave plane (the plane immediately downstream of the second layer 122) is not in focus. In other words the exit wave plane is not coincident with the conjugate plane of the detector 150.
- the method 200 automatically accounts for this and produces a phase curvature on the filter function associated with the filtering element 140 to refocus the exit wave plane.
- multi-slice embodiments may extract 3D information for the object 120.
- the problem for the conventional reconstruction method is that the object has to be thin enough to validate the projection approximation. Otherwise, the spectrum in the back focal plane is not consistent when tilting the illumination.
- a filter scanning strategy has been proposed to circumvent this object thickness limitation.
- the reconstruction is just the exit wave of the object. Although we can refocus each layer by propagating this exit wave, the other out-of-focus layers always corrupt the in-focus layer.
- Optical experiments have been conducted to test the proposed method. The results indicate that the proposed method extracts the 3D information of the object very well, while the conventional method fails to produce correct reconstructions and the filter scanning method cannot separate the object.
- embodiments of the present invention can be realised in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, device or integrated circuits or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape. It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs that, when executed, implement embodiments of the present invention.
- embodiments provide a program comprising code for implementing a system or method as claimed in any preceding claim and a machine readable storage storing such a program. Still further, embodiments of the present invention may be conveyed electronically via any medium such as a communication signal carried over a wired or wireless connection and embodiments suitably encompass the same.
Abstract
Embodiments of the present invention provide a method of determining object characteristics, comprising providing radiation directed toward an object at each of a plurality of incidence angles, detecting, by a detector, an intensity of radiation for each of the incidence angles; and iteratively determining object data indicative of one or more characteristics of the object, wherein iterations of the method comprise determining, for each incidence angle of radiation, an estimate of a filtered diffraction pattern at a focal plane of a radiation focussing element based upon a current estimate of the object data, determining an estimate of intensity data at a plane of the detector based upon the filtered diffraction pattern, and updating the estimate of the object data in a real domain based upon the intensity of radiation at the detector.
Description
Methods and Apparatus for Determining Image Data
The present invention relates to methods and apparatus for determining one or more object characteristics. The present invention relates to methods and apparatus for determining image data from which an image of an object may be generated. Some embodiments of present invention relate to methods and apparatus for determining image data from which images of three dimensional objects may be generated.
Background
WO 2005/106531, which is herein incorporated by reference for all purposes, discloses a method for providing image data which may be used to construct an image of an object based on a measured intensity of several diffraction patterns. The several diffraction patterns are formed with an object at each of a plurality of positions with respect to incident radiation. This method is known as a ptychographical iterative engine (PIE). In PIE an iterative phase-retrieval method is used to determine an estimate of the absorption and phase-change caused by the object to a wave field as it passes through or is reflected by the object. This method uses redundancy in the plurality of diffraction patterns to determine the estimate. WO 2010/064051, which is incorporated herein by reference for all purposes, discloses an enhanced PIE (ePIE) method wherein it is not necessary to know or estimate a probe function. Instead a process is disclosed in which the probe function is iteratively calculated step by step with a running estimate of the probe function being utilised to determine running estimates of an object function associated with a target object.
It is an object of embodiments of the invention to at least mitigate one or more of the problems of the prior art.
Brief Description of the Drawings
Embodiments of the invention will now be described by way of example only, with reference to the accompanying figures, in which:
Figure 1 shows an apparatus according to an embodiment of the present invention,
Figure 2 shows a method according to an embodiment of the invention; Figure 3 shows two recorded low resolution images from different tilt angles according to an embodiment of the invention; and
Figure 4 shows image data produced by an embodiment of the invention.
Detailed Description of Embodiments of the Invention
Figure 1 illustrates an apparatus according to an embodiment of the invention generally denoted with reference 100.
The apparatus 100 comprises a radiation source 1 10 for providing incident radiation, a focussing element 130, a filtering element 140 and a detector 150 for detecting an intensity of radiation. The radiation source 110, although not shown in Figure 1, is a source of radiation 1 1 1, 112, 113 for directing to a target object 120. The radiation source 110 may comprise one or more radiation sources such as, where the radiation is light, one or more illumination devices. In one embodiment the radiation source 110 comprises a plurality of illumination devices each configured to provide respective illumination directed to the target object 120. For example each illumination device may be an LED although it will be realised that other sources of illumination may be used. The radiation source may comprise one or more devices for directing the radiation toward the target object at a desired incidence angle. In one embodiment a radiation source may be used to provide radiation at more than one incidence angle by appropriate
reconfiguration of the devices for directing the radiation toward the target obj ect 120. A radiation source or apparatus for directing radiation, such as a fibre optic cable, may be moveably mounted with respect to the target object 120. The radiation source 1 10 is arranged to direct plane wave radiation toward the target object 120. The radiation source 1 10 is arranged to selectively direct the plane wave radiation toward the target obj ect from each of a plurality of incidence angles 1 1 1 , 1 12, 1 13. An incidence angle is an angle between the plane wave and the target object 120. There may be K incidence angles of radiation where k=l, 2. . K Thus the radiation directed toward the target object 120 may be referred to as tilted radiation indicative of radiation which is not generally directed perpendicular to the target object 120, although one of the incidence angles may be perpendicular to the target object 120. The radiation source 1 10 may be operated responsive to a control signal provided from a control unit (not shown in Figure 1) to provide radiation directed toward the target obj ect from a desired or selected incidence angle such that radiation detected by a detector 150 may be recorded corresponding to that incidence angle.
It is to be understood that the term radiation is to be broadly construed. The term radiation includes various wave fronts. Radiation includes energy from a radiation source. This includes electromagnetic radiation, such as X-rays, and emitted particles, such as electrons. Other types of radiation include acoustic radiation, such as sound waves. Such radiation may be represented by a wavefront function. This wave function includes a real part and an imaginary part, as will be understood by those skilled in the art.
In some embodiments the radiation 111, 112, 113 provided at each incidence angle may be considered to have the same structure, which may be that of a perfect plane wave. However in other embodiments the radiation provided at each incidence angle may be considered to have a respective form which allows for deviations of the radiation provided from the perfect plane wave. In some embodiments of the invention a probe function indicative of one or more characteristics of the provided radiation is updated during the method. In embodiments of the invention a probe function Pk indicative of the radiation is used, where k is indicative of the incidence angle, as noted above. In some embodiments where the probe function indicative of the incident radiation is updated during the method, the probe function may be defined as Pk,n, where n is indicative of an iteration number. A probe function may be a complex-valued function representing one or more characteristics of the radiation, which may be an illumination wavefront.
The focussing element 130 is arranged downstream of the target object 120 to receive radiation exiting from the target object 120. The focussing element 130 may comprise one or a plurality of devices for focussing the radiation. In one embodiment the focussing element 130 comprises a lens as shown in Figure 1, although it will be realised that other elements may be used, particularly depending upon the radiation type. Embodiments may be envisaged including a plurality of lenses, such as two lenses, although more than two lenses may be used in some embodiments. The focussing element 130 is arranged to focus received radiation at a back focal plane (BFP).
The filtering element 140 is arranged to filter the radiation. The filter may be arranged within the focussing element 130. For example the filter 140 may be arranged within the lens 130, or between first and second lenses forming the focussing element 130. In some embodiments the filtering element 140 may be arranged downstream of the focussing element 130, such as at or near the BFP. The filtering element 140 is arranged to filter radiation falling thereon to form a filtered diffraction pattern at the BFP. The filtering element 140 may comprise an aperture such that radiation passes through the filtering element 140 only within the aperture. The filtering element 140 may be located at the BFP of the focussing element 130. Radiation exiting the target object 120 and being focussed by the focussing element 130 forms the filtered diffraction pattern. In this way the filtering element selects a portion of the radiation exiting the target object. For example radiation outside of the aperture does not contribute to radiation detected by the detector 150. The filtering element 140 is associated with a filtering function, which defines the transmission of radiation through the filtering element 140. The filtering function may be updated in at least some iterations of a method according to an embodiment of the invention.
The detector 150 is a suitable recording device for recording an intensity of radiation falling thereon, such as a CCD or the like. The detector 150 allows the detection of a low resolution image in a detector plane i.e. a plane of the detector 150. The detector 150 may be located at an image plane, or may be offset from the image plane, for example to allow for a large dynamic range of the intensity. It will be realised that where the detector is offset from the image plane an appropriate propagation operator is required to determine an estimate of the intensity at the detector 150. The detector 150 may comprise an array of detector elements, such as in a CCD. Intensity data output by the detector 150 may be received by the control unit comprising a processing device for carrying out a method according to an embodiment of the invention, as will be explained with reference to Figure 2.
Figure 2 illustrates a method 200 according to an embodiment of the invention. The method 200 is a method of providing image data associated with the target object 120. The image data may be used to determine one or more characteristics of the target object 120. The image data may be used to determine one or more images associated with the target object 120. The method 200 may be performed in association with the apparatus 100 described with reference to Figure 1. The method 200 is a method of iteratively determining the image data of the target object 120. The method 200 involves iteratively updating one or more estimates of the image data associated with the target object 120 in a real space or domain, as will be explained.
At a start of the method 200 initial estimates of one or more object functions indicative of characteristics of the target object 120 are provided. The object function estimate(s) may be stored as an array of appropriate size. Initially the array(s) may store random or pseudo-random values, or may contain predetermined values, such as all being 1's.
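The initialisation described above can be sketched in a few lines of numpy. This is an illustrative sketch, not the patent's implementation; the function name and the choice of a random modulus with a random phase are our own assumptions.

```python
import numpy as np

def initial_object(shape, random=False, seed=0):
    """Initial object-function estimate: free space (all 1's) by default,
    or pseudo-random complex values, stored as an array of the given size."""
    if not random:
        return np.ones(shape, dtype=complex)
    rng = np.random.default_rng(seed)
    # random modulus in [0, 1) combined with a random phase
    return rng.random(shape) * np.exp(2j * np.pi * rng.random(shape))
```

One such array would be created per slice of the object at the start of the method.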
An embodiment of the invention will be explained where image data at two respective "slices" or planes through the target object 120 is determined where each slice is associated with a respective object function. However it will be realised that embodiments of the invention may be envisaged where image data for only a single slice through the target object 120 is determined, or where image data at more than two slices through the target object 120 is determined. Thus, by three-dimensional
image data it is meant image data relating to a plurality of slices which intersect the target object 120.
Each slice through the target object is associated with a respective object function Os, where s is indicative of a number of the slice. Referring to Figure 1 there are illustrated two slices 121, 122 through the target object 120, s=1 and s=2 respectively. Thus the method illustrated in Figure 2 iteratively determines two object functions O1 and O2, each associated with a respective one of the slices 121, 122. The slices 121, 122 are separated by a distance Δz. Therefore, at a start of the method, there are provided two initial object functions On,1 and On,2 where, as above, n is indicative of the iteration number, which for the initial functions is 0, and the second subscript is indicative of the slice concerned.
Similarly, a plurality of probe functions P each associated with a respective incidence angle are provided at the start of the method 200. In order to represent the incidence angle, each probe function has a planar phase gradient representative of the incidence angle of the radiation 1 11, 112, 113. In the method illustrated in Figure 2 the probe functions are used throughout the iterations of the method 200 without change or updating. In other embodiments, some or all of the probe functions may be updated during iterations of the method. For ease of explanation, an embodiment where the probe functions remain constant will be explained. Thus each probe function may be denoted as Pk where k is indicative of the incidence angle.
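A probe function with the planar phase gradient described above can be constructed directly from the incidence angle. The sketch below assumes a unit-amplitude plane wave and square pixels; the function name and parameters are our own illustrative choices.

```python
import numpy as np

def tilted_plane_wave(shape, pixel_size, wavelength, theta_x, theta_y=0.0):
    """Probe function P_k: a unit-amplitude plane wave whose planar phase
    gradient encodes the incidence angles (theta_x, theta_y), in radians."""
    ny, nx = shape
    y = (np.arange(ny) - ny // 2) * pixel_size
    x = (np.arange(nx) - nx // 2) * pixel_size
    Y, X = np.meshgrid(y, x, indexing="ij")
    k = 2 * np.pi / wavelength          # wavenumber
    return np.exp(1j * k * (np.sin(theta_x) * X + np.sin(theta_y) * Y))
```

A tilt of 0.5 degrees at a HeNe wavelength, as in the experiment below, gives a phase ramp of roughly 0.43 rad per 5 µm pixel.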
In step 210 an exit wave from the target object 120 is determined. In some embodiments the exit wave ψ emanating from the target object 120 is determined by multiplying an object function indicative of the target object 120, where there is only a single object function, by a probe function indicative of radiation incident on the target object 120. The probe function may be a probe function for a respective incidence angle of radiation Pk. Therefore step 210 may comprise ψ = OnPk which, for a first iteration of step 210, may be ψ = O0P1 for the initial object function and a first incidence angle of radiation 111.
However in other embodiments, where multiple object functions O1, O2 are associated with the target object 120, the determination of the exit wave comprises propagating intermediate exit waves between the slices or planes 121, 122 associated with the object functions. In such embodiments an exit wave from each slice 121, 122 is determined. Thus, for the target object shown in Figure 1, a first exit wave ψk,1 is determined from the first slice 121. The exit wave may be determined by multiplying the probe function Pk incident on the target object 120 by the current object function On,1 for the first slice 121. Thus, in a first iteration, the exit wave ψ1,1 = O1,1P1 for the first angle of incident radiation 111 is determined.
The exit wave of radiation from the first slice 121 is then propagated to a next adjacent slice, which in Figure 1 is slice 122. The propagation of the exit wave may be made by use of a suitable propagation operator. In one embodiment an angular spectrum propagator is used to propagate the exit wave over a distance between the slices 121, 122, which in the example of Figure 1 is the distance Δz. Thus Pk,s+1 = PΔz{ψk,s}, where PΔz is a suitable propagator over the distance Δz, which may be the angular spectrum operator.
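An angular spectrum propagator of the kind referred to above can be sketched with FFTs. This is a minimal, band-limited version under our own assumptions (square pixels, evanescent components suppressed); the function name and signature are illustrative, not from the patent.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, pixel_size):
    """Propagate a 2-D complex field over a distance dz using the
    angular spectrum method: FFT, multiply by the free-space transfer
    function, inverse FFT. Evanescent components are set to zero."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)   # spatial frequencies (cycles/m)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # kz component of each plane-wave; negative arg marks evanescent waves
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * dz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

Propagating by +Δz and then −Δz recovers the original field (for propagating components), which is the property the back-propagation steps later in the method rely on.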
The propagated wave at the subsequent slice, such as slice 122 in Figure 1, is then used as a probe function indicative of one or more characteristics of the radiation incident upon the subsequent slice 122. An exit wave from the subsequent slice may be calculated as for the first slice, such as ψk,s+1 = Os+1,nPk,s+1, which for the slice 122 in Figure 1 is ψk,2 = O2,nPk,2. An exit wave from the last or most downstream slice of the target object 120 is used as the exit wave from the object, for example the exit wave from the last slice 122 of the object 120 shown in Figure 1.
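The multi-slice forward pass of step 210 reduces to a short loop: multiply, propagate, repeat. The sketch below is our own illustration of that loop; the propagator is passed in as a callable so any suitable operator (e.g. the angular spectrum operator) can be used.

```python
import numpy as np

def multislice_exit_wave(probe, slices, propagate):
    """Step 210 for a multi-slice object: multiply the probe by each
    slice's object function in turn, propagating each intermediate exit
    wave over the distance dz to the next slice. The exit wave of the
    most downstream slice is the exit wave from the object.
    `propagate` is any callable implementing the dz propagation."""
    wave = probe
    for s, obj in enumerate(slices):
        wave = wave * obj                  # exit wave from slice s
        if s < len(slices) - 1:
            wave = propagate(wave)         # becomes the probe at slice s+1
    return wave
```

With a single slice this degenerates to the single-object-function case ψ = O·P described earlier.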
In step 220 a diffraction pattern at the BFP is determined. Step 220 comprises determining the diffraction pattern based upon the exit wave ψk,2 from the target object 120 and a suitable propagation operator. The propagation operator may be a Fourier transform, although it will be realised that other propagation operators may be used. Thus step 220 comprises calculating Ψk,i = F{ψk,2}, where Ψk,i is an incident wave at the BFP and F is the propagation operator, such as the Fourier transform.
Step 230 comprises determining an exit wave from the filter 140. Step 230 comprises applying the filtering function associated with the filtering element 140 to the incident wave at the BFP calculated in step 220. The filtering function may be defined as A, which may be a finite aperture function. In some embodiments the filtering function is updated in at least some iterations of the method 200. Therefore the filtering function may be defined as An, where n is indicative of the iteration number. The filtering function is applied to the incident wave at the BFP in some embodiments by multiplication. Thus Ψk,e = Ψk,iAn, where Ψk,e is an exit wave from the filter 140. In step 240 an intensity of radiation at a plane of the detector 150 is determined. The intensity of radiation may form an image at the plane of the detector, particularly where the detector is located at the image plane. The intensity is determined by propagating the exit wave from the filtering element 140 using a suitable propagation operator. The propagation operator may be a Fourier transform. Thus step 240 may comprise determining Ik,c = F{Ψk,e}, where Ik,c is the image at the detector 150.
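Steps 220 to 240 together form the forward model from exit wave to detector. The sketch below is our own illustration under stated assumptions: the second propagation is written as an inverse FFT (a common 4f-imaging convention), whereas the patent simply calls both steps a propagation operator; the fftshift calls centre the BFP array so the aperture can be applied naturally.

```python
import numpy as np

def forward_to_detector(exit_wave, filter_fn):
    """Steps 220-240: propagate the exit wave to the BFP, apply the
    filter function A_n by multiplication, then propagate the filtered
    diffraction pattern on to the detector plane."""
    psi_i = np.fft.fftshift(np.fft.fft2(exit_wave))    # incident wave at BFP
    psi_e = psi_i * filter_fn                          # exit wave from filter
    wave_det = np.fft.ifft2(np.fft.ifftshift(psi_e))   # wave at the detector
    return np.abs(wave_det) ** 2, psi_i, psi_e
```

With a fully open filter (all ones) the detector intensity is just the modulus squared of the exit wave, as expected for an ideal imaging system.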
Step 250 comprises updating the determined intensity from step 240 based on data measured by the detector 150. In particular, step 250 may comprise updating an intensity of the image determined in step 240 based upon intensity measured by the detector 150. This step may be referred to as applying a modulus constraint on the determined intensity from step 240. The modulus constraint may be applied by:
I'k,c = √Ik,m · Ik,c/|Ik,c|, where Ik,m is the measured intensity and I'k,c is the updated (or corrected) intensity data at the plane of the detector 150.
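The modulus constraint of step 250 is a one-liner in practice: keep the phase of the calculated detector wave and replace its modulus with the square root of the measurement. The sketch below is our own; the small epsilon guarding against division by zero is an implementation detail, not from the patent.

```python
import numpy as np

def modulus_constraint(wave_calc, intensity_meas, eps=1e-12):
    """Step 250: replace the modulus of the calculated detector wave with
    the square root of the measured intensity, preserving its phase."""
    return np.sqrt(intensity_meas) * wave_calc / (np.abs(wave_calc) + eps)
```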
Step 260 comprises determining an updated exit wave at the BFP. In step 260 an updated exit wave from the filter 140 is determined. Step 260 comprises back propagating the updated intensity data from the plane of the detector 150 to the plane of the filter 140 or the BFP. The updated image data may be back-propagated to the BFP by:
Ψ'k,e = F⁻¹{I'k,c}

where Ψ'k,e is the updated exit wave from the filter 140.
Step 270 comprises determining an updated diffraction pattern at the BFP. In some embodiments step 270 may optionally comprise updating the filtering function associated with the filter 140. The updated diffraction pattern may be determined by:
Ψ'k,i = Ψk,i + An*(Ψ'k,e − Ψk,e)/|An|²

where Ψ'k,i is an updated diffraction pattern or input wave at the filter 140. In embodiments in which the filtering function associated with the filtering element 140 is also updated, this may be updated by:

An+1 = An + Σk Ψk,i*(Ψ'k,e − Ψk,e) / Σk |Ψk,i|²
Where An+1 is the updated filtering function which may be used in an n+1 iteration of the method 200. In some embodiments the filtering function may only be updated after a predetermined number of iterations of the method 200. The updated filtering function is determined in parallel form based upon diffraction patterns from a plurality of incidence angles of radiation. In the above equation the summation ∑fc in the equation sums across all K incidence angles.
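The parallel-form filter update described above sums the corrections from all K incidence angles before normalising. The sketch below is our own reading of that update (the epsilon regulariser is an added implementation detail): starting from a zero filter estimate, a single parallel update with consistent data recovers the true filter exactly.

```python
import numpy as np

def parallel_filter_update(A, bfp_incident, bfp_exit_new, bfp_exit_old, eps=1e-12):
    """Parallel-form update of the filter function A: corrections from
    all K incidence angles are summed in the numerator and the weights
    summed in the denominator before dividing, avoiding the slow
    convergence a serial max-weight causes when the DC term dominates."""
    num = np.zeros_like(A)
    den = np.zeros(A.shape)
    for psi_i, psi_e_new, psi_e_old in zip(bfp_incident, bfp_exit_new, bfp_exit_old):
        num += np.conj(psi_i) * (psi_e_new - psi_e_old)
        den += np.abs(psi_i) ** 2
    return A + num / (den + eps)
```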
In step 280 an updated exit wave from the object 120 is determined. Thus step 280 comprises updating, during each iteration of the method 200, a calculated wave in the real domain or space based upon the updated diffraction pattern or input wave at the filter 140. The updated exit wave from the target object 120 is determined by back- propagating the updated diffraction pattern from the plane of the filter 140 or BFP to the plane of the object 120. The updated exit wave is determined by using an inverse propagation operator to that used in step 220. Step 280 may be based upon an inverse Fourier transform. The updated exit wave may be calculated by:
ψ'k,2 = F⁻¹{Ψ'k,i}

where ψ'k,2, the updated exit wave, in the example shown in Figure 1 is at the second plane 122, although it will be realized that this is merely exemplary.
In step 290 one or more object functions associated with the object 120 are updated. Thus step 290 comprises determining updated image data for the object 120 at one or more planes in the real domain. In some embodiments where a plurality of object functions are associated with the object 120, step 290 may comprise determining each updated object function in turn, thereby sequentially determining each updated object function toward the source of radiation 1 10. In order to update a plurality of object functions intermediate probe functions are also determined and propagated between the planes 121, 122 associated with each object function. Thus, for the exemplary embodiment shown in Figure 1, an updated object associated with the second plane 122 is determined by:
On+1,2 = On,2 + α Pk,2*(ψ'k,2 − ψk,2)/|Pk,2|²max

where On+1,2 is an object function associated with the second plane 122 for an n+1 iteration of the method 200. Similarly an updated probe function at the second plane 122 is determined by:
P'k,2 = Pk,2 + α On,2*(ψ'k,2 − ψk,2)/|On,2|²max

where P'k,2 is the updated probe function indicative of incident radiation at the second plane 122 within the object 120.
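The paired object/probe update at one slice has the familiar ePIE structure: each quantity is corrected with the conjugate of the other, normalised by the maximum modulus squared. The sketch below is our own illustration of that update pair, with α controlling the feedback strength as in the text.

```python
import numpy as np

def update_slice(obj, probe, psi_new, psi_old, alpha=1.0):
    """ePIE-style update pair at one slice: the object function is
    corrected using the conjugate probe, and the probe using the
    conjugate object, each normalised by the maximum modulus squared."""
    dpsi = psi_new - psi_old
    obj_new = obj + alpha * np.conj(probe) * dpsi / np.max(np.abs(probe)) ** 2
    probe_new = probe + alpha * np.conj(obj) * dpsi / np.max(np.abs(obj)) ** 2
    return obj_new, probe_new
```

With a unit probe and consistent exit waves the object update recovers the true object in a single step, which is the limiting behaviour these update rules are designed around.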
The updated probe function at the second plane 122 is back-propagated to a next plane toward the source of radiation, which in the exemplary embodiment of Figure 1 is the first plane 121. The probe function is back-propagated over the distance Az.
The back propagation uses an inverse of the propagation operator used in step 210, which may be the angular spectrum operator. Therefore step 290 comprises calculating:
ψ'k,1 = P−Δz{P'k,2}

where ψ'k,1 is an updated exit wave from the first slice 121 in the exemplary embodiment shown in Figure 1. As at the previous slice 122, an object function associated with the first slice 121 is updated. The update at the first slice may be achieved by calculating:

On+1,1 = On,1 + α Pk,1*(ψ'k,1 − ψk,1)/|Pk,1|²max
In the above calculations α is a value used to alter a strength of feedback. In some embodiments α = 1, although it will be realized that other values may be chosen. Furthermore it will be realized that step 290 may be altered from the above-described embodiment depending upon a number of planes associated with object functions. In step 290 the updated one or more object functions are provided for use in a next iteration of step 210. In step 295 it is determined whether there remain further incidence angles of radiation to consider in iterations of the method 200. In some embodiments step 295 comprises determining whether k = K. If k < K then k is incremented and the method returns to step 210. Otherwise k is reset to a first incidence angle such as k = 1. Therefore the method 200 considers all incidence angles. Each incidence angle may be considered sequentially in turn, or in a random or pseudo-random manner.
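The bookkeeping of step 295 — visiting every incidence angle k = 1..K in sequential or randomised order per pass — can be sketched trivially; the function name and the seed parameter are our own illustrative choices.

```python
import random

def angle_schedule(K, order="sequential", seed=None):
    """Order in which the K incidence angles k = 1..K are visited in one
    pass of the method; 'random' shuffles them, as the text allows."""
    ks = list(range(1, K + 1))
    if order == "random":
        random.Random(seed).shuffle(ks)
    return ks
```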
In step 298 it is determined whether to terminate the method 200. The method 200 may be terminated when a predetermined number of iterations have been completed or when a predetermined condition occurs. The predetermined condition may be when an error metric associated with the method 200 meets a predetermined value. In some embodiments the error metric is based upon the difference between the measured
and calculated intensities at the detector 150. The error metric may be calculated by the normalized deviation between the measured and calculated intensities using:
E = Σk Σu |ψk,c(u) − ψk,m(u)|² / Σk Σu Ik,m(u)

where u represents a coordinate in the plane of the detector 150 and E is the error. When E meets a predetermined value the method 200 is terminated. Otherwise the method returns to step 210. In step 270, when updating the filtering function to determine An+1 associated with the filtering element 140, a ptychographic reconstruction still applies since the filtering element 140 is consistent in the BFP. The reason for using a parallel form of the equation to update the filtering function is that a DC component of the spectrum
Ψk,i is usually a very high value. The weight |Ψk,i|²max in the update equation of a series version would dramatically slow the convergence. Secondly, the diffraction pattern is updated at the BFP. However, this update rather takes out the effect of the filter function from the corrected diffraction pattern Ψ'k,e than tries to reconstruct the spectrum. More importantly, the updated diffraction pattern is back-propagated to the object plane to perform the object reconstruction slice by slice in real space.
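The normalised error metric used for termination can be computed directly from the calculated detector waves and the measured intensities. In this sketch (our own interpretation) the measured wave ψk,m is taken as the measured modulus √Ik,m, so the error compares moduli summed over all angles k and pixels u.

```python
import numpy as np

def normalized_error(waves_calc, intensities_meas):
    """Normalised deviation between calculated and measured detector
    moduli, summed over incidence angles k and detector coordinates u,
    divided by the total measured intensity."""
    num = sum(np.sum((np.abs(w) - np.sqrt(I)) ** 2)
              for w, I in zip(waves_calc, intensities_meas))
    den = sum(np.sum(I) for I in intensities_meas)
    return num / den
```

The method would terminate once this value falls below a chosen threshold, or after a fixed iteration budget.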
The algorithm has been investigated experimentally using an apparatus as illustrated in Figure 1. The radiation source 110 is a HeNe laser beam having a wavelength λ = 632.8nm which is collimated before being transmitted through an optical fiber via a 10x Olympus objective lens. An exit end of the fiber, which produces a divergent point source, is mounted onto a stepper-motor-driven x-y translation stage. The object 120 is placed approximately 70mm downstream of the source 110. A doublet lens 130 with a focal length of 30mm is positioned at a distance of 45mm from the object 120. In the back focal plane (about 33mm from the lens), a diaphragm with the aperture diameter set to 2mm is used as the filter 140. A CCD detector 150 is
placed in the image plane (about 215mm from the diaphragm 140) to record the low resolution image intensities.
A dataset is collected from 225 source positions arranged on a 15 x 15 raster grid. A nominal step size of the stage is 1mm, with ±20% random offset to avoid a "raster grid pathology" as is known to those skilled in the art. The change in position of the radiation corresponds to a change of incident angle of the radiation 111, 112, 113 at the target object 120 of about 0.5 degrees, as is explained in more detail below. The object 120 is composed of two microscope slides arranged coverslip-to-slide (see the inset of Fig. 2). A measured separation between the two specimen layers is 1.025mm. Since the size of the object (less than 1mm x 1mm) is relatively small compared to a distance between the object 120 and the source 110, the shifts of the source position introduce quasi-tilted plane waves incident on the object 120. A focus plane is adjusted to lie between these two layers. It will be realized that embodiments of the invention may be used with other types of object 120 and that quasi-tilted radiation is used merely for the ease of experimental set-up. Two recorded low resolution images, from different tilt or incidence angles, are shown in Figure 3. Figure 3(a) is a bright field image and Figure 3(b) is a dark field image. 'Dark-field' is used here in the sense as used in conventional microscopy, namely that the unscattered beam associated with the incident radiation 111, 112, 113 does not pass through the aperture in the lens for this particular angle of illumination, but is blocked by the aperture, thus giving an image which shows only scattered intensity from the object 120 on an otherwise black, or dark, background.
Both the conventional ePIE algorithm and a method according to an embodiment of the invention are applied to recorded intensity data for 100 iterations. The initial guess for the object function is free space - namely a value of 1 for all positions or matrix cells, and the initial guess for the filter function associated with the filtering element 140 is a Gaussian filtering function. Using the geometric relation between the point
source 110 and the object 120, a tilt angle for all scan positions is calculated to generate a set of tilted plane waves.
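A Gaussian initial guess for the filter function, as used in this experiment, is straightforward to construct; the function name and the width parameter below are our own illustrative choices, not values from the patent.

```python
import numpy as np

def gaussian_filter_guess(n, sigma_frac=0.25):
    """Initial guess for the filter function: a centred Gaussian on an
    n x n grid; sigma_frac sets the width as a fraction of the array."""
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    r2 = (x - c) ** 2 + (y - c) ** 2
    sigma = sigma_frac * n
    return np.exp(-r2 / (2 * sigma ** 2))
```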
During the first 5 iterations of the method 200, the filtering function associated with the filter 140 is not updated. After that, both a part of the diffraction pattern corresponding to an area within the filter 140, and the filter function are updated in each iteration of the method 200. The final results are shown in Fig. 4.
Figure 4 shows reconstructions of the dataset from the illumination tilting strategy using both the ePIE algorithm and a method 200 according to an embodiment of the invention. Figures 4(a), (b), (c) and (d) are the results of the conventional ePIE algorithm and respectively represent the object modulus, the object phase, the filter modulus and the filter phase. Figures 4(e), (f), (g), (h), (i) and (j) are the results of a method according to an embodiment of the invention and, respectively, represent the first slice 121 modulus, the first slice 121 phase, the second slice 122 modulus, the second slice 122 phase, the filtering function modulus and the filtering function phase. Note that the object 120 and the filter 140 are in different spaces and that the scale bars are also different. The scale of Fig. 4(a) is the same for 4(b, e, f, g, h), and the scale of 4(c) is the same as for 4(d, i, j).
The ePIE algorithm manages to reconstruct image data indicative of a blurred representation of the target object 120, which means the projection approximation has not completely broken down. However, the reconstruction is an artifice from which it is not possible to extract accurate images of either layer or slice through the object 120. At the same time, the method 200 according to an embodiment of the invention successfully separates the two slices 121, 122 and produces much better reconstructions. Cross-talk between the two slices 121, 122 is hardly visible in either reconstructed slice 121, 122. Since both layers 121, 122 are out of focus, during the scanning process the images shift in the plane of the detector 150. Given the Field Of View (FOV) reconstructed, some features move out of, and new features move into, the FOV at the edges. This means the dataset will not be consistent at the edges of the FOV. However, the algorithm 200 still resolves the edges well. Phase curvature from the
reconstructed filter element 140 implies that the exit wave plane (the immediately downstream plane of the second layer 122) is not in focus. In other words the exit wave plane is not coincident with the conjugate plane of the detector 150. The method 200 automatically accounts for this and produces a phase curvature on the filter function associated with the filtering element 140 to refocus the exit wave plane.
It will be appreciated that methods and apparatus are disclosed in which image data for an object 120 is updated in the real domain or space during iterations of the method. Multi-slice embodiments may extract 3D information for the object 120. The problem for the conventional reconstruction method is that the object has to be thin enough to validate the projection approximation. Otherwise, the spectrum in the back focal plane is not consistent when tilting the illumination. A filter scanning strategy has been proposed to circumvent this object thickness limitation. However, the reconstruction is just the exit wave of the object. Although each layer can be refocused by propagating this exit wave, the other out-of-focus layers always corrupt the in-focus layer. Optical experiments have been conducted to test the proposed method. The results indicate that the proposed method extracts the 3D information of the object very well, while the conventional method fails to produce correct reconstructions and the filter scanning method cannot separate the layers of the object.
It will be appreciated that embodiments of the present invention can be realised in the form of hardware, software or a combination of hardware and software. Any such software may be stored in the form of volatile or non-volatile storage such as, for example, a storage device like a ROM, whether erasable or rewritable or not, or in the form of memory such as, for example, RAM, memory chips, device or integrated circuits or on an optically or magnetically readable medium such as, for example, a CD, DVD, magnetic disk or magnetic tape. It will be appreciated that the storage devices and storage media are embodiments of machine-readable storage that are suitable for storing a program or programs that, when executed, implement embodiments of the present invention. Accordingly, embodiments provide a program comprising code for implementing a system or method as claimed in any preceding claim and a machine readable storage storing such a program. Still further, embodiments of the present invention may be conveyed electronically via any
medium such as a communication signal carried over a wired or wireless connection and embodiments suitably encompass the same.
All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.
Each feature disclosed in this specification (including any accompanying claims, abstract and drawings), may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features. The invention is not restricted to the details of any foregoing embodiments. The invention extends to any novel one, or any novel combination, of the features disclosed in this specification (including any accompanying claims, abstract and drawings), or to any novel one, or any novel combination, of the steps of any method or process so disclosed. The claims should not be construed to cover merely the foregoing embodiments, but also any embodiments which fall within the scope of the claims.
Claims
1. A method of determining object characteristics, comprising: providing radiation directed toward an object at each of a plurality of incidence angles; detecting, by a detector, an intensity of radiation for each of the incidence angles; and iteratively determining object data indicative of one or more characteristics of the object, wherein iterations of the method comprise: determining, for each incidence angle of radiation, an estimate of a filtered diffraction pattern at a focal plane of a radiation focussing element based upon a current estimate of the object data; determining an estimate of intensity data at a plane of the detector based upon the filtered diffraction pattern; and updating the estimate of the object data in a real domain based upon the intensity of radiation at the detector.
2. The method of claim 1, wherein the updating the estimate of the object data in the real domain comprises: determining an updated estimate of the intensity data based upon the intensity of radiation detected at the detector. 3. The method of claim 2, comprising determining an updated diffraction pattern at the focal plane of the radiation focussing element based upon the updated estimate of the intensity data.
4. The method of claim 3, comprising determining an updated exit wave from the object by back-propagating the updated diffraction pattern to a plane of the object.

5. The method of claim 3 or 4, comprising determining an estimate of a filtering function based upon the updated diffraction pattern.

6. The method of any preceding claim, wherein the diffraction pattern is updated for a region corresponding to the filtered diffraction pattern.

7. The method of claim 3 or any claim dependent thereon, wherein the diffraction pattern is allowed to float outside of the region corresponding to the filtered diffraction pattern.

8. The method of any preceding claim, wherein the determining the estimate of a filtered diffraction pattern comprises estimating an exit wave from the object, propagating the exit wave to the focal plane of the radiation focussing element and applying a filtering function to the propagated exit wave.

9. The method of claim 8, wherein the exit wave is determined based upon a probe function indicative of one or more characteristics of the radiation directed toward the object at the respective incidence angle.

10. The method of claim 9, wherein the probe function has a phase gradient indicative of the respective incidence angle.

11. The method of claim 8, 9 or 10, wherein the filtering function is an aperture function.

12. The method of any preceding claim, wherein the object data is indicative of one or more characteristics of the object at a plurality of planes through the object.

13. The method of any preceding claim, wherein the object data is at least one object function indicative of one or more characteristics of the object.

14. The method of claim 10 or 11, comprising determining an exit wave from a first plane through the object and propagating the exit wave to a second plane through the object.

15. The method of claim 14, comprising determining an exit wave from the second plane and propagating the exit wave from the second plane to form the filtered diffraction pattern.
16. The method of any preceding claim, wherein the detector is located at an image plane.
17. The method of any preceding claim, wherein the estimate of the filtered diffraction pattern is determined in a Fourier domain. 18. The method of any preceding claim, wherein the focal plane of the focussing element and the plane of the detector are spatially separated.
19. An apparatus for determining object characteristics, comprising: a radiation source for providing radiation at each of a plurality of incidence angles toward a target object; a radiation focussing element for focussing radiation from the target object at a focal plane; a filtering element for filtering the radiation from the target object to form a filtered diffraction pattern; a detector for detecting an intensity of radiation from the filtered diffraction pattern and outputting intensity data indicative thereof; and a processor arranged to receive the intensity data and to perform an iterative process to determine object data indicative of one or more characteristics of the object, wherein iterations of the process comprise:
determining, for each incidence angle of radiation, an estimate of the filtered diffraction pattern based upon a current estimate of the object data; determining an estimate of the intensity data based upon the estimate of the filtered diffraction pattern; and updating the estimate of the object data in a real domain based upon the intensity of radiation at the detector.
20. The apparatus of claim 19, wherein the radiation focussing element comprises one or more lenses.
21. The apparatus of claim 19 or 20, wherein the filtering element comprises an aperture.
22. The apparatus of claim 20 or 21, wherein the filtering element is arranged downstream of the one or more lenses.
23. The apparatus of any of claims 19 to 22, wherein the radiation source comprises a plurality of radiation sources arranged at each of the plurality of incidence angles.
24. The apparatus of any of claims 19 to 23, wherein the processor is arranged to perform a method as claimed in any of claims 2 to 18.
25. Computer software which, when executed by a computer, is arranged to perform a method according to any of claims 1 to 18.
26. The computer software of claim 25 stored on a computer readable medium.
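Claim 10 recites a probe function whose phase gradient encodes the incidence angle. For a plane-wave illumination this is simply a linear phase ramp across the probe array. The sketch below is an illustrative assumption, not the patented probe model: the function name, parameters, and sampling convention are all hypothetical.

```python
import numpy as np

def tilted_probe(shape, pixel_size, wavelength, theta_x, theta_y=0.0):
    """Plane-wave probe with a linear phase ramp (phase gradient)
    encoding the incidence angles theta_x, theta_y (radians).
    Illustrative sketch only; not the claimed probe function."""
    ny, nx = shape
    # Real-space coordinate grids centred on the array
    x = (np.arange(nx) - nx // 2) * pixel_size
    y = (np.arange(ny) - ny // 2) * pixel_size
    X, Y = np.meshgrid(x, y)
    k = 2.0 * np.pi / wavelength
    # Tilted plane wave: unit modulus, phase gradient set by the angle
    return np.exp(1j * k * (np.sin(theta_x) * X + np.sin(theta_y) * Y))
```

At normal incidence (both angles zero) the ramp vanishes and the probe reduces to a uniform unit-amplitude field, which is a quick sanity check on the sign and sampling conventions chosen here.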
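Claims 14 and 15 recite propagating an exit wave from one plane through the object to another. One standard way to carry a complex wavefield between parallel planes is the angular-spectrum method; the sketch below is a minimal illustration of that general technique, assuming monochromatic scalar radiation and uniform sampling, and is not asserted to be the propagator used in the claimed method.

```python
import numpy as np

def angular_spectrum_propagate(wave, dz, wavelength, pixel_size):
    """Propagate a 2-D complex wavefield by a distance dz (metres)
    between parallel planes using the angular-spectrum method.
    Evanescent components are suppressed."""
    ny, nx = wave.shape
    # Spatial frequencies (cycles per metre) for the sampling grid
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function H(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2j * np.pi * dz * np.sqrt(np.maximum(arg, 0.0)) / wavelength
    H = np.exp(kz) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(wave) * H)
```

A uniform plane wave propagated this way only acquires a global phase, so its modulus is unchanged; that invariance is a convenient check that the transfer function is normalised correctly.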
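The iterative process recited in claim 19 (estimate the diffraction pattern from the current object data, compare with the measured intensity, update the object estimate in the real domain) can be illustrated with a generic modulus-constraint loop. This is a hypothetical sketch of that family of algorithms, not the claimed method: the PIE-style update rule, the step size `alpha`, and all names are assumptions, and the filtering element of the claims is omitted for brevity.

```python
import numpy as np

def reconstruct(intensities, probes, n_iter=50, alpha=0.9):
    """Toy modulus-constraint reconstruction loop. For each incidence
    angle: form the exit wave, estimate its diffraction pattern,
    impose the measured intensity, and update the object estimate in
    the real domain. Illustrative only."""
    obj = np.ones_like(probes[0], dtype=complex)  # current object estimate
    for _ in range(n_iter):
        for I, probe in zip(intensities, probes):
            exit_wave = probe * obj                        # exit wave for this angle
            F = np.fft.fft2(exit_wave)                     # estimated diffraction pattern
            F_new = np.sqrt(I) * np.exp(1j * np.angle(F))  # impose measured modulus
            corrected = np.fft.ifft2(F_new)                # back to the real domain
            # Real-domain object update (PIE-style; step size is an assumption)
            denom = np.max(np.abs(probe)) ** 2 + 1e-12
            obj = obj + alpha * np.conj(probe) * (corrected - exit_wave) / denom
    return obj
```

When the measured intensities are already consistent with the current estimate, the correction term vanishes and the loop is at a fixed point, which matches the intuition that the update only moves the object estimate while the modelled and measured diffraction patterns disagree.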
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GBGB1414063.6A GB201414063D0 (en) | 2014-08-08 | 2014-08-08 | Methods and apparatus for determining image data |
GB1414063.6 | 2014-08-08 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016020671A1 true WO2016020671A1 (en) | 2016-02-11 |
Family
ID=51629495
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2015/052256 WO2016020671A1 (en) | 2014-08-08 | 2015-08-04 | Methods and apparatus for determining image data |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB201414063D0 (en) |
WO (1) | WO2016020671A1 (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1740975A1 (en) * | 2004-04-29 | 2007-01-10 | University of Sheffield | High resolution imaging |
WO2008142360A1 (en) * | 2007-05-22 | 2008-11-27 | Phase Focus Limited | Three dimensional imaging |
WO2010064051A1 (en) * | 2008-12-04 | 2010-06-10 | Phase Focus Limited | Provision of image data |
WO2014033459A1 (en) * | 2012-08-31 | 2014-03-06 | Phase Focus Limited | Improvements in phase retrieval from ptychography |
- 2014-08-08: GB application GBGB1414063.6A filed as GB201414063D0 (en), status: not active (Ceased)
- 2015-08-04: WO application PCT/GB2015/052256 filed as WO2016020671A1 (en), status: active (Application Filing)
Non-Patent Citations (1)
Title |
---|
MAIDEN A M ET AL: "An improved ptychographical phase retrieval algorithm for diffractive imaging", ULTRAMICROSCOPY, ELSEVIER, AMSTERDAM, NL, vol. 109, no. 10, 1 September 2009 (2009-09-01), pages 1256 - 1262, XP026470501, ISSN: 0304-3991, [retrieved on 20090606], DOI: 10.1016/J.ULTRAMIC.2009.05.012 * |
Also Published As
Publication number | Publication date |
---|---|
GB201414063D0 (en) | 2014-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2356487B1 (en) | Provision of image data | |
JP5917507B2 (en) | Calibration method of probe by typography method | |
DK1740975T3 (en) | high-resolution imagery | |
EP2702556B1 (en) | A method and apparatus for providing image data for constructing an image of a region of a target object | |
JP5314676B2 (en) | 3D imaging | |
KR101896506B1 (en) | Three dimensional imaging | |
JP6556623B2 (en) | Improved phase recovery | |
JP2016173594A (en) | Autofocus for scanning microscopy based on differential measurement | |
US10466184B2 (en) | Providing image data | |
EP2227705B1 (en) | Method and apparatus for providing image data | |
WO2016020671A1 (en) | Methods and apparatus for determining image data | |
Xie et al. | Deep learning for estimation of Kirkpatrick–Baez mirror alignment errors | |
Angland et al. | Angular filter refractometry analysis using simulated annealing | |
CN104132952B (en) | Time resolution ptychography | |
CN104155320B (en) | A kind of time resolution overlapping associations Imaging |
Legal Events

- 121: EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 15749851; Country of ref document: EP; Kind code of ref document: A1)
- NENP: Non-entry into the national phase (Ref country code: DE)
- 122: EP: PCT application non-entry in European phase (Ref document number: 15749851; Country of ref document: EP; Kind code of ref document: A1)