
WO2001057805A2 - Method and apparatus for processing image data - Google Patents

Method and apparatus for processing image data

Info

Publication number
WO2001057805A2
WO2001057805A2 (PCT/GB2001/000389)
Authority
WO
WIPO (PCT)
Prior art keywords
image data
interior configuration
scanner
data
volumetric
Prior art date
Application number
PCT/GB2001/000389
Other languages
English (en)
Other versions
WO2001057805A3 (fr)
Inventor
Ivan Daniel Meir
Norman Ronald Smith
Guy Richard John Fowler
Original Assignee
Tct International Plc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tct International Plc filed Critical Tct International Plc
Priority to AU2001230372A priority Critical patent/AU2001230372A1/en
Publication of WO2001057805A2 publication Critical patent/WO2001057805A2/fr
Publication of WO2001057805A3 publication Critical patent/WO2001057805A3/fr

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • G06T3/4061Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution by injecting details from different spectral ranges
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/245Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • the present invention relates to a method and apparatus for processing image data derived from an object such as a patient.
  • It would be convenient for a clinician to combine volumetric image data sets pertaining to the same patient but of one or more different modalities and/or obtained on different occasions.
  • each apparatus will have its own reference frame and furthermore in many cases there will be little or no overlap between the data sets (e.g. X-ray data and surface or sliced MRI data) which will make accurate registration difficult or impossible.
  • the configuration or shape of the patient will be somewhat different when the respective data sets are acquired, again tending to prevent accurate registration.
  • the patient may move during the acquisition of data; for example, it may take a number of minutes to acquire an MRI scan. Even the fastest CT scan can take several seconds, which will result in imaging errors unless the patient is kept absolutely rigid.
  • An object of the present invention is to overcome or alleviate at least some of the above problems.
  • the invention provides a method of processing configuration-sensitive data comprising acquiring configuration information in association with said configuration-sensitive data and enhancing the configuration-sensitive data using the configuration information.
  • the configuration-sensitive data comprises a volumetric data set relating to the interior of a subject.
  • the volumetric data set might define the shape of internal organs of a patient.
  • surface data is acquired by optical means and utilised to derive the configuration information.
  • the configuration information might be a digitised surface representation of the patient's body acquired by a 2D or preferably a 3D camera arrangement.
  • the surface data is acquired by optically tracking markers located on the surface of the subject.
  • the markers are detachable and are located in predetermined relationships to permanent or semi-permanent markings on the surface of the subject.
  • the configuration information could also be information derived from the digitised surface representation for example it could comprise a set of normals to the surface or could comprise model data (derived from a physical or statistical model of the relevant part of the patient's body for example) which could be used to enhance the configuration-sensitive data e.g. by means of statistical correlation between the configuration information and the configuration-sensitive data.
  • the object (or the surface or volumetric data thereof) is modelled (e.g. as a rigid model, an affine model or a spline model such as a NURBS model) in such a manner as to constrain its allowable movement and the images or volumetric data sets are registered under the constraint that movement is limited to that allowed by the model.
  • Useful medical information can then be derived, e.g. by carrying over information from the surface data to the volumetric data. In certain embodiments such information can be derived with the aid of a statistical or physical model of the object.
  • the invention provides medical imaging apparatus arranged to acquire configuration information in association with configuration-sensitive medical data.
  • the configuration information comprises surface data representative of a subject.
  • the configuration information could be a surface representation of a patient's body and the apparatus could include one or more cameras for acquiring such a representation.
  • the apparatus is calibrated to output the surface representation and an internal representation (e.g. an X-ray image or a volumetric data set) referenced to a common reference frame.
  • the apparatus includes display means arranged to display both a present surface representation and a stored previous surface representation of the same patient and means for adjusting the position or orientation of the apparatus (optionally under the control of a user) in relation to the patient such that the two surface representations are registered.
  • This ensures that the previous and present internal representations are also registered and aids comparison of such representations (e.g. X-ray images) and also helps to prevent mistakes in alignment of the apparatus relative to the patient which might result in further visits from the patient and unnecessary radiation dosage, for example.
  • the invention provides a method of associating two sets of volumetric data (V1 and V2) comprising the step of associating sets ({S1, V1} and {S2, V2}) of surface data (S1 and S2) registered with the respective sets of volumetric data.
  • This aspect is related to the apparatus of the above-mentioned aspect in that each set of surface data associated with a set of volumetric data can be acquired with that apparatus.
  • the registration is performed on a model of the subject which is constrained to allow movement only in accordance with a predetermined model for example a rigid model, an affine model or a spline model.
  • the invention provides a frame to fit the surface of scanned body part, the frame carrying guide means for locating the site of a medical procedure at a defined position or orientation at or beneath the surface of the body part.
  • the frame is in the form of a mask shaped to fit the surface of a patient's face or head.
  • the invention also provides a method of making such a frame.
  • the invention further provides a method of processing image data relating to a three dimensional object of a variable positional disposition, the object having an outer surface and an interior configuration, the method comprising: acquiring for a first positional disposition of the object, first interior configuration image data concerning the interior configuration of the object, and also first three dimensional outer surface image data concerning the outer surface of the object, acquiring for a second positional disposition of the object, a second interior configuration image data concerning the interior configuration of the object, and also second three dimensional outer surface image data concerning the outer surface of the object, and registering the first and second interior configuration image data as a function of the relationship between the first and second three dimensional outer surface image data.
  • FIG. 1 is a diagrammatic representation of scanning apparatus in accordance with one aspect of the invention.
  • Figure 2 is a flow diagram illustrating a method of processing images or volumetric data sets in accordance with another aspect of the invention
  • Figures 3A and 3B are diagrammatic representations showing the distortion of a coordinate system to correspond to the movement of a patient as detected optically;
  • Figures 3C to 3E are diagrammatic representations showing the corresponding distortion of a coordinate system relative to which a volumetric data set is obtained;
  • Figure 4 is a diagrammatic representation illustrating a method of calibration of the apparatus of Figure 1;
  • Figure 5 is a diagrammatic representation of scanning apparatus in accordance with an aspect of the invention and the registration of surface and volumetric representations obtained with the apparatus;
  • Figure 6 is a diagrammatic representation of X-ray and video camera apparatus in accordance with one aspect of the invention.
  • Figure 7 is a diagrammatic profile of two sets of superimposed surface and volumetric data showing the registration in accordance with an aspect of the invention of the two volumetric data sets from the registration of each volumetric data set with its associated surface and the registration of the surfaces;
  • Figure 8A is a diagrammatic representation of the combination of two volumetric data sets in accordance with an aspect of the invention by the registration of their associated surfaces and Figure 8B is a similar diagrammatic representation utilising sets of sliced MRI data;
  • Figure 9 is a diagrammatic transverse cross-section of a scanner in accordance with an aspect of the invention showing the enhancement of the volumetric data with correction data derived from surface information
  • Figure 10 is a diagrammatic elevation of a mask fitted to a patient's head and incorporating a biopsy needle for taking a biopsy at a defined 3D position in the patient's brain, and
  • FIG. 11 illustrates ceiling-mounted stereoscopic cameras for use with a scanner, in accordance with the invention.

Detailed description
  • a volumetric scanner 10 which is suitably a MRI, CT or PET scanner for example is shown and is arranged to generate a 3D internal volumetric image 30 of a patient P.
  • Such scanners are well known per se and accordingly no further details are given of the conventional features of such scanners.
  • Three transversely disposed stereoscopic camera arrangements are provided, comprising digital cameras C1 and C2 which acquire a 3D image of the head and shoulders of the patient P, digital cameras C3 and C4 which acquire a 3D image of the torso of the patient and digital cameras C5 and C6 which acquire a 3D image of the legs of the patient.
  • These images can be still or moving images and the left and right images of each pair can be correlated with the aid of a projected pattern in order to facilitate the derivation of a 3D surface representation.
  • the images acquired by the three sets of cameras are displayed to the user as indicated at 20A, 20B and 20C.
  • Figure 2 shows how the data acquired e.g. by the scanner of Figure 1 can be processed.
  • step 100 images of a surface and/or volumetric data sets are obtained from the same subject (e.g. patient P) at different times, e.g. before and after treatment. It is assumed in this embodiment that little or no movement of the patient P occurs during data acquisition, which is a valid assumption if the data acquisition takes less than say 25 milliseconds or so.
  • a model of the object is provided and used to constrain allowable distortion of the images or volumetric data sets of step 100 when subsequently registering them (step 600).
  • the model of step 500 could be a rigid model 200 (in which case no distortion is allowed between the respective images or volumetric data sets acquired at different times), an affine model 300 (which allows shearing or stretching as shown; this term includes a piecewise affine model comprising a patchwork of affine transforms, allowing different shearing and stretching distortions in different parts of the surface or volumetric data set as shown), or a spline model 400, e.g. a NURBS model.
  • the models can take into account the characteristics of the scanner used to acquire the volumetric data sets.
  • the models 200, 300 and 400 characterise the relationship between the three dimensional image of the patient's outer surface and the interior configuration imaged by the scanner and captured in the volumetric data.
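The difference between these model constraints can be sketched in a few lines (illustrative Python/NumPy only; the point coordinates, rotation and stretch factor are invented for the example): a rigid model preserves all inter-point distances exactly, whereas an affine model additionally permits shearing and anisotropic stretching.

```python
import numpy as np

# Invented landmark points on a surface or volumetric data set.
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])

def apply_rigid(points, R, t):
    # Rigid model: orthonormal rotation plus translation; no distortion,
    # so all inter-point distances are preserved exactly.
    return points @ R.T + t

def apply_affine(points, A, t):
    # Affine model: a general 3x3 matrix additionally allows shearing
    # and anisotropic stretching of the data set.
    return points @ A.T + t

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([5.0, 0.0, 0.0])

rigid_pts = apply_rigid(pts, R, t)
affine_pts = apply_affine(pts, R @ np.diag([2.0, 1.0, 1.0]), t)  # 2x stretch in x
```

Registration under a rigid constraint thus searches only over (R, t), while the affine and spline models enlarge the search space progressively.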
  • step 600 the selected model is used to constrain relative movement or distortion of the images or volumetric data sets acquired at different times while they are registered as completely as possible within this constraint.
  • step 700 the volumetric data is enhanced and output. More detailed information on this step is given in the description of the subsequent embodiments. This step is optionally performed with the aid of a physical model 800 and/or a statistical model 900 of the subject.
  • a physical model 800 can incorporate a knowledge of the object's physical properties and can employ e.g. a finite element analysis technique to model the subject's deformations between the different occasions of data acquisition.
  • alternatively or additionally, a statistical model 900 is employed.
  • Statistical models of the human face and other parts of the human body, e.g. internal organs, are known, e.g. from A. Hill, A. Thornham, C. J. Taylor, "Model-Based Interpretation of 3D Medical Images", Proc. BMVC 1993, pp 339-348, which is incorporated herein by reference.
  • This reference describes Point Distribution Models.
  • a Point Distribution Model comprises an envelope in m-dimensional space defined by eigenvectors representative of at least the principal modes of variation of the envelope, each point within the envelope representing a potential instance of the model and defining the positions (in 2D or 3D physical space) of n landmark points which are located on characteristic features of the model.
  • the envelope is generated from a training set of examples of the subject being modelled (e.g. images of human faces if the human face is being modelled).
  • the main usefulness of such models lies in the possibility of applying Principal Component Analysis (PCA) to find the eigenvectors corresponding to the main modes of variation of the envelope derived from the training set which enables the envelope to be approximated by an envelope in fewer dimensions.
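The dimensionality reduction described above can be sketched as follows (illustrative Python/NumPy only; the training data are synthetic and the 95% variance threshold is an invented example value):

```python
import numpy as np

# Synthetic training set: 50 example shapes, each n = 10 landmark points
# in 3D flattened to a vector of length 30 (the m-dimensional space).
rng = np.random.default_rng(0)
base = rng.normal(size=30)
weights = np.linspace(1.0, 0.0, 30)     # variation concentrated in few axes
training = base + 0.1 * rng.normal(size=(50, 30)) * weights

mean = training.mean(axis=0)
centred = training - mean

# PCA via SVD: the rows of Vt are the eigenvectors (modes of variation).
_, s, Vt = np.linalg.svd(centred, full_matrices=False)
eigenvalues = s ** 2 / (len(training) - 1)

# Keep only the modes explaining 95% of the variance: the envelope is then
# approximated in far fewer dimensions, as described above.
explained = np.cumsum(eigenvalues) / eigenvalues.sum()
n_modes = int(np.searchsorted(explained, 0.95)) + 1
P = Vt[:n_modes].T                      # shape (30, n_modes)

# Any training shape is approximated as mean + P @ b with a short vector b.
b = P.T @ (training[0] - mean)
reconstructed = mean + P @ b
```

Each point b inside the reduced envelope corresponds to a plausible instance of the modelled shape.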
  • Point Distribution Models are described in more detail by A. Hill et al., "Model-Based Interpretation of 3-D Medical Images", Proc. 4th British Machine Vision Conference, pp 339-348, Sept 1993, which is incorporated herein by reference.
  • Active Shape Models are derived from Point Distribution Models and are used to generate new instances of the model, i.e. new shapes, represented by points within the envelope in the m-dimensional space.
  • given a new shape (e.g. the shape of a new human face known to nearly conform to the envelope), its set of shape parameters can be found and the shape can then be manipulated, e.g. by rotation and scaling, to conform better to the envelope, preferably in an iterative process.
  • the PDM preferably incorporates grey level or other image information (e.g. colour information) besides shape, and e.g. the grey scale profile perpendicular to a boundary at a landmark point is compared in order to move the landmark point to make the new image conform more closely to the set of allowable shapes represented by the PDM.
  • an ASM consists of a shape model controlling a set of landmark points, together with a statistical model of image information e.g. grey levels, around each landmark.
  • Active Shape Models and the associated Grey-Level Models are described in more detail by T.F. Cootes et al in "Active Shape Models: Evaluation of a Multi-Resolution Method for Improving Image Search" Proc. British Machine Vision Conference 1994 pp 327-336, which is incorporated herein by reference.
  • the statistical model 900 is preferably derived from a training set comprising a variety of images or volumetric data sets derived from a single organ, patient or other subject, in order to model the possible variation of a given subject as opposed to the variance in the general population of such subjects.
  • the resulting analysis enables data which is not a function of normal variation in the subject, e.g. growth of a tumour, to be highlighted, and indeed quantified.
  • the model 200, 300 or 400 (Figure 2) is utilised to distort the coordinate frame of Figure 3A in such a manner that the model patient coordinates in Figure 3B are unchanged relative to the coordinate frame.
  • a given point on the modelled patient, e.g. the nose tip, has the same coordinates in the distorted coordinate frame of Figure 3B as in the undistorted coordinate frame of Figure 3A.
  • a complementary distortion is then applied to the volumetric image (data set) obtained on the second occasion as shown in Figure 3C to transform this data set to a data set (shown in Figure 3D) which would have been obtained on the second occasion if the patient had not moved.
  • the volumetric image of Figure 3D can then be compared with the volumetric data set (not shown) obtained on the first occasion.
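The complementary distortion can be sketched for the simplest case, a pure translation estimated from the optically tracked surface (hypothetical NumPy code; a real scanner would use a full deformation model with interpolation, and the volume size, feature position and shift are invented values):

```python
import numpy as np

# Toy volumetric data set from the second occasion, with one bright feature.
vol2 = np.zeros((20, 20, 20))
vol2[12, 10, 10] = 1.0

# Motion estimated from the optically tracked surface: the patient shifted
# +2 voxels along axis 0 between the two scans.
shift = np.array([2, 0, 0])

# Complementary distortion: resample vol2 at the moved coordinates so the
# result matches what would have been acquired had the patient not moved.
idx = np.indices(vol2.shape)
src = idx + shift.reshape(3, 1, 1, 1)
valid = ((src >= 0) & (src < 20)).all(axis=0)
corrected = np.zeros_like(vol2)
corrected[valid] = vol2[src[0][valid], src[1][valid], src[2][valid]]
# The feature now sits at voxel (10, 10, 10), its first-occasion position.
```

The corrected volume can then be compared voxel-for-voxel with the data set from the first occasion.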
  • the surface of the patient P is permanently or semi-permanently marked by small locating tattoos (not shown) which are visible to the human eye on close examination.
  • Temporary markers M are applied to these tattoos and are sufficiently large to be tracked easily in real time by a stereoscopic camera arrangement.
  • the markers M can for example be in the form of detachable stickers or can be drawn over the tattoos with a marker pen. It is not essential for the markers M to be precisely located over the tattoos (although this will usually be the most practical option), but the markers should each have a location which is precisely defined by the tattoos; for example, the markers could each be equidistant from two, three or more tattoos.
  • the markers M are tracked optically and the scanner's volumetric coordinate frame is distorted to correspond with that distortion of the scanner's optical coordinate frame which would leave the 3D positions of the markers M unchanged, either during the scan or relative to a previous scan during which the markers were also used.
  • a calibration target T which is visible both to the cameras C1 and C2 and to the scanner 10 is imaged by both the scanner and the cameras, resulting in images I1 in the camera reference frame F1 and I2 in the scanner reference frame F2.
  • a geometrical transformation TR can be found in a known manner which will map I1 onto I2, and the same transformation can then be applied, e.g. in software or firmware, to move, scale and (if necessary) distort the visual image of a patient's body into registration with the scanner reference frame F2.
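One standard way of finding such a transformation from corresponding target points, assuming a rigid mapping (the patent does not prescribe a particular algorithm, and the point coordinates below are invented), is the least-squares Kabsch method:

```python
import numpy as np

def kabsch(src, dst):
    # Least-squares rigid transform (R, t) with dst ~= src @ R.T + t.
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t

# Invented calibration-target corners: as seen in camera frame F1, and the
# same corners in scanner frame F2 after a known rotation and translation.
i1_pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
R0 = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t0 = np.array([1.0, 2.0, 3.0])
i2_pts = i1_pts @ R0.T + t0

R, t = kabsch(i1_pts, i2_pts)    # recovers a transformation like TR
```

Once stored, (R, t) can be reused to carry any camera-frame point into the scanner reference frame F2.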
  • where the scanner is an MRI or a CT scanner there will be data common to the visual data acquired by the cameras and the volumetric data, typically the patient's skin surface.
  • a calibration procedure such as that shown in Figure 5 can be used.
  • a patient P is imaged by both the cameras C1 and C2 and the scanner 10, and the resulting images p1 and p2 of the patient's surface in the respective reference frames F1 and F2 of the camera system and scanner can be registered by a transformation TR which can be found in a known manner by analysis.
  • This transformation TR can be stored and used subsequently to register the digitised surfaces acquired by the cameras to the volumetric data set acquired by the scanner 10.
  • Figure 6 shows a slightly different embodiment which consists essentially of an X-ray camera Cx rigidly mounted with a digital video camera Cv on a common supporting frame FR.
  • the X-ray image and visual image I1 acquired by the cameras Cx and Cv are processed by a computer PC and displayed on a display D (only I1 is shown).
  • the computer PC is provided with video memory arranged to store a visual image I2 of the same patient previously acquired by the video camera Cv when taking an X-ray image.
  • the cameras Cv and Cx are moved, e.g. under the control of the operator or possibly under control of a suitable image registration program, on their common mounting frame FR to superimpose image I2 on image I1 as shown by arrow a1, and the X-ray image is then captured.
  • the X-ray camera is movable with respect to the video camera and the movement of the X-ray camera required to register the new X-ray image with the previous X-ray image is derived from the movement needed to register the surface images.
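One conventional way such a registration program might recover the required shift between the two surface images (the patent does not specify the method; the images below are synthetic stand-ins) is FFT phase correlation:

```python
import numpy as np

# Synthetic stand-ins for the live image I1 and the stored image I2:
# I2 is I1 circularly shifted by (5, -3) pixels, mimicking repositioning.
rng = np.random.default_rng(1)
i1 = rng.random((64, 64))
i2 = np.roll(i1, shift=(5, -3), axis=(0, 1))

# Phase correlation: the normalised cross-power spectrum peaks at the offset.
F1, F2 = np.fft.fft2(i1), np.fft.fft2(i2)
cross = F1 * np.conj(F2)
corr = np.fft.ifft2(cross / np.abs(cross)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)

# Wrap to signed offsets: (dy, dx) is the shift to apply to i2 to match i1.
dy = int(dy) - 64 if dy > 32 else int(dy)
dx = int(dx) - 64 if dx > 32 else int(dx)
```

The recovered (dy, dx) corresponds to the camera movement needed to superimpose the stored image on the live one.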
  • the arrangement can instead image an array of markers M located by tattoos on the patient in a manner similar to that of Figure 3E.
  • X-ray images are standardised, which aids comparison and also facilitates further analysis such as that illustrated by blocks 700, 800 and 900 of Figure 2.
  • Other embodiments could utilise a standard type of volumetric scanner e.g. a CT scanner or an MRI scanner rather than an X-ray camera Cx, with similar advantages.
  • MRI scanning can be enhanced by scanning only the relevant volume known from previously acquired surface data to contain the region of interest.
  • stereoscopic video camera arrangements rather than a single video camera Cv could be employed and more sophisticated registration techniques analogous to those of blocks 200 to 600 of Figure 2 could be employed.
  • stereoscopic viewing arrangements rather than a screen could be used for the registration.
  • Figure 7 illustrates a further aspect of the invention involving using e.g. an MRI scanner arrangement of Figure 1 to capture a surface representation S1 of the patient's body and a volumetric data set I1 including the surface of the patient's body, and using a further scanner of different modality, e.g. a CT scanner, to capture a different volumetric data set I2 in association with a surface representation S2.
  • the volumetric data sets I1 and I2 are registered with their respective associated surface representations S1 and S2 by appropriate transformations r1 and r2 as shown (preferably utilising the results of an earlier calibration procedure as described above) and the surface representations are then registered with each other by a transformation R1. Since the volumetric data sets I1 and I2 are each referenced to the resulting common surface, they can be registered with each other by a transformation R2 which can be simply derived from R1, r1 and r2.
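In homogeneous coordinates this derivation is a matrix product: if r1 maps volume frame I1 to surface S1, r2 maps I2 to S2, and R1 maps S1 to S2, then R2 = r2⁻¹ · R1 · r1 carries I1 directly into I2. A toy sketch (the transforms below are invented pure translations, chosen only to make the composition visible):

```python
import numpy as np

def homogeneous(R, t):
    # Pack rotation R and translation t into a 4x4 homogeneous matrix.
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Invented calibrations: r1 registers volume I1 to surface S1,
# r2 registers I2 to S2, R1 registers surface S1 to surface S2.
r1 = homogeneous(np.eye(3), [0.0, 0.0, 1.0])
r2 = homogeneous(np.eye(3), [0.0, 2.0, 0.0])
R1 = homogeneous(np.eye(3), [3.0, 0.0, 0.0])

# R2 maps volume frame I1 directly into volume frame I2.
R2 = np.linalg.inv(r2) @ R1 @ r1

p_i1 = np.array([1.0, 1.0, 1.0, 1.0])   # a point in I1 (homogeneous coords)
p_i2 = R2 @ p_i1
```

The same composition holds when the component transforms include rotations, since 4x4 homogeneous matrices compose by multiplication.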
  • Such a combination of different features is shown in Figure 8A, wherein surface representations S registered with respective volumetric data sets Va and Vb of different modality are combined to generate a new volumetric data set Vc registered with a surface representation S' which is a composite of the surface representations S.
  • Such a technique can be used to combine not only volumetric data sets of different modality but also volumetric data sets of different resolution or accuracy or different size or shape.
  • the volumetric data sets acquired by different modalities have no overlap, i.e. no information in common, but are incorporated into a common reference frame by mutually registering surface representations which are acquired optically, e.g. by a camera arrangement similar to that of Figure 1, simultaneously with the respective volumetric data sets.
  • Figure 8B shows two composite images of an organ having cross-sections X1, X3 and X5, and X2 and X4, respectively, wherein the cross-sections are acquired by MRI and the surfaces S are acquired optically. By registering the surfaces S to a composite surface S', the MRI cross-sections are mutually aligned, as shown.
  • the invention can also be applied to volumetric data sets which are acquired whilst the patient is moving.
  • an MRI scan can take 40 minutes and if the patient moves during this period the resulting scan is degraded by blurring.
  • the blurring could be alleviated by utilising the optically acquired surface data of the moving patient to distort the reference frame of the volumetric data set and reconstructing the volumetric data set with reference to the distorted reference frame.
  • This de-blurring technique is somewhat similar to the registration technique described above with reference to Figures 3A to 3D but unlike that technique, can involve a progressive distortion of the reference frame to follow movement of the patient.
  • the blurring can alternatively be alleviated by utilising a statistical or physical model of the patient to define a range of possible configurations of the patient in a mathematical space, generating volumetric data sets (in this case, artificial MRI scans) corresponding to the respective configurations allowed by the model, finding the generated volumetric data sets, corresponding to a path in the above mathematical space, whose appropriately weighted mean best matches (registers with) the actual blurred volumetric data set acquired by the scanner, and then processing the model and its associated volumetric data sets to find the volumetric data set which would have been obtained by the scanner if the patient had maintained a given configuration.
  • surface data acquired by one or more cameras can be used to aid directly the processing of volumetric scanning data to generate an accurate volumetric image of the interior of the patient.
  • a PET scanner 10' having an array of photon detectors D around its periphery and having a centrally located positron source is provided with at least one stereoscopic arrangement of digital cameras C which capture the surface S of the subject.
  • the camera array would be arranged to capture the entire 360 degree periphery of the subject and to this end could be arranged to rotate around the longitudinal axis of the scanner, for example.
  • True photon paths are shown by the full arrowed lines and include a path PT resulting from scattering at PS on the object surface.
  • in the absence of the surface data, the outputs of the relevant photon detectors would be interpreted to infer a photon path Pr, which is physically impossible because it does not pass through surface S. Accordingly such an erroneous interpretation can be avoided with the aid of the surface data acquired by the cameras C.
  • the surface data acquired by the cameras C can be used to derive volumetric information which can facilitate the processing of the output signals of the detectors D, in particular the absorption and scattering can be estimated from the amount of tissue as determined from the surface acquired by the cameras C.
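The plausibility test for an inferred photon path can be sketched as follows (hypothetical code: the patient surface is crudely modelled here as a unit sphere standing in for the captured surface S, and the detector coordinates are invented):

```python
import numpy as np

# Crude surface model from the cameras: a unit sphere standing in for
# the captured surface S of the subject.
RADIUS = 1.0

def path_is_plausible(a, b, radius=RADIUS):
    # A line of response between detector positions a and b is physically
    # possible only if it passes through the imaged surface: test the
    # segment's minimum distance from the centre against the radius.
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = b - a
    s = np.clip(-(a @ d) / (d @ d), 0.0, 1.0)
    closest = a + s * d
    return np.linalg.norm(closest) <= radius

ok = path_is_plausible([-2.0, 0.5, 0.0], [2.0, 0.5, 0.0])    # chord: keep
bad = path_is_plausible([-2.0, 1.5, 0.0], [2.0, 1.5, 0.0])   # misses: reject
```

With the true captured surface in place of the sphere, the same idea rejects detector coincidences whose line of response never enters the subject.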
  • the invention is also applicable to X-ray imaging, where a knowledge of the patient's surface can be used e.g. to locate soft tissue.
  • a statistical model of the X-ray image or volumetric data set could be used to aid the derivation of an actual X-ray image or volumetric data set from a patient. This would result in higher accuracy and/or a lower required dosage.
  • One reconstruction method which would be applicable is based on level sets, as described by J. A. Sethian, "Level Set Methods and Fast Marching Methods", Cambridge University Press, 1999, which is hereby incorporated by reference. This method would involve computation of successive layers from the surface acquired by the cameras toward the centre of the subject.
  • a further application of the registration of surface and volumetric data in accordance with the present invention lies in the construction of a frame to fit the surface of a scanned body part, the frame carrying guide means for locating the site of a medical procedure at a defined position or orientation at or beneath the surface of the body part.
  • the position or orientation can be set by utilising the volumetric data which has been registered with the surface data and by implication with the frame and its guide means.
  • the frame could be a mask that fits over the patient's face or head, or it could be arranged to be fitted to a rigid part of the leg or abdomen.
  • Mask M has an interior surface IS which matches a surface of the patient's head previously acquired by a stereoscopic camera arrangement, and is provided with a guide canal G for guiding a biopsy needle N to a defined position p in the patient's brain.
  • the orientation of the guide canal is predetermined with the aid of volumetric data registered with the surface data and acquired by a scanner in accordance with the invention.
  • the mask also carries a scale SC to enable the needle N to be advanced to a predetermined extent, until a reference mark on the needle is aligned with a predetermined graduation of the scale (also chosen on the basis of the volumetric data).
  • a stop could be used instead of the scale SC.
  • although the described embodiments relate to the enhancement of volumetric data with the aid of surface data, other medical data could be enhanced with the aid of such surface data.
  • measurements of breathing could be combined with a moving 3D surface representation of the patient while such measurements are being made and the resulting measurements could be registered with previous measurements by a method of the type illustrated in Figure 2.
  • The invention may be used to advantage in order to correlate and register interior configuration image data for a patient obtained from a CT scanner and a PET scanner.
  • CT and PET scanners are large devices, usually fixedly mounted in a dedicated room or area.
  • Stereoscopic camera arrangements such as C1 and C3 for a PET scanner can be mounted on the wall, ceiling or an overhead gantry, separate from the scanner 10 itself.
  • The cameras are shown ceiling-mounted in Figure 11. Similar camera mounting arrangements are provided for the CT scanner (not shown). Thus, there is no need to carry out modifications to the scanner itself in order to install cameras at each of the CT and PET scanner locations.
  • A CT scan of the patient is taken with the patient in a first disposition, lying on the scanner table of the CT scanner, and a 3D surface image of the patient is captured using the cameras C1, C3, i.e. in the first disposition.
  • The patient is then moved to the PET scanner and the process is repeated.
  • The image data from the PET and CT scans is then processed as previously described to bring the data into registry, so that the data can be merged and used to analyse the patient's condition.
  • The 3D image data captured by the overhead cameras C for each of the CT and PET scanners is used as previously described to bring the scanner data into registry.
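The registration step — finding the rigid transform that maps the surface captured at one scanner onto the surface captured at the other, then applying it to the associated volumetric data — can be sketched with the classical Kabsch/Procrustes least-squares solution. This assumes corresponding surface points are already available (in practice an iterative scheme such as ICP would establish correspondences); the function name is an assumption:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    point set `src` onto `dst` (N x 3 arrays with corresponding rows):
    dst ~= src @ R.T + t.  Computed between the 3D surfaces captured by
    the camera arrangements at the CT and PET scanners, such a transform
    brings the two volumetric data sets into a common frame."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

Given noise-free correspondences the recovered R and t reproduce the true motion of the patient between the two scanner tables exactly.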
  • Volumetric data captured by a scanner can be gated on the basis of surface data acquired by a camera arrangement, e.g. to ensure that volumetric data is captured only when the patient is in a defined position or configuration, or at a particular time in the patient's breathing cycle.
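At its simplest, such gating reduces to enabling acquisition only while a surface-derived respiratory signal lies inside a chosen window. The signal, the thresholds and the function name below are illustrative assumptions:

```python
def gate_acquisition(breathing_signal, lo, hi):
    """Return the indices of camera frames at which volumetric
    acquisition should be enabled: only while the surface-derived
    breathing signal (e.g. chest height extracted per frame from the
    3D surface data) lies within the window [lo, hi], i.e. at a chosen
    phase of the patient's breathing cycle."""
    return [i for i, s in enumerate(breathing_signal) if lo <= s <= hi]
```

For instance, with a per-frame chest-height signal and a mid-cycle window, only the frames inside the window trigger acquisition.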
  • 'Optical' is to be construed to cover infra-red as well as visible wavelengths.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The present invention relates to a volumetric scanner (10), such as a PET or MRI scanner, having an array of cameras (C1 to C6) arranged so as to enable a 3D surface representation (20) of a patient (P) to be obtained, said representation being associated with the same frame of reference as the internal volumetric image (30) of the patient. The 3D surface data is used to enhance the volumetric data, for example by unscrambling the image through warping of the volumetric reference image via the surface representation, optionally via a statistical model of the possible configurations of the patient. Volumetric data sets of different modalities (e.g. MRI and PET respectively) can be combined by registering the 3D surfaces captured by camera systems associated with the respective scanners.
PCT/GB2001/000389 2000-01-31 2001-01-31 Procede et appareil de traitement de donnees d'images WO2001057805A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001230372A AU2001230372A1 (en) 2000-01-31 2001-01-31 Image data processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0002181.6 2000-01-31
GB0002181A GB2358752A (en) 2000-01-31 2000-01-31 Surface or volumetric data processing method and apparatus

Publications (2)

Publication Number Publication Date
WO2001057805A2 true WO2001057805A2 (fr) 2001-08-09
WO2001057805A3 WO2001057805A3 (fr) 2002-03-21

Family

ID=9884669

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2001/000389 WO2001057805A2 (fr) 2000-01-31 2001-01-31 Procede et appareil de traitement de donnees d'images

Country Status (3)

Country Link
AU (1) AU2001230372A1 (fr)
GB (1) GB2358752A (fr)
WO (1) WO2001057805A2 (fr)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006095324A1 (fr) 2005-03-10 2006-09-14 Koninklijke Philips Electronics N.V. Systeme et procede de traitement d'image avec calage des donnees 2d sur les donnees 3d pendant des procedures chirurgicales
CN111723837A (zh) * 2019-03-20 2020-09-29 斯瑞克欧洲控股I公司 用于计算机辅助手术导航的处理患者特定图像数据的技术
CN113821652A (zh) * 2021-01-21 2021-12-21 北京沃东天骏信息技术有限公司 模型数据处理方法、装置、电子设备以及计算机可读介质

Families Citing this family (14)

Publication number Priority date Publication date Assignee Title
US7359748B1 (en) 2000-07-26 2008-04-15 Rhett Drugge Apparatus for total immersion photography
AU2005234725B2 (en) 2003-05-22 2012-02-23 Evogene Ltd. Methods of Increasing Abiotic Stress Tolerance and/or Biomass in Plants and Plants Generated Thereby
US7554007B2 (en) 2003-05-22 2009-06-30 Evogene Ltd. Methods of increasing abiotic stress tolerance and/or biomass in plants
EP2336330B1 (fr) 2004-06-14 2018-01-10 Evogene Ltd. Polynucléotides et polypeptides impliqués dans le développement de la fibre végétale et procédés permettant des les utiliser
BRPI0618965B1 (pt) 2005-10-24 2021-01-12 Evogene Ltd método para aumentar a tolerância de uma planta a uma condição de estresse abiótico, método para aumentar a biomassa, vigor e/ou rendimento de uma planta, método para aumentar a eficiência do uso de fertilizante e/ou absorção de uma planta
GB2455926B (en) * 2006-01-30 2010-09-01 Axellis Ltd Method of preparing a medical restraint
MX2009010858A (es) 2007-04-09 2009-11-02 Evogene Ltd Polinucleotidos, polipeptidos y metodos para aumentar el contenido de aceite, la velocidad de crecimiento y biomasa de las plantas.
BRPI0812742B1 (pt) 2007-07-24 2021-04-20 Evogene Ltd método de aumento da biomassa, da taxa de crescimento, da produtividade de semente, da eficiência do uso de nitrogênio, do estresse abiótico de uma planta, do comprimento de raiz, da cobertura de raiz, da taxa de crescimento da área de roseta, e da taxa de crescimento do diâmetro da roseta de uma planta
CN101959456A (zh) 2007-12-31 2011-01-26 真实成像有限公司 用于成像数据的配准的系统和方法
JP5694778B2 (ja) 2007-12-31 2015-04-01 リアル イメージング リミテッド 腫瘍の存在の可能性を決定するための方法、装置、およびシステム
EP2265163B1 (fr) 2008-03-28 2014-06-04 Real Imaging Ltd. Procedé, appareil et système d'analyse d'images
CA3148194A1 (fr) 2008-05-22 2009-11-26 Evogene Ltd. Polynucleotides et polypeptides isoles et leurs procedes d'utilisation pour augmenter le rendement vegetal, la biomasse, la vitesse de croissance, la vigueur, la teneur en huile, la tolerance au stress abiotique des plantes et l'efficacite d'utilisation de l'azote
WO2010020941A2 (fr) 2008-08-18 2010-02-25 Evogene Ltd. Polypeptides et polynucléotides isolés utiles pour augmenter l'efficacité de l'utilisation de l'azote, la tolérance au stress abiotique, le rendement et la biomasse de plantes
AU2010220157C1 (en) 2009-03-02 2015-08-06 Evogene Ltd. Isolated polynucleotides and polypeptides, and methods of using same for increasing plant yield and/or agricultural characteristics

Citations (2)

Publication number Priority date Publication date Assignee Title
US5902239A (en) * 1996-10-30 1999-05-11 U.S. Philips Corporation Image guided surgery system including a unit for transforming patient positions to image positions
US5999840A (en) * 1994-09-01 1999-12-07 Massachusetts Institute Of Technology System and method of registration of three-dimensional data sets

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
FR2733338B1 (fr) * 1995-04-18 1997-06-06 Fouilloux Jean Pierre Procede d'obtention d'un mouvement de type solide indeformable d'un ensemble de marqueurs disposes sur des elements anatomiques, notamment du corps humain
GB2330913B (en) * 1996-07-09 2001-06-06 Secr Defence Method and apparatus for imaging artefact reduction
JPH1119080A (ja) * 1997-07-08 1999-01-26 Shimadzu Corp X線ct装置

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
US5999840A (en) * 1994-09-01 1999-12-07 Massachusetts Institute Of Technology System and method of registration of three-dimensional data sets
US5902239A (en) * 1996-10-30 1999-05-11 U.S. Philips Corporation Image guided surgery system including a unit for transforming patient positions to image positions

Cited By (5)

Publication number Priority date Publication date Assignee Title
WO2006095324A1 (fr) 2005-03-10 2006-09-14 Koninklijke Philips Electronics N.V. Systeme et procede de traitement d'image avec calage des donnees 2d sur les donnees 3d pendant des procedures chirurgicales
US7912262B2 (en) 2005-03-10 2011-03-22 Koninklijke Philips Electronics N.V. Image processing system and method for registration of two-dimensional with three-dimensional volume data during interventional procedures
CN111723837A (zh) * 2019-03-20 2020-09-29 斯瑞克欧洲控股I公司 用于计算机辅助手术导航的处理患者特定图像数据的技术
CN111723837B (zh) * 2019-03-20 2024-03-12 斯瑞克欧洲控股I公司 用于计算机辅助手术导航的处理具体患者的图像数据的技术
CN113821652A (zh) * 2021-01-21 2021-12-21 北京沃东天骏信息技术有限公司 模型数据处理方法、装置、电子设备以及计算机可读介质

Also Published As

Publication number Publication date
GB0002181D0 (en) 2000-03-22
AU2001230372A1 (en) 2001-08-14
GB2358752A (en) 2001-08-01
WO2001057805A3 (fr) 2002-03-21

Similar Documents

Publication Publication Date Title
US11576645B2 (en) Systems and methods for scanning a patient in an imaging system
CN113347937B (zh) 参照系的配准
US7117026B2 (en) Physiological model based non-rigid image registration
US5531520A (en) System and method of registration of three-dimensional data sets including anatomical body data
JP5906015B2 (ja) 特徴に基づいた2次元/3次元画像のレジストレーション
US20200268251A1 (en) System and method for patient positioning
US11672505B2 (en) Correcting probe induced deformation in an ultrasound fusing imaging system
JP4495926B2 (ja) X線立体再構成処理装置、x線撮影装置、x線立体再構成処理方法及びx線立体撮影補助具
WO2001057805A2 (fr) Procede et appareil de traitement de donnees d'images
EP2452649A1 (fr) Visualisation de données anatomiques à réalité améliorée
US20130094742A1 (en) Method and system for determining an imaging direction and calibration of an imaging apparatus
US20130034203A1 (en) 2d/3d registration of a digital mouse atlas with x-ray projection images and optical camera photos
JP2003265408A (ja) 内視鏡誘導装置および方法
JP2002186603A (ja) 対象物の案内のための座標変換法
KR101767005B1 (ko) 표면정합을 이용한 영상정합방법 및 영상정합장치
TW202333631A (zh) 註冊二維影像資料組與感興趣部位的三維影像資料組的方法及導航系統
US9254106B2 (en) Method for completing a medical image data set
Richey et al. Soft tissue monitoring of the surgical field: detection and tracking of breast surface deformations
Hawkes et al. Registration methodology: introduction
CN115908121B (zh) 内窥镜配准方法及装置和标定系统
KR20160057024A (ko) 마커리스 3차원 객체추적 장치 및 그 방법
Wang et al. Towards video guidance for ultrasound, using a prior high-resolution 3D surface map of the external anatomy
JP2022094744A (ja) 被検体動き測定装置、被検体動き測定方法、プログラム、撮像システム
JP7407831B2 (ja) 介入装置追跡
KR102534981B1 (ko) 표면 영상유도 기반의 환자 위치 정렬 및 모니터링 시스템

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP