CA2425075A1 - Intra-operative image-guided neurosurgery with augmented reality visualization - Google Patents
- Publication number
- CA2425075A1 (application CA002425075A)
- Authority
- CA
- Canada
- Prior art keywords
- image
- stereoscopic
- guided surgery
- accordance
- video
- Prior art date
- Legal status
- Abandoned
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/50—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/40—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/40—ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/366—Image reproducers using viewer tracking
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/246—Calibration of cameras
Abstract
Apparatus for image-guided surgery includes medical imaging apparatus. The imaging apparatus is utilized for capturing 3-dimensional (3D) volume data of portions of a patient in reference to a coordinate system. A computer processes the volume data so as to provide a graphical representation of the data. A stereo camera assembly captures a stereoscopic video view of a scene including at least portions of the patient. A tracking system measures pose data of the stereoscopic video view in reference to the coordinate system. The computer is utilized for rendering the graphical representation and the stereoscopic video view in a blended way in conjunction with the pose data so as to provide a stereoscopic augmented image. A head-mounted video-see-through display shows the stereoscopic augmented image.
Description
INTRA-OPERATIVE IMAGE-GUIDED NEUROSURGERY WITH
AUGMENTED REALITY VISUALIZATION
Reference is hereby made to Provisional Patent Application No. 60/238,253 entitled INTRA-OPERATIVE MR-GUIDED NEUROSURGERY WITH AUGMENTED REALITY VISUALIZATION, filed October 10, 2000 in the names of Wendt et al.; and to Provisional Patent Application No. 60/279,931 entitled METHOD AND APPARATUS FOR AUGMENTED REALITY VISUALIZATION, filed March 29, 2001 in the name of Sauer, whereof the disclosures are hereby incorporated herein by reference.
The present invention relates to the field of image-guided surgery, and more particularly to MR-guided neurosurgery wherein imaging scans, such as magnetic resonance (MR) scans, are taken intra-operatively or inter-operatively.
In the practice of neurosurgery, an operating surgeon is generally required to look back and forth between the patient and a monitor displaying patient anatomical information for guidance in the operation. In this manner, a form of "mental mapping" occurs between the image information observed on the monitor and the brain.
Typically, in the case of surgery of a brain tumor, 3-dimensional (3D) volume images taken with MR (magnetic resonance) and CT (computed tomography) scanners are used for diagnosis and for surgical planning.
After opening of the skull (craniotomy), the brain, being non-rigid, shifts, and in the course of the procedure the brain will typically further deform. This brain shift makes the pre-operative 3D imaging data fit the actual brain geometry less and less accurately, so that it is significantly out of correspondence with what is confronting the surgeon during the operation.
However, there are tumors that look like and are textured like normal healthy brain matter, so that they are visually indistinguishable. Such tumors can be distinguished only by MR data, and reliable resection is generally only possible with MR data that are updated during the course of the surgery. The term "intra-operative" MR imaging usually refers to MR scans that are being taken while the actual surgery is ongoing, whereas the term "inter-operative"
MR imaging is used when the surgical procedure is halted for the acquisition of the scan and resumed afterwards.
Equipment has been developed by various companies for providing intra/inter-operative MR imaging capabilities in the operating room. For example, General Electric has built an MR scanner with a double-doughnut-shaped magnet, where the surgeon has access to the patient inside the scanner.
U.S. Patent No. 5,740,802 entitled COMPUTER GRAPHIC AND LIVE VIDEO SYSTEM FOR ENHANCING VISUALIZATION OF BODY STRUCTURES DURING SURGERY, assigned to General Electric Company, issued April 21, 1998 in the names of Nafis et al., is directed to an interactive surgery planning and display system which mixes live video of external surfaces of the patient with interactive computer-generated models of internal anatomy obtained from medical diagnostic imaging data of the patient. The computer images and the live video are coordinated and displayed to a surgeon in real time during surgery, allowing the surgeon to view internal and external structures and the relation between them simultaneously, and adjust his surgery accordingly. In an alternative embodiment, a normal anatomical model is also displayed as a guide in reconstructive surgery. Another embodiment employs three-dimensional viewing.
Work relating to ultrasound imaging is disclosed by Andrei State, Mark A. Livingston, Gentaro Hirota, William F. Garrett, Mary C. Whitton, Henry Fuchs, and Etta D. Pisano, "Technologies for Augmented Reality Systems: Realizing Ultrasound-Guided Needle Biopsies," Proceedings of SIGGRAPH '96 (New Orleans, LA, August 4-9, 1996), in Computer Graphics Proceedings, Annual Conference Series 1996, ACM SIGGRAPH, 439-446.
For inter-operative imaging, Siemens has built a combination of MR scanner and operating table where the operating table with the patient can be inserted into the scanner for MR image capture (imaging position) and be withdrawn into a position where the patient is accessible to the operating team, that is, into the operating position.
In the case of the Siemens equipment, the MR data are displayed on a computer monitor. A specialized neuroradiologist evaluates the images and discusses them with the neurosurgeon. The neurosurgeon has to understand the relevant image information and mentally map it onto the patient's brain. While such equipment provides a useful modality, this type of mental mapping is difficult and subjective and cannot preserve the complete accuracy of the information.
An object of the present invention is to generate an augmented view of the patient from the surgeon's own dynamic viewpoint and display the view to the surgeon.
The use of Augmented Reality visualization for medical applications has been proposed as early as 1992; see, for example, M. Bajura, H. Fuchs, and R. Ohbuchi, "Merging Virtual Objects with the Real World: Seeing Ultrasound Imagery within the Patient," Proceedings of SIGGRAPH '92 (Chicago, IL, July 26-31, 1992), in Computer Graphics 26, #2 (July 1992): 203-210.
As herein used, the "augmented view" generally comprises the "real" view overlaid with additional "virtual" graphics. The real view is provided as video images. The virtual graphics is derived from a 3D volume imaging system. Hence, the virtual graphics also corresponds to real anatomical structures; however, views of these structures are available only as computer graphics renderings.
The real view of the external structures and the virtual view of the internal structures are blended with an appropriate degree of transparency, which may vary over the field of view.
Registration between real and virtual views makes all structures in the augmented view appear in the correct location with respect to each other.
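By way of illustration, the blending operation can be sketched in a few lines. The following is a minimal example, not part of the patent disclosure, assuming per-pixel alpha compositing in NumPy; the function and variable names are illustrative only.

```python
import numpy as np

def blend_augmented(video_rgb, graphics_rgb, alpha):
    # video_rgb, graphics_rgb: HxWx3 uint8 frames of equal size.
    # alpha: HxW float array in [0, 1] giving the graphics opacity,
    # which may vary over the field of view as described above.
    a = alpha[..., None]  # broadcast over the color channels
    out = a * graphics_rgb.astype(np.float32) \
        + (1.0 - a) * video_rgb.astype(np.float32)
    return out.astype(np.uint8)

# Example alpha map: graphics opaque in a central disc, fading outward.
h, w = 480, 640
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(yy - h / 2, xx - w / 2)
alpha = np.clip(1.0 - (r - 100.0) / 80.0, 0.0, 1.0)
```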
In accordance with an aspect of the invention, the MR data revealing internal anatomic structures are shown in situ, overlaid on the surgeon's view of the patient. With this Augmented Reality type of visualization, the derived image of the internal anatomical structure is directly presented in the surgeon's workspace in a registered fashion.
In accordance with an aspect of the invention, the surgeon wears a head-mounted display and can examine the spatial relationship between the anatomical structures from varying positions in a natural way.
In accordance with an aspect of the invention, the need is practically eliminated for the surgeon to look back and forth between monitor and patient, and to mentally map the image information to the real brain. As a consequence, the surgeon can better focus on the surgical task at hand and perform the operation more precisely and confidently.
The invention will be more fully understood from the following detailed description of preferred embodiments, in conjunction with the Drawings, in which Figure 1 shows a system block diagram in accordance with the invention;
Figure 2 shows a flow diagram in accordance with the invention;
Figure 3 shows a head-mounted display as may be used in an embodiment of the invention;
Figure 4 shows a frame in accordance with the invention;
Figure 5 shows a boom-mounted see-through display in accordance with the invention;
Figure 6 shows a robotic arm in accordance with the invention;
Figure 7 shows a 3D camera calibration object as may be used in an embodiment of the invention; and Figure 8 shows an MR calibration object as may be used in an embodiment of the invention.
Ball-shaped MR markers and doughnut-shaped MR markers are shown.
In accordance with the principles of the present invention, the MR information is utilized in an effective and optimal manner. In an exemplary embodiment, the surgeon wears a stereo video-see-through head-mounted display. A pair of video cameras attached to the head-mounted display captures a stereoscopic view of the real scene. The video images are blended together with the computer images of the internal anatomical structures and displayed on the head-mounted stereo display in real time. To the surgeon, the internal structures appear directly superimposed on and in the patient's brain. The surgeon is free to move his or her head around to view the spatial relationship of the structures from varying positions, whereupon a computer provides the precise, objective 3D registration between the computer images of the internal structures and the video images of the real brain. This in situ or "augmented reality" visualization gives the surgeon intuitively based, direct, and precise access to the image information in regard to the surgical task of removing the patient's tumor without hurting vital regions.
In an alternate embodiment, the stereoscopic video-see-through display may not be head-mounted but be attached to an articulated mechanical arm that is, e.g., suspended from the ceiling (reference to "videoscope" in provisional filing) (include in claims).
For our purpose, a video-see-through display is understood as a display with a video camera attachment, whereby the video camera looks into substantially the same direction as the user who views the display. A stereoscopic video-see-through display combines a stereoscopic display, e.g. a pair of miniature displays, and a stereoscopic camera system, e.g. a pair of cameras.
Figure 1 shows the building blocks of an exemplary system in accordance with the invention.
A 3D imaging apparatus 2, in the present example an MR scanner, is used to capture 3D volume data of the patient. The volume data contain information about internal structures of the patient. A video-see-through head-mounted display 4 gives the surgeon a dynamic viewpoint. It comprises a pair of video cameras 6 to capture a stereoscopic view of the scene (external structures) and a pair of displays 8 to display the augmented view in a stereoscopic way.
A tracking device or apparatus 10 measures position and orientation (pose) of the pair of cameras with respect to the coordinate system in which the 3D data are described.
The computer 12 comprises a set of networked computers. One of the computer tasks is to process, with possible user interaction, the volume data and provide one or more graphical representations of the imaged structures: volume representations and/or surface representations (based on segmentation of the volume data). In this context, we understand the term graphical representation to mean a data set that is in a "graphical" format (e.g. VRML format), ready to be efficiently visualized, respectively rendered into an image. The user can selectively enhance structures, color or annotate them, pick out relevant ones, include graphical objects as guides for the surgical procedure, and so forth.
This pre-processing can be done "off line", in preparation of the actual image guidance.
Another computer task is to render, in real time, the augmented stereo view to provide the image guidance for the surgeon. For that purpose, the computer receives the video images and the camera pose information, and makes use of the pre-processed 3D data, i.e. the stored graphical representation. If the video images are not already in digital form, the computer digitizes them. Views of the 3D data are rendered according to the camera pose and blended with the corresponding video images. The augmented images are then output to the stereo display. An optional recording means 14 allows one to record the augmented view for documentation and training. The recording means can be a digital storage device, or it can be a video recorder, if necessary combined with a scan converter.
A general user interface 16 allows one to control the system in general, and in particular to interactively select the 3D data and pre-process them.
A real-time user interface 18 allows the user to control the system during its real-time operation, i.e. during the real-time display of the augmented view. It allows the user to interactively change the augmented view, e.g. invoke an optical or digital zoom, switch between different degrees of transparency for the blending of real and virtual graphics, or show or turn off different graphical structures. A possible hands-free embodiment would be a voice-controlled user interface.
An optional remote user interface 20 allows an additional user to see and interact with the augmented view during the system's realtime operation as described later in this document.
For registration, a common frame of reference is defined, that is, a common coordinate system, to be able to relate the 3D data and the 2D video images, with the respective pose and pre-determined internal parameters of the video cameras, to this common coordinate system.
The common coordinate system is most conveniently one in regard to which the patient's head does not move. The patient's head is fixed in a clamp during surgery and intermittent 3D imaging. Markers rigidly attached to this head clamp can serve as landmarks to define and locate the common coordinate system.
Figure 4 shows as an example a photo of a head clamp 4-2 with an attached frame of markers 4-4. The individual markers are retro-reflective discs 4-6, made from 3M's Scotchlite 8710 Silver Transfer Film. A preferred embodiment of the marker set is in the form of a bridge, as seen in the photo. See Figure 7.
The markers should be visible in the volume data or should have at least a known geometric relationship to other markers that are visible in the volume data. If necessary, this relationship can be determined in an initial calibration step. Then the volume data can be measured with regard to the common coordinate system, or the volume data can be transformed into this common coordinate system.
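As an illustration of how a set of landmarks can define such a coordinate system, the following sketch (not from the patent; the names and the three-marker construction are assumptions) builds an orthonormal frame from three non-collinear marker positions, e.g. markers on the head clamp.

```python
import numpy as np

def frame_from_markers(p0, p1, p2):
    # p0, p1, p2: (3,) marker positions, non-collinear.
    # p0 becomes the origin; p0 -> p1 fixes the x axis; the three
    # markers span the x-y plane.  Returns a 4x4 matrix mapping
    # clamp-frame coordinates into the measurement frame.
    x = (p1 - p0) / np.linalg.norm(p1 - p0)
    z = np.cross(x, p2 - p0)
    z /= np.linalg.norm(z)
    y = np.cross(z, x)  # completes a right-handed orthonormal frame
    T = np.eye(4)
    T[:3, 0], T[:3, 1], T[:3, 2], T[:3, 3] = x, y, z, p0
    return T
```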
The calibration procedures follow in more detail. For correct registration between graphics and patient, the system needs to be calibrated. One needs to determine the transformation that maps the medical data onto the patient, and one needs to determine the internal parameters and relative poses of the video cameras to show the mapping correctly in the augmented view.
Camera calibration and camera-patient transformation: Fig. 7 shows a photo of an example of a calibration object that has been used for the calibration of a camera triplet consisting of a stereo pair of video cameras and an attached tracker camera.
The markers 7-2 are retro-reflective discs. The 3D coordinates of the markers were measured with a commercial Optotrak® system. Then one can measure the 2D coordinates of the markers in the images, and calibrate the cameras based on 3D-2D point correspondences, for example with Tsai's algorithm as described in Roger Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, August 1987, pages 323-344. For real-time tracking, one rigidly attaches a set of markers with known 3D coordinates to the patient (respectively a head clamp), defining the patient coordinate system. For more detailed information, refer to F. Sauer et al., "Augmented Workspace: Designing an AR Testbed," IEEE and ACM Int. Symp. on Augmented Reality - ISAR 2000 (Munich, Germany, October 5-6, 2000), pages 47-53.
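A minimal sketch of the pose-from-correspondences step is given below. It uses OpenCV's solvePnP as a modern stand-in for Tsai's algorithm and assumes the camera intrinsics are already known from a prior calibration; the names are illustrative, not from the patent.

```python
import numpy as np
import cv2  # solvePnP stands in here for Tsai's original algorithm

def camera_pose_from_markers(pts3d, pts2d, K, dist):
    # pts3d: (N, 3) marker coordinates in the common coordinate
    # system (e.g. measured with an Optotrak); pts2d: (N, 2) pixel
    # coordinates of the same markers in the camera image; K, dist:
    # intrinsic matrix and distortion coefficients from a prior
    # calibration.  N >= 6 well-distributed points are assumed.
    ok, rvec, tvec = cv2.solvePnP(
        pts3d.astype(np.float32), pts2d.astype(np.float32), K, dist)
    R, _ = cv2.Rodrigues(rvec)  # rotation: world -> camera
    return R, tvec.ravel()      # pose of the world in camera coordinates
```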
MR data - patient transformation, for the example of the Siemens inter-operative MR imaging arrangement: The patient's bed can be placed in the magnet's fringe field for the surgical procedure or swiveled into the magnet for MR scanning. The bed with the head clamp, and therefore also the patient's head, are reproducibly positioned in the magnet with a specified accuracy of about 1 mm. One can pre-determine the transformation between the MR volume set and the head clamp with a phantom and then re-apply the same transformation when mapping the MR data to the patient's head, with the head clamp still in the same position.
Fig. 8 shows an example of a phantom that can be used for pre-determining the transformation. It consists of two sets of markers visible in the MR data set and a set of optical markers visible to the tracker camera. One type of MR marker is ball-shaped 8-2 and can, e.g., be obtained from Brainlab, Inc. The other type of MR marker 8-4 is doughnut-shaped, e.g. Multi-Modality Radiographics Markers from IZI Medical Products, Inc. In principle, only a single set of at least three MR markers is necessary. The disc-shaped retro-reflective optical markers 8-6 can be punched out from 3M's Scotchlite 8710 Silver Transfer Film. One tracks the optical markers and, with the knowledge of the phantom's geometry, determines the 3D locations of the MR markers in the patient coordinate system. One also determines the 3D locations of the MR markers in the MR data set, and calculates the transformation between the two coordinate systems based on the 3D-3D point correspondences.
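The 3D-3D step can be sketched as a least-squares rigid registration. The following minimal example (not from the patent) uses the SVD-based Kabsch method, with illustrative names.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    # Least-squares rigid transform (R, t) such that dst ~ R @ src + t.
    # src: (N, 3) MR-marker locations in the MR data set; dst: (N, 3)
    # the same markers in the patient coordinate system; N >= 3,
    # non-collinear points.
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:             # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t
```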
The pose (position and orientation) of the video cameras is then measured in reference to the common coordinate system. This is the task of the tracking means. In a preferred implementation, optical tracking is used due to its superior accuracy. A preferred implementation of optical tracking comprises rigidly attaching an additional video camera to the stereo pair of video cameras that provide the stereo view of the scene. This tracker video camera points in substantially the same direction as the other two video cameras. When the surgeon looks at the patient, the tracker video camera can see the aforementioned markers that locate the common coordinate system, and from the 2D locations of the markers in the tracker camera's image one can calculate the tracker camera's pose. As the video cameras are rigidly attached to each other, the poses of the other two cameras can be calculated from the tracker camera's pose, the relative camera poses having been determined in a prior calibration step. Such camera calibration is preferably based on 3D-2D point correspondences and is described, for example, in Roger Y. Tsai, "A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses", IEEE Journal of Robotics and Automation, Vol. RA-3, No. 4, August 1987, pages 323-344.
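Chaining the tracker camera's measured pose with the fixed relative poses can be written compactly with 4x4 homogeneous matrices; the sketch below is illustrative, not taken from the patent.

```python
import numpy as np

def to_homogeneous(R, t):
    # Pack a 3x3 rotation and a length-3 translation into a 4x4 matrix.
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def scene_camera_pose(T_world_tracker, T_tracker_scene):
    # T_world_tracker: world -> tracker-camera transform, computed
    # from the marker observations in each frame.
    # T_tracker_scene: fixed tracker-camera -> scene-camera transform,
    # known from the one-time calibration of the rigid camera triplet.
    # Their product maps world coordinates into the scene camera.
    return T_tracker_scene @ T_world_tracker
```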
Figure 2 shows a flow diagram of the system when it operates in real-time mode, i.e. when it is displaying the augmented view in real time. The computing means 2-2 receives input from tracking systems, which are here separated into tracker camera (understood to be a head-mounted tracker camera) 2-4 and external tracking systems 2-6. The computing means performs pose calculations 2-8 based on this input and prior calibration data. The computing means also receives as input the real-time video of the scene cameras 2-10 and has available the stored data for the 3D graphics 2-12. In its graphics subsystem 2-14, the computing means renders graphics and video into a composite augmented view, according to the pose information. Via the user interface 2-16, the user can select between different augmentation modes (e.g. the user can vary the transparency of the virtual structures or select a digital zoom for the rendering process). The display 2-18 displays the rendered augmented view to the user.
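The per-frame flow can be summarized as a short loop. In this sketch all six callables are hypothetical interfaces chosen for illustration, not interfaces defined by the patent.

```python
def augmented_view_loop(get_pose, grab_video, render_graphics,
                        blend, show, select_mode):
    # Each argument is a caller-supplied function: get_pose() returns
    # the camera poses from tracker input and prior calibration (2-8);
    # grab_video() returns the stereo frame pair from the scene
    # cameras (2-10); render_graphics(pose, mode) renders the stored
    # 3D data (2-12, 2-14); blend(...) composites video and graphics;
    # show(...) drives the stereo display (2-18); select_mode() reads
    # the user-interface state, e.g. transparency or zoom (2-16).
    while True:
        mode = select_mode()
        pose = get_pose()
        video = grab_video()
        graphics = render_graphics(pose, mode)
        show(blend(video, graphics, mode))
```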
To allow for a comfortable and relaxed posture of the surgeon during the use of the system, the two video cameras that provide the stereo view of the scene point downward at an angle, whereby the surgeon can work on the patient without having to bend the head down into an uncomfortable position. See the pending patent application Ser. No. , entitled AUGMENTED REALITY VISUALIZATION DEVICE, filed September 17, 2001, Express Mail Label No. EL727968622US, in the names of Sauer and Bani-Hashemi, Attorney Docket No. 2001P14757US.
Figure 3 shows a photo of a stereoscopic video-see-through head-mounted display. It includes the stereoscopic display 3-2 and a pair of downward-tilted video cameras 3-4 for capturing the scene (scene cameras). Furthermore, it includes a tracker camera
3-6 and an infrared illuminator in the form of a ring of infrared LEDs 3-8.
In another embodiment, the augmented view is recorded for documentation and/or for subsequent use in applications such as training.
It is contemplated that the augmented view can be provided for pre-operative planning for surgery.
In another embodiment, interactive annotation of the augmented view is provided to permit communication between a user of the head-mounted display and an observer or associate who watches the augmented view on a monitor, stereo monitor, or another head-mounted display, so that the augmented view provided to the surgeon can be shared; for example, it can be observed by a neuroradiologist. The neuroradiologist can then point out, such as by way of an interface to the computer (mouse, 3D mouse, trackball, etc.), certain features to the surgeon by adding extra graphics to the augmented view or highlighting existing graphics that is being displayed as part of the augmented view.
Figure 5 shows a diagram of a boom-mounted video-see-through display. The video-see-through display comprises a display and a video camera, respectively a stereo display and a stereo pair of video cameras. In the example, the video-see-through display 52 is suspended from a ceiling 50 by a boom 54. For tracking, tracking means 56 are attached to the video-see-through display, more specifically to the video cameras, as it is their pose that needs to be determined for rendering a correctly registered augmented view. Tracking means can include a tracking camera that works in conjunction with active or passive optical markers that are placed in the scene. Alternatively, tracking means can include passive or active optical markers that work in conjunction with an external tracker camera. Also, different kinds of tracking systems can be employed, such as magnetic tracking, inertial tracking, ultrasonic tracking, etc. Mechanical tracking is possible by fitting the joints of the boom with encoders. However, optical tracking is preferred because of its accuracy.
Figure 6 shows elements of a system that employs a robotic arm 62, attached to a ceiling 60. The system includes a video camera, respectively a stereo pair of video cameras 64. On a remote display and control station 66, the user sees an augmented video and controls the robot. The robot includes tools, e.g. a drill, that the user can position and activate remotely. Tracking means 68 enable the system to render an accurately augmented video view and to position the instruments correctly. Embodiments of the tracking means are the same as in the description of Figure 5.
In an embodiment exhibiting remote use capability, a robot carries the scene cameras. The tracking camera may then no longer be required, as robot arms can be mechanically tracked. However, in order to establish the relationship between the robot and patient coordinate systems, the tracking camera can still be useful.
The user, situated in a remote location, can move the robot "head" around by remote control to gain appropriate views, and look at the augmented views on a head-mounted display or other stereo viewing display or external monitor, preferably in stereo, to diagnose and consult. The remote user may also be able to perform actual surgery via remote control of the robot, with or without help of personnel present at the patient site.
In another embodiment in accordance with the invention, a video-see-through head-mounted display has downward-looking scene camera/cameras. The scene cameras are video cameras that provide a view of the scene, mono or stereo, allowing a comfortable work position. The downward angle of the camera/cameras is such that, in the preferred work posture, the head does not have to be tilted up or down to any substantial degree.
In another embodiment in accordance with the invention, a video-see-through display comprises an integrated tracker camera, whereby the tracker camera is forward-looking or is looking into substantially the same direction as the scene cameras, tracking landmarks that are positioned on or around the object of interest. The tracker camera can have a larger field of view than the scene cameras, and can work in a limited wavelength range (for example, the infrared wavelength range). See the afore-mentioned pending patent application Ser. No. , entitled AUGMENTED REALITY VISUALIZATION DEVICE, filed September 17, 2001, Express Mail Label No. EL727968622US, in the names of Sauer and Bani-Hashemi, Attorney Docket No. 2001P14757US, hereby incorporated herein by reference.
In accordance with another embodiment of the invention wherein retroreflective markers are used, a light source for illumination is placed close to or around the tracker camera lens. The wavelength of the light source is adapted to the wavelength range for which the tracker camera is sensitive. Alternatively, active markers, for example small light sources such as LEDs, can be utilized as markers.
Tracking systems with large cameras that work with retroreflective markers or active markers are commercially available.
In accordance with another embodiment of the invention, a video-see-through display includes a digital zoom feature. The user can zoom in to see a magnified augmented view, interacting with the computer by voice or other interface, or telling an assistant to interact with the computer via keyboard or mouse or other interface.
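A digital zoom of this kind amounts to a center crop followed by an upscale. The following minimal sketch (illustrative only, using OpenCV for the resize) shows one way it could be realized.

```python
import cv2

def digital_zoom(frame, factor):
    # Magnify the center of a video frame by factor >= 1 while
    # keeping the output resolution unchanged.
    h, w = frame.shape[:2]
    ch, cw = int(h / factor), int(w / factor)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
```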
It will be apparent that the present inventions provide certain useful characteristics and features in comparison with prior systems. For example, in reference to the system disclosed in the afore-mentioned U.S. Patent No. 5,740,802, video cameras are attached to the head-mounted display in accordance with the present invention, thereby exhibiting a dynamic viewpoint, in contrast with prior systems which provide a viewpoint, implicitly static or quasi-static, which is only "substantially" the same as the surgeon's viewpoint.
In contrast with a system which merely displays a live video of external surfaces of a patient and an augmented view to allow a surgeon to locate internal structures relative to visible external surfaces, the present invention makes it unnecessary for the surgeon to look at an augmented view, then determine the relative positions of external and internal structures, and thereafter orient himself based on the external structures, drawing upon his memory of the relative position of the internal structures.
The use of a "video-see-through" head-mounted display in accordance with the present invention provides an augmented view in a more direct and intuitive way, without the need for the user to look back and forth between monitor and patient. This also results in better spatial perception because of kinetic (parallax) depth cues, and there is no need for the physician to orient himself with respect to surface landmarks, since he is directly guided by the augmented view.
In such a prior art system, mixing is performed in the video domain: the graphics is converted into video format and then mixed with the live video, such that the mixer arrangement creates a composite image with a movable window, which is a region in the composite image that shows predominantly the video image or the computer image. In contrast, an embodiment in accordance with the present invention does not require a movable window; however, such a movable window may be helpful in certain kinds of augmented views. In accordance with a principle of the present invention, a composite image is created in the computer graphics domain, whereby the live video is converted into a digital representation in the computer and therein blended together with the graphics.
Furthemuore, in such a prior art system, internal structures are segmented and visualized as surface models; in accordance with the present invention, 3D images can be shown in surface or in volume representations.
The present invention has been described by way of exemplary embodiments. It will be understood by one of sleill in the art to which it pertains that various changes, substitutions and the like n gay be made without departing from the spirit of the invention.
Such changes are contemplated to be within the scope of the clams following.
In another embodiment, the augmented view is recorded for documentation and/or for subsequent use in applications such as training.
It is contemplated that the augmented view can be provided for pre-operative planning for surgery.
In another embodiment, interactive annotation of the augmented view is provided to permit communication between a user of the head-mounted display and an observer or associate who watches the augmented view on a monitor, stereo monitor, or another head-mounted display, so that the augmented view provided to the surgeon can be shared; for example, it can be observed by a neuroradiologist. The neuroradiologist can then point out certain features to the surgeon, such as by way of an interface to the computer (mouse, 3D mouse, trackball, etc.), by adding extra graphics to the augmented view or by highlighting existing graphics that is being displayed as part of the augmented view.
Figure 5 shows a diagram of a boom-mounted video-see-through display. The video-see-through display comprises a display and a video camera, respectively a stereo display and a stereo pair of video cameras. In the example, the video-see-through display 52 is suspended from a ceiling 50 by a boom 54. For tracking, tracking means 56 are attached to the video-see-through display, more specifically to the video cameras, as it is their pose that needs to be determined for rendering a correctly registered augmented view. Tracking means can include a tracking camera that works in conjunction with active or passive optical markers that are placed in the scene. Alternatively, tracking means can include passive or active optical markers that work in conjunction with an external tracker camera. Also, different kinds of tracking systems can be employed, such as magnetic tracking, inertial tracking, ultrasonic tracking, etc. Mechanical tracking is possible by fitting the joints of the boom with encoders. However, optical tracking is preferred because of its accuracy.
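For illustration, mechanical tracking of a boom reduces to forward kinematics: each encoder reports a joint angle, and chaining the per-joint transforms yields the camera pose. The minimal Python sketch below assumes a hypothetical three-revolute-joint boom with invented link lengths; it is not taken from the patent.

```python
# Minimal sketch: camera pose of a boom-mounted display from joint encoders
# via forward kinematics. The joint layout and link lengths are illustrative
# assumptions, not values from the patent.
import numpy as np

def rot_z(theta: float) -> np.ndarray:
    """4x4 homogeneous rotation about the z-axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def trans(x: float, y: float, z: float) -> np.ndarray:
    """4x4 homogeneous translation."""
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

def camera_pose_from_encoders(joint_angles, link_lengths) -> np.ndarray:
    """Chain per-joint transforms from the ceiling mount out to the camera."""
    pose = np.eye(4)
    for theta, length in zip(joint_angles, link_lengths):
        pose = pose @ rot_z(theta) @ trans(length, 0.0, 0.0)
    return pose  # camera-to-ceiling transform

# Example: encoder readings (radians) and link lengths (metres).
print(camera_pose_from_encoders([0.10, -0.40, 0.25], [0.5, 0.5, 0.3]))
```

The accuracy of such a kinematic chain is limited by encoder resolution and boom flexing, which is consistent with the stated preference for optical tracking.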
Figure 6 shows elements of a system that employs a robotic arm 62, attached to a ceiling 60.
The system includes a video camera, respectively a stereo pair of video cameras, 64. On a remote display and control station 66, the user sees an augmented video and controls the robot. The robot includes tools, e.g. a drill, that the user can position and activate remotely.
Tracking means 68 enable the system to render an accurately augmented video view and to position the instruments correctly. Embodiments of the tracking means are the same as in the description of Figure 5.
In an embodiment exhibiting remote use capability, a robot carries the scene cameras. The tracking camera may then no longer be required, as robot arms can be mechanically tracked.
However, in order to establish the relationship between the robot and patient coordinate systems, the tracking camera can still be useful.
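As a sketch of that registration step (an illustration under assumed names and transform conventions, not the patent's procedure), the patient-to-robot transform can be composed from the mechanically known camera-in-robot pose and the optically measured patient-in-camera pose:

```python
# Minimal sketch: composing the patient-to-robot transform. All matrices are
# 4x4 homogeneous transforms; the example values are placeholders.
import numpy as np

def register_patient_to_robot(T_robot_camera: np.ndarray,
                              T_camera_patient: np.ndarray) -> np.ndarray:
    """Express the patient coordinate frame in the robot frame."""
    return T_robot_camera @ T_camera_patient

# T_robot_camera would come from the robot's joint encoders plus the fixed
# camera mount; T_camera_patient from marker-based optical tracking.
T_rc = np.eye(4); T_rc[:3, 3] = [0.2, 0.0, 0.5]   # placeholder camera pose
T_cp = np.eye(4); T_cp[:3, 3] = [0.0, 0.1, 0.4]   # placeholder patient pose
print(register_patient_to_robot(T_rc, T_cp))
```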
The user, situated in a remote location, can move the robot "head" around by remote control to gain appropriate views, and look at the augmented views on a head-mounted display or other stereo viewing display or external monitor, preferably in stereo, to diagnose and consult. The remote user may also be able to perform actual surgery via remote control of the robot, with or without the help of personnel present at the patient site.
In another embodiment in accordance with the invention, a video-see-through head-mounted display has downward-looking scene camera/cameras. The scene cameras are video cameras that provide a view of the scene, mono or stereo, allowing a comfortable work position. The downward angle of the camera/cameras is such that, in the preferred work posture, the head does not have to be tilted up or down to any substantial degree.
In another embodiment in accordance with the invention, a video-see-through display comprises an integrated tracker camera, whereby the tracker camera is forward looking or is looking in substantially the same direction as the scene cameras, tracking landmarks that are positioned on or around the object of interest. The tracker camera can have a larger field of view than the scene cameras, and can work in a limited wavelength range (for example, the infrared wavelength range). See the afore-mentioned pending patent application Ser. No. entitled AUGMENTED REALITY VISUALIZATION DEVICE, filed September 17, 2001, Express Mail Label No. EL727968622US, in the names of Sauer and Bani-Hashemi, Attorney Docket No. 2001P14757US, hereby incorporated herein by reference.
In accordance with another embodiment of the invention wherein retroreflective markers are used, a light source for illumination is placed close to or around the tracker camera lens. The wavelength of the light source is adapted to the wavelength range for which the tracker camera is sensitive. Alternatively, active markers, for example small light sources such as LEDs, can be utilized as markers.
Tracking systems with large cameras that work with retroreflective markers or active markers are commercially available.
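For a concrete picture of what such a tracker computes, the sketch below recovers camera pose from known 3D marker positions and their detected 2D image centroids. The marker layout, pixel coordinates, and camera intrinsics are invented for the example, and OpenCV's solvePnP merely stands in for whatever solver a commercial tracking system uses.

```python
# Minimal sketch of marker-based optical tracking: given the known 3D marker
# positions in the patient/world frame and their detected 2D centroids in the
# tracker camera image, recover the camera pose. All values are illustrative.
import numpy as np
import cv2

# Known marker positions in the patient frame (metres); assumed layout.
object_points = np.array([[0.0, 0.0, 0.00],
                          [0.1, 0.0, 0.00],
                          [0.0, 0.1, 0.00],
                          [0.1, 0.1, 0.02]], dtype=np.float64)

# Detected marker centroids in the tracker camera image (pixels); assumed.
image_points = np.array([[320.0, 240.0],
                         [420.0, 238.0],
                         [322.0, 140.0],
                         [421.0, 139.0]], dtype=np.float64)

# Assumed pinhole intrinsics of the tracker camera.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
R, _ = cv2.Rodrigues(rvec)  # world-to-camera rotation matrix
print("pose found:", ok, "\nR =\n", R, "\nt =", tvec.ravel())
```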
In accordance with another embodiment of the invention, a video-see-through display includes a digital zoom feature. The user can zoom in to see a magnified augmented view, interacting with the computer by voice or another interface, or telling an assistant to interact with the computer via keyboard, mouse, or another interface.
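A digital zoom of this kind can be as simple as cropping the centre of the rendered augmented frame and rescaling it to the display resolution. The sketch below is a minimal illustration; the function name and the fixed centre crop are assumptions.

```python
# Minimal sketch of a digital zoom: magnify the centre of the augmented frame.
import cv2
import numpy as np

def digital_zoom(frame: np.ndarray, factor: float) -> np.ndarray:
    """Magnify the centre of `frame` by `factor` (>= 1.0)."""
    h, w = frame.shape[:2]
    ch, cw = int(h / factor), int(w / factor)   # size of the centre crop
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)

# e.g. zoomed = digital_zoom(augmented_frame, 2.0) on a voice command.
```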
It will be apparent that the present invention provides certain useful characteristics and features in comparison with prior systems. For example, in reference to the system disclosed in the afore-mentioned U.S. Patent No. 5,740,802, video cameras are attached to the head-mounted display in accordance with the present invention, thereby exhibiting a dynamic viewpoint, in contrast with prior systems which provide a viewpoint, implicitly static or quasi-static, which is only "substantially" the same as the surgeon's viewpoint.
In contrast with a system which merely displays a live video of external surfaces of a patient and an augmented view to allow a surgeon to locate internal structures relative to visible external surfaces, the present invention makes it unnecessary for the surgeon to look at an augmented view, then determine the relative positions of external and internal structures, and thereafter orient himself based on the external structures, drawing upon his memory of the relative position of the internal structures.
The use of a "video-see-through" head-mounted display in accordance with the present invention provides an augmented view in a more direct and intuitive way, without the need for the user to look back and forth between monitor and patient. This also results in better spatial perception because of kinetic (parallax) depth cues, and there is no need for the physician to orient himself with respect to surface landmarks, since he is directly guided by the augmented view.
In such a prior art system, mixing is performed in the video domain, wherein the graphics is converted into video format and then mixed with the live video, such that the mixer arrangement creates a composite image with a movable window, which is a region of the composite image that shows predominantly either the video image or the computer image. In contrast, an embodiment in accordance with the present invention does not require a movable window; however, such a movable window may be helpful in certain kinds of augmented views. In accordance with a principle of the present invention, a composite image is created in the computer graphics domain, whereby the live video is converted into a digital representation in the computer and therein blended together with the graphics.
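A minimal sketch of such graphics-domain compositing follows: the digitized live video frame is the background, and the rendered graphics, carrying an alpha channel, are blended over it per pixel. Array shapes and the alpha convention are assumptions for the example.

```python
# Minimal sketch of compositing in the graphics domain: alpha-blend rendered
# graphics (RGBA) over a digitized live video frame (RGB).
import numpy as np

def blend_video_and_graphics(video_rgb: np.ndarray,
                             graphics_rgba: np.ndarray) -> np.ndarray:
    """Per-pixel alpha blend of graphics over a digitized video frame."""
    alpha = graphics_rgba[..., 3:4].astype(np.float32) / 255.0
    video = video_rgb.astype(np.float32)
    graphics = graphics_rgba[..., :3].astype(np.float32)
    out = alpha * graphics + (1.0 - alpha) * video   # per-pixel blend
    return out.astype(np.uint8)
```

One such composite would be produced per eye to form the stereoscopic augmented image.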
Furthermore, in such a prior art system, internal structures are segmented and visualized as surface models; in accordance with the present invention, 3D images can be shown in surface or in volume representations.
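The distinction can be made concrete with a toy renderer: a surface representation reduces the volume to a thresholded shell, while a volume representation integrates opacity along each viewing ray. The sketch below marches rays along one axis of the array for brevity; the linear transfer function is an illustrative assumption.

```python
# Minimal sketch contrasting volume and surface representations of 3D data.
import numpy as np

def render_volume(volume: np.ndarray, opacity_scale: float = 0.05) -> np.ndarray:
    """Front-to-back emission-absorption compositing along axis 0."""
    intensity = volume.astype(np.float32) / float(volume.max())
    out = np.zeros(volume.shape[1:], dtype=np.float32)
    transmittance = np.ones_like(out)
    for slab in intensity:                      # march rays front to back
        alpha = np.clip(slab * opacity_scale, 0.0, 1.0)
        out += transmittance * alpha * slab     # accumulate emission
        transmittance *= (1.0 - alpha)          # attenuate each ray
    return out

def render_surface(volume: np.ndarray, iso: float) -> np.ndarray:
    """Depth of the first voxel above the iso-threshold (a crude shell)."""
    return np.argmax(volume > iso, axis=0).astype(np.float32)

# e.g. proj = render_volume(vol); shell = render_surface(vol, iso=0.5)
# where `vol` is a hypothetical 3D intensity array from MRI or CT.
```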
The present invention has been described by way of exemplary embodiments. It will be understood by one of skill in the art to which it pertains that various changes, substitutions and the like may be made without departing from the spirit of the invention. Such changes are contemplated to be within the scope of the claims following.
Claims (50)
1. A method for image-guided surgery comprising:
capturing 3-dimensional (3D) volume data of at least a portion of a patient;
processing said volume data so as to provide a graphical representation of said data;
capturing a stereoscopic video view of a scene including said at least a portion of said patient;
rendering said graphical representation and said stereoscopic video view in a blended manner so as to provide a stereoscopic augmented image; and displaying said stereoscopic augmented image in a video-see-through display.
2. A method for image-guided surgery comprising:
capturing 3-dimensional (3D) volume data of at least a portion of a patient in reference to a coordinate system;
processing said volume data so as to provide a graphical representation of said data;
capturing a stereoscopic video view of a scene including said at least a portion of said patient;
measuring pose data of said stereoscopic video view in reference to said coordinate system;
rendering said graphical representation and said stereoscopic video view in a blended manner in conjunction with said pose data so as to provide a stereoscopic augmented image;
and displaying said stereoscopic augmented image in a video-see-through display.
3. A method for image-guided surgery in accordance with claim 1, wherein said step of capturing 3-dimensional (3D) volume data comprises obtaining magnetic-resonance imaging data.
4. A method for image-guided surgery in accordance with claim 1, wherein said step of processing said volume data comprises processing said data in a programmable computer.
5. A method for image-guided surgery in accordance with claim 1, wherein said step of capturing a stereoscopic video view comprises capturing a stereoscopic view by a pair of stereo cameras.
6. A method for image-guided surgery in accordance with claim 2, wherein said step of measuring pose data comprises measuring position and orientation of said pair of stereo cameras by way of a tracking device.
7. A method for image-guided surgery in accordance with claim 1, wherein said step of rendering said graphical representation and said stereoscopic video view in a blended manner in conjunction with said pose data comprises utilizing video images, and where necessary, digitizing said video images, said camera pose information, and stored volume data captured in a previous step for providing said stereoscopic augmented image.
8. A method for image-guided surgery in accordance with claim 1, wherein said step of displaying said stereoscopic augmented image in a video-see-through display comprises displaying said stereoscopic augmented image in a head-mounted video-see-through display.
9. Apparatus for image-guided surgery comprising:
means for capturing 3-dimensional (3D) volume data of at least a portion of a patient;
means for processing said volume data so as to provide a graphical representation of said data;
means for capturing a stereoscopic video view of a scene including said at least a portion of said patient;
means for rendering said graphical representation and said stereoscopic video view in a blended manner so as to provide a stereoscopic augmented image; and means for displaying said stereoscopic augmented image in a video-see-through display.
10. Apparatus for image-guided surgery comprising:
means for capturing 3-dimensional (3D) volume data of at least a portion of a patient in reference to a coordinate system;
means for processing said volume data so as to provide a graphical representation of said data;
means for capturing a stereoscopic video view of a scene including said at least a portion of said patient;
means for measuring pose data of said stereoscopic video view in reference to said coordinate system;
means for rendering said graphical representation and said stereoscopic video view in a blended manner in conjunction with said pose data so as to provide a stereoscopic augmented image; and means for displaying said stereoscopic augmented image in a video-see-through display.
11. Apparatus for image-guided surgery in accordance with claim 9, wherein said means for capturing 3-dimensional (3D) volume data comprises means for obtaining magnetic-resonance imaging data.
12. Apparatus for image-guided surgery in accordance with claim 9, wherein said means for processing said volume data comprises means for processing said data in a programmable computer.
13. Apparatus for image-guided surgery in accordance with claim 9, wherein said means for capturing a stereoscopic video view comprises means for capturing a stereoscopic view by a pair of stereo cameras.
14. Apparatus for image-guided surgery in accordance with claim 9, wherein said means for measuring pose data comprises means for measuring position and orientation of said pair of stereo cameras by way of a tracking device.
15. Apparatus for image-guided surgery in accordance with claim 9, wherein said means for rendering said graphical representation and said stereoscopic video view in a blended manner in conjunction with said pose data comprises means for utilizing video images, and where necessary, digitizing said video images, said camera pose information, and stored, previously captured volume data for providing said stereoscopic augmented image.
16. Apparatus for image-guided surgery in accordance with claim 9, wherein said means for displaying said stereoscopic augmented image in a video-see-through display comprises a head-mounted video-see-through display.
17. Apparatus for image-guided surgery in accordance with claim 9, including a set of markers in predetermined relationship to said patient for defining said coordinate system.
18. Apparatus for image-guided surgery in accordance with claim 17, wherein said markers are identifiable in said volume data.
19. Apparatus for image-guided surgery in accordance with claim 18, wherein said means for displaying said stereoscopic augmented image in a video-see-through display comprises a boom-mounted video-see-through display.
20. Apparatus for image-guided surgery comprising:
medical imaging apparatus, said imaging apparatus being utilized for capturing 3-dimensional (3D) volume data of at least patient portions in reference to a coordinate system;
a computer for processing said volume data so as to provide a graphical representation of said data;
a stereo camera assembly for capturing a stereoscopic video view of a scene including said at least patient portions;
a tracking system for measuring pose data of said stereoscopic video view in reference to said coordinate system;
said computer being utilized for rendering said graphical representation and said stereoscopic video view in a blended way in conjunction with said pose data so as to provide a stereoscopic augmented image; and a head-mounted video-see-through display for displaying said stereoscopic augmented image.
21. Apparatus for image-guided surgery in accordance with claim 20, wherein said medical imaging apparatus is one of X-ray computed tomography apparatus, magnetic resonance imaging apparatus, and 3D ultrasound imaging apparatus.
22. Apparatus for image-guided surgery in accordance with claim 20, wherein said coordinate system is defined in relation to said patient.
23. Apparatus for image-guided surgery in accordance with claim 22, including markers in predetermined relationship to said patient.
24. Apparatus for image-guided surgery in accordance with claim 23, wherein said markers are identifiable in said volume data.
25. Apparatus for image-guided surgery in accordance with claim 20, wherein said computer comprises a set of networked computers.
26. Apparatus for image-guided surgery in accordance with claim 25, wherein said computer processes said volume data with optional user interaction, and provides at least one graphical representation of said patient portions; said graphical representation comprising at least one of volume representations and surface representations based on segmentation of said volume data.
27. Apparatus for image-guided surgery in accordance with claim 26, wherein said optional user interaction allows a user to, in any desired combination, selectively enhance,
color, annotate, single out, and identify for guidance in surgical procedures, at least a portion of said patient portions.
28. Apparatus for image-guided surgery in accordance with claim 20, wherein said tracking system comprises an optical tracker.
29. Apparatus for image-guided surgery in accordance with claim 20, wherein said stereo camera assembly is adapted for operating in an angled, swiveled orientation, including a downward-looking orientation for allowing a user to operate without having to tilt the head downward.
30. Apparatus for image-guided surgery in accordance with claim 28, wherein said optical tracker comprises a tracker video camera in predetermined coupled relationship with said stereo camera assembly.
31. Apparatus for image-guided surgery in accordance with claim 28, wherein said optical tracker comprises a tracker video camera that faces in substantially the same direction as said stereo camera assembly for tracking landmarks around the center area of view of said stereo camera assembly.
32. Apparatus for image-guided surgery in accordance with claim 31, wherein said tracker video camera exhibits a larger field of view than said stereo camera assembly.
33. Apparatus for image-guided surgery in accordance with claim 31, wherein said landmarks comprise optical markers.
34. Apparatus for image-guided surgery in accordance with claim 31, wherein said landmarks comprise reflective markers.
35. Apparatus for image-guided surgery in accordance with claim 34, wherein said reflective markers are illuminated by light of a wavelength suitable for said tracker video camera.
36. Apparatus for image-guided surgery in accordance with claim 20, wherein said video-see-through display comprises a zoom feature.
37. Apparatus for image-guided surgery in accordance with claim 31, wherein said landmarks comprise light-emitting markers.
38. Apparatus for image-guided surgery in accordance with claim 20, wherein said augmented view can be, in any combination, stored, replayed, remotely viewed, and simultaneously replicated for at least one additional user.
39. Apparatus for image-guided surgery comprising:
medical imaging apparatus, said imaging apparatus being utilized for capturing 3-dimensional (3D) volume data of at least patient portions in reference to a coordinate system;
a computer for processing said volume data so as to provide a graphical representation of said data;
a robot arm manipulator operable by a user from a remote location;
a stereo camera assembly mounted on said robot arm manipulator for capturing a stereoscopic video view of a scene including said patient;
a tracking system for measuring pose data of said stereoscopic video view in reference to said coordinate system;
said computer being utilized for rendering said graphical representation and said stereoscopic video view in a blended way in conjunction with said pose data so as to provide a stereoscopic augmented image; and a head-mounted video-see-through display for displaying said stereoscopic augmented image at said remote location.
40. Apparatus for image-guided surgery in accordance with claim 39, wherein said optical tracker comprises a tracker video camera in predetermined coupled relationship with said robot arm manipulator.
41. A method for image-guided surgery utilizing captured 3-dimensional (3D) volume data of at least a portion of a patient, said method comprising:
processing said volume data so as to provide a graphical representation of said data;
capturing a stereoscopic video view of a scene including said at least a portion of said patient;
rendering said graphical representation and said stereoscopic video view in a blended manner so as to provide a stereoscopic augmented image; and displaying said stereoscopic augmented image in a video-see-through display.
42. A method for image-guided surgery utilizing 3-dimensional (3D) volume data of at least a portion of a patient, said data having been captured in reference to a coordinate system, said method comprising:
capturing 3-dimensional (3D) volume data of at least a portion of a patient;
processing said volume data so as to provide a graphical representation of said data;
capturing a stereoscopic video view of a scene including said at least a portion of said patient;
measuring pose data of said stereoscopic video view in reference to said coordinate system;
rendering said graphical representation and said stereoscopic video view in a blended manner in conjunction with said pose data so as to provide a stereoscopic augmented image;
and displaying said stereoscopic augmented image in a video-see-through display.
43. A method for image-guided surgery in accordance with claim 42, wherein said 3-dimensional (3D) volume data comprises magnetic-resonance imaging data.
44. A method for image-guided surgery in accordance with claim 42, wherein said step of processing said volume data comprises processing said data in a programmable computer.
45. A method for image-guided surgery in accordance with claim 42, wherein said step of capturing a stereoscopic video view comprises capturing a stereoscopic view by a pair of stereo cameras.
46. A method for image-guided surgery in accordance with claim 42, wherein said step of measuring pose data comprises measuring position and orientation of said pair of stereo cameras by way of a tracking device.
47. A method for image-guided surgery in accordance with claim 42, wherein said step of rendering said graphical representation and said stereoscopic video view in a blended way in conjunction with said pose data comprises utilizing video images, and where necessary, digitizing said video images, said camera pose information, and stored volume data captured in a previous step for providing said stereoscopic augmented image.
48. A method for image-guided surgery in accordance with claim 42, wherein said step of displaying said stereoscopic augmented image in a video-see-through display comprises displaying said stereoscopic augmented image in a head-mounted video-see-through display.
49. Apparatus for image-guided surgery utilizing captured 3-dimensional (3D) volume data of at least a portion of a patient, said apparatus comprising:
means for processing said volume data so as to provide a graphical representation of said data;
means for capturing a stereoscopic video view of a scene including said at least a portion of said patient;
means for rendering said graphical representation and said stereoscopic video view in a blended manner so as to provide a stereoscopic augmented image; and means for displaying said stereoscopic augmented image in a video-see-through display.
50. Apparatus for image-guided surgery utilizing 3-dimensional (3D) volume data of at least a portion of a patient, said data having been captured in reference to a coordinate system, said apparatus comprising:
means for processing said volume data so as to provide a graphical representation of said data;
means for capturing a stereoscopic video view of a scene including said at least a portion of said patient;
means for measuring pose data of said stereoscopic video view in reference to said coordinate system;
means for rendering said graphical representation and said stereoscopic video view in a blended manner in conjunction with said pose data so as to provide a stereoscopic augmented image; and means for displaying said stereoscopic augmented image in a video-see-through display.
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US23825300P | 2000-10-05 | 2000-10-05 | |
US60/238,253 | 2000-10-05 | | |
US27993101P | 2001-03-29 | 2001-03-29 | |
PCT/US2001/042506 WO2002029700A2 (en) | 2000-10-05 | 2001-10-05 | Intra-operative image-guided neurosurgery with augmented reality visualization |
Publications (1)
Publication Number | Publication Date |
---|---|
CA2425075A1 true CA2425075A1 (en) | 2002-04-11 |
Family
ID=22897111
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002425075A Abandoned CA2425075A1 (en) | 2000-10-05 | 2001-10-05 | Intra-operative image-guided neurosurgery with augmented reality visualization |
Country Status (1)
Country | Link |
---|---|
CA (1) | CA2425075A1 (en) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109637622A (en) * | 2013-07-16 | 2019-04-16 | 精工爱普生株式会社 | Information processing unit, information processing method and information processing system |
CN111540008A (en) * | 2020-04-17 | 2020-08-14 | 北京柏惠维康科技有限公司 | Positioning method, device, system, electronic equipment and storage medium |
CN111540008B (en) * | 2020-04-17 | 2022-10-11 | 北京柏惠维康科技股份有限公司 | Positioning method, device, system, electronic equipment and storage medium |
CN113555092A (en) * | 2020-04-24 | 2021-10-26 | 辉达公司 | Image annotation using one or more neural networks |
CN111631814A (en) * | 2020-06-11 | 2020-09-08 | 上海交通大学医学院附属第九人民医院 | Intraoperative blood vessel three-dimensional positioning navigation system and method |
CN111631814B (en) * | 2020-06-11 | 2024-03-29 | 上海交通大学医学院附属第九人民医院 | Intraoperative blood vessel three-dimensional positioning navigation system and method |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20020082498A1 (en) | Intra-operative image-guided neurosurgery with augmented reality visualization | |
Gavaghan et al. | A portable image overlay projection device for computer-aided open liver surgery | |
US5526812A (en) | Display system for enhancing visualization of body structures during medical procedures | |
Liao et al. | 3-D augmented reality for MRI-guided surgery using integral videography autostereoscopic image overlay | |
US7774044B2 (en) | System and method for augmented reality navigation in a medical intervention procedure | |
Wang et al. | Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery | |
EP1395194B1 (en) | A guide system | |
CA2486525C (en) | A guide system and a probe therefor | |
US6006126A (en) | System and method for stereotactic registration of image scan data | |
US20040047044A1 (en) | Apparatus and method for combining three-dimensional spaces | |
EP2438880A1 (en) | Image projection system for projecting image on the surface of an object | |
Navab et al. | Laparoscopic virtual mirror new interaction paradigm for monitor based augmented reality | |
Fan et al. | Spatial position measurement system for surgical navigation using 3-D image marker-based tracking tools with compact volume | |
Vogt et al. | Reality augmentation for medical procedures: System architecture, single camera marker tracking, and system evaluation | |
WO2002080773A1 (en) | Augmentet reality apparatus and ct method | |
Philip et al. | Stereo augmented reality in the surgical microscope | |
Ma et al. | Moving-tolerant augmented reality surgical navigation system using autostereoscopic three-dimensional image overlay | |
Maurer Jr et al. | Augmented-reality visualization of brain structures with stereo and kinetic depth cues: system description and initial evaluation with head phantom | |
EP0629963A2 (en) | A display system for visualization of body structures during medical procedures | |
JP2023526716A (en) | Surgical navigation system and its application | |
Vogt | Real-Time Augmented Reality for Image-Guided Interventions | |
Suthau et al. | A concept work for Augmented Reality visualisation based on a medical application in liver surgery | |
Zhang et al. | 3D augmented reality based orthopaedic interventions | |
CA2425075A1 (en) | Intra-operative image-guided neurosurgery with augmented reality visualization | |
Paloc et al. | Computer-aided surgery based on auto-stereoscopic augmented reality |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| FZDE | Discontinued | |