
WO2021242932A1 - Generation of three-dimensional images with digital magnification - Google Patents


Info

Publication number
WO2021242932A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
digital magnification
cropped
target
overlap
Prior art date
Application number
PCT/US2021/034366
Other languages
French (fr)
Inventor
Yang Liu
Maziyar ASKARI KARCHEGANI
Original Assignee
Unify Medical
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Unify Medical filed Critical Unify Medical
Priority to JP2022573222A priority Critical patent/JP2023537454A/en
Priority to EP21814225.5A priority patent/EP4158889A4/en
Priority to CA3180220A priority patent/CA3180220A1/en
Priority to BR112022024142A priority patent/BR112022024142A2/en
Publication of WO2021242932A1 publication Critical patent/WO2021242932A1/en


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106 Processing image signals
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/246 Calibration of cameras
    • H04N 13/296 Synchronisation thereof; Control thereof
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/50 Constructional details
    • H04N 23/53 Constructional details of electronic viewfinders, e.g. rotatable or detachable
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B 27/00 Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
    • G02B 27/01 Head-up displays
    • G02B 27/017 Head mounted
    • G02B 27/0101 Head-up displays characterised by optical features
    • G02B 2027/0138 Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G02B 2027/014 Head-up displays characterised by optical features comprising information/image processing systems
    • G02B 2027/0178 Eyeglass type
    • G02B 27/64 Imaging systems using optical elements for stabilisation of the lateral and angular position of the image
    • G02B 27/646 Imaging systems using optical elements for stabilisation of the lateral and angular position of the image compensating for small deviations, e.g. due to vibration or shake
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 2210/00 Indexing scheme for image generation or computer graphics
    • G06T 2210/22 Cropping

Definitions

  • the present disclosure relates to the generation of Three-Dimensional (3D) images and specifically to the generation of 3D images from the digital magnification of images captured of a target.
  • Surgical loupes have been used extensively in various types of surgeries.
  • Surgical loupes are a pair of optical magnifiers that magnify the surgical field and provide magnified stereoscopic vision.
  • conventional surgical loupes have significant limitations.
  • a single set of conventional surgical loupes only offers a fixed level of magnification, such as 2X, without any capability to vary such magnification. Therefore, surgeons typically require several pairs of surgical loupes, with each pair having a different level of magnification, to cater for different levels of magnification.
  • Changing surgical loupes in the operating room is inconvenient, and there is an increased cost to have several sets of surgical loupes with different magnifications customized for a single surgeon.
  • equipping conventional surgical loupes with magnifying lenses typically requires an increased length, resulting in an increased form factor and increased weight, thereby limiting the magnification level.
  • the increased form factor and increased weight also limit the duration of surgical procedures that the surgeon may execute.
  • conventional surgical loupes implement a non-imaging configuration, whereby the magnification lenses magnify and form a pair of virtual images thereby decreasing the working distances and depths of focus for the surgeon. Therefore, the surgeon has to restrict the position of their head and neck to a specific position as they use the conventional surgical loupes. This results in neck pains and cervical diseases for surgeons with long term use of conventional surgical loupes.
  • conventional imaging configurations in the non-surgical space include stereo imaging systems and imaging systems with zoom lenses where such conventional imaging configurations generate 3D images while enabling the adjustment of magnification.
  • the incorporation of such conventional imaging configurations in the surgical space requires the implementation of two displays and/or zoom lenses for the surgeon.
  • the two stereo displays included in such conventional stereo imaging systems must be mechanically adjusted for each magnification level as well as calibrated. Such mechanical adjustment and calibration in the surgical space is not feasible.
  • changing the magnification with two conventional zoom lenses requires each image at each magnification level to always be captured at the center of the initial image, where each level of magnification continues to capture the center of the initial image.
  • the resulting 3D image displayed to the surgeon is significantly skewed thereby preventing the incorporation of conventional zoom lenses into the surgical space.
  • FIG. 1A illustrates a schematic view of binocular overlap of human eyes configuration where the region seen by both eyes is the overlapped region included in the scene seen by both eyes;
  • FIG. 1B illustrates a block diagram of a two imaging sensor configuration where two image sensors with two lenses are used in a side-by-side configuration;
  • FIG. 1C illustrates a block diagram of a binocular overlap of two imaging sensor configuration where the region seen by both imaging sensors is the overlapped region;
  • FIG. 2 depicts a schematic view of a conventional digital zoom configuration where the original image is cropped and resized (from left to right);
  • FIG. 3 illustrates a block diagram of a digital magnification of a 3D image system that may generate 3D images when executing digital magnification on captured images of a target;
  • FIG. 4 depicts a schematic view of a conventional digital zoom configuration where the zoomed left images and zoomed right images are misaligned leading to poor 3D vision and depth perception;
  • FIG. 5 depicts a schematic diagram of a digital magnification with binocular vertical alignment preservation configuration where the magnified left images and the magnified right images are vertically aligned thereby resulting in increased 3D visualization;
  • FIG. 6 depicts a schematic view of a digitally magnified stereo images with preservation of vertical alignment configuration where, as the digital magnification is applied, binocular overlap between the cropped left images and cropped right images gradually decreases;
  • FIG. 7 depicts a schematic view of a preservation of binocular overlap and binocular vertical alignment configuration where at 2.3X, 5.3X, and 12X magnifications, respectively, the left cropped images and the right cropped images have binocular overlap of 75% and vertical alignment, thereby resulting in an increased 3D visualization experience and depth perception for the user;
  • FIG. 8 depicts a schematic view of a physical embodiment of a digital magnification surgical loupe configuration.
  • Embodiments of the present disclosure may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the present disclosure may also be implemented as instructions stored on a machine-readable medium, which may be read and executed by one or more processors.
  • a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device).
  • a machine-readable medium may include read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, electrical, optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others.
  • firmware, software routines, and instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc.
  • each of the various components discussed may be considered a module, and the term “module” shall be understood to include at least one of software, firmware, and hardware (such as one or more circuits, microchips, or devices, or any combination thereof), and any combination thereof.
  • each module may include one, or more than one, component within an actual device, and each component that forms a part of the described module may function either cooperatively or independently from any other component forming a part of the module.
  • multiple modules described herein may represent a single component within an actual device. Further, components within a module may be in a single device or distributed among multiple devices in a wired or wireless manner.
  • FIG. 1A illustrates a schematic view of binocular overlap of human eyes configuration 100 where the region seen by both eyes is the overlapped region included in the scene seen by both eyes.
  • the binocular overlap of human eyes configuration 100 includes a right eye 110a, a left eye 110b, an image as seen by right eye 120a, an image as seen by left eye 120b, and a binocular overlap 120c as seen by both eyes.
  • the present invention describes the apparatus, systems, and methods for constructing augmented reality devices for medical and dental magnification.
  • One of the key concepts in 3D imaging and visualization is binocular overlap 120c.
  • Binocular overlap 120c describes the overlap between the image as seen by the left eye 120b versus the image as seen by the right eye 120a. For human beings, the binocular overlap 120c is approximately 70%.
  • FIG. 1B illustrates a block diagram of a two imaging sensor configuration 150 where two image sensors with two lenses are used in a side-by-side configuration.
  • the two imaging sensor configuration 150 includes a right image sensor 130a, a left image sensor 130b, a right lens 140a, and a left lens 140b.
  • FIG. 1C illustrates a block diagram of a binocular overlap of two imaging sensor configuration 175 where the region seen by both imaging sensors is the overlapped region.
  • the binocular overlap of two imaging sensor configuration 175 includes a captured region by right image sensor 150a, a captured region by left image sensor 150b, and a binocular overlap region 150c.
  • FIG. 1C depicts the binocular overlap region 150c that is generated when a right image sensor 130a and a left image sensor 130b are used in a side-by-side configuration as depicted in FIG. 1B.
  • FIG. 2 depicts a schematic view of a conventional digital zoom configuration 200 where the original image is cropped and resized (from left to right). The cropped and resized images are displayed to the user after conventional digital zooming. Conventionally, digital zoom has been commonly used to zoom the image. The principle of conventional digital zoom is illustrated in FIG. 2. Although conventional digital zoom can magnify the images without the need of zoom lenses, it is not suitable for 3D magnification.
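  • As a non-limiting illustration of the conventional digital zoom principle described above, the following sketch (not part of the patent; the names and the use of OpenCV are assumptions) crops the center of an image and resizes the crop back to the original dimensions:

```python
import cv2  # OpenCV, assumed available for resizing


def conventional_digital_zoom(image, zoom_factor):
    """Crop the center of `image` and resize back to the original size.

    This mirrors the conventional (concentric) digital zoom described above:
    every zoom level keeps the center of the initial image.
    """
    height, width = image.shape[:2]
    crop_w = int(width / zoom_factor)
    crop_h = int(height / zoom_factor)
    x0 = (width - crop_w) // 2
    y0 = (height - crop_h) // 2
    cropped = image[y0:y0 + crop_h, x0:x0 + crop_w]
    # Resize the cropped region back to the original dimensions.
    return cv2.resize(cropped, (width, height), interpolation=cv2.INTER_LINEAR)
```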
  • FIG. 3 illustrates a block diagram of a digital magnification of a 3D image system 300.
  • the digital magnification of a 3D image system 300 includes a right lens 340a, a left lens 340b, a right image sensor 330a, a left image sensor 330b, a controller 310, a near-eye 3D display 320, and an eyeglass frame 350.
  • the eyeglass frame 350 is a head mount.
  • the eyeglass frame 350 is a traditional eyeglass frame sitting on the nose and ears of a user.
  • the digital magnification of a 3D image system 300 may generate 3D images from captured images of a target when executing digital magnification on the captured images to maintain the 3D images generated of the target after digital magnification.
  • a first image sensor (such as right image sensor 330a) may capture a first image at an original size of the target.
  • a second image sensor (such as left image sensor 330b) may be positioned on a common x-axis with the first image sensor 330a to capture a second image at the original size of the target. It should be appreciated that the first image sensor 330a and the second image sensor 330b may be positioned with either a converging angle or a diverging angle.
  • a controller 310 may execute a digital magnification on the first image captured by the first image sensor 330a at the original size of the target and on the second image captured by the second image sensor 330b at the original size of the target.
  • the controller 310 may crop the first image captured by the first image sensor 330a and the second image captured by the second image sensor 330b to overlap a first portion of the target captured by the first image sensor 330a with a second portion of the target captured by the second image sensor 330b.
  • the first portion of the target captured by the first image sensor 330a overlaps with the second portion of the target captured by the second image sensor 330b.
  • the first image sensor 330a is further coupled with a first autofocus lens and the second image sensor 330b is further coupled with a second autofocus lens.
  • the autofocus lenses may enable autofocus.
  • the controller 310 may adjust the cropping of the first image and the second image to provide binocular overlap of the first portion of the target with the second portion of the target.
  • the binocular overlap of the first image and the second image is an overlap threshold that when satisfied results in a 3D image of the target displayed to a user after the digital magnification is executed.
  • the controller may instruct a display (such as near-eye 3D display 320) to display the cropped first image and the cropped second image that includes the binocular overlap to the user.
  • the displayed cropped first image and the cropped second image display the 3D image at the digital magnification to the user.
  • the controller 310 may resize the cropped first image to the original size of the first image captured by the first image sensor 330a and the cropped second image to the original size of the second image captured by the second image sensor 330b.
  • the cropped first image as resized and the cropped second image resized includes the binocular overlap of the first image and the second image.
  • the controller 310 may instruct the near-eye 3D display 320 to display the resized and cropped first image and the resized and cropped second image that includes the binocular overlap to the user.
  • the displayed resized and cropped first image and the resized and cropped second image display the 3D image at the digital magnification to the user. It should be appreciated that in one embodiment the controller 310 may crop the first image captured by the first image sensor 330a to generate both the left cropped image and the right cropped image. In this embodiment, the second image captured by the second image sensor 330b is not used.
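  • The crop-and-resize flow described above can be sketched as follows (illustrative only, not the patent's implementation; the crop windows are assumed to be chosen by the controller so that the retained portions of the target overlap and share the same rows):

```python
import cv2  # OpenCV, assumed available for resizing


def crop_and_resize(image, row0, col0, crop_h, crop_w):
    """Crop a window from `image` and resize it back to the original size."""
    height, width = image.shape[:2]
    cropped = image[row0:row0 + crop_h, col0:col0 + crop_w]
    return cv2.resize(cropped, (width, height), interpolation=cv2.INTER_LINEAR)


def magnify_stereo_pair(first_image, second_image, first_window, second_window):
    """Digitally magnify both images of a stereo pair.

    Each window is a (row0, col0, crop_h, crop_w) tuple; using the same rows in
    both windows preserves vertical alignment, and the column placement sets
    the binocular overlap of the two crops.
    """
    magnified_first = crop_and_resize(first_image, *first_window)
    magnified_second = crop_and_resize(second_image, *second_window)
    return magnified_first, magnified_second
```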
  • the display 320 is a near-eye display. In one embodiment, the display 320 is a 2D display. In another embodiment, the display 320 is a 3D display. It should be further appreciated that the near-eye display 320 may comprise LCD (liquid crystal) microdisplays, LED (light emitting diode) microdisplays, organic LED (OLED) microdisplays, liquid crystal on silicon (LCOS) microdisplays, retinal scanning displays, virtual retinal displays, optical see-through displays, video see-through displays, convertible video-optical see-through displays, wearable projection displays, projection displays, and the like. It should be appreciated that the display 320 may be stereoscopic to enable displaying of 3D content. In another embodiment, the display 320 is a projection display. It should be appreciated that the display 320 may be a monitor placed near the user.
  • the display 320 may be a 3D monitor placed near the user, and the user will wear polarizing glasses or active shutter glasses. It should further be appreciated that the display 320 may be a half transparent mirror placed near the user to reflect the image projected by a projector. It should further be appreciated that the said projector may be 2D or 3D. It should further be appreciated that the said projector may be used with the user wearing polarizing glasses or active shutter glasses.
  • the display 320 is a flat panel 2D monitor or TV. In another embodiment, the display 320 is a flat panel 3D monitor or 3D TV. The 3D monitor/TV may need to work with passive polarizers or active shutter glasses.
  • the 3D monitor/TV is glass-free.
  • the display 320 can be a touchscreen, or a projector.
  • the display 320 comprises a half transparent mirror that can reflect projection of images to the eyes of the user.
  • the images being projected may be 3D, and the user may wear 3D glasses (e.g. polarizer; active shutter 3D glasses) to visualize the 3D image data reflected by the half transparent mirror.
  • the half transparent mirror may be placed on top of the surgical field to allow the user to see through the half transparent mirror to visualize the surgical field.
  • the binocular overlap of the system may be set as high as 100% or as low as 0%, depending on the specific application.
  • the binocular overlap is set to be within the range of 60% and 100%.
  • the binocular overlap is dynamic and not static.
  • the digital magnification of a 3D image system 300 may further comprise additional sensors or components.
  • the system 300 further comprises a microphone, which may enable audio recording and/or communication.
  • the system 300 further comprises a proximity sensor, which may sense whether the user is wearing the system.
  • the system 300 further comprises an inertial measurement unit (IMU), accelerometers, gyroscopes, magnetometers, or a combination thereof.
  • the system 300 further comprises a loudspeaker or an earphone, which may enable audio replay or communication.
  • the system can be applied to a variety of applications, including but not limited to the surgical, medical, veterinary, military, tactical, educational, industrial, consumer, and jewelry fields.
  • FIG. 4 depicts a schematic view of a conventional digital zoom configuration 400 where the zoomed left images and zoomed right images are misaligned leading to poor 3D vision and depth perception.
  • the conventional digital zoom configuration 400 includes the zoomed right images 410a that are misaligned with the zoomed left images 410b.
  • Conventional digital zoom does not work well on magnifying of stereo-images for 3D display.
  • FIG. 4 shows an example of direct application of conventional digital zoom to stereo-images.
  • Conventional digital zoom is not suitable for magnifying 3D stereo-images, as it introduces binocular vertical misalignment.
  • the controller 310 may crop the first image captured by the first image sensor 330a and the second image captured by the second image sensor 330b to vertically align the overlap of the first portion of the target with the second portion of the target.
  • the cropped first image is in vertical alignment with the cropped second image when each vertical coordinate of the cropped first image is aligned with each corresponding vertical coordinate of the cropped second image.
  • the controller 310 may adjust the cropping of the first image and the second image to provide binocular overlap of the first portion of the target with the second portion of the target.
  • the binocular overlap of the first image and the second image is vertically aligned to satisfy the overlap threshold to generate the 3D image of the target displayed to the user after the digital magnification is executed.
  • the present invention discloses a digital magnification method that also ensures binocular vertical alignment.
  • the left image is captured by the left image sensor 330b and cropped by the controller 310
  • the right image is captured by the right image sensor 330a and cropped by the controller 310
  • the cropping of left and right images preserves vertical alignment.
  • the left and right images are cropped in such a way that the vertical coordinates of the cropped left image and the vertical coordinates of the cropped right image are aligned.
  • the left image sensor 330b with the left lens 340b that are worn by the user may capture a left image.
  • the right image sensor 330a with the right lens 340a that are worn by the user may capture a right image.
  • the left image and the right image may be provided to the controller 310.
  • the controller 310 may crop the left image to generate a cropped left image.
  • the controller 310 may crop the right image to generate a cropped right image and may preserve the vertical alignment of the cropped right image with respect to the cropped left image.
  • the controller 310 may resize the cropped left image to generate a cropped and resized left image.
  • the controller 310 may resize the cropped right image to generate a cropped and resized right image.
  • the near-eye 3D display 320 worn by the user may display the cropped and resized left image to the left eye of the user.
  • the near-eye 3D display 320 worn by the user may display the cropped and resized right image to the right eye of the user.
  • the controller can be a microcontroller, a computer, a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or a combination thereof.
  • the left image sensor and right image sensor are identical image sensors.
  • the image sensors may use the same type of image lenses.
  • the left and right image sensors may be placed and calibrated, so that the left image captured and right image captured are vertically aligned, prior to any digital magnification process.
  • the digital magnification process preserves the vertical alignment. For example, assume the left image and right image each have 800 (horizontal, column) by 600 (vertical, row) pixels. After digital magnification, rows 201 to 400 of the left image are used to generate a cropped left image, and rows 201 to 400 of the right image are used to generate a cropped right image. Therefore, the vertical alignment is preserved.
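  • The worked example above can be written out as a short sketch (assuming NumPy arrays indexed as [row, column]; the 1-based rows 201-400 in the text correspond to the Python slice 200:400):

```python
import numpy as np

# Calibrated, identical sensors: both images are 600 rows by 800 columns.
left_image = np.zeros((600, 800, 3), dtype=np.uint8)
right_image = np.zeros((600, 800, 3), dtype=np.uint8)

# Digital magnification uses the same rows (201-400 in the text) in both
# images, so the vertical alignment of the stereo pair is preserved.
rows = slice(200, 400)
cropped_left = left_image[rows, :]
cropped_right = right_image[rows, :]

assert cropped_left.shape[0] == cropped_right.shape[0]  # same rows retained
```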
  • the left image sensor and right image sensor are not identical image sensors.
  • the left image captured and right image captured are first calibrated and aligned vertically, prior to any digital magnification process.
  • the left image captured by the left image sensor has 800 (horizontal, column) by 600 (vertical, row) pixels
  • the right image captured by the right image sensor has 400 (horizontal) by 300 (vertical) pixels.
  • the left image and right image are first vertically aligned.
  • rows 0, 200, 400, and 600 of the left image may correspond to rows 0, 100, 200, and 300 of the right image, respectively.
  • After digital magnification, a subset of rows 200 to 400 of the left image and a subset of rows 100 to 200 of the right image are used. Therefore, the vertical alignment is preserved.
  • FIG. 5 depicts a schematic diagram of a digital magnification with binocular vertical alignment preservation configuration 500 where the magnified left images and the magnified right images are vertically aligned thereby resulting in increased 3D visualization.
  • the digital magnification with binocular vertical alignment preservation configuration 500 includes the zoomed right images 510b that are vertically aligned with the zoomed left images 510a, thereby resulting in increased 3D visualization.
  • FIG. 6 depicts a schematic view of a digitally magnified stereo images with preservation of vertical alignment configuration 600 where, as the digital magnification is applied, binocular overlap between the cropped left images and cropped right images gradually decreases.
  • the digitally magnified stereo images with preservation of vertical alignment configuration 600 includes digitally magnified right images 610a that are vertically aligned with the digitally magnified left images 610b.
  • the binocular overlap decreases from 75% to 50% resulting in a decrease in 3D visualization.
  • the binocular overlap decreases from 75% to 0%.
  • the vertical alignment preservation without the preservation of binocular overlap may result in the gradual decrease in binocular overlap with each digital magnification.
  • the controller 310 may maintain the binocular overlap generated by adjusting the cropping of the first image and the second image to satisfy the overlap threshold.
  • a fixed binocular overlap number is maintained, such as 80%, 90% or 100%.
  • a range of binocular overlap number is maintained, such as 60% - 90%.
  • the controller 310 may execute a second digital magnification at a second digital magnification level on the first image captured by the first image sensor 330a and the second image captured by the second image sensor 330b.
  • the second digital magnification level is increased from the first digital magnification level.
  • the controller 310 may maintain the binocular overlap generated after executing the first digital magnification at the first digital magnification level on the first image and the second image when executing the second digital magnification at the second digital magnification level.
  • the controller 310 may maintain the binocular overlap and the vertical alignment determined when executing the first digital magnification at the first digital magnification level on the first image and the second image.
  • the controller 310 may continue to maintain the binocular overlap and the vertical alignment determined from the adjusting of the cropping of the first image and the second image to satisfy the overlap threshold after executing the first digital magnification at the first digital magnification level on the first image and the second image for each subsequent digital magnification level.
  • Each subsequent digital magnification level is increased from each previous digital magnification level.
  • the overlap threshold may be satisfied when the binocular overlap includes a 75% overlap of the first image and the second image that is maintained for each subsequent digital magnification at each subsequent digital magnification level.
  • each subsequent digital magnification level is increased from the previous magnification level (e.g. an increase from 1X to 2X, and from 2X to 4X).
  • the controller 310 may execute the first digital magnification at the first digital magnification level on a non-concentric portion of the first image and a non-concentric portion of the second image.
  • the non-concentric portion of the first image and the second image is a portion of the first image and the second image that differs from a center of the first image and the second image.
  • the controller 310 may adjust the cropping of the first image and the second image to provide binocular overlap of the non-concentric portion of the first image and the non-concentric portion of the second image.
  • the binocular overlap of the non-concentric portion of the first image and the non-concentric portion of the second image satisfies the overlap threshold either specified as a fixed number or a range.
  • the controller 310 may continue to crop a non-concentric portion of the first image and a non-concentric portion of the second image for each subsequent digital magnification at each subsequent digital magnification level.
  • the binocular overlap of the non-concentric portion of the first image and the non-concentric portion of the second image is maintained from the first digital magnification at the first digital magnification level.
  • the non-concentric portion of the first image and the non-concentric portion of the second image may be resized to display to the user.
  • a first center of cropping of the non-concentric portion of the first image and a second center of cropping of the non-concentric portion of the second image are determined by the system 300.
  • the first center of cropping is fixed at a particular part of the first image
  • the second center of cropping at each magnification level is determined based on the location of the corresponding first center of cropping and the targeted binocular overlap.
  • the digital magnification on either left image or right image may be concentric. For example, digital magnification on the left image is concentric but the digital magnification on the right image is non-concentric to maintain the binocular overlap.
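  • Under simplifying assumptions (rectified, vertically aligned images and a single constant horizontal disparity at the working distance), the second center of cropping can be derived from the first center and the targeted binocular overlap roughly as in the following sketch; the function and numbers are illustrative, not taken from the patent:

```python
def right_crop_center(left_center_col, crop_width, target_overlap, disparity):
    """Column of the right-image crop center that yields `target_overlap`.

    Assumes a scene point at column x in the left image appears at column
    x - disparity in the right image. Centering the right crop at
    left_center_col - disparity would reproduce the same scene region (100%
    overlap); shifting it by (1 - target_overlap) * crop_width reduces the
    overlap to the targeted fraction. The crop rows are kept identical to the
    left crop's rows to preserve vertical alignment.
    """
    full_overlap_center = left_center_col - disparity
    shift = (1.0 - target_overlap) * crop_width
    return full_overlap_center - shift


# Example: 200-pixel-wide crops, 80% targeted overlap, 120-pixel disparity.
print(right_crop_center(left_center_col=500, crop_width=200,
                        target_overlap=0.80, disparity=120))  # 340.0
```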
  • the left image sensor and right image sensor are identical image sensors.
  • the image sensors may use the same type of image lenses, including autofocus lenses.
  • the left and right image sensors may be placed and calibrated, so that the left image captured and right image captured are vertically aligned, prior to any digital magnification process.
  • the digital magnification process preserves the vertical alignment and binocular overlap (e.g. 80%). For example, assume the left image and right image each have 800 (horizontal, column) by 600 (vertical, row) pixels.
  • the pixels from row 201 to row 400 and column 401 to column 600 of the left image are used to generate a cropped left image
  • the pixels from row 201 to row 400 and column 201 to column 400 of the right image are used to generate a cropped right image.
  • This cropping may generate a satisfactory binocular overlap (e.g. 80%).
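  • The overlap produced by such crop windows depends on the camera geometry; a small check, assuming purely for illustration a constant disparity of 160 pixels between the two images, could look like this:

```python
def binocular_overlap(left_cols, right_cols, disparity, crop_width):
    """Fraction of the crop width seen by both crop windows.

    `left_cols` and `right_cols` are (first, last) column indices of the crop
    windows; `disparity` maps right-image columns into left-image coordinates
    (an illustrative simplification of the real geometry).
    """
    right_in_left = (right_cols[0] + disparity, right_cols[1] + disparity)
    shared = min(left_cols[1], right_in_left[1]) - max(left_cols[0], right_in_left[0]) + 1
    return max(shared, 0) / crop_width


# The crop windows from the example above, with an assumed 160-pixel disparity.
print(binocular_overlap((401, 600), (201, 400), disparity=160, crop_width=200))  # 0.8
```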
  • the non-concentric cropping in the digital magnification, combined with resizing, may enable magnification while preserving both binocular overlap and vertical alignment.
  • further non-concentric cropping on at least one of the images is performed in conjunction with resizing to enable magnification while preserving both binocular overlap and vertical alignment
  • machine learning algorithms are used for determining a center of cropping for the left image, or a center of cropping for the right image, or both centers, during the digital magnification process.
  • object recognition and localization based on machine learning may determine at least one center of the cropping.
  • the surgical bed is recognized and localized based on the left image, and a location within the surgical bed (e.g. centroid) is assigned to be the center of cropping for the left image, and the center of cropping for the right image is calculated based on the center of cropping for the left image and the desirable binocular overlap to be maintained.
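  • A minimal sketch of this idea follows; the segmentation model and all names are hypothetical placeholders rather than part of the patent. The centroid of a surgical-bed mask becomes the left crop center, and the right crop center is then derived to maintain the desired overlap (for example with a helper like the right_crop_center sketch above):

```python
import numpy as np


def crop_center_from_mask(mask):
    """Centroid (row, col) of a binary segmentation mask of the surgical bed.

    The mask would come from an object recognition / segmentation model (e.g.
    a trained neural network); the model itself is not shown here.
    """
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()


# Hypothetical usage; `segment_surgical_bed` stands in for any trained model.
# left_mask = segment_surgical_bed(left_image)
# left_center_row, left_center_col = crop_center_from_mask(left_mask)
# right_center_col = right_crop_center(left_center_col, crop_width,
#                                      target_overlap, disparity)
```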
  • supervised learning can be implemented.
  • unsupervised learning can be implemented.
  • reinforcement learning can be implemented.
  • feature learning, sparse dictionary learning, anomaly detection, and association rules may also be implemented.
  • Various models may be implemented for machine learning.
  • artificial neural networks are used.
  • decision trees are used.
  • support vector machines are used.
  • Bayesian networks are used.
  • genetic algorithms are used.
  • neural networks, convolutional neural networks, or deep learning are used for object recognition, image classification, object localization, image segmentation, image registration, or a combination thereof.
  • Neural network based systems are advantageous in many cases for image segmentation, recognition and registration tasks.
  • U-Net is used, which has a contraction path and expansion path.
  • the contraction path has consecutive convolutional layers and max-pooling layer.
  • the expansion path performs up-conversion and may have convolutional layers.
  • the convolutional layer(s) prior to the output maps the feature vector to the required number of target classes in the final segmentation output.
  • V-net is implemented for image segmentation to isolate the organ or tissue of interest (e.g. vertebral bodies).
  • Autoencoder based Deep Learning Architecture is used for image segmentation to isolate the organ or tissue of interest.
  • backpropagation is used for training the neural networks.
  • deep residual learning is performed for image recognition or image segmentation, or image registration.
  • a residual learning framework is utilized to ease the training of networks.
  • a plurality of layers is implemented as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions.
  • One example of a network that performs deep residual learning is the deep Residual Network, or ResNet.
  • a Generative Adversarial Network (GAN) is used for image recognition or image segmentation, or image registration.
  • the GAN performs image segmentation to isolate the organ or tissue of interest.
  • a generator is implemented through a neural network to model a transform function which takes in a random variable as input and, when trained, produces outputs that follow the targeted distribution.
  • a discriminator is implemented through another neural network simultaneously to distinguish between generated data and true data.
  • the first network tries to maximize the final classification error between generated data and true data while the second network attempts to minimize the same error. Both networks may improve after iterations of the training process.
  • ensemble methods are used, wherein multiple learning algorithms are used to obtain better predictive performance.
  • Bayes optimal classifier is used.
  • bootstrap aggregating is used.
  • boosting is used.
  • Bayesian parameter averaging is used.
  • Bayesian model combination is used.
  • bucket of models is used.
  • stacking is used.
  • a random forests algorithm is used.
  • a gradient boosting algorithm is used.
  • the controller 310 may determine a distance that the first image sensor 330b and the second image sensor 330a are positioned from the target.
  • the controller 310 may execute the cropping of the first image and the second image to maintain the vertical alignment and the binocular overlap for each digital magnification at each digital magnification level based on the distance of the first image sensor 330b and the second image sensor 330a from the target.
  • the system allows the user to determine a center of cropping for the left image, or a center of cropping for the right image, or both centers, for the digital magnification process. In the case of many users, each may have their own settings.
  • the display 320 may include one of a plurality of wearable displays that displays the resized and cropped first image and the resized and cropped second image to display the 3D image of the target after the digital magnification is executed, including the binocular overlap of the first image and the second image that are vertically aligned to satisfy the overlap threshold.
  • the first image sensor 330b and the second image sensor 330a may be positioned proximate the display 320 for the user to execute a surgical procedure on the target that is a patient.
  • the first image sensor 330b and the second image sensor 330a may be positioned close to the display 320 for the user to execute a surgical procedure on the target that is a patient.
  • the first image sensor 330b and the second image sensor 330a may be positioned on a stand, not adjacent to the display 320. It should be appreciated that the said stand may be motorized or robotic.
  • the display 320 may be a 3D monitor, a 3D projector, or a 3D projector with a combiner, used with 3D glasses (e.g. polarizers or active shutter glasses).
  • the present invention discloses a method for digitally magnifying the images, while preserving the binocular overlap.
  • the cropping of left image and cropping of right image may be performed by the controller 310 with the binocular overlapped preserved.
  • the cropped left image and cropped right image may be generated by the controller 310 in such a way that the binocular overlap of the cropped images will also be 75%
  • the left image sensor 330b with the left lens 340b that are worn by the user may capture a left image.
  • the right image sensor 330a with the right lens 340a that are worn by the user may capture a right image.
  • the left image and the right image may be provided to the controller 310.
  • the controller 310 may calculate a left crop function that specifies how to crop the left image and a right crop function that specifies how to crop the right image.
  • the left crop function and the right crop function preserve binocular overlap and binocular vertical alignment.
  • the controller 310 may crop the left image to generate a cropped left image using the left crop function that preserves binocular overlap and binocular vertical alignment.
  • the controller 310 may crop the right image to generate a cropped right image using the right crop function that preserves binocular overlap and binocular vertical alignment.
  • the controller 310 may resize the cropped left image to generate a cropped and resized left image.
  • the controller 310 may resize the cropped right image to generate a cropped and resized right image.
  • the display 320 worn by the user may display the cropped and resized left image to the left eye of the user.
  • the display 320 may display the cropped and resized right image to the right eye of the user.
  • the display 320 may be a near-eye 3D display.
  • the display 320 may be a 3D monitor, a 3D projector, or a 3D projector with a combiner, used with 3D glasses (e.g. polarizers or active shutter glasses).
  • FIG. 7 depicts a schematic view of a preservation of binocular overlap and binocular vertical alignment configuration 700 where at 2.3X, 5.3X, and 12X magnifications, respectively, the left cropped images and the right cropped images have binocular overlap of 75% and vertical alignment, thereby resulting in an increased 3D visualization experience and depth perception for the user.
  • the preservation of binocular overlap and binocular vertical alignment configuration 700 includes right cropped images 710b and left cropped images 710a.
  • the digital magnification method further comprises an additional condition to satisfy: the left cropped image shares the same geometrical center as that of the left original image.
  • the right cropped image may be calculated by the controller 310 and generated accordingly by the controller 310 based on the cropping of the left cropped image, while preserving the binocular overlap and binocular vertical alignment.
  • the digital magnification process may be coaxial along the center of the left image (the optical axis), and the progression of digital magnification may align with the line of sight of the user’s left eye.
  • the cropped right image may share the same center as the right original image.
  • the left cropped image may be calculated by the controller 310 and generated accordingly by the controller 310 based on the position and cropping of the right cropped image, while preserving the binocular overlap and binocular vertical alignment.
  • the acceptable binocular overlap of cropped images may be specified as a range, rather than a specific number.
  • the binocular overlap of cropped left and right images may be specified to be within a range between 60% and 90%. Any number between 60% and 90% may be considered satisfactory for digital magnification.
  • the left image sensor 330b with the left lens 340b that are worn by the user may capture a left image.
  • the right image sensor 330a and the right lens 340a that are worn by the user may capture a right image.
  • the left image and the right image may be provided to the controller 310.
  • the controller 310 may calculate a left crop function that specifies how to crop the left image and the right crop function that specifies how to crop the right image.
  • the left crop function and the right crop function may preserve binocular vertical alignment.
  • the left crop function and the right crop function may preserve binocular overlap as specified by a range of acceptable binocular overlap, such as 60% to 90%.
  • the controller 310 may crop the left image to generate a cropped left image using the left crop function that preserves binocular overlap and binocular vertical alignment.
  • the controller 310 may crop the right image to generate a cropped right image using the right crop function that preserves binocular overlap and binocular vertical alignment.
  • the controller 310 may resize the cropped left image to generate a cropped and resized left image.
  • the controller 310 resizes the cropped right image to generate a cropped and resized right image.
  • the display 320 may display the cropped and resized left image to the left eye of the user.
  • the display 320 may display the cropped and resized right image to the right eye of the user.
  • the left lens 340b and right lens 340a may be zoom lenses.
  • optical zoom may be used in conjunction with the aforementioned digital magnification methods.
  • 5.3X digital magnification may be used in conjunction with 2X optical zoom (10.6X magnification in total).
  • the levels of digital magnification may be either continuous (e.g. magnifying with fine level of increments over a range: e.g. any magnification level within 2X-7X), or the magnification levels may be discrete (2X, 2.5X, 3X, 4X, 6X, 7X, etc).
  • the controller 310 may transmit the magnified left image and/or right image to another 3D display device for visualization.
  • the 3D display device may be a wearable display, a monitor, a projector, a projector with a combiner, a passive 3D monitor with 3D polarized glasses, an active 3D monitor with active shutter 3D glasses, or a combination thereof.
  • the controller 310 may transmit the magnified left image and/or right image to another computer for visualization, storage, and broadcast.
  • the controller 310 may record the magnified left image and/or magnified right image.
  • the controller 310 may apply computer vision and/or image processing techniques to the magnified left image and/or magnified right image. Additional computer vision analysis can enable decision support, object recognition, image registration, and object tracking. For example, deep learning and neural networks may be used.
  • the near-eye 3D display 320 may display other medical image data to the user (e.g. CT, MRI, ultrasound, nuclear medicine, surgical navigation, fluoroscopy, etc) and the other medical image data is overlaid with the magnified left image and/or magnified right image.
  • more than two image sensors may be used in the system.
  • only two image sensors are selected to participate in the digital magnification process (e.g. three color sensors with three lenses). It should be appreciated that in case of multiple image sensors and image lenses, multiple sets consisting of two of those sensors may be calibrated with respect to each other in separate processes.
  • only one image sensor is used. This image sensor will serve as both the left image sensor 330b and the right image sensor 330a.
  • a 3D scanning unit comprising a projector and an image sensor is used, similar to a 3D scanner.
  • a 3D scan can be thus generated.
  • the 3D scanning unit may use epipolar geometry for the 3D scan.
  • a virtual left image and virtual right image can be generated based on the 3D scan.
  • the digital magnification process aforementioned may be applied to the virtual left image and virtual right image.
  • FIG. 8 depicts a schematic view of a physical embodiment of a digital magnification wearable device configuration 800.
  • the digital magnification wearable device configuration 800 includes the right image sensor 330a, the left image sensor 330b, the right lens 340a, the left lens 340b, the right near-eye display 320a, the left near-eye display 320b, and an eyeglass frame 350.
  • the wearable frame may be in the form of a head mount, in lieu of an eyeglass frame.
  • the controller 310 may be a microcontroller, a computer, an FPGA, or an ASIC.
  • the digital magnification wearable device configuration 800 may execute digital magnification with preservation of binocular overlap and binocular vertical alignment.
  • the digital magnification wearable device configuration 800 may further include transparent plastic or glass, surrounding the left near eye display 320b and right near eye display 320a.
  • the digital magnification wearable device configuration 800 may use a compact offset configuration, whereby only a part of the area before each eye is non-transparent and the other parts are transparent. In one example, the center part of the area before each eye is non-transparent and the peripheral parts are transparent. This way, the user such as a surgeon/dentist can see around the near eye digital display to look at the patient with unhindered natural vision.
  • the digital magnification wearable device configuration 800 may further include prescription eyeglasses, so that nearsightedness, farsightedness, and astigmatism may be corrected.
  • the digital magnification wearable device configuration 800 may include an optical see-through configuration.
  • the near-eye 3D displays 320(a-b) are both transparent or semi-transparent.
  • the image sensors 330(a-b) may be a pair of color image sensors.
  • the digital magnification wearable device configuration 800 may digitally magnify stereoscopic color images and display to the user in the near-eye 3D display 320(a-b) in 3D.
  • the left and right lenses 340(a-b) are lenses with fixed focal lengths.
  • the left and right lenses 340(a-b) are zoom lenses with variable focal lengths.
  • the color image sensors may be complementary metal-oxide- semiconductor (CMOS) image sensors.
  • the color image sensors may be charge-coupled device (CCD) image sensors.
  • the left and right color image sensors are coupled with autofocus lenses to enable autofocus.
  • only one image sensor is used. This image sensor will serve as both the left image sensor 330b and the right image sensor 330a.
  • a 3D scanning unit comprising a projector and an image sensor is used, similar to a 3D scanner. A 3D scan can be thus generated.
  • the 3D scanning unit may use epipolar geometry for the 3D scan. By using different virtual viewpoints and projection angles, a virtual left image and virtual right image can be generated based on the 3D scan. The digital magnification process aforementioned may be applied to the virtual left image and virtual right image.
  • the 3D scanning unit may use visible wavelengths, infrared wavelengths, ultraviolet wavelengths, or a combination thereof.
  • the aforementioned 3D scanning unit may project dynamic projection pattern to facilitate 3D scanning.
  • dynamic patterns are binary code, stripe boundary code, and 64 pattern.
  • a binary codeword is represented by a series of black and white stripes. If black represents 1 and white represents 0, the series of 0s and 1s at any given location may be encoded by the dynamic projection pattern; the binary dynamic projection pattern may be captured by the image sensor and lens, and decoded to recover the binary codeword that encodes a location (e.g. 10100011).
  • N binary patterns may generate 2^N different codewords per image dimension (x or y dimension).
  • binary coding may be extended to N-bits coding.
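  • The decoding side of such binary coding can be sketched as follows (a simplified illustration assuming plain intensity thresholding; practical systems often also project inverse patterns or use adaptive thresholds):

```python
import numpy as np


def decode_binary_codewords(captured_patterns, threshold=127):
    """Recover the per-pixel binary codeword from N captured binary patterns.

    `captured_patterns` is a list of N grayscale images of the scene, each lit
    by one projected binary stripe pattern. Pixels brighter than `threshold`
    read as 1, darker as 0, and the N bits form the codeword, so N patterns
    distinguish 2**N stripe locations per image dimension.
    """
    codeword = np.zeros(captured_patterns[0].shape, dtype=np.uint32)
    for image in captured_patterns:
        bit = (image > threshold).astype(np.uint32)
        codeword = (codeword << 1) | bit
    return codeword
```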
  • dynamic stripe boundary code-based projection or the dynamic Moire code based projection can be implemented.
  • dynamic Fourier transform profilometry may be implemented by 3D scanning unit.
  • periodical signals are generated to carry the frequency domain information including spatial frequency and phase.
  • Inverse Fourier transform of only the fundamental frequency results in a principal phase value ranging from -π to π.
  • the process to remove 2π discontinuities and generate a continuous phase map is spatial or temporal phase unwrapping.
  • the actual 3D shape of the patient anatomy may then be recovered.
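  • A minimal sketch of the wrapped-phase extraction in Fourier transform profilometry follows (illustrative assumptions: fringes varying along the image columns with a known carrier period, and a crude rectangular band-pass around the fundamental frequency):

```python
import numpy as np


def ftp_wrapped_phase(fringe_image, carrier_period_px):
    """Wrapped phase in (-pi, pi] of a fringe image, computed row by row."""
    cols = fringe_image.shape[1]
    spectrum = np.fft.fft(fringe_image.astype(np.float64), axis=1)
    freqs = np.fft.fftfreq(cols)                 # cycles per pixel
    carrier = 1.0 / carrier_period_px
    band = (freqs > 0.5 * carrier) & (freqs < 1.5 * carrier)
    spectrum[:, ~band] = 0.0                     # keep only the fundamental lobe
    analytic = np.fft.ifft(spectrum, axis=1)
    return np.angle(analytic)                    # principal phase value
```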
  • Fourier transform profilometry is less sensitive to the effect of out-of-focus images of patients, making it a suitable technology for intraoperative 3D scanning.
  • π-shifted modified Fourier transform profilometry may be implemented intraoperatively, where a π-shifted pattern is added to enable the 3D scanning.
  • a DC image may be used with Fourier transform profilometry in the 3D scanning unit.
  • the DC-modified Fourier transform profilometry may improve 3D scan quality intraoperatively.
  • N-step phase-shifting Fourier transform profilometry may be implemented intraoperatively. It should be appreciated that the larger the number of steps (N), the higher the 3D scanning accuracy.
  • three-step phase-shifting Fourier transform profilometry may be implemented to enable high speed 3D scanning intraoperatively. It should be appreciated that periodical patterns such as trapezoidal, sinusoidal, or triangular pattern may be used in the Fourier transform profilometry for intraoperative 3D scan.
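  • For the N-step phase-shifting variant, the wrapped phase follows from the standard least-squares formula; a short sketch (assuming the n-th captured image follows I_n = A + B*cos(phi + 2*pi*n/N)):

```python
import numpy as np


def nstep_wrapped_phase(images):
    """Wrapped phase from N equally phase-shifted fringe images.

    With I_n = A + B*cos(phi + 2*pi*n/N), the wrapped phase is
    phi = atan2(-sum_n I_n*sin(2*pi*n/N), sum_n I_n*cos(2*pi*n/N)).
    A larger N averages out more noise, at the cost of more captures.
    """
    n_steps = len(images)
    deltas = 2.0 * np.pi * np.arange(n_steps) / n_steps
    stack = np.stack([np.asarray(img, dtype=np.float64) for img in images])
    numerator = -np.tensordot(np.sin(deltas), stack, axes=1)
    denominator = np.tensordot(np.cos(deltas), stack, axes=1)
    return np.arctan2(numerator, denominator)
```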
  • windowed Fourier transform profilometry may also be implemented by the aforementioned apparatuses and systems.
  • when more than one frequency of periodical signal is used (e.g. dual frequencies), phase unwrapping becomes optional in the intraoperative 3D scan.
  • the dynamic Fourier transform profilometry and modified Fourier transform profilometry discussed herein may improve the quality of 3D scan of the patient. Improved 3D scan may enhance the image registration between intraoperative 3D scan and preoperative images (e.g. MRI and CT), thereby improving the surgical navigation.
  • the aforementioned 3D scanning unit implements Fourier transform profilometry or modified Fourier transform profilometry, in combination with binary codeword projection.
  • the Fourier transform profilometry and binary codeword projection may be implemented sequentially, concurrently, or a combination thereof.
  • the combined approach may improve the 3D scanning accuracy, albeit at the cost of 3D scanning speed.
  • the aforementioned projector may include at least one lens.
  • the lens is configured in such a way that the projected pattern(s) are defocused.
  • the defocusing process by the lens is similar to a convolution of a Gaussian filter on the binary pattern. Consequently, the defocused binary pattern may create periodical patterns that are similar to sinusoidal patterns.
  • dithering techniques are used to generate high-quality periodical fringe patterns by binarizing a higher-bit-depth fringe pattern (e.g. 8 bits), such as a sinusoidal fringe pattern.
  • ordered dithering is implemented; for example, Bayer matrix can be used to enable ordered dithering.
  • error-diffusion dithering is implemented; for instance, Floyd-Steinberg (FS) dithering or minimized average error dithering may be implemented. It should be appreciated that in some cases the dithering techniques may be implemented in combination with defocusing technique to improve the quality of intraoperative 3D scan.
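  • As an illustration of ordered dithering with a Bayer matrix (a sketch under assumed parameters, not the patent's pattern generator), an 8-bit-style sinusoidal fringe can be binarized as follows; after projector defocusing, the binary pattern approximates the original sinusoid:

```python
import numpy as np

# 4x4 Bayer matrix thresholds, normalized to [0, 1).
BAYER_4 = np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) / 16.0


def ordered_dither_fringe(width, height, period_px):
    """Binarize a sinusoidal fringe (varying along x) with Bayer ordered dithering."""
    x = np.arange(width)
    fringe = 0.5 + 0.5 * np.cos(2.0 * np.pi * x / period_px)   # values in [0, 1]
    fringe = np.tile(fringe, (height, 1))
    thresholds = np.tile(BAYER_4, (height // 4 + 1, width // 4 + 1))[:height, :width]
    return (fringe > thresholds).astype(np.uint8)              # binary pattern
```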
  • the aforementioned projector may generate statistical pattern.
  • the projector may generate a pseudo random pattern that includes a plurality of dots. Each position of each corresponding dot included in the pseudo random pattern may be pre-determined by the projector.
  • the projector may project the pseudo random pattern onto the patient or target. Each position of each corresponding dot included in the pseudo random pattern is projected onto a corresponding position on the patient/target.
  • the image sensor may capture a 2D intraoperative image of a plurality of object points associated with the patient/target, to calculate the 3D topography.
  • the controller 310 may associate each object point associated with the patient that is captured by the image sensor with a corresponding dot included in the pseudo random pattern that is projected onto the patient/target by the projector, based on the position of each corresponding dot as pre-determined by the projector.
  • the controller 310 may convert the 2D image to the 3D scan of the patient/target based on the association of each object point to each position of each corresponding dot included in the pseudo random pattern as pre-determined by the projector.
  • the projector may include one or more edge emitting lasers, at least one collimating lens, and at least one diffractive optics element. The edge emitting laser and the diffractive optics element may be controlled by the controller 310 to generate patterns desirable for the specific 3D scanning applications.
  • the near eye 3D display may comprise LCD (liquid crystal) microdisplays, LED (light emitting diode) microdisplays, organic LED (OLED) microdisplays, liquid crystal on silicon (LCOS) microdisplays, retinal scanning displays, virtual retinal displays, optical see through displays, video see through displays, convertible video-optical see through displays, wearable projection displays, and the like.
  • the digital magnification wearable device configuration 800 may further include a light source for surgical field illumination.
  • the light source is based on one or a plurality of light emitting diodes (LEDs).
  • the light source is based on one or a plurality of laser diodes with a waveguide or optical fiber.
  • the light source has a diffuser.
  • the light source includes a noncoherent light source such as an incandescent lamp.
  • the light source includes a coherent light source such as a laser diode and phosphorescent materials in film form or volumetric form.
  • the light source is mounted on a surgical instrument to illuminate the cavity.
  • the image sensors 330(a-b) are a pair of monochrome sensors.
  • the systems further include at least one fluorescence emission filter.
  • the digital magnification surgical loupe configuration may digitally magnify stereoscopic fluorescence images and display to the user in the near-eye 3D display 320(a-b) in 3D.
  • the systems further include a light source that is capable of providing excitation light to the surgical field. It should also be appreciated that the light source may include a laser light; a light emitting diode (LED); an incandescent light; a projector lamp; an arc lamp, such as a xenon, xenon mercury, or metal halide lamp; as well as coherent or incoherent light sources.
  • the light source comprises one or a plurality of white LEDs with a low pass filter (e.g. 775 nm short pass filter) and one or a plurality of near infrared LEDs with a band pass filter (e.g. 830 nm band pass filter).
  • the light source comprises one or a plurality of white LEDs with a low pass filter (e.g. 775 nm short pass filter) and one or a plurality of near infrared LEDs with a long pass filter (e.g. 810 nm long pass filter).
  • the light source can be controlled by sensors such as an inertial measurement unit to turn the light on and off.
  • In another embodiment, the digital magnification wearable device configuration 800 includes at least two color image sensors, at least two monochrome image sensors, at least two beamsplitters, and at least two narrow band filters.
  • the monochrome image sensor, the color sensor and the beamsplitter are optically aligned on each side (left vs right), so that the left color image is aligned with the left monochrome image, and the right color image is aligned with the right monochrome image.
  • the beamsplitters can be cube beamsplitters, plate beamsplitters, Pellicle Beamsplitters, Dichroic Beamsplitters, or polarizing beamsplitters.
  • the optical design can be in a folded configuration using mirrors.
  • the digital magnification wearable device configuration 800 includes a light source with an additional spectral filter.
  • the digital magnification wearable device configuration 800 may be used to capture narrow band reflectance images or fluorescence images, and to digitally magnify the image and display to the user in 3D with desirable binocular overlap.
  • the light source may be a plurality of white LEDs and near infrared LEDs (770nm), and the spectral filter can be a 800nm short pass filter.
  • the apparatus further includes additional sensors, such as an inertial measurement unit (IMU), accelerometers, gyroscopes, magnetometers, proximity sensors, microphone, force sensors, ambient light sensors, etc.
  • the system 300 can be controlled by sensors such as an inertial measurement unit and/or proximity sensor to turn the system 300 on and off.
  • examples of proximity sensors are photoelectric, inductive, capacitive, and ultrasonic sensors.
  • the digital magnification wearable device configuration 800 further includes at least one microphone.
  • the system 300 may record audio data such as dictation.
  • the system 300 captures the audio data using the microphone, performs voice recognition on the controller 310, and enables voice control of the system 300.
  • the voice control may include adjustment of the magnification levels (e.g. from 3X to 5X).
  • if a microphone array or multiple microphones are used, the system may triangulate the source of sound for multiple purposes, such as noise cancellation, voice control of multiple devices in close proximity, etc.
  • the system 300 may differentiate the one user from other users based on the triangulation of voice/audio signal.
  • the digital magnification wearable device configuration 800 further includes tracking hardware, such as optical tracking hardware, electromagnetic tracking hardware, etc.
  • the digital magnification wearable device configuration 800 further includes communication hardware to enable wireless or wired communication, such as Wi-Fi, Bluetooth, cellular communication, Ethernet, LAN, wireless communication protocols compatible with operating rooms, and infrared communication.
  • the apparatus can thus stream the magnification data and/or the original image data captured by the image sensors to another apparatus, computer or mobile devices.
  • the lenses 340(a-b) in the digital magnification wearable device configuration 800 include autofocus lenses.
  • the lenses 340(a-b) in the digital magnification wearable device configuration 800 are autofocus lenses, but the digital magnification wearable device configuration 800 focuses the lenses only on request of the user. For example, upon user request via an input device or via voice control, the lenses will be focused. Thus, the autofocus will not be activated unless demanded by the user, avoiding unwanted autofocus during surgical procedures.
  • the focus setting of the left lens 340b and right lens 340a are always the same.
  • the settings for focusing the left lens 340b and the settings for the right lens 340a are set to be the same, to avoid the left lens focusing on a focal plane different from that of the right lens.
  • the digital magnification wearable device configuration 800 further includes additional input devices, such as a foot pedal, a wired or wireless remote control, one or more buttons, a touch screen, a microphone with voice control, a gesture control device such as Microsoft Kinect, etc.
  • the input device can be reusable or disposable. It should be appreciated that a sterile sheet or wrap may be placed around the input device.
  • the digital magnification wearable device configuration 800 may display medical images such as MRI (magnetic resonance image) image data, computed tomography (CT) image data, positron emission tomography (PET) image data, single-photon emission computed tomography (SPECT), PET/CT, SPECT/CT, PET/MRI, gamma scintigraphy, X-ray radiography, ultrasound, and the like.
  • electronic image stabilization (EIS) may be implemented.
  • the controller 310 shifts the image from frame to frame of the left video captured by the left camera and of the right video captured by the right camera, enough to counteract the motion.
  • EIS uses pixels outside the border of the cropped area during digital magnification to provide a buffer for the motion.
  • optical flow or other image processing methods may be used to track subsequent frames and detect vibrational movements and correct for them.
  • feature-matching image stabilization methods may be used. Image features may be extracted via SIFT, SURF, ORB, BRISK, neural networks, etc.
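One possible realization of feature-matching electronic stabilization, sketched below in Python with OpenCV, estimates the frame-to-frame translation with ORB features and shifts the magnification crop window within the unused border pixels to counteract the motion. It is a simplified sketch under assumed names and parameters, not the system's implementation; in practice the same corrective shift would be applied to the left and right crops so that vertical alignment is preserved.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def stabilizing_shift(prev_gray, curr_gray):
    """Estimate the (dx, dy) translation between consecutive frames via feature matching."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return np.zeros(2)
    matches = matcher.match(des1, des2)
    if len(matches) < 4:
        return np.zeros(2)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    m, _ = cv2.estimateAffinePartial2D(pts2, pts1, method=cv2.RANSAC)
    return m[:, 2] if m is not None else np.zeros(2)

def stabilized_crop(frame, center, crop_w, crop_h, shift):
    """Crop around a shifted center; pixels outside the magnified crop act as the EIS buffer."""
    cx = int(np.clip(center[0] + shift[0], crop_w // 2, frame.shape[1] - crop_w // 2))
    cy = int(np.clip(center[1] + shift[1], crop_h // 2, frame.shape[0] - crop_h // 2))
    return frame[cy - crop_h // 2:cy + crop_h // 2, cx - crop_w // 2:cx + crop_w // 2]
```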
  • optical image stabilization (OIS) is implemented.
  • OIS may be implemented in the lenses 340a and 340b. For instance, using springs and a mechanical mount, image sensor movements are smoothed or cancelled out.
  • the image sensors 330a and 330b can be moved in such a way as to counteract the motion of the camera.
  • mechanical image stabilization (MIS) may be implemented.
  • Gimbals may be used for MIS.
  • MIS is achieved by attaching a gyroscope to the system. The gyroscope lets the external gyro (gimbal) stabilize the image sensors 330a and 330b.
  • the system 300 may need stereoscopic calibration to enable accurate 3D digital magnification.
  • a single calibration may be performed through repeated capture of a calibration pattern, such as fiducials or a chessboard.
  • an initial homography transformation and cropping is applied to the pair of images to achieve a high-accuracy alignment between the two.
  • This is similar to finding the epipolar geometry between the two sensors and bringing the two frames into a single plane through calibration, to have: (1) identical scales of the captured geometry, through a virtually identical focal length, (2) identical peripheral alignment of the captured scene, through undistortion, and (3) identical vertical alignment of the captured frames, through a homography (projective) transformation.
  • the new calibrated frames may be referred to as rectified frames; a calibration and rectification sketch follows below.
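As an illustration of the calibration and rectification described in the preceding items, the sketch below uses OpenCV's standard chessboard-based stereo pipeline to compute rectification maps that equalize focal length, undistort, and row-align (vertically align) the two frames. This is a generic sketch under assumed variable names, not the disclosed system's specific calibration procedure.

```python
import cv2

# objpoints: list of (N, 3) chessboard corner coordinates in the calibration target frame
# imgpoints_l / imgpoints_r: corresponding detected corners in the left/right images
# image_size: (width, height) of the image sensors
def calibrate_and_rectify(objpoints, imgpoints_l, imgpoints_r, image_size):
    _, K1, d1, _, _ = cv2.calibrateCamera(objpoints, imgpoints_l, image_size, None, None)
    _, K2, d2, _, _ = cv2.calibrateCamera(objpoints, imgpoints_r, image_size, None, None)
    _, K1, d1, K2, d2, R, T, _, _ = cv2.stereoCalibrate(
        objpoints, imgpoints_l, imgpoints_r, K1, d1, K2, d2, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    # Rectification brings both frames into a single plane: identical virtual focal length,
    # undistorted periphery, and vertically aligned (row-aligned) epipolar lines.
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, d1, K2, d2, image_size, R, T)
    map1l, map2l = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
    map1r, map2r = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)
    return (map1l, map2l), (map1r, map2r), Q

# At runtime: rectified_left = cv2.remap(raw_left, map1l, map2l, cv2.INTER_LINEAR)
```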
  • ergonomic calibration can be performed on the system 300 using one or a plurality of IMUs, one on the image sensor axis and a second one on the display axis.
  • the headset is horizontally aligned in the center of the forehead (single IMU reading and correction). This is essential to have a symmetrical mechanical position for each image sensor 330a and 330b with respect to each corresponding eye (left sensor 330b to left eye and right sensor 330a to right eye). It also helps maintain binocular overlap between the digitally magnified images captured and overlapped in the center of the two image sensors (by comparing and aligning the two IMUs), and the center of the two eyes as perceived by natural vision around the displays.
  • Autofocus can be achieved through mechanical structure such as motors/actuators or through liquid lenses.
  • the controller 310 may conduct a brightness and contrast assessment to find a high-contrast image, high-frequency values, etc., using a Sobel filter or a similar method that extracts edges and high-frequency features of the left and/or right images.
  • the autofocus lens may test a large range of focus values (coarse focus) to find a coarse focus, and subsequently search a smaller range of focus values (fine focus) in the neighborhood of the coarse focus.
  • the right lens 340a and the left lens 340b may be assigned to the two ends of the focus range and progress towards the middle. Once an optimal focus value is found, both lenses will be assigned the same or a similar value, to avoid the two lenses focusing on different image planes. A sketch of a contrast-based coarse-to-fine autofocus search follows below.
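A minimal contrast-based autofocus sketch consistent with the preceding items is shown below: a Sobel-based sharpness metric, a coarse sweep over the focus range, and a fine search around the best coarse value. The `capture_at` callback, the focus range, and the step sizes are hypothetical assumptions for illustration.

```python
import cv2
import numpy as np

def sharpness(gray):
    """Sobel-based focus metric: mean gradient magnitude (high-frequency content)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(np.hypot(gx, gy)))

def autofocus(capture_at, focus_min=0, focus_max=1023, coarse_step=64, fine_step=4):
    """Coarse-to-fine search; capture_at(focus_value) sets the lens and returns a grayscale frame."""
    coarse = range(focus_min, focus_max + 1, coarse_step)
    best = max(coarse, key=lambda f: sharpness(capture_at(f)))
    fine = range(max(focus_min, best - coarse_step),
                 min(focus_max, best + coarse_step) + 1, fine_step)
    best = max(fine, key=lambda f: sharpness(capture_at(f)))
    return best  # assign the same (or a similar) value to both the left and right lenses
```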
  • the controller 310 may use calibration and a disparity map to find the working distance of the desired object.
  • the controller 310 may use previously calibrated frames to extract a partial or full disparity or depth map.
  • the controller 310 may use a region of interest or a point in a specific part of the image to assess the distance to the desired object or plane of operation (working distance), and use that distance to determine a proper autofocus value from either a distance-dependent equation or a pre-determined look-up table (LUT); a sketch follows below.
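A hedged sketch of that disparity-based approach follows: the median disparity of a region of interest in a rectified pair gives the working distance (Z = f * B / disparity), and a pre-determined look-up table maps that distance to an autofocus value. The matcher parameters and the LUT entries are illustrative assumptions only.

```python
import cv2
import numpy as np

stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=7)

def working_distance_mm(rect_left, rect_right, roi, focal_px, baseline_mm):
    """Median depth of a region of interest in a rectified pair (Z = f * B / disparity)."""
    disp = stereo.compute(rect_left, rect_right).astype(np.float32) / 16.0
    x, y, w, h = roi
    d = disp[y:y + h, x:x + w]
    d = d[d > 0]
    return float(focal_px * baseline_mm / np.median(d)) if d.size else None

# Hypothetical pre-determined LUT: working distance (mm) -> lens focus value.
FOCUS_LUT = {250: 180, 300: 240, 350: 290, 400: 330, 450: 365, 500: 395}

def focus_from_distance(distance_mm):
    """Pick the focus value calibrated for the nearest working distance."""
    nearest = min(FOCUS_LUT, key=lambda d: abs(d - distance_mm))
    return FOCUS_LUT[nearest]
```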
  • the binocular overlap may be defined as a variable of working distance and magnification level.
  • the controller 310 can define the proper value of binocular overlap between the binocular views to achieve proper 3D visualization, from either a distance-dependent equation or a pre-determined look-up table (LUT), after determining the distance to the point of interest or the average working distance of the region of interest.
  • distance can be inferred using calibration and a disparity map, by using previously calibrated frames to extract a partial or full disparity or depth map (the two are related but different numerical values).
  • the controller 310 may use a region of interest or a point in a specific part of the image to extract the distance to the desired object or plane of operation (working distance). In another instance, the controller 310 may use the autofocus values of the left and/or right autofocus lenses to infer the working distance. A sketch of a distance- and magnification-dependent overlap look-up follows below.
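The sketch below illustrates one way such a look-up could work: the target binocular overlap is read from a table keyed by working distance and magnification level, and the horizontal centers of the left and right crop windows are then separated accordingly while the rows are left untouched (preserving vertical alignment). The table values and the simple separation formula are assumptions for illustration, not the disclosed equations.

```python
# Hypothetical LUT: (working distance in mm, magnification level) -> target binocular overlap.
OVERLAP_LUT = {
    (300, 2): 0.80, (300, 4): 0.75,
    (400, 2): 0.85, (400, 4): 0.80,
    (500, 2): 0.90, (500, 4): 0.85,
}

def target_overlap(distance_mm, magnification):
    """Nearest-neighbour look-up of the desired binocular overlap."""
    key = min(OVERLAP_LUT, key=lambda k: (abs(k[0] - distance_mm), abs(k[1] - magnification)))
    return OVERLAP_LUT[key]

def crop_centers(sensor_width, crop_width, disparity_px, overlap):
    """Horizontal crop centers for the left and right images.

    The two crop windows are offset (on top of the stereo disparity) so that the cropped
    views share roughly the requested fraction of the scene; only columns move, so an
    identical row range keeps the two views vertically aligned.
    """
    separation = int(round((1.0 - overlap) * crop_width))
    center = sensor_width // 2
    left_cx = center + (disparity_px + separation) // 2
    right_cx = center - (disparity_px + separation) // 2
    return left_cx, right_cx
```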
  • the controller 310 comprises the hardware and software necessary to implement the aforementioned methods.
  • the controller 310 involves a computer- readable medium comprising processor-executable instructions configured to implement one or more of the techniques presented herein.
  • An example embodiment of a computer-readable medium or a computer-readable device comprises a computer-readable medium, such as a SSD, CD-R, DVD-R, flash drive, a platter of a hard disk drive, etc., on which is encoded computer-readable data.
  • This computer-readable data such as binary data comprising at least one of a zero or a one, in turn comprises a set of computer instructions configured to operate according to one or more of the principles set forth herein.
  • the set of computer instructions are configured to perform a method, such as at least some of the exemplary methods described herein, for example.
  • the set of computer instructions are configured to implement a system, such as at least some of the exemplary systems described herein, for example.
  • Many such computer-readable media are devised by those of ordinary skill in the art that are configured to operate in accordance with the techniques presented herein.
  • Example computing devices include, but are not limited to, personal computers that may comprise a graphics processing unit (GPU), server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, minicomputers, mainframe computers, a microcontroller, a Field Programmable Gate Array (FPGA), an application-specific integrated circuit (ASIC), distributed computing environments that include any of the above systems or devices, and the like.
  • the controller may use a heterogeneous computing configuration.
  • Computer readable instructions may be executed by one or more computing devices.
  • Computer readable instructions may be distributed via computer readable media.
  • Computer readable instructions may be implemented as program components, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types.
  • data structures such as lists, and the like.
  • the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
  • a system comprises a computing device configured to implement one or more embodiments provided herein.
  • the computing device includes at least one processing unit and one memory unit.
  • the memory unit may be volatile (such as RAM, for example), non-volatile (such as ROM, flash memory, etc., for example) or some combination of the two.
  • the computing device may include additional features and/or functionality.
  • the computing device may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, cloud storage, magnetic storage, optical storage, and the like.
  • computer readable instructions to implement one or more embodiments provided herein may be in the storage.
  • the storage may also store other computer readable instructions to implement an operating system, an application program, and the like. Computer readable instructions may be loaded in the memory for execution by the processing unit, for example.
  • Computer storage media includes volatile and nonvolatile, removable and non removable media implemented in any method or technology for storage of information such as computer readable instructions or other data.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device.
  • the computing device may also include communication connection(s) that allows the computing device to communicate with other devices.
  • Communication connection(s) may include, but is not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting the computing device to other computing devices.
  • Communication connection(s) may include a wired connection or a wireless connection.
  • Communication connection(s) may transmit and/or receive communication media.
  • the computing device may include input device(s) such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, depth cameras, touchscreens, video input devices, and/or any other input device.
  • Output device(s) such as one or more displays, speakers, printers, and/or any other output device may also be included in the computing device.
  • Input device(s) and output device(s) may be connected to the computing device via a wired connection, wireless connection, or any combination thereof.
  • an input device or an output device from another computing device may be used as input device(s) or output device(s) for computing device.
  • Components of computing device 6712 may be connected by various interconnects, such as a bus.
  • interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus (USB), firewire (IEEE 1394), an optical bus structure, and the like.
  • components of computing device may be interconnected by a network.
  • the memory may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
  • storage devices utilized to store computer readable instructions may be distributed across a network.
  • a computing device accessible via a network may store computer readable instructions to implement one or more embodiments provided herein.
  • Computing device may access another computing device and download a part or all of the computer readable instructions for execution.
  • the first computing device may download pieces of the computer readable instructions, as needed, or some instructions may be executed at the first computing device and some at the second computing device.
  • one or more of the operations described may constitute computer readable instructions stored on one or more computer readable media, which if executed by a computing device, will cause the computing device to perform the operations described.
  • the order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

A system for generating three-dimensional (3D) images from captured images of a target when executing digital magnification. A controller executes a digital magnification on the first image of the target captured by the first image sensor and on the second image of the target captured by the second image sensor. The controller crops the first image and the second image to overlap a first portion of the target captured by the first image sensor with a second portion of the target captured by the second image sensor. The controller adjusts the cropping of the first image and the second image to provide binocular overlap of the first portion of the target with the second portion of the target. The displayed cropped first image and cropped second image display the 3D image at the digital magnification to the user.

Description

GENERATION OF THREE-DIMENSIONAL IMAGES WITH DIGITAL
MAGNIFICATION
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional Application No. 63/029,831 filed on May 26, 2020, which is incorporated herein by reference in its entirety.
BACKGROUND
Field of Disclosure
[0002] The present disclosure relates to the generation of the Three-Dimensional (3D) images and specifically to the generation of 3D images from the digital magnification of images captured of a target.
Related Art
[0003] Conventionally, surgical loupes have been used extensively in various types of surgeries. Surgical loupes are a pair of optical magnifiers that magnify the surgical field and provide magnified stereoscopic vision. However, conventional surgical loupes have significant limitations. For example, a single set of conventional surgical loupes only offers a fixed level of magnification, such as 2X, without any capability to vary that magnification. Therefore, surgeons typically require several pairs of surgical loupes, with each pair having a different level of magnification, to accommodate different levels of magnification. Changing surgical loupes in the operating room is inconvenient, and there is an increased cost to have several sets of surgical loupes with different magnifications customized for a single surgeon.
[0004] However, equipping conventional surgical loupes with magnifying lenses typically entails an increased length, resulting in an increased form factor and increased weight, thereby limiting the magnification level. The increased form factor and increased weight also limit the duration of surgical procedures that the surgeon may execute. Further, conventional surgical loupes implement a non-imaging configuration, whereby the magnification lenses magnify and form a pair of virtual images, thereby decreasing the working distances and depths of focus for the surgeon. Therefore, the surgeon has to restrict the position of their head and neck to a specific position as they use the conventional surgical loupes. This results in neck pain and cervical diseases for surgeons with long-term use of conventional surgical loupes. [0005] Rather than simply having surgical loupes use non-imaging configurations, conventional imaging configurations in the non-surgical space include stereo imaging systems and imaging systems with zoom lenses, where such conventional imaging configurations generate 3D images while enabling the adjustment of magnification. However, the incorporation of such conventional imaging configurations in the surgical space requires the implementation of two displays and/or zoom lenses for the surgeon. The two stereo displays included in such conventional stereo imaging systems must be mechanically adjusted for each magnification level as well as calibrated. Such mechanical adjustment and calibration in the surgical space is not feasible. Changing the magnification for two conventional zoom lenses requires each image at each magnification level to always be captured at the center of the initial image, where each level of magnification continues to capture the center of the initial image. The resulting 3D image displayed to the surgeon is significantly skewed, thereby preventing the incorporation of conventional zoom lenses into the surgical space.
BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES
[0006] Embodiments of the present disclosure are described with reference to the accompanying drawings. In the drawings, like reference numerals indicate identical or functionally similar elements. Additionally, the left most digit(s) of a reference number typically identifies the drawing in which the reference number first appears.
[0007] FIG. 1A illustrates a schematic view of binocular overlap of human eyes configuration where the region seen by both eyes is the overlapped region included in the scene seen by both eyes;
[0008] FIG. 1B illustrates a block diagram of a two imaging sensor configuration where two image sensors with two lenses are used in a side-by-side configuration;
[0009] FIG. 1C illustrates a block diagram of a binocular overlap of two imaging sensor configuration where the region seen by both imaging sensors is the overlapped region;
[0010] FIG. 2 depicts a schematic view of a conventional digital zoom configuration where the original image is cropped and resized (from left to right);
[0011] FIG. 3 illustrates a block diagram of a digital magnification of a 3D image system that may generate 3D images when executing digital magnification on captured images of a target;
[0012] FIG. 4 depicts a schematic view of a conventional digital zoom configuration where the zoomed left images and zoomed right images are misaligned leading to poor 3D vision and depth perception;
[0013] FIG. 5 depicts a schematic diagram of a digital magnification with binocular vertical alignment preservation configuration where the magnified left images and the magnified right images are vertically aligned thereby resulting in increased 3D visualization;
[0014] FIG. 6 depicts a schematic view of a digitally magnified stereo images with preservation of vertical alignment configuration where, as the digital magnification is applied, binocular overlap between the cropped left images and the cropped right images gradually decreases;
[0015] FIG. 7 depicts a schematic view of a preservation of binocular overlap and binocular vertical alignment configuration where, at 2.3X, 5.3X, and 12X, respectively, the left cropped images and the right cropped images have a binocular overlap of 75% and vertical alignment, thereby resulting in an increased 3D visualization experience and depth perception provided to the user; and
[0016] FIG. 8 depicts a schematic view of a physical embodiment of a digital magnification surgical loupe configuration.
DETAILED DESCRIPTION OF THE PRESENT DISCLOSURE
[0017] The following Detailed Description refers to accompanying drawings to illustrate exemplary embodiments consistent with the present disclosure. References in the Detailed Description to “one exemplary embodiment,” an “exemplary embodiment,” an “example exemplary embodiment,” etc., indicate the exemplary embodiment described may include a particular feature, structure, or characteristic, but every exemplary embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same exemplary embodiment. Further, when a particular feature, structure, or characteristic may be described in connection with an exemplary embodiment, it is within the knowledge of those skilled in the art(s) to effect such feature, structure, or characteristic in connection with other exemplary embodiments whether or not explicitly described.
[0018] The exemplary embodiments described herein are provided for illustrative purposes, and are not limiting. Other exemplary embodiments are possible, and modifications may be made to the exemplary embodiments within the spirit and scope of the present disclosure. Therefore, the Detailed Description is not meant to limit the present disclosure. Rather, the scope of the present disclosure is defined only in accordance with the following claims and their equivalents.
[0019] Embodiments of the present disclosure may be implemented in hardware, firmware, software, or any combination thereof. Embodiments of the present disclosure may also be implemented as instructions applied by a machine-readable medium, which may be read and executed by one or more processors. A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computing device). For example, a machine-readable medium may include read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, electrical optical, acoustical or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.), and others. Further firmware, software routines, and instructions may be described herein as performing certain actions. However, it should be appreciated that such descriptions are merely for convenience and that such actions in fact result from computing devices, processors, controllers, or other devices executing the firmware, software, routines, instructions, etc.
[0020] For purposes of this discussion, each of the various components discussed may be considered a module, and the term “module” shall be understood to include at least one software, firmware, and hardware (such as one or more circuit, microchip, or device, or any combination thereof), and any combination thereof. In addition, it will be understood that each module may include one, or more than one, component within an actual device, and each component that forms a part of the described module may function either cooperatively or independently from any other component forming a part of the module. Conversely, multiple modules described herein may represent a single component within an actual device. Further, components within a module may be in a single device or distributed among multiple devices in a wired or wireless manner.
[0021] The following Detailed Description of the exemplary embodiments will so fully reveal the general nature of the present disclosure that others can, by applying knowledge of those skilled in the relevant art(s), readily modify and/or adapt such exemplary embodiments for various applications, without undue experimentation, without departing from the spirit and scope of the present disclosure. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the exemplary embodiments based upon the teaching and guidance presented herein. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by those skilled in the relevant art(s) in light of the teachings herein.
SYSTEM OVERVIEW
[0022] FIG. 1A illustrates a schematic view of binocular overlap of human eyes configuration 100 where the region seen by both eyes is the overlapped region included in the scene seen by both eyes. The binocular overlap of human eyes configuration 100 includes a right eye 110a, a left eye 110b, an image as seen by right eye 120a, an image as seen by left eye 120b, and a binocular overlap 120c as seen by both eyes.
[0023] The present invention describes the apparatus, systems, and methods for constructing augmented reality devices for medical and dental magnification. One of the key concepts in 3D imaging and visualization is binocular overlap 120c. Binocular overlap 120c describes the overlap between the image as seen by the left eye 120b and the image as seen by the right eye 120a. For human beings, the binocular overlap 120c is approximately 70%.
[0024] FIG. 1B illustrates a block diagram of a two imaging sensor configuration 150 where two image sensors with two lenses are used in a side-by-side configuration. The two imaging sensor configuration 150 includes a right image sensor 130a, a left image sensor 130b, a right lens 140a, and a left lens 140b. FIG. 1C illustrates a block diagram of a binocular overlap of two imaging sensor configuration 175 where the region seen by both imaging sensors is the overlapped region. The binocular overlap of two imaging sensor configuration 175 includes a captured region by the right image sensor 150a, a captured region by the left image sensor 150b, and a binocular overlap region 150c. FIG. 1C depicts the binocular overlap region 150c that is generated when a right image sensor 130a and a left image sensor 130b are used in a side-by-side configuration as depicted in FIG. 1B.
[0025] FIG. 2 depicts a schematic view of a conventional digital zoom configuration 200 where the original image is cropped and resized (from left to right). The cropped and resized images are displayed to the user after conventional digital zooming. Conventionally, digital zoom has been commonly used to zoom the image. The principle of conventional digital zoom is illustrated in FIG. 2. Although conventional digital zoom can magnify the images without the need of zoom lenses, it is not suitable for 3D magnification.
[0026] FIG. 3 illustrates a block diagram of a digital magnification of a 3D image system
300 that may generate 3D images when executing digital magnification on captured images of a target. The digital magnification of a 3D image system 300 includes a right lens 340a, a left lens 340b, a right image sensor 330a, a left image sensor 330b, a controller 310, a near-eye 3D display 320, and an eyeglass frame 350. In one embodiment, the eyeglass frame 350 is a head mount. In another embodiment, the eyeglass frame 350 is a traditional eyeglass frame sitting on the nose and ears of a user.
[0027] The digital magnification of a 3D image system 300 may generate 3D images from captured images of a target when executing digital magnification on the captured images to maintain the 3D images generated of the target after digital magnification. A first image sensor (such as right image sensor 330a) may capture a first image at an original size of the target. A second image sensor (such as left image sensor 330b) may be positioned on a common x-axis with the first image sensor 330a to capture a second image at the original size of the target. It should be appreciated that the first image sensor 330a and the second image sensor 330b may be positioned with either a converging angle or a diverging angle.
[0028] A controller 310 may execute a digital magnification on the first image captured by the first image sensor 330a at the original size of the target and on the second image captured by the second image sensor 330b at the original size of the target. The controller 310 may crop the first image captured by the first image sensor 330a and the second image captured by the second image sensor 330b to overlap a first portion of the target captured by the first image sensor 330a with a second portion of the target captured by the second image sensor 330b. The first portion of the target captured by the first image sensor 330a overlaps with the second portion of the target captured by the second image sensor 330b. In one aspect, the first image sensor 330a is further coupled with a first autofocus lens and the second image sensor 330b is further coupled with a second autofocus lens. The autofocus lenses may enable autofocus.
[0029] The controller 310 may adjust the cropping of the first image and the second image to provide binocular overlap of the first portion of the target with the second portion of the target. The binocular overlap of the first image and the second image is an overlap threshold that when satisfied results in a 3D image of the target displayed to a user after the digital magnification is executed. The controller may instruct a display (such as near-eye 3D display 320) to display the cropped first image and the cropped second image that includes the binocular overlap to the user. The displayed cropped first image and the cropped second image display the 3D image at the digital magnification to the user.
[0030] The controller 310 may resize the cropped first image to the original size of the first image captured by the first image sensor 330a and the cropped second image to the original size of the second image captured by the second image sensor 330b. The cropped first image as resized and the cropped second image as resized include the binocular overlap of the first image and the second image. The controller 310 may instruct the near-eye 3D display 320 to display the resized and cropped first image and the resized and cropped second image that includes the binocular overlap to the user. The displayed resized and cropped first image and the resized and cropped second image display the 3D image at the digital magnification to the user. It should be appreciated that in one embodiment the controller 310 may crop the first image captured by the first image sensor 330a to generate both the left cropped image and the right cropped image. In this embodiment, the second image captured by the second image sensor 330b is not used.
[0031] In one aspect, the display 320 is a near-eye display. In one embodiment, the display
320 is a 2D display. In another embodiment, the display 320 is a 3D display. It should be further appreciated that the near-eye display 320 may comprise LCD (liquid crystal) microdisplays, LED (light emitting diode) microdisplays, organic LED (OLED) microdisplays, liquid crystal on silicon (LCOS) microdisplays, retinal scanning displays, virtual retinal displays, optical see-through displays, video see-through displays, convertible video-optical see-through displays, wearable projection displays, projection displays, and the like. It should be appreciated that the display 320 may be stereoscopic to enable displaying of 3D content. In another embodiment, the display 320 is a projection display. It should be appreciated that the display 320 may be a monitor placed near the user.
[0032] It should be further appreciated that the display 320 may be a 3D monitor placed near the user, and the user will wear polarizing glasses or active shutter glasses. It should be further appreciated that the display 320 may be a half transparent mirror placed near the user to reflect the image projected by a projector. It should further be appreciated that the said projector may be 2D or 3D. It should be further appreciated that the said projector may be used with the user wearing polarizing glasses or active shutter glasses. In one embodiment, the display 320 is a flat panel 2D monitor or TV. In another embodiment, the display 320 is a flat panel 3D monitor or 3D TV. The 3D monitor/TV may need to work with passive polarizers or active shutter glasses. In one aspect, the 3D monitor/TV is glass-free. It should be appreciated that the display 320 can be a touchscreen, or a projector. In one example, the display 320 comprises a half transparent mirror that can reflect projection of images to the eyes of the user. The images being projected may be 3D, and the user may wear 3D glasses (e.g. polarizer; active shutter 3D glasses) to visualize the 3D image data reflected by the half transparent mirror. The half transparent mirror may be placed on top of the surgical field to allow the user to see through the half transparent mirror to visualize the surgical field.
[0033] It should be appreciated that the binocular overlap of the system may be set as high as 100% or as low as 0%, depending on the specific application. In one aspect, the binocular overlap is set to be within the range of 60% and 100%. In another aspect, the binocular overlap is dynamic and not static. [0034] In one aspect, the digital magnification of a 3D image system 300 may further comprise additional sensors or components. In one embodiment, the system 300 further comprises a microphone, which may enable audio recording and/or communication. In one embodiment, the system 300 further comprises a proximity sensor, which may sense if the user is wearing the system. In another embodiment, the system 300 further comprises an inertial measurement unit (IMU), an accelerometer, a gyroscope, a magnetometer, or a combination thereof. In one embodiment, the system 300 further comprises a loudspeaker or earphone, which may enable audio replay or communication.
[0035] It should be further appreciated that the system can be applied to a variety of applications, including but not limited to surgical, medical, veterinary, military, tactical, educational, industrial, consumer, and jewelry fields.
DIGITAL MAGNIFICATION WITH BINOCULAR VERTICAL ALIGNMENT
[0036] FIG. 4 depicts a schematic view of a conventional digital zoom configuration 400 where the zoomed left images and zoomed right images are misaligned, leading to poor 3D vision and depth perception. The conventional digital zoom configuration 400 includes the zoomed right images 410a that are misaligned with the zoomed left images 410b. Conventional digital zoom does not work well for magnifying stereo-images for 3D display. FIG. 4 shows an example of direct application of conventional digital zoom to stereo-images. Conventional digital zoom is not suitable for magnifying 3D stereo-images, as it introduces binocular vertical misalignment.
[0037] The controller 310 may crop the first image captured by the first image sensor 330a and the second image captured by the second image sensor 330b to vertically align the overlap of the first portion of the target with the second portion of the target. The cropped first image is in vertical alignment of the cropped second image when each vertical coordinate of the cropped first image is aligned with each corresponding vertical coordinate of the cropped second image. The controller 310 may adjust the cropping of the first image and the second image to provide binocular overlap of the first portion of the target with the second portion of the target. The binocular overlap of the first image and the second image is vertically aligned to satisfy the overlap threshold to generate the 3D image of the target displayed to the user after the digital magnification is executed.
[0038] The present invention discloses a digital magnification method that also ensures binocular vertical alignment. In one embodiment, the left image is captured by the left image sensor 330b and cropped by the controller 310, and the right image is captured by the right image sensor 330a and cropped by the controller 310, while the cropping of the left and right images preserves vertical alignment. The left and right images are cropped in such a way that the vertical coordinates of the cropped left image and the vertical coordinates of the cropped right image are aligned.
[0039] In an embodiment, the left image sensor 330b with the left lens 340b that are worn by the user may capture a left image. The right image sensor 330a with the right lens 340a that are worn by the user may capture a right image. The left image and the right image may be provided to the controller 310. The controller 310 may crop the left image to generate a cropped left image. The controller 310 may crop the right image to generate a cropped right image and may preserve the vertical alignment of the cropped right image with respect to the cropped left image. The controller 310 may resize the cropped left image to generate a cropped and resized left image. The controller 310 may resize the cropped right image to generate a cropped and resized right image. The near-eye 3D display 320 worn by the user may display the cropped and resized left image to the left eye of the user. The near-eye 3D display 320 worn by the user may display the cropped and resized right image to the right eye of the user. It should be appreciated that the controller can be a microcontroller, a computer, a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), or a combination thereof.
[0040] In one embodiment, the left image sensor and right image sensor are identical image sensors. The image sensors may use the same type of image lenses. The left and right image sensors may be placed and calibrated so that the left image captured and the right image captured are vertically aligned, prior to any digital magnification process. The digital magnification process preserves the vertical alignment. For example, assume the left image and right image each have 800 (horizontal, column) by 600 (vertical, row) pixels. After digital magnification, rows 201 to 400 of the left image are used to generate a cropped left image, and rows 201 to 400 of the right image are used to generate a cropped right image. Therefore, the vertical alignment is preserved.
[0041] In one embodiment, the left image sensor and right image sensor are not identical image sensors. In this case, the left image captured and the right image captured are first calibrated and aligned vertically, prior to any digital magnification process. For example, assume the left image captured by the left image sensor has 800 (horizontal, column) by 600 (vertical, row) pixels, but the right image captured by the right image sensor has 400 (horizontal) by 300 (vertical) pixels. The left image and right image are first vertically aligned. For instance, rows 0, 200, 400, and 600 of the left image may correspond to rows 0, 100, 200, and 300 of the right image, respectively. After digital magnification, a subset of rows 200 to 400 of the left image and a subset of rows 100 to 200 of the right image are used. Therefore, the vertical alignment is preserved. A sketch of such alignment-preserving cropping is shown below.
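The following Python/OpenCV sketch mirrors the numeric examples above: both crops use the same row range (preserving vertical alignment) and are then resized back to each sensor's original resolution. The function name and the commented example values are illustrative assumptions.

```python
import cv2

def magnify_preserving_vertical_alignment(left, right, rows, cols_left, cols_right):
    """Crop the same rows from both images, then resize each crop to its original size."""
    r0, r1 = rows
    crop_l = left[r0:r1, cols_left[0]:cols_left[1]]
    crop_r = right[r0:r1, cols_right[0]:cols_right[1]]
    size_l = (left.shape[1], left.shape[0])    # (width, height)
    size_r = (right.shape[1], right.shape[0])
    return (cv2.resize(crop_l, size_l, interpolation=cv2.INTER_LINEAR),
            cv2.resize(crop_r, size_r, interpolation=cv2.INTER_LINEAR))

# Illustrative call for two 800x600 sensors, cropping rows 201-400 (0-indexed slice 200:400)
# and an aspect-preserving column window from each image:
# mag_l, mag_r = magnify_preserving_vertical_alignment(left, right, (200, 400), (267, 533), (267, 533))
```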
[0042] FIG. 5 depicts a schematic diagram of a digital magnification with binocular vertical alignment preservation configuration 500 where the magnified left images and the magnified right images are vertically aligned, thereby resulting in increased 3D visualization. The digital magnification with binocular vertical alignment preservation configuration 500 includes the zoomed right images 510b that are vertically aligned with the zoomed left images 510a, thereby resulting in increased 3D visualization.
DIGITAL MAGNIFICATION WITH PRESERVATION OF BINOCULAR OVERLAP
[0043] FIG. 6 depicts a schematic view of a digitally magnified stereo images with preservation of vertical alignment configuration 600 where, as the digital magnification is applied, binocular overlap between the cropped left images and cropped right images gradually decreases. The digitally magnified stereo images with preservation of vertical alignment configuration 600 includes digitally magnified right images 610a that are vertically aligned with the digitally magnified left images 610b. For example, at a 2.3X magnification, the binocular overlap decreases from 75% to 50%, resulting in a decrease in 3D visualization. At a 5.3X magnification, the binocular overlap decreases from 75% to 0%. Vertical alignment preservation without the preservation of binocular overlap may result in the gradual decrease in binocular overlap with each digital magnification.
[0044] After executing a first digital magnification at a first digital magnification level on the first image captured by the first image sensor 330b and on the second image captured by the second image sensor 330a, the controller 310 may maintain the binocular overlap generated by adjusting the cropping of the first image and the second image to satisfy the overlap threshold. In one aspect, during the digital magnification process a fixed binocular overlap number is maintained, such as 80%, 90% or 100%. In another aspect, during the digital magnification process a range of binocular overlap number is maintained, such as 60% - 90%.
[0045] The controller 310 may execute a second digital magnification at a second digital magnification level on the first image captured by the first image sensor 330a and the second image captured by the second image sensor 330b. The second digital magnification level is increased from the first digital magnification level. The controller 310 may maintain the binocular overlap generated after executing the first digital magnification at the first digital magnification level on the first image and the second image when executing the second digital magnification at the second digital magnification level.
[0046] After executing each previous digital magnification at each previous digital magnification level on the first image and the second image, the controller 310 may maintain the binocular overlap and the vertical alignment determined when executing the first digital magnification at the first digital magnification level on the first image and the second image. The controller 310 may continue to maintain the binocular overlap and the vertical alignment determined from the adjusting of the cropping of the first image and the second image to satisfy the overlap threshold after executing the first digital magnification at the first digital magnification level on the first image and the second image for each subsequent digital magnification level. Each subsequent digital magnification level is increased from each previous digital magnification level. For example, the overlap threshold may be satisfied when the binocular overlap includes 75% overlap of the first image and the second image is maintained for each subsequent digital magnification at each subsequent digital magnification level. In one embodiment, each subsequent digital magnification from the previous magnification level (e.g. increase from lx to 2x, and increase 2x to 4x ) may be a recursive function.
[0047] The controller 310 may execute the first digital magnification at the first digital magnification level on a non-concentric portion of the first image and a non-concentric portion of the second image. The non-concentric portion of the first image and the second image is a portion of the first image and the second image that differs from a center of the first image and the second image. The controller 310 may adjust the cropping of the first image and the second image to provide binocular overlap of the non-concentric portion of the first image and the non-concentric portion of the second image. The binocular overlap of the non-concentric portion of the first image and the non-concentric portion of the second image satisfies the overlap threshold, either specified as a fixed number or a range. The controller 310 may continue to crop a non-concentric portion of the first image and a non-concentric portion of the second image for each subsequent digital magnification at each subsequent digital magnification level. The binocular overlap of the non-concentric portion of the first image and the non-concentric portion of the second image is maintained from the first digital magnification at the first digital magnification level.
[0048] The non-concentric portion of the first image and the non-concentric portion of the second image may be resized to display to the user. In one aspect, at each magnification level a first center of cropping of the non-concentric portion of the first image and a second center of cropping of the non-concentric portion of the second image are determined by the system 300. In one embodiment, the first center of cropping is fixed at the particular part of the first image, and second center of cropping at each magnification level is determined based on the location of the corresponding first center of cropping and the targeted binocular overlap. It should be appreciated that in some embodiment and at one or more magnification level, the digital magnification on either left image or right image may be concentric. For example, digital magnification on the left image is concentric but the digital magnification on the right image is non-concentric to maintain the binocular overlap.
[0049] In one embodiment, the left image sensor and right image sensor are identical image sensors. The image sensors may use the same type of image lenses, including autofocus lenses. The left and right image sensors may be placed and calibrated so that the left image captured and the right image captured are vertically aligned, prior to any digital magnification process. The digital magnification process preserves the vertical alignment and the binocular overlap (e.g. 80%). For example, assume the left image and right image each have 800 (horizontal, column) by 600 (vertical, row) pixels. After digital magnification, the pixels from rows 201 to 400 and columns 401 to 600 of the left image are used to generate a cropped left image, and rows 201 to 400 and columns 201 to 400 of the right image are used to generate a cropped right image. This cropping may generate a satisfactory binocular overlap (e.g. 80%). The non-concentric cropping in the digital magnification, combined with resizing, may enable magnification while preserving both binocular overlap and vertical alignment. Similarly, when the system increases to a higher digital magnification level, further non-concentric cropping on at least one of the images (e.g. the left or right image) is performed in conjunction with resizing to enable magnification while preserving both binocular overlap and vertical alignment. A sketch of such non-concentric cropping follows below.
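A corresponding sketch of the non-concentric case follows: the row range is identical for both images while the column windows are shifted relative to one another, which maintains the binocular overlap (e.g. 80%) and the vertical alignment described above. The function name is an assumption; the commented values follow the 800x600 example in the preceding paragraph.

```python
import cv2

def non_concentric_magnify(left, right, rows, cols_left, cols_right):
    """Crop the same rows but different (non-concentric) columns, then resize to original size."""
    r0, r1 = rows
    crop_l = left[r0:r1, cols_left[0]:cols_left[1]]
    crop_r = right[r0:r1, cols_right[0]:cols_right[1]]
    size_l = (left.shape[1], left.shape[0])
    size_r = (right.shape[1], right.shape[0])
    return (cv2.resize(crop_l, size_l, interpolation=cv2.INTER_LINEAR),
            cv2.resize(crop_r, size_r, interpolation=cv2.INTER_LINEAR))

# Rows 201-400 with columns 401-600 (left) and 201-400 (right), as 0-indexed slices; the
# identical row range preserves vertical alignment while the shifted column windows are
# chosen to maintain the desired binocular overlap:
# mag_l, mag_r = non_concentric_magnify(left, right, (200, 400), (400, 600), (200, 400))
```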
[0050] In another example, machine learning algorithms are used for determining a center of cropping for the left image, or a center of cropping for the right image, or both centers, during the digital magnification process. In one aspect, object recognition and localization based on machine learning (e.g. recognize surgical field, or recognize surgical instrument, or recognize tissues, etc.) may determine at least one center of the cropping. For example, the surgical bed is recognized and localized based on the left image, and a location within the surgical bed (e.g. centroid) is assigned to be the center of cropping for the left image, and the center of cropping for the right image is calculated based on the center of cropping for the left image and the desirable binocular overlap to be maintained. [0051] . In one aspect, supervised learning can be implemented. In another aspect, unsupervised learning can be implemented. In yet another aspect, reinforcement learning can be implemented. It should be appreciated that feature learning, sparse dictionary learning, anomaly detection, association rules may also be implemented. Various models may be implemented for machine learning. In one aspect, artificial neural networks are used. In another aspect, decision trees are used. In yet another aspect, support vector machines are used. In yet another aspect, Bayesian networks are used. In yet another aspect, genetic algorithms are used.
[0052] In yet another example, neural networks, convolutional neural networks, or deep learning are used for object recognition, image classification, object localization, image segmentation, image registration, or a combination thereof. Neural network based systems are advantageous in many cases for image segmentation, recognition and registration tasks.
[0053] In one example, U-Net is used, which has a contraction path and expansion path.
The contraction path has consecutive convolutional layers and max-pooling layers. The expansion path performs up-conversion and may have convolutional layers. The convolutional layer(s) prior to the output map the feature vector to the required number of target classes in the final segmentation output. In one example, V-Net is implemented for image segmentation to isolate the organ or tissue of interest (e.g. vertebral bodies). In one example, an autoencoder-based deep learning architecture is used for image segmentation to isolate the organ or tissue of interest. In one example, backpropagation is used for training the neural networks. A minimal sketch of a U-Net-style network follows below.
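For context, a minimal U-Net-style sketch in Python (PyTorch) is given below: one contraction stage (convolutions plus max-pooling), one expansion stage (up-conversion plus convolutions with a skip connection), and a final 1x1 convolution mapping the features to the target classes. It is a toy illustration under assumed channel sizes, not the specific architecture used by the system.

```python
import torch
import torch.nn as nn

def double_conv(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Minimal U-Net-style network with a single contraction/expansion level."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc1 = double_conv(in_ch, 16)
        self.enc2 = double_conv(16, 32)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = double_conv(32, 16)            # 16 skip channels + 16 upsampled channels
        self.out = nn.Conv2d(16, n_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                           # contraction path
        e2 = self.enc2(self.pool(e1))
        d1 = self.up(e2)                            # expansion path (up-conversion)
        d1 = self.dec1(torch.cat([d1, e1], dim=1))  # skip connection
        return self.out(d1)                         # per-pixel class scores for segmentation

# logits = TinyUNet()(torch.randn(1, 1, 256, 256))  # -> shape (1, 2, 256, 256)
```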
[0054] In yet another example, deep residual learning is performed for image recognition or image segmentation, or image registration. A residual learning framework is utilized to ease the training of networks. A plurality of layers is implemented as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. One example of network that performs deep residual learning is deep Residual Network or ResNet.
[0055] In another embodiment, a Generative Adversarial Network (GAN) is used for image recognition, image segmentation, or image registration. In one example, the GAN performs image segmentation to isolate the organ or tissue of interest. In the GAN, a generator is implemented through a neural network to model a transform function which takes in a random variable as input and follows the targeted distribution when trained. A discriminator is implemented through another neural network simultaneously to distinguish between generated data and true data. In one example, the first network tries to maximize the final classification error between generated data and true data while the second network attempts to minimize the same error. Both networks may improve after iterations of the training process. [0056] In yet another example, ensemble methods are used, wherein multiple learning algorithms are used to obtain better predictive performance. In one aspect, a Bayes optimal classifier is used. In another aspect, bootstrap aggregating is used. In yet another aspect, boosting is used. In yet another aspect, Bayesian parameter averaging is used. In yet another example, Bayesian model combination is used. In yet another example, a bucket of models is used. In yet another example, stacking is used. In yet another aspect, a random forests algorithm is used. In yet another aspect, a gradient boosting algorithm is used.
[0057] The controller 310 may determine a distance that the first image sensor 330b and the second image sensor 330a are positioned from the target. The controller 310 may execute the cropping of the first image and the second image to maintain the vertical alignment and the binocular overlap for each digital magnification at each digital magnification level based on the distance of the first image sensor 330b and the second image sensor 330a from the target.
[0058] In another embodiment, the system allows the user to determine a center of cropping for the left image, or a center of cropping for the right image, or both centers, for the digital magnification process. When there are multiple users, each user may have their own settings.
[0059] The display 320 may include one of a plurality of wearable displays that display the resized and cropped first image and the resized and cropped second image to display the 3D image of the target after the digital magnification is executed that includes the binocular overlap of the first image and the second image that are vertically aligned to satisfy the overlap threshold. In one aspect, the first image sensor 330b and the second image sensor 330a may be positioned proximate the display 320 for the user to execute a surgical procedure on the target that is a patient. In another example, the first image sensor 330b and the second image sensor 330a may be positioned on a stand, not adjacent to the display 320. It should be appreciated that the stand may be motorized or robotic. The display 320 may be a 3D monitor, a 3D projector, or a 3D projector with a combiner, used with 3D glasses (e.g. polarizers or active shutter glasses).
[0060] The present invention discloses a method for digitally magnifying the images while preserving the binocular overlap. In one embodiment, the cropping of the left image and the cropping of the right image may be performed by the controller 310 with the binocular overlap preserved. For example, if the original left image and right image have an original binocular overlap of 75%, the cropped left image and cropped right image may be cropped by the controller 310 in such a way that the binocular overlap of the cropped images will also be 75%. [0061] In an embodiment, the left image sensor 330b with the left lens 340b that are worn by the user may capture a left image. The right image sensor 330a with the right lens 340a that are worn by the user may capture a right image. The left image and the right image may be provided to the controller 310. The controller 310 may calculate a left crop function that specifies how to crop the left image and a right crop function that specifies how to crop the right image. The left crop function and the right crop function preserve binocular overlap and binocular vertical alignment. The controller 310 may crop the left image to generate a cropped left image using the left crop function that preserves binocular overlap and binocular vertical alignment. The controller 310 may crop the right image to generate a cropped right image using the right crop function that preserves binocular overlap and binocular vertical alignment.
[0062] The controller 310 may resize the cropped left image to generate a cropped and resized left image. The controller 310 may resize the cropped right image to generate a cropped and resized right image. The display 320 worn by the user may display the cropped and resized left image to the left eye of the user. The display 320 may display the cropped and resized right image to the right eye of the user. In one aspect, the display 320 may be a near-eye 3D display. In another aspect, the display 320 may be a 3D monitor, a 3D projector, or a 3D projector with a combiner, used with 3D glasses (e.g. polarizers or active shutter glasses).
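The following sketch illustrates the crop-and-resize step described in paragraphs [0061]-[0062]: a window is cropped around a chosen center and resized back to the full frame, with the same vertical center used for both eyes to keep vertical alignment. The crop dimensions, OpenCV usage, and variable names are illustrative assumptions, not a definitive implementation of the controller 310.

```python
import cv2

def crop_and_resize(image, cx, cy, crop_w, crop_h):
    """Crop a window centered at (cx, cy) and resize it back to the full frame.
    The digital magnification level is roughly original_width / crop_w."""
    h, w = image.shape[:2]
    x0 = int(round(cx - crop_w / 2))
    y0 = int(round(cy - crop_h / 2))
    x0 = max(0, min(x0, w - crop_w))          # keep the window inside the frame
    y0 = max(0, min(y0, h - crop_h))
    window = image[y0:y0 + crop_h, x0:x0 + crop_w]
    return cv2.resize(window, (w, h), interpolation=cv2.INTER_LINEAR)

# Vertical alignment is kept by sharing cy between the eyes; the horizontal
# centers come from an overlap-preserving calculation such as the one above.
# left_mag  = crop_and_resize(left_image,  cx_left,  cy, 960, 540)
# right_mag = crop_and_resize(right_image, cx_right, cy, 960, 540)
```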
[0063] FIG. 7 depicts a schematic view of a preservation of binocular overlap and binocular vertical alignment configuration 700 where, at 2.3X, 5.3X, and 12X magnifications respectively, the left cropped images and the right cropped images have a binocular overlap of 75% and vertical alignment, thereby providing the user with improved 3D visualization and depth perception. The preservation of binocular overlap and binocular vertical alignment configuration 700 includes right cropped images 710b and left cropped images 710a.
[0064] In another embodiment, the digital magnification method further comprises an additional condition to satisfy: the left cropped image shares the same geometrical center as that of the left original image. The right cropped image may be calculated and generated accordingly by the controller 310 based on the cropping of the left cropped image, while preserving the binocular overlap and binocular vertical alignment. The benefit of this implementation is that the digital magnification process may be coaxial along the center of the left image (the optical axis), and the progression of digital magnification may align with the line of sight of the user’s left eye. Alternatively, the cropped right image may share the same center as the right original image. The left cropped image may then be calculated and generated accordingly by the controller 310 based on the position and cropping of the right cropped image, while preserving the binocular overlap and binocular vertical alignment.
[0065] In another embodiment, the acceptable binocular overlap of the cropped images may be specified as a range, rather than a specific number. For instance, the binocular overlap of the cropped left and right images may be specified to be within a range between 60% and 90%. Any number between 60% and 90% may be considered satisfactory for digital magnification. With an acceptable range of binocular overlap as a guideline for cropping the left and right images, the left image sensor 330b with the left lens 340b that are worn by the user may capture a left image. The right image sensor 330a and the right lens 340a that are worn by the user may capture a right image. The left image and the right image may be provided to the controller 310.
[0066] The controller 310 may calculate a left crop function that specifies how to crop the left image and a right crop function that specifies how to crop the right image. The left crop function and the right crop function may preserve binocular vertical alignment. The left crop function and the right crop function may preserve binocular overlap as specified by a range of acceptable binocular overlap, such as 60% to 90%. The controller 310 may crop the left image to generate a cropped left image using the left crop function that preserves binocular overlap and binocular vertical alignment. The controller 310 may crop the right image to generate a cropped right image using the right crop function that preserves binocular overlap and binocular vertical alignment. The controller 310 may resize the cropped left image to generate a cropped and resized left image. The controller 310 may resize the cropped right image to generate a cropped and resized right image. The display 320 may display the cropped and resized left image to the left eye of the user. The display 320 may display the cropped and resized right image to the right eye of the user.
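A brief sketch of how a range-based overlap criterion such as the 60%-90% example in paragraph [0066] might be checked, reusing the simplified one-dimensional overlap model assumed earlier; the function names and default bounds are illustrative only.

```python
def overlap_fraction(x_left_center, x_right_center, crop_w, disparity_px):
    """Estimated scene overlap between two equal-width crops (simplified 1-D model)."""
    return max(0.0, 1.0 - abs((x_left_center - disparity_px) - x_right_center) / crop_w)

def crop_is_acceptable(x_left_center, x_right_center, crop_w, disparity_px,
                       lo=0.60, hi=0.90):
    """True when the estimated overlap falls inside the acceptable range."""
    return lo <= overlap_fraction(x_left_center, x_right_center,
                                  crop_w, disparity_px) <= hi
```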
[0067] In another embodiment, the left lens 340b and right lens 340a may be zoom lenses.
The focal length and angle of view of zoom lenses may be varied, enabling optical zoom. Therefore, optical zoom may be used in conjunction with the aforementioned digital magnification methods. For example, 5.3X digital magnification may be used in conjunction with 2X optical zoom (10.6X magnification in total). It should be appreciated that the levels of digital magnification may be either continuous (e.g. magnifying with a fine level of increments over a range, such as any magnification level within 2X-7X), or the magnification levels may be discrete (2X, 2.5X, 3X, 4X, 6X, 7X, etc.). In another embodiment, the controller 310 may transmit the magnified left image and/or right image to another 3D display device for visualization. The 3D display device may be a wearable display, a monitor, a projector, a projector with a combiner, a passive 3D monitor with 3D polarized glasses, an active 3D monitor with active shutter 3D glasses, or a combination thereof. In yet another embodiment, the controller 310 may transmit the magnified left image and/or right image to another computer for visualization, storage, and broadcast. In yet another embodiment, the controller 310 may record the magnified left image and/or magnified right image.
[0068] In yet another embodiment, the controller 310 may apply computer vision and/or image processing techniques to the magnified left image and/or magnified right image. Additional computer vision analysis can enable decision support, object recognition, image registration, and object tracking. For example, deep learning and neural networks may be used.
In yet another embodiment, the near-eye 3D display 320 may display other medical image data to the user (e.g. CT, MRI, ultrasound, nuclear medicine, surgical navigation, fluoroscopy, etc.), and the other medical image data may be overlaid with the magnified left image and/or magnified right image. It should be further appreciated that more than two image sensors may be used in the system (e.g. three color sensors with three lenses). In one example, when there are more than two image sensors, at any given moment only two image sensors are selected to participate in the digital magnification process. It should be appreciated that in the case of multiple image sensors and lenses, multiple sets consisting of two of those sensors may be calibrated with respect to each other in separate processes.
[0069] In one embodiment, only one image sensor is used. This image sensor serves as both the left image sensor 330b and the right image sensor 330a. In another embodiment, a 3D scanning unit comprising a projector and an image sensor is used, similar to a 3D scanner.
A 3D scan can thus be generated. The 3D scanning unit may use epipolar geometry for the 3D scan. By using different virtual viewpoints and projection angles, a virtual left image and a virtual right image can be generated based on the 3D scan. The digital magnification process aforementioned may be applied to the virtual left image and virtual right image.
APPARATUSES AND SYSTEMS FOR DIGITAL MAGNIFICATION AND 3D AUGMENTED REALITY DISPLAY
[0070] FIG. 8 depicts a schematic view of a physical embodiment of a digital magnification wearable device configuration 800. The digital magnification wearable device configuration 800 includes the right image sensor 330a, the left image sensor 330b, the right lens 340a, the left lens 340b, the right near-eye display 320a, the left near-eye display 320b, and an eyeglass frame 350. It should be appreciated that the wearable frame may be in the form of a head mount, in lieu of an eyeglass frame. It should be appreciated that the controller 310 may be a microcontroller, a computer, an FPGA, or an ASIC. The digital magnification wearable device configuration 800 may execute digital magnification with preservation of binocular overlap and binocular vertical alignment.
[0071] In one embodiment, the digital magnification wearable device configuration 800 may further include transparent plastic or glass surrounding the left near-eye display 320b and the right near-eye display 320a. For example, the digital magnification wearable device configuration 800 may use a compact offset configuration, whereby only a part of the area in front of each eye is non-transparent and the other parts are transparent. In one example, the center part of the area in front of each eye is non-transparent and the peripheral parts are transparent. This way, the user, such as a surgeon or dentist, can see around the near-eye digital display to look at the patient with unhindered natural vision. In one embodiment, the digital magnification wearable device configuration 800 may further include prescription eyeglasses, so that nearsightedness, farsightedness, and astigmatism may be corrected.
[0072] In another embodiment, the digital magnification wearable device configuration
800 may include an optical see-through configuration. The near-eye 3D displays 320(a-b) are both transparent or semi-transparent. In one embodiment, the image sensors 330(a-b) may be a pair of color image sensors. Thus, the digital magnification wearable device configuration 800 may digitally magnify stereoscopic color images and display them to the user in the near-eye 3D displays 320(a-b) in 3D. In one example, the left and right lenses 340(a-b) are lenses with fixed focal lengths. In another example, the left and right lenses 340(a-b) are zoom lenses with variable focal lengths. In another example, the color image sensors may be complementary metal-oxide-semiconductor (CMOS) image sensors. In yet another example, the color image sensors may be charge-coupled device (CCD) image sensors. In one example, the left and right color image sensors are coupled with autofocus lenses to enable autofocus.
[0073] In one embodiment, only one image sensor is used. This image sensor serves as both the left image sensor 330b and the right image sensor 330a. In another embodiment, a 3D scanning unit comprising a projector and an image sensor is used, similar to a 3D scanner. A 3D scan can thus be generated. The 3D scanning unit may use epipolar geometry for the 3D scan. By using different virtual viewpoints and projection angles, a virtual left image and a virtual right image can be generated based on the 3D scan. The digital magnification process aforementioned may be applied to the virtual left image and virtual right image. The 3D scanning unit may use visible wavelengths, infrared wavelengths, ultraviolet wavelengths, or a combination thereof. [0074] The aforementioned 3D scanning unit may project dynamic projection patterns to facilitate 3D scanning. A few examples of dynamic patterns are binary code, stripe boundary code, and Moiré patterns. In one embodiment, a binary codeword is represented by a series of black and white stripes. If black represents 1 and white represents 0, the series of 0s and 1s at any given location may be encoded by the dynamic projection pattern; the binary dynamic projection pattern may be captured by the image sensor and lens, and decoded to recover the binary codeword that encodes a location (e.g. 10100011). In theory, N binary patterns may generate 2^N different codewords per image dimension (x or y dimension). Similarly, binary coding may be extended to N-bit coding. For example, instead of the binary case where only 1 and 0 are represented by black and white, an N-bit integer may be represented by intermediate intensities. For instance, a 2-bit encoding system has 2*2=4 different possibilities; if the maximum intensity is I, the values 0, 1, 2, and 3 can be represented by I, 2/3*I, 1/3*I, and 0, respectively. In other examples, dynamic stripe boundary code-based projection or dynamic Moiré code-based projection can be implemented.
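A minimal sketch of the binary-codeword idea in paragraph [0074]: N stripe patterns encode 2^N column codewords, and each captured pixel is thresholded to rebuild its codeword. The thresholding against all-white/all-black reference captures and all parameter values are assumptions for illustration, not the disclosed projector or decoder.

```python
import numpy as np

def make_binary_patterns(width, height, n_bits=10):
    """N binary stripe patterns that encode 2**n_bits column codewords along x."""
    cols = np.arange(width)
    patterns = []
    for bit in range(n_bits - 1, -1, -1):            # most significant bit first
        stripe = ((cols >> bit) & 1).astype(np.uint8) * 255
        patterns.append(np.tile(stripe, (height, 1)))
    return patterns

def decode_codewords(captured, white_ref, black_ref):
    """Recover the per-pixel codeword from the N captured images by thresholding
    against the midpoint of all-white / all-black reference captures (assumed)."""
    thresh = (white_ref.astype(np.float32) + black_ref.astype(np.float32)) / 2.0
    code = np.zeros(captured[0].shape, dtype=np.int32)
    for img in captured:
        code = (code << 1) | (img.astype(np.float32) > thresh).astype(np.int32)
    return code
```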
[0075] In another embodiment, dynamic Fourier transform profilometry may be implemented by the 3D scanning unit. In one aspect, periodical signals are generated to carry the frequency domain information, including spatial frequency and phase. An inverse Fourier transform of only the fundamental frequency results in a principal phase value ranging from −π to π. After spatial or temporal phase unwrapping (the process of removing 2π discontinuities to generate a continuous phase map), the actual 3D shape of the patient anatomy may be recovered. Fourier transform profilometry is less sensitive to the effect of out-of-focus images of patients, making it a suitable technology for intraoperative 3D scanning. Similarly, π-shifted modified Fourier transform profilometry may be implemented intraoperatively, where a π-shifted pattern is added to enable the 3D scanning.
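To make the wrapped-phase step in paragraph [0075] concrete, the sketch below band-passes a horizontal fringe image around its fundamental frequency and takes the angle of the inverse transform, yielding a phase map wrapped to (−π, π]; the carrier period, band limits, and NumPy-only formulation are simplifying assumptions, and phase unwrapping and phase-to-height conversion are omitted.

```python
import numpy as np

def wrapped_phase(fringe, carrier_period_px):
    """Wrapped phase of a horizontal sinusoidal fringe image via Fourier filtering.
    carrier_period_px: approximate fringe period in pixels (assumed known)."""
    spectrum = np.fft.fft(fringe.astype(np.float64), axis=1)
    freqs = np.fft.fftfreq(fringe.shape[1])            # cycles per pixel
    f0 = 1.0 / carrier_period_px
    band = (freqs > 0.5 * f0) & (freqs < 1.5 * f0)     # keep only the +fundamental
    analytic = np.fft.ifft(np.where(band[None, :], spectrum, 0), axis=1)
    return np.angle(analytic)                          # values wrapped to (-pi, pi]
```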
[0076] In another example, a DC image may be used with Fourier transform profilometry in the 3D scanning unit. By capturing the DC component, the DC-modified Fourier transform profilometry may improve 3D scan quality intraoperatively. In another example, N-step phase-shifting Fourier transform profilometry may be implemented intraoperatively. It should be appreciated that the larger the number of steps (N) chosen, the higher the 3D scanning accuracy. For instance, three-step phase-shifting Fourier transform profilometry may be implemented to enable high speed 3D scanning intraoperatively. It should be appreciated that periodical patterns such as trapezoidal, sinusoidal, or triangular patterns may be used in the Fourier transform profilometry for intraoperative 3D scans. It should be further appreciated that windowed Fourier transform profilometry, two-dimensional Fourier transform profilometry, or wavelet Fourier transform profilometry may also be implemented by the aforementioned apparatuses and systems. It should be appreciated that more than one frequency of periodical signal (e.g. dual frequencies) may be used in the modified Fourier transform profilometry, so that phase unwrapping becomes optional in the intraoperative 3D scan. The dynamic Fourier transform profilometry and modified Fourier transform profilometry discussed herein may improve the quality of the 3D scan of the patient. An improved 3D scan may enhance the image registration between the intraoperative 3D scan and preoperative images (e.g. MRI and CT), thereby improving the surgical navigation.
[0077] In yet another embodiment, the aforementioned 3D scanning unit implements
Fourier transform profilometry or modified Fourier transform profilometry, in combination with binary codeword projection. The Fourier transform profilometry and binary codeword projection may be implemented sequentially, concurrently, or a combination thereof. The combined approach may improve the 3D scanning accuracy, albeit at the cost of 3D scanning speed.
[0078] In another embodiment, the aforementioned projector may include at least one lens.
The lens is configured in such a way that the projected pattern(s) are defocused. The defocusing process by the lens is similar to a convolution of a Gaussian filter on the binary pattern. Consequently, the defocused binary pattern may create periodical patterns that are similar to sinusoidal patterns.
[0079] In another example, dithering techniques are used to generate high-quality periodical fringe patterns by binarizing a higher-bit-depth fringe pattern (e.g. 8 bits), such as a sinusoidal fringe pattern. In one example, ordered dithering is implemented; for example, a Bayer matrix can be used to enable ordered dithering. In another example, error-diffusion dithering is implemented; for instance, Floyd-Steinberg (FS) dithering or minimized average error dithering may be implemented. It should be appreciated that in some cases the dithering techniques may be implemented in combination with the defocusing technique to improve the quality of the intraoperative 3D scan.
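The ordered-dithering example in paragraph [0079] can be illustrated with a tiled 4x4 Bayer threshold matrix that binarizes an 8-bit sinusoidal fringe; the fringe period and image size below are assumed values chosen only for the sketch.

```python
import numpy as np

# Standard 4x4 Bayer threshold matrix, normalized to [0, 1).
BAYER_4 = np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) / 16.0

def ordered_dither(fringe8):
    """Binarize an 8-bit fringe pattern against a tiled Bayer threshold matrix."""
    h, w = fringe8.shape
    thresh = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    return ((fringe8 / 255.0) > thresh).astype(np.uint8) * 255

# Example: an 8-bit sinusoidal fringe with an assumed 32-pixel period.
x = np.arange(1024)
row = (127.5 + 127.5 * np.sin(2 * np.pi * x / 32)).astype(np.uint8)
binary_fringe = ordered_dither(np.tile(row, (768, 1)))
```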
[0080] In another example, the aforementioned projector may generate a statistical pattern.
For instance, the projector may generate a pseudo random pattern that includes a plurality of dots. Each position of each corresponding dot included in the pseudo random pattern may be pre-determined by the projector. The projector may project the pseudo random pattern onto the patient or target. Each position of each corresponding dot included in the pseudo random pattern is projected onto a corresponding position on the patient/target. The image sensor may capture a 2D intraoperative image of a plurality of object points associated with the patient/target, to calculate the 3D topography.
[0081] The controller 310 may associate each object point associated with the patient that is captured by the image sensor with a corresponding dot included in the pseudo random pattern that is projected onto the patient/target by the projector, based on the position of each corresponding dot as pre-determined by the projector. The controller 310 may convert the 2D image to the 3D scan of the patient/target based on the association of each object point with each position of each corresponding dot included in the pseudo random pattern as pre-determined by the projector. In one example, the projector may include one or more edge-emitting lasers, at least one collimating lens, and at least one diffractive optics element. The edge-emitting laser and the diffractive optics element may be controlled by the controller 310 to generate patterns desirable for the specific 3D scanning application.
[0082] It should be appreciated that the near-eye 3D display may comprise LCD (liquid crystal) microdisplays, LED (light emitting diode) microdisplays, organic LED (OLED) microdisplays, liquid crystal on silicon (LCOS) microdisplays, retinal scanning displays, virtual retinal displays, optical see-through displays, video see-through displays, convertible video-optical see-through displays, wearable projection displays, and the like. In another example, the digital magnification wearable device configuration 800 may further include a light source for surgical field illumination. In one example, the light source is based on one or a plurality of light emitting diodes (LEDs). In another example, the light source is based on one or a plurality of laser diodes with a waveguide or optical fiber. In another example, the light source has a diffuser. In another example, the light source includes a noncoherent light source such as an incandescent lamp. In yet another example, the light source includes a coherent light source such as a laser diode and phosphorescent materials in film form or volumetric form. In yet another embodiment, the light source is mounted on a surgical instrument to illuminate a cavity.
[0083] In another embodiment, the image sensors 330(a-b) are a pair of monochrome sensors. The systems further include at least one fluorescence emission filter. Thus, the digital magnification surgical loupe configuration may digitally magnify stereoscopic fluorescence images and display them to the user in the near-eye 3D displays 320(a-b) in 3D. The systems further include a light source that is capable of providing excitation light to the surgical field. It should also be appreciated that the light source may include a laser light; a light emitting diode (LED); an incandescent light; a projector lamp; an arc lamp, such as a xenon, xenon mercury, or metal halide lamp; as well as coherent or incoherent light sources. In one example, the light source comprises one or a plurality of white LEDs with a low pass filter (e.g. 775 nm short pass filter) and one or a plurality of near infrared LEDs with a band pass filter (e.g. 830 nm band pass filter). In another example, the light source comprises one or a plurality of white LEDs with a low pass filter (e.g. 775 nm short pass filter) and one or a plurality of near infrared LEDs with a long pass filter (e.g. 810 nm long pass filter). In one example, the light source can be controlled by sensors such as an inertial measurement unit to turn the light on and off.
[0084] In another embodiment, the digital magnification wearable device configuration
800 includes at least two color image sensors, at least two monochrome image sensors, at least two beamsplitters, and at least two narrow band filters. The monochrome image sensor, the color sensor and the beamsplitter are optically aligned on each side (left vs right), so that the left color image is aligned with the left monochrome image, and the right color image is aligned with the right monochrome image. It should be appreciated that the beamsplitters can be cube beamsplitters, plate beamsplitters, Pellicle Beamsplitters, Dichroic Beamsplitters, or polarizing beamsplitters. It should be appreciated that the optical design can be in a folded configuration using mirrors.
[0085] In another example, the digital magnification wearable device configuration 800 includes a light source with an additional spectral filter. The digital magnification wearable device configuration 800 may be used to capture narrow band reflectance images or fluorescence images, and to digitally magnify the images and display them to the user in 3D with the desirable binocular overlap. For example, the light source may be a plurality of white LEDs and near infrared LEDs (770 nm), and the spectral filter can be an 800 nm short pass filter. In another embodiment, the apparatus further includes additional sensors, such as an inertial measurement unit (IMU), accelerometers, gyroscopes, magnetometers, proximity sensors, a microphone, force sensors, ambient light sensors, etc. In one example, the light source can be controlled by sensors such as an inertial measurement unit to turn the light on and off. In another example, the system 300 can be controlled by sensors such as an inertial measurement unit and/or a proximity sensor to turn the system 300 on and off. Some examples of proximity sensor types are photoelectric, inductive, capacitive, and ultrasonic.
[0086] In one embodiment, the digital magnification wearable device configuration 800 further includes at least one microphone. The system 300 may record audio data such as dictation. The system 300 may capture the audio data using the microphone, perform voice recognition on the controller 310, and enable voice control of the system 300. In one aspect, the voice control may include adjustment of the magnification levels (e.g. from 3X to 5X). In one example, where a microphone array or multiple microphones are used, the system may triangulate the source of sound for multiple purposes such as noise cancellation and voice control of multiple devices in close proximity. The system 300 may differentiate the one user from other users based on the triangulation of the voice/audio signal. In yet another embodiment, the digital magnification wearable device configuration 800 further includes tracking hardware, such as optical tracking hardware, electromagnetic tracking hardware, etc. In yet another embodiment, the digital magnification wearable device configuration 800 further includes communication hardware to enable wireless or wired communication, such as Wi-Fi, Bluetooth, cellular communication, Ethernet, LAN, wireless communication protocols compatible with operating rooms, or infrared communication. The apparatus can thus stream the magnification data and/or the original image data captured by the image sensors to another apparatus, computer, or mobile device. In yet another embodiment, the lenses 340(a-b) in the digital magnification wearable device configuration 800 include autofocus lenses.
[0087] In yet another embodiment, the lenses 340(a-b) in the digital magnification wearable device configuration 800 are autofocus lenses, but the digital magnification wearable device configuration 800 focuses the lenses only on request of the user. For example, upon user request via an input device or via voice control, the lenses will be focused on demand. Thus, the autofocus will not be activated unless demanded by the user, avoiding unwanted autofocus during surgical procedures. In one example, the focus settings of the left lens 340b and the right lens 340a are always the same. For example, the settings for focusing the left lens 340b and the settings for the right lens 340a are set to be the same, to avoid the left lens focusing on a focal plane different from that of the right lens.
[0088] In yet another embodiment, the digital magnification wearable device configuration
800 further includes additional input devices, such as a foot pedal, a wired or wireless remote control, one or more buttons, a touch screen, a microphone with voice control, a gesture control device such as Microsoft Kinect, etc. It should be appreciated that the controller can be reusable or disposable. It should be appreciated that a sterile sheet or wrap may be placed around the input device. In yet another embodiment, the digital magnification wearable device configuration 800 may display medical images such as MRI (magnetic resonance imaging) image data, computed tomography (CT) image data, positron emission tomography (PET) image data, single-photon emission computed tomography (SPECT), PET/CT, SPECT/CT, PET/MRI, gamma scintigraphy, X-ray radiography, ultrasound, and the like. In yet another embodiment, the digital magnification wearable device configuration 800 may include digital storage hardware to enable recording of the magnification data, and/or the original image data from the image sensors, and/or audio data, and/or other sensor data.
Image Stabilization
[0089] In one example, electronic image stabilization (EIS) is implemented by the
controller 310. The controller 310 shifts the image electronically from frame to frame of the left video captured by the left camera and the right video captured by the right camera, enough to counteract the motion. EIS uses pixels outside the border of the cropped area during digital magnification to provide a buffer for the motion. In one aspect, optical flow or other image processing methods may be used to track subsequent frames, detect vibrational movements, and correct for them. In another aspect, feature-matching image stabilization methods may be used. Image features may be extracted via SIFT, SURF, ORB, BRISK, neural networks, etc.
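As an illustration of the feature-matching stabilization mentioned in paragraph [0089], the sketch below estimates inter-frame motion with ORB features and warps the current frame back toward the previous one; trajectory smoothing, the crop buffer, and the specific OpenCV parameters are simplifications assumed for the example.

```python
import cv2
import numpy as np

def stabilize_frame(prev_gray, curr_gray, curr_frame):
    """One EIS step: match ORB features between frames and warp out the motion."""
    orb = cv2.ORB_create(500)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return curr_frame                      # not enough texture to stabilize
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    if len(matches) < 8:
        return curr_frame
    src = np.float32([kp2[m.trainIdx].pt for m in matches])   # current frame
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])   # previous frame
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    if H is None:
        return curr_frame
    h, w = curr_gray.shape[:2]
    return cv2.warpPerspective(curr_frame, H, (w, h))
```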
[0090] In another example, Optical Image Stabilization (OIS) is implemented. In one aspect, the OIS is implemented in the lenses 340a and 340b. For instance, using springs and a mechanical mount, image sensor movements are smoothed or cancelled out. In another aspect, the image sensors 330a and 330b can be moved in such a way as to counteract the motion of the camera.
[0091] In yet another example, mechanical image stabilization (MIS) is implemented.
Gimbals may be used for MIS. In one instance, MIS is achieved by attaching a gyroscope to the system. The gyroscope lets the external gimbal stabilize the image sensors 330a and 330b.
Stereoscopic calibration
[0092] The system 300 may need stereoscopic calibration to enable accurate 3D digital magnification. In one example, after mechanical fixturing to achieve vertical calibration, a single calibration (through repeated capture of a calibration pattern such as fiducials or a chessboard) is performed on the left and right sensors; based on that calibration, an initial homography transformation and cropping is applied to the pair of images to achieve a high-accuracy alignment between the two. This is similar to finding the epipolar geometry between the two sensors and bringing the two frames into a single plane through calibration to have: (1) identical scales of the captured geometry, through a virtually identical focal length; (2) identical peripheral alignment of the captured scene, through undistortion; and (3) identical vertical alignment of the captured frames, through a homography (projective) transformation. The new calibrated frames (rectified frames) may be used for the subsequent digital 3D magnification and visualization processes, as previously described.
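A brief sketch of how such a rectification might be applied with OpenCV once a stereo calibration is available; the calibration inputs (camera matrices, distortion coefficients, rotation and translation between sensors) are assumed to come from a prior chessboard or fiducial calibration, and the parameter choices are illustrative only.

```python
import cv2

def rectify_pair(left_raw, right_raw, K_l, D_l, K_r, D_r, R, T):
    """Rectify a stereo pair so both frames share scale, undistortion, and row alignment."""
    h, w = left_raw.shape[:2]
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
        K_l, D_l, K_r, D_r, (w, h), R, T,
        flags=cv2.CALIB_ZERO_DISPARITY, alpha=0)
    map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, D_l, R1, P1, (w, h), cv2.CV_32FC1)
    map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, D_r, R2, P2, (w, h), cv2.CV_32FC1)
    left_rect = cv2.remap(left_raw, map_lx, map_ly, cv2.INTER_LINEAR)
    right_rect = cv2.remap(right_raw, map_rx, map_ry, cv2.INTER_LINEAR)
    return left_rect, right_rect
```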
Ergonomic calibration
[0093] In one aspect, ergonomic calibration can be performed on the system 300 using one or a plurality of IMUs, one on the image sensor axis and a second one on the display axis. Two important objectives are achieved in capturing and displaying the digital images. First, the headset is horizontally aligned in the center of the forehead (single IMU reading and correction); this is essential to have a symmetrical mechanical position for each image sensor 330a and 330b with respect to each corresponding eye (left sensor 330b to the left eye and right sensor 330a to the right eye). Second, the calibration helps maintain binocular overlap between the digitally magnified images, which are captured and overlapped about the center of the two image sensors (by comparing and aligning the two IMUs), and the center of the two eyes, which is perceived by natural vision around the displays.
Autofocus and autofocus on-demand
[0094] Autofocus can be achieved through mechanical structures such as motors/actuators or through liquid lenses. In one example, the controller 310 may conduct a contrast assessment to find a high contrast image with strong high-frequency content, using a Sobel filter or a similar method that extracts edges and high-frequency features of the left and/or right images. The autofocus lens may test a large range of focus positions (coarse focus) to find a coarse focus, and subsequently conduct a smaller range of focus (fine focus) in the neighborhood of the coarse focus. In one example, the right lens 340a and the left lens 340b may be assigned to the two ends of the focus range and progress towards the middle. Once an optimal focus value is found, both lenses will be assigned the same or a similar value, to avoid the two lenses focusing on different image planes.
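A minimal sketch of the coarse-then-fine focus search described in paragraph [0094], scoring frames with a Sobel-based sharpness metric; the `capture_at(position)` helper, the focus range, and the step sizes are hypothetical stand-ins for the actual lens driver.

```python
import cv2
import numpy as np

def sharpness(gray):
    """Edge / high-frequency content via Sobel gradients; larger means sharper."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    return float(np.mean(gx * gx + gy * gy))

def autofocus(capture_at, lens_min=0, lens_max=1000):
    """Coarse sweep, then a fine sweep around the best coarse position.
    capture_at(pos) (assumed helper) moves the lens and returns a grayscale frame."""
    coarse = range(lens_min, lens_max + 1, 100)
    best = max(coarse, key=lambda p: sharpness(capture_at(p)))
    fine = range(max(lens_min, best - 100), min(lens_max, best + 100) + 1, 10)
    best = max(fine, key=lambda p: sharpness(capture_at(p)))
    return best   # apply the same value to both lenses to keep focal planes matched
```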
[0095] In another example, the controller 310 may use calibration and a disparity map to find the working distance to the desired object. The controller 310 may use previously calibrated frames to extract a partial or full disparity or depth map. The controller 310 may then use a region of interest or a point in a specific part of the image to assess the distance to the desired object or plane of operation (working distance), and use that distance to determine the proper value for autofocus from either a distance-dependent equation or a pre-determined look-up table (LUT).
Additional methods to maintain binocular overlap during digital magnification
[0096] The binocular overlap may be defined as a variable of working distance and magnification level. By detecting and calculating the working distance of the patient/target from the image sensors 330a and 330b, the controller 310 can define the proper value of binocular overlap between the binocular views to achieve proper 3D visualization, from either a distance-dependent equation or a pre-determined look-up table (LUT), after defining the distance to the point of interest or the average working distance of the region of interest. In one instance, the distance can be inferred using calibration and a disparity map: previously calibrated frames are used to extract a partial or full disparity or depth map (the two are related but numerically different quantities), and the controller 310 may then use a region of interest or a point in a specific part of the image to extract the distance to the desired object or plane of operation (working distance). In another instance, the controller 310 may use the autofocus values of the left autofocus lens and/or the right autofocus lens to infer the working distance.
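The sketch below illustrates one way the distance-to-overlap mapping in paragraph [0096] could be realized: a working distance estimated from the standard stereo relation Z = f * B / d is fed into an interpolated look-up table. The LUT entries, baseline, and focal length are hypothetical numbers chosen only to show the mechanism.

```python
import numpy as np

# Hypothetical LUT: working distance (mm) -> target binocular overlap fraction.
DIST_MM = np.array([250.0, 350.0, 450.0, 600.0])
OVERLAP = np.array([0.85, 0.80, 0.75, 0.70])

def working_distance_mm(disparity_px, focal_px, baseline_mm):
    """Standard stereo relation Z = f * B / d (calibrated focal length and baseline assumed)."""
    return focal_px * baseline_mm / max(disparity_px, 1e-6)

def target_overlap(distance_mm):
    """Interpolate the desired binocular overlap for the measured working distance."""
    return float(np.interp(distance_mm, DIST_MM, OVERLAP))

# Example: a 55 px disparity with an assumed 1400 px focal length and 65 mm baseline.
z = working_distance_mm(55.0, focal_px=1400.0, baseline_mm=65.0)
overlap = target_overlap(z)
```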
Controller
[0097] The controller 310 comprises the hardware and software necessary to implement the aforementioned methods. In one embodiment, the controller 310 involves a computer-readable medium comprising processor-executable instructions configured to implement one or more of the techniques presented herein. An example embodiment of a computer-readable medium or a computer-readable device comprises a computer-readable medium, such as an SSD, CD-R, DVD-R, flash drive, a platter of a hard disk drive, etc., on which is encoded computer-readable data. This computer-readable data, such as binary data comprising at least one of a zero or a one, in turn comprises a set of computer instructions configured to operate according to one or more of the principles set forth herein. In some embodiments, the set of computer instructions is configured to perform a method, such as at least some of the exemplary methods described herein, for example. In some embodiments, the set of computer instructions is configured to implement a system, such as at least some of the exemplary systems described herein, for example. Many such computer-readable media may be devised by those of ordinary skill in the art that are configured to operate in accordance with the techniques presented herein.
[0098] The following discussion provides a brief, general description of a suitable computing environment to implement embodiments of one or more of the provisions set forth herein. Example computing devices include, but are not limited to, personal computers that may comprise a graphics processing unit (GPU), server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players, and the like), multiprocessor systems, consumer electronics, minicomputers, mainframe computers, a microcontroller, a Field Programmable Gate Array (FPGA), an application-specific integrated circuit (ASIC), distributed computing environments that include any of the above systems or devices, and the like. In one aspect, the controller may use a heterogeneous computing configuration.
[0099] Although not required, embodiments are described in the general context of
“computer readable instructions” being executed by one or more computing devices. Computer readable instructions may be distributed via computer readable media. Computer readable instructions may be implemented as program components, such as functions, objects, Application Programming Interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. Typically, the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
[0100] In one example, a system comprises a computing device configured to implement one or more embodiments provided herein. In one configuration, the computing device includes at least one processing unit and one memory unit. Depending on the exact configuration and type of computing device, the memory unit may be volatile (such as RAM, for example), non-volatile (such as ROM, flash memory, etc., for example) or some combination of the two. In other embodiments, the computing device may include additional features and/or functionality. For example, the computing device may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, cloud storage, magnetic storage, optical storage, and the like. In one embodiment, computer readable instructions to implement one or more embodiments provided herein may be in the storage. The storage may also store other computer readable instructions to implement an operating system, an application program, and the like. Computer readable instructions may be loaded in the memory for execution by the processing unit, for example.
[0101] The term “computer readable media” as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVDs) or other optical storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computing device.
[0102] The computing device may also include communication connection(s) that allow the computing device to communicate with other devices. Communication connection(s) may include, but are not limited to, a modem, a Network Interface Card (NIC), an integrated network interface, a radio frequency transmitter/receiver, an infrared port, a USB connection, or other interfaces for connecting the computing device to other computing devices. Communication connection(s) may include a wired connection or a wireless connection. Communication connection(s) may transmit and/or receive communication media.
[0103] The computing device may include input device(s) such as keyboard, mouse, pen, voice input device, touch input device, infrared cameras, depth cameras, touchscreens, video input devices, and/or any other input device. Output device(s) such as one or more displays, speakers, printers, and/or any other output device may also be included in the computing device. Input device(s) and output device(s) may be connected to the computing device via a wired connection, wireless connection, or any combination thereof. In one embodiment, an input device or an output device from another computing device may be used as input device(s) or output device(s) for computing device.
[0104] Components of the computing device may be connected by various interconnects, such as a bus. Such interconnects may include a Peripheral Component Interconnect (PCI), such as PCI Express, a Universal Serial Bus (USB), FireWire (IEEE 1394), an optical bus structure, and the like. In another embodiment, components of the computing device may be interconnected by a network. For example, the memory may be comprised of multiple physical memory units located in different physical locations interconnected by a network.
[0105] Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, a computing device accessible via a network may store computer readable instructions to implement one or more embodiments provided herein. Computing device may access another computing device and download a part or all of the computer readable instructions for execution. Alternatively, the first computing device may download pieces of the computer readable instructions, as needed, or some instructions may be executed at the first computing device and some at the second computing device.
[0106] Various operations of embodiments are provided herein. In one embodiment, one or more of the operations described may constitute computer readable instructions stored on one or more computer readable media, which if executed by a computing device, will cause the computing device to perform the operations described. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment provided herein. Also, it will be understood that not all operations are necessary in some embodiments.
CONCLUSION
[0107] It is to be appreciated that the Detailed Description section, and not the Abstract section, is intended to be used to interpret the claims. The Abstract section may set forth one or more, but not all, exemplary embodiments of the present disclosure, and thus, is not intended to limit the present disclosure and the appended claims in any way.
[0108] The present disclosure has been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries may be defined so long as the specified functions and relationships thereof are appropriately performed.
[0109] It will be apparent to those skilled in the relevant art(s) that various changes in form and detail may be made without departing from the spirit and scope of the present disclosure. Thus, the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.

Claims

WHAT IS CLAIMED IS:
1. A system for generating three-dimensional (3D) images from captured images of a target when executing digital magnification on the captured images to maintain the 3D images generated of the target after digital magnification, comprising: a first image sensor that is configured to capture a first image of the target; a second image sensor is configured to capture a second image of the target; a controller configured to: execute a digital magnification on the first image captured by the first image sensor and on the second image captured by the second image sensor, crop the first image and the second image to overlap a first portion of the target captured by the first image sensor with a second portion of the target captured by the second image sensor, wherein the first portion of the target overlaps with the second portion of the target, adjust the cropping of the first image and the second image to provide binocular overlap of the first portion of the target with the second portion of the target, wherein the binocular overlap of the first image and the second image is an overlap threshold that when satisfied results in a 3D image of the target displayed to a user after the digital magnification is executed, and instruct a display to display the cropped first image and the cropped second image that includes the binocular overlap to the user, wherein the displayed cropped first image and the cropped second image display the 3D image at the digital magnification to the user.
2. The system of claim 1, wherein the controller is further configured to: resize the cropped first image to the original size of the first image captured by the first image sensor and the cropped second image to the original size of the second image captured by the second image sensor, wherein the cropped first image as resized and the cropped second image as resized includes the binocular overlap of the first image and the second image; and instruct the display to display the resized and cropped first image and the resized and cropped second image that includes the binocular overlap to the user, wherein the displayed resized and cropped first image and the resized and cropped second image display the 3D image at the digital magnification to the user.
3. The system of claim 2, wherein the controller is further configured to: crop the first image captured by the first image sensor and the second image captured by the second image sensor to vertically align the overlap of the first portion of the target with the second portion of the target, wherein the cropped first image is in vertical alignment of the cropped second image when a first plurality of vertical coordinates of the cropped first image is aligned with each corresponding vertical coordinate from a second plurality of coordinates of the cropped second image; adjust the cropping of the first image and the second image to provide binocular overlap of the first portion of the target with the second portion of the target, wherein the binocular overlap of the first image and the second image is vertically aligned to satisfy the overlap threshold to generate the 3D image of the target displayed to the user after the digital magnification is executed.
4. The system of claim 3, wherein the controller is further configured to: after executing a first digital magnification at a first digital magnification level on the first image captured by the first image sensor and on the second image captured by the second image sensor, maintain the binocular overlap generated by adjusting the cropping of the first image and the second image to satisfy the overlap threshold; execute a second digital magnification at a second digital magnification level on the first image captured by the first image sensor and on the second image captured by the second image sensor, wherein the second digital magnification level is increased from the first digital magnification level; and maintain the binocular overlap generated after executing the first digital magnification at the first digital magnification level on the first image and the second image to when executing the second digital magnification at the second digital magnification level.
5. The system of claim 4, wherein the controller is further configured to: after executing each previous digital magnification at each previous digital magnification level on the first image and the second image, maintain the binocular overlap and the vertical alignment determined when executing the first digital magnification at the first digital magnification level on the first image and the second image; and continue to maintain the binocular overlap and the vertical alignment determined from the adjusting of the cropping of the first image and the second image to satisfy the overlap threshold after executing the first digital magnification at the first digital magnification level on the first image and the second image for each subsequent digital magnification at each subsequent digital magnification level, wherein each subsequent digital magnification level is increased from each previous digital magnification level.
6. The system of claim 4, wherein the controller is further configured to: execute the first digital magnification at the first digital magnification level on a non- concentric portion of the first image and a non-concentric portion of the second image, wherein the non-concentric portion of the first image and the second image is a portion of the first image and the second image that differs from a center of the first image and the second image; adjust the cropping of the first image and the second image to provide binocular overlap of the non-concentric portion of the first image and the non-concentric portion of the second image, wherein the binocular overlap of the non-concentric portion of the first image and the non- concentric portion of the second image satisfies the overlap threshold; and continue to crop a non-concentric portion of the first image and a non-concentric portion of the second image for each subsequent digital magnification at each subsequent digital magnification level, wherein the binocular overlap of the non-concentric portion of the first image and the non-concentric portion of the second image is maintained from the first digital magnification at the first digital magnification level.
7. The system of claim 4, wherein the controller is further configured to: determine a distance that the first image sensor and the second image sensor is positioned from the target; execute the cropping of the first image and the second image to maintain the vertical alignment and the binocular overlap for each digital magnification at each digital magnification level based on the distance of the first image sensor and the second image sensor from the target.
8. The system of claim 4, further comprising at least one wearable display that displays the resized and cropped first image and the resized and cropped second image to display the 3D image of the target after the digital magnification is executed that includes the binocular overlap of the first image and the second image that are vertically aligned to satisfy the overlap threshold.
9. The system of claim 4, further comprising a display that is configured to: display the resized and cropped first image and the resized and cropped second image to thereby display the 3D image of the target after the digital magnification is executed that includes the binocular overlap of the first image and the second image that are vertically aligned to satisfy the overlap threshold.
10. The system of claim 5, wherein the overlap threshold is satisfied when the binocular overlap includes 75% overlap of the first image and the second image and is maintained for each subsequent digital magnification at each subsequent digital magnification level.
11. A method for generating three-dimensional (3D) images from captured images of a target when executing digital magnification on the captured images to maintain the 3D images generated of the target after digital magnification, comprising: capturing a first image by a first image sensor of the target; capturing a second image by a second image sensor of the target; executing by a controller a digital magnification on the first image captured by the first image sensor of the target and the second image captured by the second image sensor of the target; cropping the first image and the second image to overlap a first portion of the target captured by the first image sensor with a second portion of the target captured by the second image sensor, wherein the first portion of the target overlaps partially or fully with the second portion of the target; adjusting the cropping of the first image and the second image to provide binocular overlap of the first portion of the target with the second portion of the target, wherein the binocular overlap of the first image and the second image is an overlap threshold that when satisfied results in a 3D image of the target displayed to a user after the digital magnification is executed; and instructing a display to display the cropped first image and the cropped second image that includes the binocular overlap to the user, wherein the displayed cropped first image and the cropped second image display the 3D image at the digital magnification to the user.
12. The method of claim 11, further comprising: resizing the cropped first image to the original size of the first image captured by the first image sensor and the cropped second image to the original size of the second image captured by the second image sensor, wherein the cropped first image as resized and the cropped second image as resized includes the binocular overlap of the first image and the second image; and instructing the display to display the resized and cropped first image and the resized and cropped second image that includes the binocular overlap to the user, wherein the displayed resized and cropped first image and the resized and cropped second image display the 3D image at the digital magnification to the user.
13. The method of claim 12, further comprising: cropping the first image captured by the first image sensor and the second image captured by the second image sensor to vertically align the overlap of the first portion of the target with the second portion of the target, wherein the cropped first image is in vertical alignment of the cropped second image when each vertical coordinate of the cropped first image is aligned with each corresponding vertical coordinate of the cropped second image; and adjusting the cropping of the first image and the second image to provide binocular overlap of the first portion of the target with the second portion of the target, wherein the binocular overlap of the first image and the second image is vertically aligned to satisfy the overlap threshold to generate the 3D image of the target displayed to the user after the digital magnification is executed.
14. The method of claim 13, further comprising: after executing a first digital magnification at a first digital magnification level on the first image captured by the first image sensor and on the second image captured by the second image sensor, locking in the binocular overlap generated by adjusting the cropping of the first image and the second image to satisfy the overlap threshold; executing a second digital magnification at a second digital magnification level on the first image captured by the first image sensor and on the second image captured by the second image sensor, wherein the second digital magnification level is increased from the first digital magnification level; and maintaining the binocular overlap generated after executing the first digital magnification at the first digital magnification level on the first image and the second image when executing the second digital magnification at the second digital magnification level.
15. The method of claim 14, further comprising: after executing each previous digital magnification at each previous digital magnification level on the first image and the second image, maintaining the binocular overlap and the vertical alignment determined when executing the first digital magnification at the first digital magnification level on the first image and the second image; and continuing to maintain the binocular overlap and the vertical alignment determined from the adjusting of the cropping of the first image and the second image to satisfy the overlap threshold after executing the first digital magnification at the first digital magnification level on the first image and the second image for each subsequent digital magnification at each subsequent digital magnification level, wherein each subsequent digital magnification level is increased from each previous digital magnification level.
16. The method of claim 14, further comprising: executing the first digital magnification at the first digital magnification level on a non- concentric portion of the first image and on a non-concentric portion of the second image, wherein the non-concentric portion of the first image and the second image is a portion of the first image and the second image that differs from a center of the first image and the second image; adjusting the cropping of the first image and the second image to provide binocular overlap of the non-concentric portion of the first image and the non-concentric portion of the second image, wherein the binocular overlap of the non-concentric portion of the first image and the non- concentric portion of the second image satisfies the overlap threshold; and continuing to capture a non-concentric portion of the first image and a non-concentric portion of the second image for each subsequent digital magnification at each subsequent digital magnification level, wherein the binocular overlap of the non-concentric portion of the first image and the non-concentric portion of the second image is maintained from the first digital magnification at the first digital magnification level.
17. The method of claim 14, further comprising: determining a distance that the first image sensor and the second image sensor are positioned from the target; and executing the cropping of the first image and the second image to maintain the vertical alignment and the binocular overlap for digital magnification at a digital magnification level based on the distance of the first image sensor and the second image sensor from the target.
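One plausible way to tie the crop to working distance, as claim 17 describes, is the usual pinhole relation between disparity, baseline, and distance; the symbols below are assumptions for illustration, not values taken from the patent:

```python
def horizontal_shift_for_distance(baseline_mm, focal_length_px, distance_mm):
    """Approximate pixel disparity between the two sensors at a given working distance;
    feeding this into the cropping step keeps the binocular overlap centred on the target."""
    return int(round(focal_length_px * baseline_mm / distance_mm))
```

A shorter working distance therefore produces a larger shift and a tighter crop, while a distant target needs almost none.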
18. The method of claim 14, further comprising: displaying, by a wearable display, the resized and cropped first image and the resized and cropped second image to display, after the digital magnification is executed, the 3D image of the target that includes the binocular overlap of the first image and the second image vertically aligned to satisfy the overlap threshold.
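For the wearable display of claim 18, a minimal sketch (assuming the headset accepts a side-by-side stereo frame, which is an assumption about the display rather than a statement from the patent) could be:

```python
import numpy as np

def side_by_side_frame(left_img, right_img):
    """Pack the resized and cropped left/right images into one side-by-side stereo frame."""
    assert left_img.shape == right_img.shape, "both eyes must share the same resolution"
    return np.hstack([left_img, right_img])
```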
19. The method of claim 18, further comprising: positioning the first image sensor and the second image sensor on the wearable display for the user to execute a surgical procedure on a target that is a patient.
20. The method of claim 15, further comprising: satisfying the overlap threshold when the binocular overlap includes overlap of the first image and the second image and is maintained for each subsequent digital magnification at each subsequent digital magnification level.
PCT/US2021/034366 2020-05-26 2021-05-26 Generation of three-dimensional images with digital magnification WO2021242932A1 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2022573222A JP2023537454A (en) 2020-05-26 2021-05-26 Generation of 3D images by digital enlargement
EP21814225.5A EP4158889A4 (en) 2020-05-26 2021-05-26 Generation of three-dimensional images with digital magnification
CA3180220A CA3180220A1 (en) 2020-05-26 2021-05-26 Generation of three-dimensional images with digital magnification
BR112022024142A BR112022024142A2 (en) 2020-05-26 2021-05-26 SYSTEM AND METHOD TO GENERATE THREE-DIMENSIONAL IMAGES

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202063029831P 2020-05-26 2020-05-26
US63/029,831 2020-05-26

Publications (1)

Publication Number Publication Date
WO2021242932A1 true WO2021242932A1 (en) 2021-12-02

Family

ID=78704422

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/034366 WO2021242932A1 (en) 2020-05-26 2021-05-26 Generation of three-dimensional images with digital magnification

Country Status (6)

Country Link
US (4) US11218680B2 (en)
EP (1) EP4158889A4 (en)
JP (1) JP2023537454A (en)
BR (1) BR112022024142A2 (en)
CA (1) CA3180220A1 (en)
WO (1) WO2021242932A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7574756B2 (en) 2021-07-12 2024-10-29 トヨタ自動車株式会社 Virtual reality simulator and virtual reality simulation program
JP2023011262A (en) * 2021-07-12 2023-01-24 トヨタ自動車株式会社 Virtual reality simulator and virtual reality simulation program
WO2023232612A1 (en) * 2022-06-01 2023-12-07 Koninklijke Philips N.V. Guidance for medical interventions
US20240212153A1 (en) * 2022-12-27 2024-06-27 Douglas A. Golay Method for automatedly displaying and enhancing AI detected dental conditions

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7570791B2 (en) 2003-04-25 2009-08-04 Medtronic Navigation, Inc. Method and apparatus for performing 2D to 3D registration
US7450743B2 (en) 2004-01-21 2008-11-11 Siemens Medical Solutions Usa, Inc. Method and system of affine registration of inter-operative two dimensional images and pre-operative three dimensional images
DE102005023167B4 (en) 2005-05-19 2008-01-03 Siemens Ag Method and device for registering 2D projection images relative to a 3D image data set
US8411931B2 (en) * 2006-06-23 2013-04-02 Imax Corporation Methods and systems for converting 2D motion pictures for stereoscopic 3D exhibition
JP5491786B2 (en) * 2009-07-21 2014-05-14 富士フイルム株式会社 Image reproducing apparatus and method
JP4763827B2 (en) * 2009-11-26 2011-08-31 富士フイルム株式会社 Stereoscopic image display device, compound eye imaging device, and stereoscopic image display program
US8694075B2 (en) 2009-12-21 2014-04-08 General Electric Company Intra-operative registration for navigated surgical procedures
WO2011121841A1 (en) * 2010-03-31 2011-10-06 富士フイルム株式会社 3d-image capturing device
WO2011134083A1 (en) 2010-04-28 2011-11-03 Ryerson University System and methods for intraoperative guidance feedback
JP5704854B2 (en) * 2010-07-26 2015-04-22 オリンパスイメージング株式会社 Display device
US8818105B2 (en) 2011-07-14 2014-08-26 Accuray Incorporated Image registration for image-guided surgery
KR101307944B1 (en) 2011-10-26 2013-09-12 주식회사 고영테크놀러지 Registration method of images for surgery
IL221863A (en) 2012-09-10 2014-01-30 Elbit Systems Ltd Digital system for surgical video capturing and display
CN104919272B (en) 2012-10-29 2018-08-03 7D外科有限公司 Integrated lighting and optical surface topology detection system and its application method
EP3074951B1 (en) 2013-11-25 2022-01-05 7D Surgical ULC System and method for generating partial surface from volumetric data for registration to surface topology image data
US9654687B2 (en) * 2014-12-24 2017-05-16 Agamemnon Varonos Panoramic windshield viewer system
US10383692B1 (en) 2018-04-13 2019-08-20 Taiwan Main Orthopaedic Biotechnology Co., Ltd. Surgical instrument guidance system
US11026585B2 (en) 2018-06-05 2021-06-08 Synaptive Medical Inc. System and method for intraoperative video processing
US11589029B2 (en) * 2019-04-29 2023-02-21 Microvision, Inc. 3D imaging system for RGB-D imaging
AU2021210962A1 (en) 2020-01-22 2022-08-04 Photonic Medical Inc. Open view, multi-modal, calibrated digital loupe with depth sensing

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110102549A1 (en) * 2008-03-21 2011-05-05 Atsushi Takahashi Three-dimensional digital magnifier operation supporting system
US8508580B2 (en) * 2009-07-31 2013-08-13 3Dmedia Corporation Methods, systems, and computer-readable storage media for creating three-dimensional (3D) images of a scene
WO2015100490A1 (en) * 2014-01-06 2015-07-09 Sensio Technologies Inc. Reconfiguration of stereoscopic content and distribution for stereoscopic content in a configuration suited for a remote viewing environment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4158889A4 *

Also Published As

Publication number Publication date
BR112022024142A2 (en) 2023-02-14
US20240056559A1 (en) 2024-02-15
US11483531B2 (en) 2022-10-25
US11800078B2 (en) 2023-10-24
CA3180220A1 (en) 2021-12-02
US20230043149A1 (en) 2023-02-09
US20220132091A1 (en) 2022-04-28
EP4158889A4 (en) 2024-06-19
JP2023537454A (en) 2023-09-01
EP4158889A1 (en) 2023-04-05
US20210377505A1 (en) 2021-12-02
US11218680B2 (en) 2022-01-04

Similar Documents

Publication Publication Date Title
US11483531B2 (en) Generation of three-dimensional images with digital magnification
JP6886510B2 (en) Adaptive parameters in the image area based on gaze tracking information
JP6423945B2 (en) Display device and display method using projector
JP7076447B2 (en) Light field capture and rendering for head-mounted displays
US20210044789A1 (en) Electronic visual headset with vision correction
US9961335B2 (en) Pickup of objects in three-dimensional display
US10382699B2 (en) Imaging system and method of producing images for display apparatus
JP2019091051A (en) Display device, and display method using focus display and context display
US20170318235A1 (en) Head-mounted displaying of magnified images locked on an object of interest
US20160179193A1 (en) Content projection system and content projection method
JP6953247B2 (en) Goggles type display device, line-of-sight detection method and line-of-sight detection system
JP7148634B2 (en) head mounted display device
JP2020515090A (en) Display device and display method using image renderer and optical combiner
TW201802642A (en) System f for decting line of sight
WO2015138994A2 (en) Methods and systems for registration using a microscope insert
JP6576639B2 (en) Electronic glasses and control method of electronic glasses
WO2017113018A1 (en) System and apparatus for gaze tracking
JP2019502415A (en) Ophthalmic surgery using light field microscopy
US10698218B1 (en) Display system with oscillating element
EP3548956B1 (en) Imaging system and method of producing context and focus images
JP2017191546A (en) Medical use head-mounted display, program of medical use head-mounted display, and control method of medical use head-mounted display
KR101817436B1 (en) Apparatus and method for displaying contents using electrooculogram sensors
JP2012182738A (en) Stereo image pickup apparatus
JP2016133541A (en) Electronic spectacle and method for controlling the same
CN118632653A (en) Method for controlling performance of an augmented reality display system

Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21814225; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 3180220; Country of ref document: CA)
ENP Entry into the national phase (Ref document number: 2022573222; Country of ref document: JP; Kind code of ref document: A)
REG Reference to national code (Ref country code: BR; Ref legal event code: B01A; Ref document number: 112022024142; Country of ref document: BR)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2021814225; Country of ref document: EP; Effective date: 20230102)
ENP Entry into the national phase (Ref document number: 112022024142; Country of ref document: BR; Kind code of ref document: A2; Effective date: 20221125)