CA2238693C - Method and apparatus for displaying a virtual environment on a video display - Google Patents
- Publication number
- CA2238693C
- Authority
- CA
- Canada
- Prior art keywords
- image
- display
- visual orientation
- signal
- video display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Lifetime
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09B—EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
- G09B9/00—Simulators for teaching or training purposes
- G09B9/02—Simulators for teaching or training purposes for teaching control of vehicles or other craft
- G09B9/08—Simulators for teaching or training purposes for teaching control of vehicles or other craft for teaching control of aircraft, e.g. Link trainer
- G09B9/30—Simulation of view from aircraft
- G09B9/307—Simulation of view from aircraft by helmet-mounted projector or display
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0179—Display position adjusting means not related to the information to be displayed
- G02B2027/0187—Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B2027/0192—Supplementary details
- G02B2027/0198—System for aligning or maintaining alignment of an image in a predetermined direction
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G2310/00—Command of the display device
- G09G2310/02—Addressing, scanning or driving the display screen or processing steps related thereto
- G09G2310/0235—Field-sequential colour display
-
- G—PHYSICS
- G09—EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
- G09G—ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
- G09G3/00—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
- G09G3/20—Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix no fixed position being assigned to or needed to be assigned to the individual characters or partial characters
- G09G3/2007—Display of intermediate tones
- G09G3/2018—Display of intermediate tones by time modulation using two or more time intervals
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Aviation & Aerospace Engineering (AREA)
- Business, Economics & Management (AREA)
- Educational Administration (AREA)
- Educational Technology (AREA)
- Optics & Photonics (AREA)
- Controls And Circuits For Display Device (AREA)
- Processing Of Color Television Signals (AREA)
Abstract
The apparatus for displaying a virtual environment on a video display such as a head mounted display (23) has a position processor (11) for generating a visual orientation signal indicating a visual orientation of the video display with respect to the virtual environment, and an image generator (20) for generating a series of component images of the virtual environment for the visual orientation. The image generator (20) receives the visual orientation signals. Any change in the visual orientation signal between the time when the orientation signal was used by the image generator (20) to generate each component image and the time of display of each component image on the video display (16) is detected to produce an offset shift signal. An image shifting device shifts the image on the display (16) in response to the offset shift signal, so as to improve the display of the virtual environment. Field sequential color video and temporal grey scale video can be improved using the apparatus, in addition to other video systems with a transport delay and active variable motion which is difficult to predict.
Description
METHOD AND APPARATUS FOR DISPLAYING A VIRTUAL
ENVIRONMENT ON A VIDEO DISPLAY
Field of the Invention
The present invention relates to a method and apparatus for displaying a virtual environment on a video display. More particularly, the present invention relates to a display method and apparatus which suppresses image break-up or jerking which occurs when there is rapid motion of a color image being viewed, as is the case with head mounted displays.
Background of the Invention
Television display devices such as the Digital Micromirror Device (DMD), Active Matrix Electroluminescent Display (AMEL) and Ferroelectric Liquid Crystal Display (FELCD) achieve a grey scale varying from white to black by switching each pixel on for a specific amount of time during each field or frame. As the human eye has an integration time which is much longer than the time for each field (usually 1/60 sec in the U.S.), it perceives a constant brightness proportional to the amount of time the pixel is turned on during each field period. This is achieved by dividing each field time, nominally 16.67 milliseconds in the U.S., into bit planes representing each bit of the binary number which specifies the relative brightness of each pixel.
A typical system, for example, would have the most significant bit turned on for 4 milliseconds, the next most significant bit turned on for 2 milliseconds, and so on in a binary scale for the remainder of the bits. A high quality image may require eight or even nine bit planes while other systems may use as few as five or six bit planes. The intervals between each bit plane are usually used for addressing each pixel in the display with the illumination source turned off. Some schemes, however, keep the illumination source turned on for the complete field, and addressing of each pixel for each bit plane takes place within the bit plane periods.
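To make the bit-plane arithmetic above concrete, the following sketch (not taken from the patent; the 75% illumination duty cycle and six-plane depth are illustrative assumptions) computes binary-weighted bit-plane durations within one field and shows which planes a given pixel value lights.

```python
# Illustrative sketch: binary-weighted bit-plane timing for temporal
# grey-scale modulation within one 16.67 ms field. The duty cycle and
# plane count are assumptions, not values from the patent.

FIELD_MS = 16.67      # one field period at 60 Hz
NUM_PLANES = 6        # six-bit grey scale, one of the depths mentioned above

def bit_plane_durations(field_ms=FIELD_MS, planes=NUM_PLANES, duty=0.75):
    """Split the lit portion of a field into binary-weighted bit-plane times.

    `duty` models the fraction of the field available for illumination;
    the remainder is assumed to be used for addressing the pixels.
    """
    lit_time = field_ms * duty
    total_weight = (1 << planes) - 1                 # 2^planes - 1
    # Most significant plane first: weights 2^(planes-1) ... 2^0
    return [lit_time * (1 << b) / total_weight for b in range(planes - 1, -1, -1)]

def planes_lit(pixel_value, planes=NUM_PLANES):
    """Return 1 for each bit plane in which the pixel is switched on (MSB first)."""
    return [(pixel_value >> b) & 1 for b in range(planes - 1, -1, -1)]

# A pixel value of 45 (out of 63) is lit only during the planes whose bits are 1;
# the eye integrates the lit time into a proportional perceived brightness.
print([round(t, 2) for t in bit_plane_durations()])
print(planes_lit(45))
```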
All schemes, however, have one thing in common: the same image is used to refresh each bit plane during the course of a specific field. This can cause annoying artifacts with moving imagery. The effect is most noticeable on helmet mounted displays during moderate to rapid head motion, where discrete objects tend to break up into double or multiple images or may appear to jitter or be smeared.
It can also be desirable in head mounted displays to use field sequential color display devices to improve picture quality, reduce weight or reduce costs of manufacture. When used however with an image source such as a color television camera or a computer image generator which are operating in the conventional simultaneous color mode, color fringes are seen on objects during angular head motion.
If the head motion is sufficiently rapid, three distinct red, blue and green images can be seen. The effect is also observed during rapid motion of an object within the color display when the head is stationary and is often called field sequential color break-up.
In order to understand the invention, it is first necessary to have a clear understanding of why image break-up occurs. As is well known in the art, television creates the illusion of smooth motion by drawing successive images at a sufficiently fast rate that the human visual system can no longer see the individual images (i.e. the image is flicker-free). If the entire image or the objects within the image are moved appropriately relative to the previous image, the visual system will interpret the sequence of images as smooth motion. Figure 1 shows the motion of an upstanding arrow on a display moving from right to left in five sequential positions. The arrow represents any fixed object within the scene being displayed. The movement from right to left, in the case of a head mounted display, is caused by a rotational head motion from left to right.
As is known, the human eye never views an image, whether still or moving, by focusing on only one portion of the image. The human eye will tend to pick portions of an image to focus on and typically will wander between different portions of the image according to interest and the need to gather information. When the image moves across the display as illustrated in Figure 1, the eye typically fixates on a given object within the moving image, at least temporarily, before switching to another portion or object within the image to be observed. The eye therefore tracks each portion of the image that is to be observed as that portion of the image or object moves across the display.
In the example of the object represented by the upstanding arrow, the eye tracks the object as it moves from right to left. Even though the image appears at a finite number of discrete locations, the eye will move or rotate with a substantially constant velocity to track the object. The rotating eye is illustrated in Figure 2. It will be noted that all of the consecutive images are to be focused on the retina at the same position. This position is typically within a portion of the retina where good high resolution vision is to be had, as opposed to a surrounding area of poorer, lower resolution vision. When the color image displayed at each of the five discrete positions as illustrated in Figure 1 is produced using a simultaneous color video display, the red, green and blue component images are caused to appear simultaneously at each of the five discrete positions and the resulting image on the retina is as illustrated in Figure 3a (for the sake of clarity, the inversion of the image on the retina is not illustrated).
In the case that a field sequential color display device sequentially displays the color component images from a simultaneous color image source, to present the images as illustrated in Figure 1, the time lag between displaying the sequential color component images will give rise to a separation of the object into three color component images, as illustrated in Figure 3b, as a result of the constant velocity of the eye, as illustrated in Figure 2. The degree of spatial separation of the color component images is proportional to the rotational velocity of the eye, and thus, proportional to the angular velocity of movement of the image with respect to the display.
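The proportionality just described can be written down directly. The short sketch below is an illustration only, not part of the patent; the field rate and velocity figures are example assumptions.

```python
# Illustrative sketch: the separation between consecutive colour fields on the
# retina grows linearly with the angular velocity of the tracking eye.

def colour_field_separation_deg(eye_velocity_deg_per_s, field_rate_hz=180.0):
    """Angular separation between consecutive colour component images."""
    return eye_velocity_deg_per_s / field_rate_hz

# A moderate 60 deg/s rotation viewed on a 180 Hz field sequential display
# separates the red, green and blue component images by about 0.33 degrees each.
print(round(colour_field_separation_deg(60.0), 3))
```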
In the case of temporal modulation for grey scale, all of the consecutive images are focused on the retina at or near the fovea allowing the observer to see a single image as shown in Fig 4a. The eye would normally track the images created in the most significant bit plane and the images created in the remaining bit planes would be focused at different points on the retina as shown in Fig. 4b.
In conventional color video displays, each image is usually called a field and the field rate is 60 Hz in the U.S. The color component images are displayed synchronously on the display such that the observer sees a single correctly colored image. When a field sequential display is used to display video from a simultaneous color image source, the red, blue and green images are drawn sequentially at a field rate commonly three times as high as the normal rate, namely, 180 Hz in the U.S. A typical field sequential color display device is a liquid crystal display device operating as a monochrome display which is provided with color illumination or filters which operate in an alternating sequence of red, blue and green, such that the alternating sequential monochrome images corresponding to the red, blue and green color component images can be seen with varying color intensities to give the illusion of color video.
In the case of a head mounted display using head tracking to control the image such that the wearer sees a stable virtual environment, rotation of the head causes an equal and opposite movement of the image. If the image has been created by a device operating in the simultaneous color mode and the display is operating in the field sequential mode, the problem described above will occur. The problem could obviously be circumvented by operating the device creating the image, i.e. either a Computer Image Generator (CIG) system or a television camera, in the field sequential mode. This would be, however, a very expensive proposition and would furthermore discourage the use of field sequential helmet mounted displays.
US Patent 5,369,450 to Haseltine et al describes how color aberrations in a head mounted display operating in a field sequential mode can be corrected by electronic means. The color aberrations described by Haseltine, however, are caused by the different refractive indices of the optical components for red, blue and green and are not a function of head motion.
Computer image generators used in simulation and in virtual reality systems have an inherent transport delay due to the finite amount of time taken to perform the various computational algorithms necessary to assemble an image of the virtual environment with proper attributes. The effect of this transport delay on the performance of pilots in flight simulators has been well known for many years and care is taken to minimize such delays in image generation systems specifically designed for flight simulation. A far more obvious effect is seen, however, when image generation systems are coupled to head mounted displays. In these systems, the head position is continually being measured and is used by the image generator to compute the correct scene for the observer's viewpoint (the visual orientation of the display with respect to the virtual environment). If the observer moves his or her head while looking at a stationary image, the image will move in the direction of the head motion for a period of time corresponding to the total transport delay of the system (including the head measuring device) and will only regain the correct position once the observer's head is stationary.
This effect detracts considerably from the utility of the head mounted display and can give rise to nausea. The problem and a reasonably effective solution are described by Uwe List in a U.S. Air Force report entitled "Non Linear Prediction of Head Movements for Helmet Mounted Displays" (AFHRL Technical Paper 83-45, December 1983). In this report, List recommends the use of angular acceleration sensors mounted on the helmet to calculate a predicted head position. Welch and Kruk also suggest this solution in "HDTV Virtual Reality", published in Japan Display 1992. Figure 5 illustrates an exemplary acceleration curve as a user moves between two visual orientations. As can be seen, the acceleration is shown in the example to peak at 100 milliseconds, with a deceleration or stopping of the head motion commencing near 200 milliseconds and ending near 400 milliseconds with the head and helmet in the new angular position. In Figure 5, it is presumed that the head position sensor and the computer image generator require 100 milliseconds to detect head position and generate an image for the new head position (i.e. the transport delay is 100 ms). The curve illustrating the displayed image orientation with no position prediction correction results in considerable unwanted image motion, illustrated near 250 milliseconds as a difference of some 12 degrees indicated by the reference letter E. In the prior art improvement, prediction of future position using acceleration measurements resulted in the dashed line for the image orientation, with small but noticeable divergence between the predicted line and the actual head orientation curve.
These solutions relate to predicting the visual orientation of the display with respect to the virtual environment for a time in the future approximately equivalent to the present time plus the transport delay of the system. This prediction is accomplished by using measurements of angular head acceleration and/or angular head velocity. The image generator then uses this predicted position to compute the next image.
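A minimal sketch of this kind of predictor is given below; it is offered only as an illustration (the second-order extrapolation and all numbers are assumptions, not the specific algorithm of List or of the present invention).

```python
# Illustrative second-order extrapolation of head orientation to the time the
# generated image will actually be displayed (now + transport delay).

def predict_orientation(theta_deg, omega_deg_s, alpha_deg_s2, transport_delay_s):
    """Predict orientation from measured angle, angular velocity and acceleration."""
    dt = transport_delay_s
    return theta_deg + omega_deg_s * dt + 0.5 * alpha_deg_s2 * dt * dt

# Example: head at 10 deg, turning at 80 deg/s and accelerating at 200 deg/s^2,
# with the 100 ms transport delay assumed in Figure 5.
print(predict_orientation(10.0, 80.0, 200.0, 0.100))
```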
While the prediction of visual orientation or head position can be used in the image generator to greatly reduce the error or discrepancy between the image of the virtual environment being displayed and the correct image of the virtual environment for the actual visual orientation, this technique cannot eliminate such errors completely.
Summary of the Invention
It is an object of the invention to correct the problems described above using a simple and relatively inexpensive technique which allows conventional field sequential color display devices to be used with standard simultaneous color image sources without the observer seeing color fringes during certain types of motion. In the case of a flight simulator or other virtual reality systems, the simultaneous color image source is a computer image generator operating in real time. In a telepresence system, a live camera is gimbal mounted on a robot and is driven by servo control to view in the direction of the observer's head. In the case of a land vehicle simulator or a land vehicle telepresence robot, rapid motion may result from road bumps or the like and thus may not be the exclusive result of observer head motion. Therefore the angular velocity is understood to be a combination of both the observer's head motion and any rapid robot or simulated vehicle angular movement.
It is furthermore an object of the present invention to provide a field sequential color display device and method in which color break-up is suppressed by moving each field of a color image by an amount equivalent to the angular motion of the observer's head in that field. Shifting of the color component images (fields) within each cycle can be done optically in the optical image relay systems (e.g. mirrors and lenses), using horizontal and vertical CRT controls, by electronically shifting image data in an electronic display, or by data processing in the video processor feeding the color component images to the display.
It is an object of the present invention to provide a method and apparatus for reducing image breakup in television display devices which create a grey scale by the use of temporally separated bit planes. The technique is often known as temporal modulation or pulse width modulation. As is known in the art, image breakup occurs in moving television imagery whenever the update rate of the image and the refresh rate of the display are not identical and synchronous. This invention largely reduces such image breakup in helmet mounted displays by using the angular velocity of the head to generate small vertical and horizontal offsets for each bit plane. The observer thereby sees each bit plane image as if it had been updated for the new head position, and image breakup, smear, etc. are largely eliminated.
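The per-field and per-bit-plane corrections described in the two objects above reduce to the same computation: the head rotation accumulated since the image was generated, converted to a display offset. The sketch below illustrates this; the display resolution and field of view are illustrative assumptions, not parameters taken from the patent.

```python
# Illustrative sketch: shift for a colour field or bit plane displayed
# `elapsed_s` seconds after the image was generated, given the head's
# angular rates. Geometry values are assumptions for the example only.

def offset_pixels(yaw_rate_deg_s, pitch_rate_deg_s, elapsed_s,
                  h_pixels=1280, v_pixels=1024, h_fov_deg=40.0, v_fov_deg=30.0):
    """Horizontal/vertical shift (in pixels) for a sub-image shown `elapsed_s`
    after the full image was generated for the original head orientation."""
    dx = yaw_rate_deg_s * elapsed_s * (h_pixels / h_fov_deg)
    dy = pitch_rate_deg_s * elapsed_s * (v_pixels / v_fov_deg)
    return dx, dy

# At 100 deg/s of yaw, the colour field drawn 2/180 s after the first field
# would be shifted by roughly 36 pixels to keep it registered on the retina.
print(offset_pixels(100.0, 0.0, 2.0 / 180.0))
```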
A further object of the present invention is to provide a more stable and accurate representation of a virtual environment by calculating the discrepancy between the angular orientation of the image being displayed and the current visual orientation of the display, and using this error to shift the image to the correct position. It is a further object of the invention to provide a relatively inexpensive and simple system to substantially reduce such errors, thereby providing a more stable and accurate representation of the virtual environment.
According to a broad aspect of the invention, there is provided an apparatus for displaying a virtual environment on a video display comprising: position processor means for generating a visual orientation signal indicating a visual orientation of the display with respect to the virtual environment; image generator means for generating a series of component images of the virtual environment for the visual orientation, the image generator means receiving the visual orientation signals; means for detecting any change in the visual orientation signal between a time when the signal was used by the image generator means to generate each component image and a time of display of each component image on the display, to produce an offset shift signal; and means for shifting the image on the display in response to the offset shift signal. In this way, the display of the virtual environment is improved.
According to another broad aspect of the invention, there is provided a method for displaying a virtual environment on a video display comprising the repeated steps of: determining a visual orientation of the display with respect to the virtual environment; generating a series of component images of the virtual environment for the visual orientation; displaying the images on the display; detecting any change in the visual orientation which may have occurred between the time when the visual orientation was determined and the time when the image is to be displayed; and shifting the image on the display an amount equivalent to the change, whereby the display of the virtual environment is improved.
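To make the repeated steps of this method concrete, the following sketch shows one pass of such a loop. It is only an illustration under assumed interfaces: the tracker, renderer and display are stand-in callables, and all names are hypothetical rather than taken from the patent.

```python
# Illustrative display loop: measure orientation, render (the transport delay
# occurs here), re-measure, and shift the image by the accumulated change.
# All interfaces below are hypothetical stand-ins.

def display_loop(read_orientation, render_image, show_image, deg_to_pixels, frames=1):
    """One pass per displayed image: measure, render, re-measure, shift, show."""
    for _ in range(frames):
        orientation_at_generation = read_orientation()
        image = render_image(orientation_at_generation)     # transport delay here
        orientation_at_display = read_orientation()
        # Change in visual orientation accumulated while the image was generated
        delta_deg = orientation_at_display - orientation_at_generation
        show_image(image, shift_pixels=delta_deg * deg_to_pixels)

# Trivial stand-ins so the sketch runs as written
orientations = iter([10.0, 12.5])
display_loop(
    read_orientation=lambda: next(orientations),
    render_image=lambda o: f"image generated for {o:.1f} deg",
    show_image=lambda img, shift_pixels: print(img, "shifted by", shift_pixels, "pixels"),
    deg_to_pixels=32.0,
)
```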
Preferably, the video display may be a color field sequential display device, and the series of component images is a series of cycles of color component images.
The apparatus may further comprise color filter means for making the color component images of the series appear to have a different color, such that a mixing of the color component images as seen with the color filter means provides an observer with a color image of the environment, and the detecting means may comprise means for determining an angular velocity of a visual orientation of the display with respect to the virtual environment and for generating a velocity signal, and the offset shift signal is a function of the velocity signal.
Also preferably, the video display may be a temporal modulation grey scale display device, and the series of component images may be a series of grey scale component images to be displayed sequentially to provide an observer with an impression of grey scale images, and the detecting means may likewise comprise means for determining an angular velocity of a visual orientation of the display with respect to the virtual environment and for generating a head velocity signal, with the offset shift signal being a function of the velocity signal.
According to a further preferred aspect, the image generator means may have a finite transport delay time for generating and preparing an image for transmission on a video output signal. The apparatus may further comprise means for detecting at least one of an angular velocity and an angular acceleration of the visual orientation for producing a predictive signal, and means for calculating a predicted visual orientation of the display with respect to the virtual environment based on the visual orientation signal and the predictive signal to produce a predicted visual orientation signal. The predicted visual orientation signal may thus be connected to the image generator means in place of the visual orientation signal generated by the determining means, the predicted visual orientation signal being for a future point in time equal to a present time plus approximately the transport delay time.
In accordance with an embodiment, there is provided a method for displaying a virtual environment on a video display comprising the repeated steps of determining a visual orientation of the video display with respect to the virtual environment and generating an orientation signal representing the visual orientation; generating an image of the virtual environment for the visual orientation identified by the orientation signal, the generating requiring a finite delay time; detecting any change in the visual orientation which may have occurred between a time when the visual orientation was determined and the image was generated; and displaying the image on the video display, the image being shifted on the video display by an amount equivalent to the change, whereby the display of the virtual environment is more stable.
In accordance with an embodiment, there is provided an apparatus for displaying a virtual environment on a video display comprising head position processor means for generating a visual orientation signal indicating a visual orientation of the video display with respect to the virtual environment; image generator means for generating an image of the virtual environment for the visual orientation, the image generating means receiving the visual orientation signals, the image generator means having a finite transport delay time for generating and preparing an image for transmission on a video output signal; the video display receiving the video output signal for displaying the image; means for detecting a difference between the visual orientation signal at a time when the orientation signal was used by the image generator means to generate the image and the visual orientation signal at a time of display of the image on the video display and for producing an offset shift signal proportional to the difference; and means for shifting the image on the video display in response to the offset shift signal, whereby the display of the virtual environment is more stable.
In accordance with another embodiment, there is provided an apparatus for displaying a virtual environment represented by images generated by an image generator having a finite transport delay time, the image generator generating the images for a given visual orientation, with respect to the virtual environment, of a video display displaying the images, the image generator receiving a visual orientation signal representing the visual orientation, the apparatus comprising means for detecting any change in the visual orientation signal between a time when the signal was used by the image generator to generate the image and a time of display of the image on the video display to produce an offset shift signal; and means for shifting the image on the video display in response to the offset shift signal, whereby the display of the virtual environment is more stable.
Brief Description of the Drawings
The present invention will be better understood by way of the following detailed description of three preferred embodiments with reference to the appended drawings, in which:
Figure 1 illustrates a series of 5 objects within an image being displayed to create the illusion of object motion from right to left as is known in the art;
Figure 2 illustrates a cross-section of an observer's eyeball illustrating schematically the image formed on the retina and the direction of rotation of the eye as an object is tracked during motion as illustrated in Figure 1;
Figures 3a and 3b illustrate respectively in schematic format the image appearing on the observer's retina for simultaneous color mode display and field sequential color mode display;
Figures 4a and 4b illustrate respectively in schematic format the image appearing on the observer's retina for temporally separated grey scale display;
Figure 5 illustrates a graph of head acceleration, head position, an example of displayed image position with no prediction correction, and displayed image position using position prediction correction in the case of a computer image generator having a transport delay of 100 milliseconds, as is known in the prior art;
Figure 6 is a block schematic diagram of a horizontal and vertical offset deflection processor providing image shift means according to the preferred embodiment;
Figure 7 illustrates the waveform of the offset signal for a continuously varying function (A) for a complete cycle of RGB fields and as discrete values (B) for each RGB field, with the vertical sync pulses (C);
Figure 8 illustrates an alternative embodiment in which an opto-mechanical shifting of the viewed image is achieved by mounting a relay mirror on piezoelectric transducers which are energized by the appropriate offset signal to shift the image for each field by the appropriate offset for the speed of the object in motion being viewed on the screen;
Figure 9 illustrates a block diagram of a digital display screen including a digital image shifter;
Figure 10 shows a typical timing diagram for a single field divided into six bit planes;
Figure 11 is a block diagram of the apparatus according to the preferred embodiment;
Figure 12 illustrates a block diagram of the virtual environment display apparatus according to the preferred embodiment in which the difference between the actual head position and the predicted head position is used to control horizontal vertical offsets of the video display;
Figure 13 illustrates actual head position, delayed predicted head position and display offset signals on a common time scale for a simple example in which actual head position moves with constant velocity for a time X between positions P1 and P2;
Figure 14 illustrates an optical schematic for a display system using a moveable mirror to perform the shifting function; and Figure 15 illustrates an optical schematic for a display system where the shifting function is performed by a liquid filled prism.
Detailed Description of the Preferred Embodiments
In the first preferred embodiment, as will be described with reference to Figures 1 to 9, a field sequential color display incorporates the invention.
Figure 1 shows an upstanding arrow object at five different locations on a full color display. The color of the arrow is white. In the preferred embodiment, a head mounted display is used. A left to right head movement results in the right to left image movement shown.
As illustrated in Figure 2, the eye rotates smoothly to track the upstanding arrow and moves at the same speed as the arrows such that the succession of arrow images fall on the same place and result in the observer seeing a single white upstanding arrow.
Figure 3a illustrates the upstanding arrow image on the retina with all colors superimposed when a normal simultaneous display is employed. Figure 3b illustrates the image that would be seen if the image displayed in Figure 3 had been five frames of a field sequential display system as illustrated in Figure 1 in which a succession of red, green and blue images were displayed at each of the five positions as the object moves from right to left on the screen. As shown, the rotation of the eye results in a break-up of the object image into its color components due to the lag in delivery of the color component images.
In the first preferred embodiment, as illustrated in Figure 6, the invention is applied to a head mounted display 16 as is known in the art, for example, as disclosed in US Patent 5,348,477 and in "HDTV Virtual Reality", Japan Display 1992, pp. 407 to 410. The image presented on the screen being viewed is that of a virtual environment.
As the observer's head moves, the image being displayed must be shifted up and down and left to right and rotated so that the observer sees a stable representation of the environment corresponding to the orientation of his or her head. The processor 10 is an electronic processor receiving from processor 11 head pitch rate data, head roll rate data and head yaw rate data. The horizontal and vertical sync signals are fed to processor 10 from the field sequential converter 22. The head position processor 11 uses the actual position of helmet 24 from the position sensor 28 output for determining actual pitch, roll and yaw positions. Based on these actual positions and the pitch, roll and yaw acceleration or velocity measurements from sensor 26, processor 11 computes the predicted head position for image generator 20. The appropriate vertical and horizontal scan offsets are calculated in offset processor 10. In the general case, when the optical axis of the CRT 16 as seen by the eye through the head mounted display optics 23 is not orthogonal to either the vertical or horizontal axis of the head, both offsets will be a function of pitch, roll and yaw. The optical axis of the CRT is defined as the line which is normal to the face of the CRT and passes through the center of the image.
The offsets are then added to the respective deflection signals in the amplifiers 12 and 14 which drive, respectively, the horizontal and vertical deflection mechanism of the CRT display 16. The processor 10 can also take into account any distortion introduced in the deflection signals to compensate for distortion in the optical system.
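The conversion performed in processor 10 can be pictured with the short sketch below. It is only an illustration under assumed geometry: the gains mapping degrees of head error to deflection units, and the simple roll rotation, are assumptions rather than the patent's specific implementation.

```python
# Illustrative sketch of an offset computation: convert the yaw/pitch error
# between the orientation the image was generated for and the current head
# orientation into horizontal and vertical deflection offsets. Gains and the
# small-angle rotation are illustrative assumptions.

import math

def deflection_offsets(yaw_error_deg, pitch_error_deg, roll_deg=0.0,
                       h_gain=32.0, v_gain=34.0):
    """Return (horizontal, vertical) offsets in display units.

    h_gain/v_gain map degrees of head error to deflection units (for example
    pixels or DAC counts). If the display axes are rolled relative to the head
    axes, the error is first rotated into display coordinates.
    """
    r = math.radians(roll_deg)
    x = yaw_error_deg * math.cos(r) + pitch_error_deg * math.sin(r)
    y = -yaw_error_deg * math.sin(r) + pitch_error_deg * math.cos(r)
    return x * h_gain, y * v_gain

# 0.5 deg of uncorrected yaw and 0.2 deg of pitch, with no display roll:
print(deflection_offsets(0.5, 0.2))
```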
Typical offset waveforms are shown in Figure 7 along with the vertical sync signal pulse which occurs at the beginning of each field.
In most cases, the roll term can be omitted without causing a significant error, which simplifies the implementation of the processor 10. If roll is to be corrected when using a CRT display, horizontal and vertical offsets need to be varied over each horizontal line scan of the electron beam in a different manner for each subsequent scan from the top to the bottom of the CRT display.
The continuously varying offset (sawtooth waveform in Fig. 7) also adjusts for the vertical presentation delay, correcting the "tilted image phenomena"
as discussed on page 54 of AGARD Advisory Report No. 164 entitled "Characteristics of Flight Simulator Visual Systems", published May 1981.
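One way to picture this continuously varying offset is given in the sketch below: lines lower in the raster are drawn later in the field, so they receive a proportionally larger shift. The field period, resolution and rate figures are assumptions for illustration only.

```python
# Illustrative sketch of the sawtooth offset of Figure 7 (waveform A): the
# horizontal shift applied to a scan line grows with its position (and hence
# drawing time) within the colour field. All numbers are example assumptions.

def line_offset_pixels(line, total_lines, yaw_rate_deg_s,
                       field_period_s=1.0 / 180.0, pixels_per_deg=32.0):
    """Horizontal offset for a given scan line within one colour field."""
    time_into_field = field_period_s * (line / total_lines)
    return yaw_rate_deg_s * time_into_field * pixels_per_deg

# Offsets for the top, middle and bottom lines of a 1024-line field at 100 deg/s
for line in (0, 512, 1023):
    print(line, round(line_offset_pixels(line, 1024, 100.0), 2))
```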
In Figure 8, an alternative embodiment is illustrated in which the display 16 is a ferroelectric liquid crystal display (FELCD) which is illuminated by an LED
light source 19 which includes red, blue and green LEDs for illuminating a diffuser screen 15 located behind LCD display 16. The observer at 21 views screen 16 through optics 40 and a mirror 18. The mirror 18 is mounted on four electromagnetic transducers 17, the transducers 17 being connected to a housing of the display (not shown). In order to shift the color component images within the cycle with respect to one another, the transducers are energized with a current proportional to the amount of displacement required. The pair of transducers 17h adjust the horizontal displacement of the image and the pair of transducers 17v adjust the vertical displacement of the image.
An appropriate offset waveform as illustrated at B in Figure 7 may be used to move the image viewed on display 16. In the embodiment illustrated in Figure 8, transducers 17 would be fed an amplified signal coming from processor 10 similar to the first preferred embodiment, with the exception of course, that the signal must be sloped to account for the inertia of the transducers and mirror.
The invention also contemplates that the image memory device or video display controller used for storing each color component image could be shifted by the appropriate number of pixels in hardware dedicated to such image shifting in a matter of a very short period of time. As shown in Fig. 9, the field sequential RGB
video pixel data is shifted by a digital image shifter by amounts determined by the vertical and horizontal offset signals (received from processor 10) before being transferred to the video display memory.
Alternatively, the digital image shifter could be integrated into the converter 22. Once the composite color video image is received, the first image to be displayed, e.g. the red color component image, which does not need to be shifted for simple whole image step shifting, can be immediately relayed to the screen. While displaying the first red color component image, the hardware could shift by the appropriate amount indicated by processor 10, the subsequent green and blue color component images, and relay them to the screen for display when required.
Figure 7 illustrates the waveform of the offset signal for a continuously varying function (A) for a complete cycle of RGB fields and as discrete values (B) for each RGB field, with the vertical sync pulses (C);
Figure 8 illustrates an alternative embodiment in which an opto-mechanical shifting of the viewed image is achieved by mounting a relay mirror on piezoelectric transducers which are energized by the appropriate offset signal to shift the image for each field by the appropriate offset for the speed of the object in motion being viewed on the screen;
Figure 9 illustrates a block diagram of a digital display screen including a digital image shifter;
Figure 10 shows a typical timing diagram for a single field divided into six bit planes;
Figure 11 is a block diagram of the apparatus according to the preferred embodiment;
Figure 12 illustrates a block diagram of the virtual environment display apparatus according to the preferred embodiment in which the difference between the actual head position and the predicted head position is used to control horizontal and vertical offsets of the video display;
Figure 13 illustrates actual head position, delayed predicted head position and display offset signals on a common time scale for a simple example in which actual head position moves with constant velocity for a time X between positions P1 and P2;
Figure 14 illustrates an optical schematic for a display system using a moveable mirror to perform the shifting function; and Figure 15 illustrates an optical schematic for a display system where the shifting function is performed by a liquid filled prism.
Detailed Description of the Preferred Embodiments
In the first preferred embodiment, as will be described with reference to Figures 1 to 9, a field sequential color display incorporates the invention.
Figure 1 shows an upstanding arrow object at five different locations on a full color display. The color of the arrow is white. In the preferred embodiment, a head mounted display is used. A left to right head movement results in the right to left image movement shown.
As illustrated in Figure 2, the eye rotates smoothly to track the upstanding arrow and moves at the same speed as the arrows such that the succession of arrow images falls on the same place and results in the observer seeing a single white upstanding arrow.
Figure 3a illustrates the upstanding arrow image on the retina with all colors superimposed when a normal simultaneous display is employed. Figure 3b illustrates the image that would be seen if the image displayed in Figure 3 had been five frames of a field sequential display system as illustrated in Figure 1 in which a succession of red, green and blue images were displayed at each of the five positions as the object moves from right to left on the screen. As shown, the rotation of the eye results in a break-up of the object image into its color components due to the lag in delivery of the color component images.
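To put rough numbers on this break-up, the short sketch below computes how far the green and blue component images trail the red image on the retina during smooth pursuit; the frame rate and apparent object velocity are assumed values for illustration only, not figures taken from the patent.

```python
# Illustration only: the frame rate and apparent object velocity are assumed.
FRAME_RATE_HZ = 60.0              # assumed full-colour frame rate
FIELDS_PER_FRAME = 3              # red, green and blue fields shown in sequence
field_period_s = 1.0 / (FRAME_RATE_HZ * FIELDS_PER_FRAME)

image_velocity_px_per_s = 600.0   # assumed apparent velocity of the tracked object

# During smooth pursuit the eye keeps moving while each later colour field is
# drawn at the old object position, so the later fields land behind the red
# field on the retina by roughly:
green_lag_px = image_velocity_px_per_s * field_period_s
blue_lag_px = image_velocity_px_per_s * 2 * field_period_s
print(f"green lags red by {green_lag_px:.1f} px, blue by {blue_lag_px:.1f} px")
```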
In the first preferred embodiment, as illustrated in Figure 6, the invention is applied to a head mounted display 16 as is known in the art, for example, as disclosed in US Patent 5,348,477 and in "HDTV Virtual Reality", Japan Display '92, pp. 407 to 410. The image presented on the screen being viewed is that of a virtual environment.
As the observer's head moves, the image being displayed must be shifted up and down and left to right and rotated so that the observer sees a stable representation of the environment corresponding to the orientation of his or her head. The processor 10 is an electronic processor receiving from processor 11 head pitch rate data, head roll rate data and head yaw rate data. The horizontal and vertical sync signals are fed to processor 10 from the field sequential converter 22. The head position processor 11 uses the actual position of helmet 24 from the position sensor 28 output for determining actual pitch, roll and yaw positions. Based on these actual positions and the pitch, roll and yaw acceleration or velocity measurements from sensor 26, processor 11 computes the predicted head position for image generator 20. The appropriate vertical and horizontal scan offsets are calculated in offset processor 10. In the general case, when the optical axis of the CRT 16 as seen by the eye through the head mounted display optics 23 is not orthogonal to either the vertical or horizontal axis of the head, both offsets will be a function of pitch, roll and yaw. The optical axis of the CRT is defined as the line which is normal to the face of the CRT and passes through the center of the image.
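As one way to picture the role of offset processor 10, the following sketch converts head rate data and the time elapsed since the image was generated into horizontal and vertical scan offsets; the function name, sign conventions and degrees-per-pixel constants are illustrative assumptions rather than values from the patent.

```python
# Hedged sketch: constants, sign conventions and names are assumptions.
H_DEG_PER_PIXEL = 0.05   # assumed horizontal angular subtense of one pixel
V_DEG_PER_PIXEL = 0.05   # assumed vertical angular subtense of one pixel

def scan_offsets(yaw_rate_dps, pitch_rate_dps, dt_since_image_s):
    """Pixels by which to offset the scan so the displayed field stays aligned
    with the head orientation it was generated for; dt_since_image_s is the
    time elapsed since that orientation was sampled."""
    h_offset_px = yaw_rate_dps * dt_since_image_s / H_DEG_PER_PIXEL
    v_offset_px = -pitch_rate_dps * dt_since_image_s / V_DEG_PER_PIXEL
    return h_offset_px, v_offset_px

# Example: head yawing at 100 deg/s, field displayed 10 ms after image generation.
print(scan_offsets(100.0, 0.0, 0.010))   # about (20.0, -0.0) pixels
```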
The offsets are then added to the respective deflection signals in the amplifiers 12 and 14 which drive, respectively, the horizontal and vertical deflection mechanism of the CRT display 16. The processor 10 can also take into account any distortion introduced in the deflection signals to compensate for distortion in the optical system.
Typical offset waveforms are shown in Figure 7 along with the vertical sync signal pulse which occurs at the beginning of each field.
In most cases, the roll term can be omitted without causing a significant error, which simplifies the implementation of the processor 10. If roll is to be corrected when using a CRT display, horizontal and vertical offsets need to be varied over each horizontal line scan of the electron beam in a different manner for each subsequent scan from the top to the bottom of the CRT display.
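The need for continuously varying, per-line offsets when roll is corrected can be illustrated with a small rotation calculation; the raster size, centre of rotation and function below are assumptions used only to show that the required deflection changes along each scan line and from line to line.

```python
# Illustration only: raster size and centre of rotation are assumed.
import math

def roll_offsets(x, y, roll_deg, cx=320.0, cy=240.0):
    """Extra deflection (dx, dy) in pixels needed at raster position (x, y)
    so that the drawn raster appears rotated by roll_deg about (cx, cy)."""
    a = math.radians(roll_deg)
    dx = (x - cx) * (math.cos(a) - 1.0) - (y - cy) * math.sin(a)
    dy = (x - cx) * math.sin(a) + (y - cy) * (math.cos(a) - 1.0)
    return dx, dy

# The required offset differs along a single scan line and from line to line:
print(roll_offsets(0, 0, 1.0))      # start of the top line
print(roll_offsets(639, 0, 1.0))    # end of the top line
print(roll_offsets(0, 479, 1.0))    # start of the bottom line
```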
The continuously varying offset (sawtooth waveform in Fig. 7) also adjusts for the vertical presentation delay, correcting the "tilted image phenomena"
as discussed on page 54 of AGARD Advisory Report No. 164 entitled "Characteristics of Flight Simulator Visual Systems", published May 1981.
In Figure 8, an alternative embodiment is illustrated in which the display 16 is a ferroelectric liquid crystal display (FELCD) illuminated by an LED light source 19, which includes red, blue and green LEDs for illuminating a diffuser screen 15 located behind LCD display 16. The observer at 21 views screen 16 through optics 40 and a mirror 18. The mirror 18 is mounted on four electromagnetic transducers 17, the transducers 17 being connected to a housing of the display (not shown). In order to shift the color component images within the cycle with respect to one another, the transducers are energized with a current proportional to the amount of displacement required. The pair of transducers 17h adjusts the horizontal displacement of the image and the pair of transducers 17v adjusts the vertical displacement of the image.
An appropriate offset waveform as illustrated at B in Figure 7 may be used to move the image viewed on display 16. In the embodiment illustrated in Figure 8, transducers 17 would be fed an amplified signal coming from processor 10 similar to the first preferred embodiment, with the exception, of course, that the signal must be sloped to account for the inertia of the transducers and mirror.
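A minimal sketch of what such a sloped drive signal could look like is given below; the ramp time and offsets are assumed values, and a real implementation would be shaped to the measured response of the transducers and mirror.

```python
# Illustration only: the ramp time and offsets are assumed values.
def sloped_drive(previous_offset, target_offset, t_s, ramp_time_s=0.002):
    """Drive value t_s seconds after a field boundary, ramping linearly from
    the previous offset to the new target instead of stepping instantly."""
    if t_s >= ramp_time_s:
        return target_offset
    return previous_offset + (t_s / ramp_time_s) * (target_offset - previous_offset)

# Example: move the mirror from a 0-pixel to a 3-pixel equivalent deflection.
for t_ms in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0):
    print(t_ms, "ms ->", round(sloped_drive(0.0, 3.0, t_ms / 1000.0), 2))
```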
The invention also contemplates that each color component image, stored in an image memory device or video display controller, could be shifted by the appropriate number of pixels, in hardware dedicated to such image shifting, within a very short period of time. As shown in Fig. 9, the field sequential RGB video pixel data is shifted by a digital image shifter by amounts determined by the vertical and horizontal offset signals (received from processor 10) before being transferred to the video display memory.
Alternatively, the digital image shifter could be integrated into the converter 22. Once the composite color video image is received, the first image to be displayed, e.g. the red color component image, which does not need to be shifted for simple whole image step shifting, can be immediately relayed to the screen. While displaying the first red color component image, the hardware could shift the subsequent green and blue color component images by the appropriate amounts indicated by processor 10 and relay them to the screen for display when required.
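The whole-image step shift described above can be sketched in the digital domain as follows; NumPy is used only for brevity, and the pixel offsets in the example are assumed values standing in for the offsets supplied by processor 10.

```python
# Illustration only: NumPy for brevity; offsets are assumed example values.
import numpy as np

def shift_component(image, dx, dy):
    """Shift a colour-component image by whole pixels (dx right, dy down),
    filling the exposed edge with black."""
    shifted = np.zeros_like(image)
    h, w = image.shape[:2]
    src_x = slice(max(0, -dx), min(w, w - dx))
    dst_x = slice(max(0, dx), min(w, w + dx))
    src_y = slice(max(0, -dy), min(h, h - dy))
    dst_y = slice(max(0, dy), min(h, h + dy))
    shifted[dst_y, dst_x] = image[src_y, src_x]
    return shifted

red = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
green = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
blue = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

# The red field is relayed unshifted; the green and blue fields are shifted by
# the offsets computed for their later display times (assumed values here).
fields = [red, shift_component(green, 2, -1), shift_component(blue, 4, -2)]
```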
A video display controller which can shift a whole video image vertically and horizontally on a screen of a simultaneous color video display unit is disclosed in U.S. Patent 4,737,778 to Nishi et al. Digital displays in which each pixel is addressed digitally are known in the art, such as a ferroelectric liquid crystal display (FELCD), a deformable mirror display (DMD), an active matrix liquid crystal display (AMLCD), and a field emitter display (FED).
The invention works equally well whether the device is mounted directly on the head or optically coupled to the head via fiber optic cables. In addition to using opto-mechanical mirrors to shift the image, it would equally be possible to use an opto-electronic device to shift the image in a functionally similar manner.
In the second preferred embodiment, as will be described with reference to Figures 1, 2, 4, 10 and 11, a temporal grey scale display incorporates the invention. In order to understand the invention, it is first necessary to have a clear understanding of why image break-up occurs. As is well known in the art, television creates the illusion of smooth motion by drawing successive images at a sufficiently fast rate that the human visual system can no longer see the individual images (i.e. the image is flicker-free). If the entire image or the objects within the image are moved appropriately relative to the previous image, the visual system will interpret the sequence of images as smooth motion. Figure 1 shows the motion of an upstanding arrow on a display moving from right to left in five successive images. The arrow represents any fixed object within the scene being displayed. In order to fixate on this object, the eye makes what is known as a "smooth pursuit eye movement" in the same way as it would if looking at a real object moving in the real world. Even though the image appears at a finite number of discrete locations, the eye will move or rotate with a substantially constant velocity to track the object. The rotating eye is illustrated in Figure 2. It will be noted that all of the consecutive images are focused on the retina at or near the fovea, allowing the observer to see a single image as shown in Fig. 4a. If, however, the display uses temporal modulation as described earlier and also illustrated in Fig. 10, the eye would normally track the images created in the most significant bit plane and the images created in the remaining bit planes would be focused at different points on the retina as shown in Fig. 4b.
The separation of the images will be proportional to the rotational velocity of the eye and the time differences between the bit planes. If all the bit planes are on and the motion is sufficiently slow, separation will not be apparent but the image will appear to be smeared. If the bit planes are changing during the motion, especially if the most significant bits are changing, the observer will perceive the object to have jitter.
In the case of a head mounted display, head rotation will cause an equal and opposite motion of the image across the display. The observer's eye is still able to track specific objects within the image being displayed and sees the effects described above.
The objective of this invention is to compute the amount of separation which would occur based on the rotational head velocity of the observer and shift the entire image on the display an appropriate amount for each bit plane so that all the bit plane images in a single field are coincident on the retina. The observer will thus see a normal image; the effects described above being either eliminated or much reduced.
Figure 11 is a schematic of the second preferred embodiment and shows how the corrected display data is obtained. The head position processor 1 receives the raw head position data from a head tracking device such as a Polhemus magnetic tracker, as well as head rotational velocity data from a device such as the Watson C341 rate sensor. Rotational acceleration data may also be included. The head position processor sends either predicted head position, as suggested by Uwe List, or current head position to the image source 2, which may be an image sensor such as a television camera mounted on a gimbal system or a computer image generator. The video signal from the image source is sent to the Bit Plane Generator 3, which stores a complete field in a digital format, generates the timing waveforms for the particular temporal modulation scheme being used (a typical one is shown in Fig. 10) and sends the bit plane data during the appropriate intervals to the Display Electronics module 5, which drives the head mounted display 6. The Bit Plane Offset Generator 4 receives timing signals (H&V) from the image source, a bit plane sync from the bit plane generator and angular head velocity data from the head position processor. It generates H&V offsets for each bit plane except the most significant bit plane according to the formulas below:
Ho = x · t / Kh
Vo = -y · t / Kv
where:
Ho = horizontal offset in pixels
Vo = vertical offset in pixels
x = angular yaw velocity of the head in degrees/sec.
y = angular pitch velocity of the head in degrees/sec.
Kh = a constant for the display giving the angular subtense between centres of adjacent pixels in the horizontal direction, in degrees/pixel.
Kv = a similar constant giving the angular subtense between centres of adjacent pixels in the vertical direction, in degrees/pixel.
t = the interval in time between the centre of the most significant bit plane and the bit plane being processed, in seconds.
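Read together with the definitions above, the offset formulas can be sketched as follows; the division by Kh and Kv follows from their units (degrees/pixel), the sign of the vertical term is kept as written, and the constants in the example are assumed.

```python
# Sketch of the bit plane offset formulas; example constants are assumed.
def bit_plane_offsets(yaw_dps, pitch_dps, t_s, kh_deg_per_px, kv_deg_per_px):
    """Horizontal and vertical offsets, in pixels, for a bit plane whose centre
    is t_s seconds after the centre of the most significant bit plane."""
    ho = yaw_dps * t_s / kh_deg_per_px     # Ho = x * t / Kh
    vo = -pitch_dps * t_s / kv_deg_per_px  # Vo = -y * t / Kv
    return ho, vo

# Example: head yawing at 120 deg/s, bit plane centred 4 ms after the MSB
# plane, assumed pixel subtense of 0.06 deg in both directions.
print(bit_plane_offsets(120.0, 0.0, 0.004, 0.06, 0.06))   # (8.0, -0.0)
```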
In the third preferred embodiment, as will be described with reference to Figures 5 and 12 to 15, a video display having a finite transport delay, in which display or head orientation is predicted, incorporates the invention. In the third preferred embodiment, the virtual environment video display is a head mounted or helmet mounted display (HMD) of the type known in the art. The helmet is provided with position and angular acceleration sensors as is also known in the art. Figure 5 illustrates an exemplary acceleration curve as a user moves between two visual orientations. As can be seen, the acceleration is shown in the example to peak at 100 milliseconds, with a deceleration or stopping of the head motion commencing near 200 milliseconds and ending near 400 milliseconds with the head and helmet in the new angular position. In Figure 5, it is presumed that the head position sensor and the computer image generator 20 require 100 milliseconds to detect head position and generate an image for the new head position (i.e. the transport delay is 100 ms). The curve illustrating the displayed image orientation with no position prediction correction shows considerable unwanted image motion, illustrated near 250 milliseconds as a difference of some 12 degrees denoted by the reference letter E. In the prior art improvement, prediction of future position using acceleration measurements resulted in the dashed line for the image orientation, with small but noticeable divergence between the predicted line and the actual head orientation curve. As will be seen below, use of the method and apparatus according to the present invention can result in the displayed image orientation following the actual head orientation more closely, resulting in an almost imperceptible amount of image instability.
As illustrated in Figure 12, the apparatus according to the third preferred embodiment comprises a head position processor which receives the output signals from the head position sensor 45a and the head angular acceleration sensors 45c.
Optionally, angular head velocity sensors 45b may be provided as well as, or in place of, the acceleration sensor. The head position processor 10 reads the raw data and outputs an actual head position output signal 41 fed to a summation device 44. The head position processor 10 also predicts the head or helmet position based on actual position and the measurement of the head acceleration and/or head velocity. If a head velocity sensor is not used, the velocity is calculated from either differentiating position or, preferably, integrating acceleration. The head position is predicted for a point in time ahead in the future by an amount equivalent to the transport delay inherent in the system.
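A minimal sketch of this prediction step is given below; the constant-acceleration extrapolation and the sample values are assumptions consistent with the description rather than the patent's exact algorithm.

```python
# Hedged sketch: extrapolation form and sample values are assumptions.
def predict_position(pos_deg, vel_dps, acc_dps2, transport_delay_s):
    """Extrapolate an angular head position one transport delay ahead."""
    t = transport_delay_s
    return pos_deg + vel_dps * t + 0.5 * acc_dps2 * t * t

class VelocityFromAcceleration:
    """Running integral of acceleration, usable when no rate sensor is fitted."""
    def __init__(self):
        self.vel_dps = 0.0
    def update(self, acc_dps2, dt_s):
        self.vel_dps += acc_dps2 * dt_s
        return self.vel_dps

# Example: head at 10 deg, turning at 80 deg/s, decelerating at 200 deg/s^2,
# with the 100 ms transport delay of the Figure 5 example.
print(predict_position(10.0, 80.0, -200.0, 0.1))   # 17.0 degrees
```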
The predicted head position signal 13 is fed into a delay circuit 48 which delays the signal by an amount of time equal to the transport delay before feeding it to the summation device 44 where it is subtracted from the actual head position signal on line 41.
This difference signal is fed to offset processors 10v and 10h where the vertical and horizontal offsets respectively are determined, resulting in the vertical and horizontal offset signals fed to display 16.
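The Figure 12 signal path from predicted and actual positions to the offset signals can be sketched as follows; the discrete delay line, the step interval and the degrees-per-pixel constants are illustrative assumptions.

```python
# Hedged sketch of the Figure 12 signal path; parameters are assumptions.
from collections import deque

class OffsetPath:
    def __init__(self, transport_delay_steps, deg_per_px_h, deg_per_px_v):
        # Delay circuit 48, modelled as a fixed-length delay line of predictions.
        self.delay_line = deque([(0.0, 0.0)] * transport_delay_steps)
        self.kh, self.kv = deg_per_px_h, deg_per_px_v

    def step(self, actual_yaw, actual_pitch, predicted_yaw, predicted_pitch):
        # The newest prediction enters the line; the one made a full transport
        # delay ago (used to render the image now being shown) comes out.
        self.delay_line.append((predicted_yaw, predicted_pitch))
        delayed_yaw, delayed_pitch = self.delay_line.popleft()
        # Summation device 44 followed by offset processors 10h and 10v.
        h_offset_px = (actual_yaw - delayed_yaw) / self.kh
        v_offset_px = (actual_pitch - delayed_pitch) / self.kv
        return h_offset_px, v_offset_px

path = OffsetPath(transport_delay_steps=5, deg_per_px_h=0.05, deg_per_px_v=0.05)
```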
In the case that the display is a CRT (cathode ray tube) video display, the horizontal and vertical offset signals are fed to horizontal and vertical scan circuits.
In the case that the shifting of the image is to be done optically, transducers 17 may be used to change the angular orientation of a mirror as illustrated in Figure 14, or similar transducers may be used to change the refraction of the image passing through a liquid filled prism 36 having transparent cover plates 34 and 36 moveable in angular orientation with respect to one another, as shown in Figure 15. In both Figures 14 and 15, the shifted image is viewed through an eyepiece 40 by an eye 21. The vertical and horizontal offsets can alternatively be carried out by image position shifting within the video display controller; for example, a video display controller as disclosed in U.S. Patent 4,737,778 (Nishi et al.) may be used to vertically and horizontally shift the whole video image displayed on the screen of the video display unit 16.
In the example illustrated in Figure 13, the observer's helmet position moves from a position P1 to a position P2 under constant velocity. This example is simplified in that it does not take into consideration normal acceleration and deceleration. In the first time frame, labeled as the transport delay, the display offset in one or both of the horizontal and vertical directions is illustrated to ramp upwardly for the duration of the transport delay, at which time the display offset is set back to zero and the new image is displayed on the display 16. The resetting of the display offset and the update in the image of the virtual environment take place without the observer seeing a sharp change in the image. At the point in time X when the actual head position has reached P2 and stopped, the predicted head position based on the previous velocity is for a position which continues along the same path beyond the position P2. At the instant that the actual head position stops and the delayed predicted head position continues to increase, the display offset is ramped to decrease so that the observed image is stationary, in keeping with the actual head position.
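The ramp-and-reset behaviour of the display offset in this example can be walked through numerically as below; the head rate, transport delay and time step are assumed values, and only the constant-velocity portion before the head stops at P2 is simulated.

```python
# Numeric walk-through under assumed values: head rate 50 deg/s, transport
# delay 100 ms, simulation step 20 ms.
HEAD_RATE_DPS = 50.0
TRANSPORT_DELAY_S = 0.1
DT_S = 0.02
steps_per_update = round(TRANSPORT_DELAY_S / DT_S)

offset_deg = 0.0
for step in range(16):
    t_s = step * DT_S
    print(f"t={t_s:0.2f} s   display offset = {offset_deg:5.2f} deg")
    offset_deg += HEAD_RATE_DPS * DT_S            # head keeps turning; shown image does not
    if (step + 1) % steps_per_update == 0:        # new image arrives: offset resets to zero
        offset_deg = 0.0
```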
Although the invention has been described as applied to a virtual environment system using a computer image generator as the image source, it can, with suitable modifications to take into account certain operational differences that will be apparent to one skilled in the art, be applied to virtual presence or telepresence systems which use image sensors such as television cameras mounted on head slaved gimbal systems.
Accordingly, it is within the contemplation of the invention and the claims are intended to encompass all types of virtual environment systems where delays would normally cause image instability.
Claims (16)
1. A method for displaying a virtual environment on a video display comprising the repeated steps of:
determining a visual orientation of said video display with respect to said virtual environment and generating an orientation signal representing said visual orientation;
generating an image of said virtual environment for said visual orientation identified by said orientation signal, said generating requiring a finite delay time;
detecting any change in said visual orientation which may have occurred between a time when said visual orientation was determined and said image was generated; and displaying said image on said video display, said image being shifted on said video display by an amount equivalent to said change, whereby the display of the virtual environment is more stable.
2. The method as claimed in claim 1, further comprising steps of:
measuring at least one of an angular velocity and an angular acceleration of said visual orientation; and calculating a predicted visual orientation of said display with respect to said virtual environment based on said visual orientation and at least one of said angular velocity and said angular acceleration, said predicted visual orientation being for a future point in time equal to a present time plus approximately said finite delay time.
3. The method as claimed in claim 1, wherein said video display is a head mounted display.
4. The method as claimed in claim 2, wherein said video display is a head mounted display.
5. An apparatus for displaying a virtual environment on a video display comprising:
head position processor means for generating a visual orientation signal indicating a visual orientation of said video display with respect to said virtual environment;
image generator means for generating an image of said virtual environment for said visual orientation, said image generating means receiving said visual orientation signals, said image generator means having a finite transport delay time for generating and preparing an image for transmission on a video output signal;
the video display receiving said video output signal for displaying said image;
means for detecting a difference between said visual orientation signal at a time when said orientation signal was used by said image generator means to generate said image and said visual orientation signal at a time of display of said image on said video display and for producing an offset shift signal proportional to said difference; and means for shifting said image on said video display in response to said offset shift signal, whereby the display of the virtual environment is more stable.
6. The apparatus as claimed in claim 5, further comprising means for detecting at least one of an angular velocity and an angular acceleration of said visual orientation for producing a predictive signal;
means for calculating a predicted visual orientation of said display with respect to said virtual environment based on said visual orientation signal and said predictive signal to produce a predicted visual orientation signal, said predicted visual orientation signal being connected to said image generator means in place of said visual orientation signal generated by said head position processor means, said predicted visual orientation signal being for a future point in time equal to a present time plus approximately said transport delay time.
7. The apparatus as claimed in claim 5, wherein said head position processor means comprise an angular head position sensor, said video display being a head mounted display.
8. The apparatus as claimed in claim 6, wherein said video display is a head mounted display, said head position processor means comprises an angular head position sensor, and said means for detecting at least one of an angular velocity and an angular acceleration comprise at least one of an angular head velocity sensor and an angular head acceleration sensor.
9. The apparatus as claimed in claim 5, wherein said video display is a cathode ray tube display, and said shifting means comprise means for adjusting a horizontal offset.
10. The apparatus as claimed in claim 5, wherein said shifting means comprise image relay optics including controllable means for angularly displacing horizontally and vertically an image relayed by said optics.
11. The apparatus as claimed in claim 6, wherein said video display is a cathode ray tube display, and said shifting means comprise means for adjusting a horizontal offset and a vertical offset of said cathode ray tube display.
12. The apparatus as claimed in claim 6, wherein said shifting means comprise image relay optics including controllable means for angularly displacing horizontally and vertically an image relayed by said optics.
13. An apparatus for displaying a virtual environment represented by images generated by an image generator having a finite transport delay time, said image generator generating said images for a given visual orientation of a video display displaying said images with respect to said virtual environment, said image generator receiving a visual orientation signal representing said visual orientation, said apparatus comprising:
means for detecting any change in said visual orientation signal from a time when said signal was used by said image generator means to generate said image and a time of display of said image on said video display to produce an offset shift signal;
and means for shifting said image on said video display in response to said offset shift signal, whereby the display of the virtual environment is more stable.
14. The apparatus as claimed in claim 13, further comprising means for detecting at least one of an angular velocity and an angular acceleration of said visual orientation for producing a predictive signal;
means for calculating a predicted visual orientation of said video display with respect to said virtual environment based on said visual orientation signal and said predictive signal to produce a predicted visual orientation signal, said predicted visual orientation signal being connected to said image generator means in place of said received visual orientation signal, said predicted visual orientation signal being for a future point in time equal to a present time plus approximately said transport delay time.
15. The apparatus as claimed in claim 13, wherein said video display is a head mounted display, said visual orientation signal representing an angular head position.
16. The apparatus as claimed in claim 14, wherein said video display is a head mounted display, said visual orientation signal represents an angular head position, and said means for detecting at least one of an angular velocity and an angular acceleration comprise at least one of an angular head velocity sensor and an angular head acceleration sensor.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US08/563,195 US5933125A (en) | 1995-11-27 | 1995-11-27 | Method and apparatus for reducing instability in the display of a virtual environment |
US08/563,195 | 1995-11-27 | ||
US08/593,842 | 1996-01-30 | ||
US08/593,842 US5764202A (en) | 1995-06-26 | 1996-01-30 | Suppressing image breakup in helmut mounted displays which use temporally separated bit planes to achieve grey scale |
PCT/CA1996/000789 WO1997020244A1 (en) | 1995-11-27 | 1996-11-27 | Method and apparatus for displaying a virtual environment on a video display |
Publications (2)
Publication Number | Publication Date |
---|---|
CA2238693A1 CA2238693A1 (en) | 1997-06-05 |
CA2238693C true CA2238693C (en) | 2009-02-24 |
Family
ID=27073199
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CA002238693A Expired - Lifetime CA2238693C (en) | 1995-11-27 | 1996-11-27 | Method and apparatus for displaying a virtual environment on a video display |
Country Status (3)
Country | Link |
---|---|
AU (1) | AU7616496A (en) |
CA (1) | CA2238693C (en) |
WO (1) | WO1997020244A1 (en) |
Families Citing this family (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
NL1018198C2 (en) * | 2001-06-01 | 2002-12-03 | Tno | Head mounted display device. |
US8015507B2 (en) | 2001-11-05 | 2011-09-06 | H2Eye (International) Limited | Graphical user interface for a remote operated vehicle |
GB2381725B (en) * | 2001-11-05 | 2004-01-14 | H2Eye | Graphical user interface for a remote operated vehicle |
AU2003901528A0 (en) | 2003-03-31 | 2003-05-01 | Seeing Machines Pty Ltd | Eye tracking system and method |
US20140176591A1 (en) * | 2012-12-26 | 2014-06-26 | Georg Klein | Low-latency fusing of color image data |
US9063330B2 (en) * | 2013-05-30 | 2015-06-23 | Oculus Vr, Llc | Perception based predictive tracking for head mounted displays |
CN105593924B (en) * | 2013-12-25 | 2019-06-07 | 索尼公司 | Image processing apparatus, image processing method, computer program and image display system |
EP3265866B1 (en) | 2015-03-05 | 2022-12-28 | Magic Leap, Inc. | Systems and methods for augmented reality |
US10838207B2 (en) | 2015-03-05 | 2020-11-17 | Magic Leap, Inc. | Systems and methods for augmented reality |
US10180734B2 (en) | 2015-03-05 | 2019-01-15 | Magic Leap, Inc. | Systems and methods for augmented reality |
US9874932B2 (en) | 2015-04-09 | 2018-01-23 | Microsoft Technology Licensing, Llc | Avoidance of color breakup in late-stage re-projection |
GB201516121D0 (en) | 2015-09-11 | 2015-10-28 | Bae Systems Plc | Helmet tracker buffering compensation |
AU2016365422A1 (en) | 2015-12-04 | 2018-06-28 | Magic Leap, Inc. | Relocalization systems and methods |
US10649211B2 (en) | 2016-08-02 | 2020-05-12 | Magic Leap, Inc. | Fixed-distance virtual and augmented reality systems and methods |
WO2018086941A1 (en) | 2016-11-08 | 2018-05-17 | Arcelik Anonim Sirketi | System and method for providing virtual reality environments on a curved display |
US10812936B2 (en) | 2017-01-23 | 2020-10-20 | Magic Leap, Inc. | Localization determination for mixed reality systems |
AU2018233733B2 (en) | 2017-03-17 | 2021-11-11 | Magic Leap, Inc. | Mixed reality system with multi-source virtual content compositing and method of generating virtual content using same |
EP3596705A1 (en) | 2017-03-17 | 2020-01-22 | Magic Leap, Inc. | Mixed reality system with color virtual content warping and method of generating virtual content using same |
CN110431599B (en) | 2017-03-17 | 2022-04-12 | 奇跃公司 | Mixed reality system with virtual content warping and method for generating virtual content using the same |
US11379948B2 (en) | 2018-07-23 | 2022-07-05 | Magic Leap, Inc. | Mixed reality system with virtual content warping and method of generating virtual content using same |
WO2020023523A1 (en) | 2018-07-23 | 2020-01-30 | Magic Leap, Inc. | Intra-field sub code timing in field sequential displays |
IL260960B (en) | 2018-08-02 | 2020-02-27 | Rosolio Beery | In-flight training simulation displaying a virtual environment |
US11790860B2 (en) | 2021-09-03 | 2023-10-17 | Honeywell International Inc. | Systems and methods for providing image motion artifact correction for a color sequential (CS) display |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2671619B2 (en) * | 1991-03-06 | 1997-10-29 | 富士通株式会社 | Video generation processor |
JP3318680B2 (en) * | 1992-04-28 | 2002-08-26 | サン・マイクロシステムズ・インコーポレーテッド | Image generation method and image generation device |
US5467104A (en) * | 1992-10-22 | 1995-11-14 | Board Of Regents Of The University Of Washington | Virtual retinal display |
US5422653A (en) * | 1993-01-07 | 1995-06-06 | Maguire, Jr.; Francis J. | Passive virtual reality |
US5369450A (en) * | 1993-06-01 | 1994-11-29 | The Walt Disney Company | Electronic and computational correction of chromatic aberration associated with an optical system used to view a color video display |
US6388638B2 (en) * | 1994-10-28 | 2002-05-14 | Canon Kabushiki Kaisha | Display apparatus and its control method |
- 1996-11-27 AU AU76164/96A patent/AU7616496A/en not_active Abandoned
- 1996-11-27 CA CA002238693A patent/CA2238693C/en not_active Expired - Lifetime
- 1996-11-27 WO PCT/CA1996/000789 patent/WO1997020244A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CA2238693A1 (en) | 1997-06-05 |
WO1997020244A1 (en) | 1997-06-05 |
AU7616496A (en) | 1997-06-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2238693C (en) | Method and apparatus for displaying a virtual environment on a video display | |
US5684498A (en) | Field sequential color head mounted display with suppressed color break-up | |
US5933125A (en) | Method and apparatus for reducing instability in the display of a virtual environment | |
US4634384A (en) | Head and/or eye tracked optically blended display system | |
US8514207B2 (en) | Display apparatus and method | |
KR100520699B1 (en) | Autostereoscopic projection system | |
JP4826602B2 (en) | Display device and method | |
KR102600905B1 (en) | Improvements in and about the display | |
US6454411B1 (en) | Method and apparatus for direct projection of an image onto a human retina | |
US10410566B1 (en) | Head mounted virtual reality display system and method | |
Riecke et al. | Selected technical and perceptual aspects of virtual reality displays | |
Regan et al. | The problem of persistence with rotating displays | |
US5764202A (en) | Suppressing image breakup in helmut mounted displays which use temporally separated bit planes to achieve grey scale | |
US11281290B2 (en) | Display apparatus and method incorporating gaze-dependent display control | |
WO2019154942A1 (en) | Projection array light field display | |
CN115576116B (en) | Image generation device, display equipment and image generation method | |
JPH08334730A (en) | Stereoscopic picture reproducing device | |
CN114174893A (en) | Display device with reduced power consumption | |
US10859823B1 (en) | Head-mounted display device with selective gamma band for high contrast ratio | |
GB2039468A (en) | Improvements in or relating to visual display apparatus | |
US11627291B2 (en) | Image painting with multi-emitter light source | |
CN116389705B (en) | Three-dimensional scene realization method and system for augmented reality | |
RU2108687C1 (en) | Device reconstructing three-dimensional image | |
CA1147073A (en) | Visual display apparatus | |
KR20180025430A (en) | Personal immersion apparatus and display device thereof |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
EEER | Examination request | ||
MKEX | Expiry |
Effective date: 20161128 |