WO2005098516A2 - Horizontal perspective hand-on simulator - Google Patents
- Publication number
- WO2005098516A2 (PCT/US2005/011253; US2005011253W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- display
- horizontal perspective
- peripheral device
- simulator system
- Prior art date
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/0093—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00 with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/20—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images by providing first and second parallax images to an observer's left and right eyes
- G02B30/34—Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B30/00—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
- G02B30/40—Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images giving the observer of a single two-dimensional [2D] image a perception of depth
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04815—Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
- H04N13/279—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
Description
- This invention relates to a three-dimensional simulator system, and in particular to a hands-on computer simulator system capable of operator interaction.
- Three dimensional (3D) capable electronics and computing hardware devices and real-time computer-generated 3D computer graphics have been a popular area of computer science for the past few decades, with innovations in visual, audio and tactile systems. Much of the research in this area has produced hardware and software products that are specifically designed to generate greater realism and more natural computer-human interfaces. These innovations have significantly enhanced and simplified the end-user's computing experience.
- The answer is three dimensional illusions.
- The two dimensional pictures must provide a number of cues of the third dimension to the brain to create the illusion of three dimensional images.
- This effect of third dimension cues is realistically achievable because the brain is quite accustomed to processing them.
- The three dimensional real world is always converted into a two dimensional (e.g. height and width) projected image at the retina, a concave surface at the back of the eye.
- The brain, through experience and perception, generates the depth information to form the three dimensional visual image from two types of depth cues: monocular (one eye perception) and binocular (two eye perception).
- Binocular depth cues are innate and biological while monocular depth cues are learned and environmental.
- Binocular cues provide a very powerful perception of depth. However, there are also depth cues available to a single eye, called monocular depth cues, that can create an impression of depth on a flat image.
- the major monocular cues are: overlapping, relative size, linear perspective and light and shadow. When an object is viewed partially covered, this pattern of blocking is used as a cue to determine that the object is farther away. When two objects known to be the same size and one appears smaller than the other, this pattern of relative size is used as a cue to assume that the smaller object is farther away.
- the cue of relative size also provides the basis for the cue of linear perspective where the farther away the lines are from the observer, the closer together they will appear since parallel lines in a perspective image appear to converge towards a single point. The light falling on an object from a certain angle could provide the cue for the form and depth of an object.
- the distribution of light and shadow on objects is a powerful monocular cue for depth provided by the biologically correct assumption that light comes from above.
- Perspective drawing is most often used to achieve the illusion of three dimension depth and spatial relationships on a flat (two dimension) surface, such as paper or canvas.
- Three dimensional objects are depicted on a two dimensional plane, but are drawn so as to "trick" the eye into perceiving them as being in three dimensional space.
- The first theoretical treatise for constructing perspective, De Pictura, was published in the early 1400's by the architect Leone Battista Alberti. Since the introduction of his book, the details behind "general" perspective have been very well documented. However, the fact that there are a number of other types of perspectives is not well known. Some examples are military 1, cavalier 2, isometric 3, dimetric 4, central perspective 5 and two-point perspective 6, as shown in Figure 1.
- Of special interest is the most common type of perspective, called central perspective 5, shown at the bottom left of Figure 1.
- Central perspective, also called one-point perspective, is the simplest kind of "genuine" perspective construction, and is often taught in art and drafting classes for beginners.
- Figure 2 further illustrates central perspective.
- Because it uses central perspective, the chess board and chess pieces in Figure 2 look like three dimensional objects, even though they are drawn on a two dimensional flat piece of paper.
- Central perspective has a central vanishing point 21, and rectangular objects are placed so their front sides are parallel to the picture plane. The depth of the objects is perpendicular to the picture plane. All parallel receding edges run towards the central vanishing point. The viewer looks towards this vanishing point with a straight view.
- When an architect or artist creates a drawing using central perspective, they must use a single-eye view. That is, the artist creating the drawing captures the image by looking through only one eye, along a line of sight perpendicular to the drawing surface.
- Central perspective is employed extensively in 3D computer graphics for a myriad of applications, such as scientific data visualization, computer-generated prototyping, special effects for movies, medical imaging, and architecture, to name just a few.
- One of the most common and well-known 3D computing applications is 3D gaming, which is used here as an example because the core concepts used in 3D gaming extend to all other 3D computing applications.
- Content 35: the objects (figures, landscapes, etc.) that come to life during game play.
- Real-time computer-generated 3D graphics engine (3D graphics engine) 37: manages the design, content, and AI data; decides what to draw and how to draw it, then renders (displays) it on a computer monitor.
- One of the engine's key components is the renderer. Its job is to take 3D objects that exist within computer-generated world coordinates x, y, z, and render (draw/display) them onto the computer monitor's viewing surface, which is a flat (2D) plane with real world coordinates x, y.
- Figure 4 is a representation of what is happening inside the computer when running a 3D graphics engine.
- This world contains everything that could be experienced during game play. It also uses the Cartesian coordinate system, meaning it has three spatial dimensions x, y, and z. These three dimensions are referred to as "virtual world coordinates" 41.
- Game play for a typical 3D game might begin with a computer-generated 3D earth and a computer-generated 3D satellite orbiting it.
- the virtual world coordinate system enables the earth and satellite to be properly positioned in computer-generated x, y, z space.
- FIG. 5 is a conceptual illustration of what happens inside the computer when an end-user is playing, i.e. running, a first-person 3D application.
- First-person means that the computer monitor is much like a window, through which the person playing the game views the computer-generated world.
- the 3D graphics engine renders the scene from the point of view of the eye of a computer-generated person.
- the computer-generated person can be thought of as a computer-generated or "virtual" simulation of the "real" person actually playing the game.
- The boxed-in area in Figure 5 conceptually represents how a 3D graphics engine minimizes the hardware's burden. It focuses computational resources on extremely small areas of information as compared to the 3D application's entire world. In this example, it is a "computer-generated" polar bear cub being observed by a "computer-generated" virtual person 51. Because the end user is running in first-person, everything the computer-generated person sees is rendered onto the end-user's monitor, i.e. the end user is looking through the eye of the computer-generated person.
- The computer-generated person is looking through only one eye; in other words, a one-eyed view 52.
- the area that the computer-generated person sees with a one-eye view is called the "view volume" 53, and the computer-generated 3D objects within this view volume are what actually get rendered to the computer monitor's 2D viewing surface.
- FIG. 6 illustrates a view volume 64 in more detail.
- A view volume is a subset of a "camera model".
- A camera model is a blueprint that defines the characteristics of both the hardware and software of a 3D graphics engine. Like a very complex and sophisticated automobile engine, a 3D graphics engine consists of so many parts that its camera model is often simplified to illustrate only the essential elements being referenced.
- the camera model depicted in Figure 6 shows a 3D graphics engine using central perspective to render computer-generated 3D objects to a computer monitor's vertical, 2D viewing surface.
- the view volume shown in Figure 6, although more detailed, is the same view volume represented in Figure 5.
- The only difference is semantics, because a 3D graphics engine calls the computer-generated person's one-eye view a camera point 61 (hence camera model).
- The camera model uses a camera's line of sight 62, which is typically perpendicular to the projection plane 63. Every component of a camera model is called an "element".
- The projection plane 63, also called the near clip plane, is the 2D plane onto which the x, y, z coordinates of the 3D objects within the view volume will be rendered.
- Each projection line starts at the camera point 61 and ends at an x, y, z coordinate point 65 of a virtual 3D object within the view volume.
- The 3D graphics engine determines where the projection line intersects the near clip plane 63, and the x and y point 66 where this intersection occurs is rendered onto the near clip plane.
- the near clip plane is displayed on the 2D viewing surface of the computer monitor, as shown in the bottom of Figure 6.
- A real person's eye 68 can then view the 3D image through a real person's line of sight 67, which is the same as the camera's line of sight 62.
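- The camera-model projection described above can be summarized with a short sketch. The following Python snippet is illustrative only (the geometry and parameter names are assumptions, not taken from the patent): it intersects a projection line from the camera point with a vertical near clip plane to obtain the rendered 2D coordinate.

```python
# Minimal sketch of the central-perspective camera model described above:
# a projection line runs from the camera point to each 3D object point, and
# the x, y location where that line crosses the near clip plane is rendered.

def project_central(point, camera=(0.0, 0.0, 0.0), near=1.0):
    """Project a 3D point onto a vertical near clip plane.

    Assumes the camera point sits at `camera`, the line of sight runs along
    the +z axis, and the near clip plane is the plane z = camera_z + near.
    Returns the 2D (x, y) coordinate on that plane, or None if the point is
    at or behind the camera.
    """
    cx, cy, cz = camera
    px, py, pz = point
    dz = pz - cz
    if dz <= 0:
        return None
    t = near / dz                     # where the projection line meets the plane
    return (cx + t * (px - cx), cy + t * (py - cy))

# A point 5 units in front of the camera and 2 units to the right lands at
# x = 2 * (1/5) = 0.4 on the near clip plane.
print(project_central((2.0, 1.0, 5.0)))   # -> (0.4, 0.2)
```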
- 3D central perspective projection, though offering a realistic 3D illusion, has some limitations in allowing the user to have hands-on interaction with the 3D display.
- In horizontal perspective, the angle between the viewing surface and the line of vision is preferably 45° but can be almost any angle, and the viewing surface is preferably horizontal (hence the name "horizontal perspective"), but it can be any surface, as long as the line of vision forms a non-perpendicular angle to it.
- Horizontal perspective images offer a realistic three dimensional illusion, but are little known, primarily due to the narrow viewing location (the viewer's eyepoint has to coincide precisely with the image projection eyepoint) and the complexity involved in projecting the two dimensional image or the three dimensional model into the horizontal perspective image.
- The present invention recognizes that the personal computer is perfectly suitable for horizontal perspective display. It is personal, thus it is designed for the operation of one person, and the computer, with its powerful microprocessor, is well capable of rendering various horizontal perspective images to the viewer. Further, horizontal perspective offers open space display of 3D images, thus allowing hands-on interaction by the end users.
- the present invention discloses a hands-on simulator system using 3-D horizontal perspective display.
- The hands-on simulator system comprises a real time electronic display that can project horizontal perspective images into the open space and a peripheral device that allows the end user to manipulate the images with hands or hand-held tools. Since the horizontal perspective image is projected into the open space, the user can "touch" the image for a realistic hands-on simulation.
- The touching action is actually virtual touching, meaning there is no tactile sensation of touching, only the visual impression of touching. This virtual touching also enables the user to touch the inside of an object.
- the hands-on simulator preferably comprises a computer unit to change the displayed images.
- the computer unit also keeps track of the peripheral device to ensure synchronization between the peripheral device and the displayed image.
- the system can further include a calibration unit to ensure the proper mapping of the peripheral device to the display images.
- the hands-on simulator preferably comprises an eyepoint tracking unit to recalculate the horizontal perspective image using the user's eyepoint as the projection point for minimizing distortion.
- The hands-on simulator further comprises a means to manipulate the displayed image, such as magnification, zoom, rotation, and movement, and even to display a new image.
- Figure 2 shows a typical central perspective drawing.
- Figure 4 shows a computer world view.
- Figure 8 shows the central perspective drawing of three stacking blocks.
- Figure 9 shows the horizontal perspective drawing of three stacking blocks.
- Figure 11 shows the incorrect mapping of a 3-d object onto the horizontal plane.
- Figure 12 shows the correct mapping of a 3-d object onto the horizontal plane.
- Figure 13 shows a typical planar viewing surface with a z-axis correction.
- Figure 14 shows a 3D horizontal perspective image of Figure 13.
- Figure 15 shows an embodiment of the present invention hands-on simulator.
- Figure 18 shows the mapping of a peripheral device onto the hands-on volume.
- Figure 19 shows a user using the present invention hands-on simulator.
- Figure 21 shows a hands-on simulator with camera and speaker triangulation.
- the new and unique inventions described in this document build upon prior art by taking the current state of real-time computer-generated 3D computer graphics, 3D sound, and tactile computer-human interfaces to a whole new level of reality and simplicity. More specifically, these new inventions enable real-time computer- generated 3D simulations to coexist in physical space and time with the end-user and with other real-world physical objects. This capability dramatically improves upon the end-user's visual, auditory and tactile computing experience by providing direct physical interactions with 3D computer-generated objects and sounds.
- Normally, as in central perspective, the plane of vision, at right angle to the line of sight, is also the projected plane of the picture, and depth cues are used to give the illusion of depth to this flat image.
- In horizontal perspective, the plane of vision remains the same, but the projected image is not on this plane. It is on a plane angled to the plane of vision. Typically, the image would be on the ground level surface. This means the image will be physically in the third dimension relative to the plane of vision.
- horizontal perspective can be called horizontal projection.
- the object is to separate the image from the paper, and fuse the image to the three dimension object that projects the horizontal perspective image.
- The horizontal perspective image must be distorted so that the visual image fuses to form the free standing three dimensional figure. It is also essential that the image is viewed from the correct eye point, otherwise the three dimensional illusion is lost.
- In contrast to central perspective images, which have height and width and project an illusion of depth, so that objects are usually abruptly projected and the images appear to be in layers, horizontal perspective images have actual depth and width, and the illusion gives them height, so there is usually a graduated shifting and the images appear to be continuous.
- In Image A, the real-life three dimensional object (three blocks stacked slightly above each other) was drawn by the artist closing one eye and viewing along a line of sight 71 perpendicular to the vertical drawing plane 72.
- the resulting image when viewed vertically, straight on, and through one eye, looks the same as the original image.
- In Image B, the real-life three dimensional object was drawn by the artist closing one eye and viewing along a line of sight 73 at 45° to the horizontal drawing plane 74.
- the resulting image when viewed horizontally, at 45° and through one eye, looks the same as the original image.
- The difference between central perspective, shown in Image A, and horizontal perspective, shown in Image B, is the location of the display plane with respect to the projected three dimensional image.
- In horizontal perspective, the display plane can be adjusted up and down, and therefore the projected image can be displayed in the open air above the display plane, i.e. a physical hand can touch (or more likely pass through) the illusion, or it can be displayed under the display plane, i.e. one cannot touch the illusion because the display plane physically blocks the hand.
- This is the nature of horizontal perspective, and as long as the camera eyepoint and the viewer eyepoint are at the same place, the illusion is present.
- In central perspective, by contrast, the three dimensional illusion is likely to be only inside the display plane, meaning one cannot touch it.
- To offer a touchable illusion, central perspective would need an elaborate display scheme such as surround image projection and a large display volume.
- Figures 8 and 9 illustrate the visual difference between using central and horizontal perspective.
- Figure 8 was drawn with central perspective, through one open eye. Hold the piece of paper vertically in front of you, as you would a traditional drawing, perpendicular to your eye. You can see that central perspective provides a good representation of three dimensional objects on a two dimensional surface.
- Figure 9 was drawn using horizontal perspective, by sitting at your desk and placing the paper lying flat (horizontally) on the desk in front of you. Again, view the image through only one eye. This puts your one open eye, called the eye point, at approximately a 45° angle to the paper, which is the angle that the artist used to make the drawing.
- Figure 10 is an architectural-style illustration that demonstrates a method for making simple geometric drawings on paper or canvas utilizing horizontal perspective.
- Figure 10 is a side view of the same three blocks used in Figure 9. It illustrates the actual mechanics of horizontal perspective. Each point that makes up the object is drawn by projecting the point onto the horizontal drawing plane. To illustrate this, Figure 10 shows a few of the coordinates of the blocks being drawn on the horizontal drawing plane through projection lines. These projection lines start at the eye point (not shown in Figure 10 due to scale), intersect a point 103 on the object, then continue in a straight line to where they intersect the horizontal drawing plane 102, which is where they are physically drawn as a single dot 104 on the paper. When an architect repeats this process for each and every point on the blocks, as seen from the drawing surface to the eye point along the line-of-sight 101, the horizontal perspective drawing is complete and looks like Figure 9. A short sketch of this construction follows below.
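- The following Python sketch illustrates the projection-line construction just described. The coordinate convention (y pointing up, the drawing plane at y = 0) and the sample eye point are assumptions for illustration, not values from the patent.

```python
# Rough sketch of the Figure 10 construction: each projection line starts at
# the eye point, passes through a point on the object, and is extended until
# it meets the horizontal drawing plane, where the corresponding dot is drawn.

def project_onto_horizontal_plane(obj_point, eye_point, plane_y=0.0):
    """Intersect the line eye_point -> obj_point with the plane y = plane_y.

    Coordinates are (x, y, z) with y pointing up, so the drawing plane is
    horizontal. Returns the (x, z) position of the drawn dot, or None if the
    eye and the object point are at the same height.
    """
    ex, ey, ez = eye_point
    ox, oy, oz = obj_point
    if ey == oy:
        return None
    t = (plane_y - ey) / (oy - ey)        # t = 1 is the object point itself
    return (ex + t * (ox - ex), ez + t * (oz - ez))

# Eye point above and behind the paper, looking down at roughly 45 degrees.
eye = (0.0, 10.0, -10.0)
block_corner = (1.0, 2.0, 3.0)
print(project_onto_horizontal_plane(block_corner, eye))   # -> (1.25, 6.25)
```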
- the horizontal perspective display system promotes horizontal perspective projection viewing by providing the viewer with the means to adjust the displayed images to maximize the illusion viewing experience.
- The horizontal perspective display comprises a real time electronic display capable of re-drawing the projected image, together with a viewer's input device to adjust the horizontal perspective image.
- The horizontal perspective display of the present invention can ensure minimum distortion in rendering the three dimensional illusion from the horizontal perspective method.
- The input device can be manually operated, where the viewer manually inputs his or her eyepoint location, or changes the projection image eyepoint, to obtain the optimum three dimensional illusion.
- The input device can also be automatically operated, where the display automatically tracks the viewer's eyepoint and adjusts the projection image accordingly.
- The horizontal perspective display system thus removes the constraint of viewers keeping their heads in relatively fixed positions, a constraint that has created much difficulty in the acceptance of precise-eyepoint displays such as horizontal perspective or hologram displays.
- the input device can be operated manually or automatically.
- the input device can detect the position and orientation of the viewer eyepoint, to compute and to project the image onto the display according to the detection result.
- the input device can be made to detect the position and orientation of the viewer's head along with the orientation of the eyeballs.
- The input device can comprise an infrared detection system to detect the position of the viewer's head to allow the viewer freedom of head movement.
- Other embodiments of the input device can use a triangulation method of detecting the viewer eyepoint location, such as a CCD camera providing position data suitable for the head tracking objectives of the invention.
- the input device can be manually operated by the viewer, such as a keyboard, mouse, trackball, joystick, or the like, to indicate the correct display of the horizontal perspective display images.
- The Hands-On Simulator employs the open space characteristics of horizontal perspective, together with a number of new computer hardware and software elements and processes, which together create the "Hands-On Simulator".
- the Hands-On Simulator generates a totally new and unique computing experience in that it enables an end user to interact physically and directly (Hands-On) with real-time computer-generated 3D graphics (Simulations), which appear in open space above the viewing surface of a display device, i.e. in the end user's own physical space.
- the computer hardware viewing surface is situated horizontally, such that the end-user's line of sight is at a 45° angle to the surface.
- While the end user can experience hands-on simulations at viewing angles other than 45° (e.g. 55°, 30°, etc.), 45° is the optimal angle for the brain to recognize the maximum amount of spatial information in an open space image. Therefore, for simplicity's sake, we use "45°" throughout this document to mean "an approximate 45 degree angle".
- While a horizontal viewing surface is preferred, since it simulates the viewer's experience with the horizontal ground, any viewing surface could offer a similar three dimensional illusion experience.
- the horizontal perspective illusion can appear to be hanging from a ceiling by projecting the horizontal perspective images onto a ceiling surface, or appear to be floating from a wall by projecting the horizontal perspective images onto a vertical wall surface.
- the hands-on simulations are generated within a 3D graphics engines' view volume, creating two new elements, the "Hands-On Volume” and the “Inner-Access Volume.”
- the Hands-On Volume is situated on and above the physical viewing surface.
- This 1:1 correspondence allows accurate and tangible physical interaction by touching and manipulating simulations with hands or hand-held tools.
- The Inner-Access Volume is located underneath the viewing surface, and simulations within this volume appear inside the physical viewing device.
- Simulations generated within the Inner-Access Volume do not share the same physical space with the end user, and the images therefore cannot be directly, physically manipulated by hands or hand-held tools. Instead, they are manipulated indirectly via a computer mouse or a joystick.
- This disclosed Hands-On Simulator can lead to the end user's ability to directly, physically manipulate simulations because they co-inhabit the end-user's own physical space.
- To accomplish this requires a new computing concept where computer-generated world elements have a 1:1 correspondence with their physical real-world equivalents; that is, a physical element and an equivalent computer-generated element occupy the same space and time. This is achieved by identifying and establishing a common "Reference Plane", to which the new elements are synchronized.
- Synchronization with the Reference Plane forms the basis to create the 1:1 correspondence between the "virtual" world of the simulations and the "real" physical world.
- The 1:1 correspondence ensures that images are properly displayed: what is on and above the viewing surface appears on and above the surface, in the Hands-On Volume; what is underneath the viewing surface appears below, in the Inner-Access Volume. Only if this 1:1 correspondence and synchronization to the Reference Plane are present can the end user physically and directly access and interact with simulations via their hands or hand-held tools.
- the present invention simulator further includes a real-time computer-generated 3D-graphics engine as generally described above, but using horizontal perspective projection to display the 3D images.
- One major difference between the present invention and prior art graphics engines is the projection display.
- Existing 3D graphics engines use central perspective, and therefore a vertical plane, to render their view volume, while the present invention simulator requires a "horizontal" oriented rendering plane, rather than a "vertical" oriented rendering plane, to generate horizontal perspective open space images.
- The horizontal perspective images offer far better open space access than central perspective images.
- One of the invented elements in the present invention hands-on simulator is the 1:1 correspondence of the computer-generated world elements and their physical real-world equivalents.
- This 1:1 correspondence is a new computing concept that is essential for the end user to physically and directly access and interact with hands-on simulations.
- This new concept requires the creation of a common physical Reference Plane, as well as the formula for deriving its unique x, y, z spatial coordinates. Determining the location and size of the Reference Plane and its specific coordinates requires understanding the following.
- a computer monitor or viewing device is made of many physical layers, individually and together having thickness or depth.
- Figures 11 and 12 contain a conceptual side-view of a typical CRT-type viewing device.
- The top layer of the monitor's glass surface is the physical "View Surface" 112.
- The phosphor layer, where images are made, is the physical "Image Layer" 113.
- the View Surface 112 and the Image Layer 113 are separate physical layers located at different depths or z coordinates along the viewing device's z axis.
- To display an image the CRT's electron gun excites the phosphors, which in turn emit photons. This means that when you view an image on a CRT, you are looking along its z axis through its glass surface, like you would a window, and seeing the light of the image coming from its phosphors behind the glass.
- Figure 12 shows the proper location of the three blocks on a CRT-type viewing device. That is, the bottom of the middle block is displayed correctly on the View Surface 112 and not on the Image Layer 113. To make this adjustment the z coordinates of the View Surface and Image Layer are used by the Simulation Engine to correctly render the image. Thus the unique task of correctly rendering an open space image on the View Surface vs. the Image Layer is critical in accurately mapping the simulation images to the real world space.
- FIG. 13 shows an example of a complete image being displayed on a viewing device's View Surface. That is, the image, including the bear cub, occupies the entire image area, which is smaller than the viewing device's View Surface. Looking straight at the image, a flat image is seen as in Figure 13, but looking at the proper angle, a 3D horizontal perspective image emerges as shown in Figure 14.
- the Image Layer 133 is given a z coordinate of 0.
- The View Surface lies at a distance along the z axis from the Image Layer, and the Reference Plane's z coordinate 132 is set equal to that of the View Surface, i.e. to its distance from the Image Layer.
- The x and y coordinates, or size, of the Reference Plane can be determined by displaying a complete image on the viewing device and measuring the length of its x and y axes.
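- The bookkeeping described above can be sketched in a few lines of Python. The glass offset and screen dimensions below are placeholder values and the helper name is an assumption; actual values would come from the viewing device or the calibration procedure.

```python
# Sketch of the Reference Plane coordinates: the Image Layer is given z = 0,
# the View Surface sits a small distance in front of it along the viewing
# device's z axis, and the Reference Plane adopts the View Surface's z value
# and the measured x, y extent of a full-screen image.

from dataclasses import dataclass

@dataclass
class ReferencePlane:
    width: float    # measured x extent of a complete displayed image (mm)
    height: float   # measured y extent of a complete displayed image (mm)
    z: float        # distance from the Image Layer to the View Surface (mm)

IMAGE_LAYER_Z = 0.0       # phosphor layer, by definition
GLASS_OFFSET = 15.0       # assumed CRT glass thickness, illustrative only

reference_plane = ReferencePlane(width=360.0, height=270.0,
                                 z=IMAGE_LAYER_Z + GLASS_OFFSET)

def correct_open_space_z(object_z: float) -> float:
    """Shift an object's z so it is rendered relative to the View Surface
    (the Reference Plane) rather than the Image Layer behind the glass."""
    return object_z + reference_plane.z
```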
- The concept of the common physical Reference Plane is a new inventive concept; therefore, display manufacturers may not supply or even know its coordinates. Thus a "Reference Plane Calibration" procedure might need to be performed to establish the Reference Plane coordinates.
- This calibration procedure provides the end user with a number of orchestrated images that s/he interacts with. The end-user's response to these images provides feedback to the Simulation Engine such that it can identify the correct size and location of the Reference Plane. When the end user is satisfied and completes the procedure, the coordinates are saved in the end user's personal profile.
- One element of the present invention horizontal perspective projection hands-on simulator is a computer-generated "Angled Camera” point.
- The camera point is initially located at an arbitrary distance from the Horizontal Plane, and the camera's line-of-sight is oriented at a 45° angle, looking through the center.
- the position of the Angled Camera in relation to the end-user's eye is critical to generating simulations that appear in open space on and above the surface of the viewing device.
- the computer-generated x, y, z coordinates of the Angled Camera point form the vertex of an infinite "pyramid", whose sides pass through the x, y, z coordinates of the Reference/Horizontal Plane.
- Figure 15 illustrates this infinite pyramid, which begins at the Angled Camera point 151 and extends through the Far Clip Plane (not shown).
- Within the pyramid are two unique view volumes, called the Hands-On Volume 153 and the Inner-Access Volume 154. The dimensions of these volumes and the planes that define them are based on their locations within the pyramid.
- FIG 15 also illustrates a plane 155, called Comfort Plane, together with other display elements.
- the Comfort Plane is one of six planes that define the Hands-On Volume 153, and of these planes it is closest to the Angled Camera point 151 and parallel to the Reference Plane 156.
- the Comfort Plane 155 is appropriately named because its location within the pyramid determines the end-user's personal comfort, i.e. how their eyes, head, body, etc. are situated while viewing and interacting with simulations. The end user can adjust the location of the Comfort Plane based on their personal visual comfort through a "Comfort Plane Adjustment" procedure.
- This procedure provides the end user with orchestrated simulations within the Hands-On Volume, and enables them to adjust the location of the Comfort Plane within the pyramid relative to the Reference Plane.
- When the end user is satisfied and completes the procedure, the location of the Comfort Plane is saved in the end-user's personal profile.
- the present invention simulator uniquely defines a "Hands-On Volume” 153.
- The Hands-On Volume is where you can reach your hand in and physically "touch" a simulation. You can envision this by imagining you are sitting in front of a horizontally oriented computer monitor and using the Hands-On Simulator. If you place your hand several inches above the surface of the monitor, you are putting your hand inside both the physical and computer-generated Hands-On Volume at the same time.
- The Hands-On Volume exists within the pyramid and lies between, and is inclusive of, the Comfort Plane and the Reference/Horizontal Plane.
- The present simulator also optionally defines an Inner-Access Volume 154 existing below or inside the physical viewing device. For this reason, an end user cannot directly interact with 3D objects located within the Inner-Access Volume via their hand or hand-held tools. But they can interact in the traditional sense with a computer mouse, joystick, or other similar computer peripheral.
- An "Inner Plane" is further defined, located immediately below and parallel to the Reference/Horizontal Plane 156 within the pyramid. For practical reasons, these two planes can be said to be the same.
- The Inner Plane, along with the Bottom Plane 152, are two of the six planes within the pyramid that define the Inner-Access Volume.
- the Bottom Plane 152 is farthest away from the Angled Camera point, but it is not to be mistaken for the Far Clip plane.
- The Bottom Plane is also parallel to the Reference/Horizontal Plane and is one of the six planes that define the Inner-Access Volume. You can envision the Inner-Access Volume by imagining you are sitting in front of a horizontally oriented computer monitor and using the Hands-On Simulator. If you pushed your hand through the physical surface and placed your hand inside the monitor (which of course is not possible), you would be putting your hand inside the Inner-Access Volume.
- the end-user's preferred viewing distance to the bottom of the viewing pyramid determines the location of these planes.
- One way the end user can adjust the location of the Bottom Plane is through a "Bottom Plane Adjustment" procedure. This procedure provides the end user with orchestrated simulations within the Inner-Access Volume and enables them to interact and adjust the location of the Bottom Plane relative to the physical Reference/Horizontal Plane. When the end user completes the procedure, the Bottom Plane's coordinates are saved in the end-user's personal profile.
- For the end user to view open space images on their physical viewing device, it must be positioned properly, which usually means the physical Reference Plane is placed horizontally to the ground. Whatever the viewing device's position relative to the ground, the Reference/Horizontal Plane must be at approximately a 45° angle to the end-user's line-of-sight for optimum viewing.
- One way the end user might perform this step is to position their CRT computer monitor on the floor in a stand, so that the Reference/Horizontal Plane is horizontal to the floor. This example uses a CRT-type computer monitor, but it could be any type of viewing device, placed at approximately a 45° angle to the end-user's line-of-sight.
- The real-world coordinates of the "End-User's Eye" and the computer-generated Angled Camera point must have a 1:1 correspondence in order for the end user to properly view open space images that appear on and above the Reference/Horizontal Plane.
- One way to do this is for the end user to supply the Simulation Engine with their eye's real-world x, y, z location and line-of-sight information relative to the center of the physical Reference/Horizontal Plane. For example, the end user tells the Simulation Engine that their physical eye will be located 12 inches up, and 12 inches back, while looking at the center of the Reference/Horizontal Plane.
- the Simulation Engine maps the computer-generated Angled Camera point to the End-User's Eye point physical coordinates and line-of-sight.
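- A minimal sketch of this 1:1 eye-to-camera mapping is given below. The coordinate frame (origin at the center of the Reference/Horizontal Plane) and the function name are assumptions made for illustration.

```python
# Sketch of the 1:1 mapping: the computer-generated Angled Camera point is
# placed at the end-user's reported physical eye position, looking at the
# center of the Reference/Horizontal Plane.

def angled_camera_from_eye(eye_up_in=12.0, eye_back_in=12.0):
    """Return the Angled Camera position and look-at target, in inches,
    in a frame whose origin is the center of the Reference/Horizontal Plane
    (x across the plane, y up, z back toward the viewer)."""
    camera_position = (0.0, eye_up_in, eye_back_in)
    look_at = (0.0, 0.0, 0.0)          # center of the Reference/Horizontal Plane
    return camera_position, look_at

position, target = angled_camera_from_eye()
# With 12 inches up and 12 inches back, the line of sight meets the plane at
# the 45 degree angle discussed earlier.
print(position, target)
```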
- The present invention horizontal perspective hands-on simulator employs horizontal perspective projection to mathematically project the 3D objects to the Hands-On and Inner-Access Volumes.
- the existence of a physical Reference Plane and the knowledge of its coordinates are essential to correctly adjusting the Horizontal Plane's coordinates prior to projection.
- This adjustment to the Horizontal Plane enables open space images to appear to the end user on the View Surface vs. the Image Layer by taking into account the offset between the Image Layer and the View Surface, which are located at different values along the viewing device's z axis.
- Through this projection, the three dimensional x, y, z point of the object becomes a two-dimensional x, y point on the Horizontal Plane.
- Projection lines often intersect more than one 3D object coordinate, but only one object x, y, z coordinate along a given projection line can become a Horizontal Plane x, y point.
- the formula to determine which object coordinate becomes a point on the Horizontal Plane is different for each volume.
- An object coordinate 157 results in an image coordinate 158 by following a given projection line and taking the point that is farthest from the Horizontal Plane.
- An object coordinate 159 results in an image coordinate 150 by following a given projection line and taking the point that is closest to the Horizontal Plane.
- In the case of a tie, i.e. if a 3D object point from each volume occupies the same 2D point of the Horizontal Plane, the Hands-On Volume's 3D object point is used.
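- The selection rule described above can be sketched as follows. Reading the "farthest" rule as applying to the Hands-On Volume (above the plane) and the "closest" rule to the Inner-Access Volume (below the plane) is an interpretation consistent with the camera sitting above the plane; the names and the signed-height convention are assumptions for illustration.

```python
# Sketch of the per-volume selection rule: several 3D object points may share
# one projection line, but only one becomes the Horizontal Plane's x, y point.
# `height` is the signed distance from the Horizontal Plane: positive values
# lie in the Hands-On Volume, negative values in the Inner-Access Volume.

def select_visible_point(candidates):
    """candidates: list of (height, payload) pairs on one projection line."""
    hands_on = [c for c in candidates if c[0] >= 0]
    inner = [c for c in candidates if c[0] < 0]
    if hands_on:
        # Hands-On Volume: the point farthest from the Horizontal Plane wins,
        # and it also wins any tie against an Inner-Access point.
        return max(hands_on, key=lambda c: c[0])
    if inner:
        # Inner-Access Volume: the point closest to the plane wins
        # (least negative height).
        return max(inner, key=lambda c: c[0])
    return None

print(select_visible_point([(2.0, "cub ear"), (-1.0, "cub paw"), (0.5, "nose")]))
# -> (2.0, 'cub ear')
```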
- Figure 15 is then an illustration of the present invention Simulation Engine that includes the new computer-generated and real physical elements as described above. It also shows that a real-world element and its computer-generated equivalent are mapped 1:1 and together share a common Reference Plane.
- The full implementation of this Simulation Engine results in a Hands-On Simulator with real-time computer-generated 3D graphics appearing in open space on and above a viewing device's surface, which is oriented approximately 45° to the end-user's line-of-sight.
- The Hands-On Simulator further involves adding completely new elements and processes to existing stereoscopic 3D computer hardware.
- Multi-View provides the end user with multiple and/or separate left-and right-eye views of the same simulation.
- the simulator further includes a new computer-generated "time dimension" element, called “Si-time”.
- SI is an acronym for "Simulation Image” and is one complete image displayed on the viewing device.
- Si-Time is the amount of time the Simulation Engine uses to completely generate and display one Simulation Image. This is similar to a movie projector, where 24 times a second it displays an image; therefore, 1/24 of a second is required for one image to be displayed by the projector. But Si-Time is variable, meaning that depending on the complexity of the view volumes it could take 1/120th or 1/2 a second for the Simulation Engine to complete just one SI.
- Figure 16 helps illustrate these two new time dimension elements. It is a conceptual drawing of what is occurring inside the Simulation Engine when it is generating a two-eye view of a Simulated Image.
- the computer-generated person has both eyes open, a requirement for stereoscopic 3D viewing, and therefore sees the bear cub from two separate vantage points, i.e. from both a right-eye view and a left- eye view. These two separate views are slightly different and offset because the average person's eyes are about 2 inches apart. Therefore, each eye sees the world from a separate point in space and the brain puts them together to make a whole image. This is how and why we see the real world in stereoscopic 3D.
- Figure 16 is a very high-level Simulation Engine blueprint focusing on how the computer-generated person's two eye views are projected onto the Horizontal Plane and then displayed on a stereoscopic 3D capable viewing device, representing one complete Si-Time period. If we use the example from step 3 above, Si-Time takes one second. During this one second of Si-Time the Simulation Engine needs to generate two different eye views, because in this example the stereoscopic 3D viewing device requires a separate left- and right-eye view. There are existing stereoscopic 3D viewing devices that require more than a separate left- and right-eye view. But because the method described here can generate multiple views it works for these devices as well.
- EV-Time-1 is the time period used by the Simulation Engine to complete the first eye (right-eye) view of the computer-generated person. This is the job for this step: within EV-Time-1, and using the Angled Camera at coordinate x, y, z, the Simulation Engine completes the rendering and display of the right-eye view of a given Simulation Image.
- the Simulation Engine starts the process of rendering the computer-generated person's second eye (left-eye) view.
- the illustration in the lower left of Figure 16 shows the Angled Camera point for the left eye 164 at time element "EV-Time-2". That is, this second eye view is completed during EV-Time-2.
- step 5 makes an adjustment to the Angled Camera point. This is illustrated in Figure 16 by the left eye's x coordinate being incremented by two inches. This difference between the right eye's x value and the left eye's x + 2" is what provides the two-inch separation between the eyes, which is required for stereoscopic 3D viewing.
- the distances between people's eyes vary but in the above example we are using the average of 2 inches. It is also possible for the end user to supply the Simulation Engine with their personal eye separation value. This would make the x value for the left and right eyes highly accurate for a given end user and thereby improve the quality of their stereoscopic 3D view.
- Once the Simulation Engine has incremented the Angled Camera point's x coordinate by two inches, or by the personal eye separation value supplied by the end user, it completes the rendering and display of the second (left-eye) view. This is done by the Simulation Engine within the EV-Time-2 period, using the Angled Camera point coordinate x + 2", y, z and rendering the exact same Simulation Image. This completes one Si-Time period.
- the Simulation Engine continues to display the left- and right-eye images, as described above, until it needs to move to the next Si-Time period.
- the job of this step is to determine if it is time to move to a new Si-Time period, and if it is, then increment Si-Time.
- An example of when this may occur is if the bear cub moves his paw or any part of his body. Then a new and second Simulated Image would be required to show the bear cub in its new position. This new Simulated Image of the bear cub, in a slightly different location, gets rendered during a new Si-Time period, or SI-Time-2.
- This new SI-Time-2 period will have its own EV-Time-1 and EV-Time-2, and therefore the simulation steps described above will be repeated during SI-Time-2.
- This process of generating multiple views via the nonstop incrementing of Si-Time and its EV-Times continues as long as the Simulation Engine is generating real-time simulations in stereoscopic 3D.
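- The Si-Time / EV-Time loop described above is summarized in the sketch below. The structure, function names, and scene descriptions are illustrative assumptions; the patent does not supply code.

```python
# Simplified sketch of the stereoscopic rendering loop: each Si-Time period
# renders a right-eye view (EV-Time-1), offsets the Angled Camera's x
# coordinate by the eye separation, renders the left-eye view (EV-Time-2),
# and only then moves on to the next Simulation Image.

EYE_SEPARATION_IN = 2.0   # average value; a personal value may be supplied

def render_view(camera_xyz, label):
    # Placeholder for rendering one horizontal-perspective eye view.
    print(f"rendered {label} from camera {camera_xyz}")

def run_si_time_periods(scene_frames, camera_xyz=(0.0, 12.0, 12.0)):
    x, y, z = camera_xyz
    for si_time, frame in enumerate(scene_frames, start=1):
        # EV-Time-1: right-eye view of this Simulation Image.
        render_view((x, y, z), f"SI-{si_time} right-eye ({frame})")
        # EV-Time-2: left-eye view, offset along x by the eye separation.
        render_view((x + EYE_SEPARATION_IN, y, z), f"SI-{si_time} left-eye ({frame})")
        # The left/right pair keeps being displayed until the scene changes,
        # at which point Si-Time is incremented and the next image is rendered.

run_si_time_periods(["bear cub at rest", "bear cub lifts paw"])
```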
- Multi-View provides the end user with multiple and/or separate left- and right-eye views of the same simulation.
- Multi-View capability is a significant visual and interactive improvement over the single eye view.
- the present invention also allows the viewer to move around the three dimensional display and yet suffer no great distortion since the display can track the viewer eyepoint and re-display the images correspondingly, in contrast to the conventional prior art three dimensional image display where it would be projected and computed as seen from a singular viewing point, and thus any movement by the viewer away from the intended viewing point in space would cause gross distortion.
- The display system can further comprise a computer capable of recalculating the projected image given the movement of the eyepoint location.
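- A hedged sketch of this tracking-and-recalculation loop is shown below; the tracker and renderer functions are placeholders invented for illustration, not interfaces defined in the patent.

```python
# Sketch of the eyepoint-tracking loop: whenever the tracked viewer eyepoint
# moves, the horizontal perspective image is recomputed so that the projection
# eyepoint stays coincident with the viewer's eyepoint.

def track_eyepoint():
    # Placeholder: a head tracker would return the current (x, y, z) eyepoint.
    return (0.0, 12.0, 12.0)

def render_horizontal_perspective(eyepoint):
    print(f"re-projected scene for eyepoint {eyepoint}")

def display_loop(frames=3):
    last_eyepoint = None
    for _ in range(frames):
        eyepoint = track_eyepoint()
        if eyepoint != last_eyepoint:      # viewer moved: recompute the image
            render_horizontal_perspective(eyepoint)
            last_eyepoint = eyepoint

display_loop()
```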
- the horizontal perspective images can be very complex, tedious to create, or created in ways that are not natural for artists or cameras, and therefore require the use of a computer system for the tasks.
- To display a three-dimensional image of an object with complex surfaces or to create animation sequences would demand a lot of computational power and time, and therefore it is a task well suited to the computer.
- Three dimensional capable electronics and computing hardware devices and real-time computer-generated three dimensional computer graphics have advanced significantly in recent years, with marked innovations in visual, audio and tactile systems, and have produced excellent hardware and software products to generate realism and more natural computer-human interfaces.
- The horizontal perspective display system of the present invention is not only in demand for entertainment media such as televisions, movies, and video games, but is also needed in various fields such as education (displaying three-dimensional structures) and technological training (displaying three-dimensional equipment).
- There is also a need for three-dimensional image displays which can be viewed from various angles to enable observation of real objects using object-like images.
- The horizontal perspective display system is also capable of substituting a computer-generated reality for the viewer's observation.
- The system may include audio, visual, motion, and inputs from the user in order to create a complete experience of the three dimensional illusion.
- The input for the horizontal perspective system can be a two dimensional image, several images combined to form one single three dimensional image, or a three dimensional model.
- The three dimensional image or model conveys much more information than a two dimensional image, and by changing the viewing angle, the viewer will get the impression of seeing the same object from different perspectives continuously.
- the horizontal perspective display can further provide multiple views or "Multi- View” capability.
- Multi-View provides the viewer with multiple and/or separate left- and right-eye views of the same simulation.
- Multi-View capability is a significant visual and interactive improvement over the single eye view.
- In Multi-View mode, both the left eye and right eye images are fused by the viewer's brain into a single, three-dimensional illusion.
- The problem of the discrepancy between accommodation and convergence of the eyes, inherent in stereoscopic images and leading to viewer eye fatigue when the discrepancy is large, can be reduced with the horizontal perspective display, especially for motion images, since the position of the viewer's gaze point changes when the display scene changes.
- In Multi-View mode, the objective is to simulate the actions of the two eyes to create the perception of depth, namely that the left eye and the right eye see slightly different images.
- Multi-View devices that can be used in the present invention include methods with glasses, such as the anaglyph method, special polarized glasses or shutter glasses, and methods without glasses, such as a parallax stereogram, a lenticular method, and a mirror method (concave and convex lenses).
- In the anaglyph method, a display image for the right eye and a display image for the left eye are respectively superimpose-displayed in two colors, e.g. red and blue, and observation images for the right and left eyes are separated using color filters, thus allowing a viewer to recognize a stereoscopic image.
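- The color-channel superimposition can be sketched as below using NumPy. The channel assignment (red for the left eye, blue for the right) is one common convention assumed here for illustration; the patent does not prescribe a specific assignment.

```python
# Illustrative sketch of an anaglyph superimposition: the red channel carries
# the left-eye view and the blue channel carries the right-eye view, so that
# colored filter glasses separate the two observation images again.

import numpy as np

def anaglyph(left_rgb: np.ndarray, right_rgb: np.ndarray) -> np.ndarray:
    """Combine two (H, W, 3) eye views into one red/blue anaglyph image."""
    out = np.zeros_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]    # red channel from the left-eye image
    out[..., 2] = right_rgb[..., 2]   # blue channel from the right-eye image
    return out

# Tiny synthetic example with two uniform 2x2 "images".
left = np.full((2, 2, 3), 200, dtype=np.uint8)
right = np.full((2, 2, 3), 50, dtype=np.uint8)
print(anaglyph(left, right)[0, 0])    # -> [200   0  50]
```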
- the images are displayed using horizontal perspective technique with the viewer looking down at an angle.
- The eyepoint of the projected images has to coincide with the eyepoint of the viewer, and therefore the viewer input device is essential in allowing the viewer to observe the three dimensional horizontal perspective illusion. Since the early days of the anaglyph method, there have been many improvements, such as in the spectrum of the red/blue glasses and displays, to generate much more realism and comfort for the viewers.
- In the polarized glasses method, the left eye image and the right eye image are separated by the use of mutually extinguishing polarizing filters such as orthogonal linear polarizers, circular polarizers, or elliptical polarizers.
- the images are normally projected onto screens with polarizing filters and the viewer is then provided with corresponding polarized glasses.
- the left and right eye images appear on the screen at the same time, but only the left eye polarized light is transmitted through the left eye lens of the eyeglasses and only the right eye polarized light is transmitted through the right eye lens.
- Another way for stereoscopic display is the image sequential system.
- In this system, the left eye and right eye images are displayed sequentially rather than superimposed upon one another, and the viewer's lenses are synchronized with the screen display to allow the left eye to see only when the left image is displayed, and the right eye to see only when the right image is displayed.
- the shuttering of the glasses can be achieved by mechanical shuttering or with liquid crystal electronic shuttering.
- display images for the right and left eyes are alternately displayed on a CRT in a time sharing manner, and observation images for the right and left eyes are separated using time sharing shutter glasses which are opened/closed in a time sharing manner in synchronism with the display images, thus allowing an observer to recognize a stereoscopic image.
- Another way to display stereoscopic images is by optical methods.
- In this method, display images for the right and left eyes, which are separately displayed on a viewer using optical means such as prisms, mirrors, lenses, and the like, are superimpose-displayed as observation images in front of an observer, thus allowing the observer to recognize a stereoscopic image.
- Large convex or concave lenses can also be used where two image projectors, projecting left eye and right eye images, are providing focus to the viewer's left and right eye respectively.
- a variation of the optical method is the lenticular method where the images form on cylindrical lens elements or two dimensional array of lens elements.
- Figure 16 is a horizontal perspective display focusing on how the computer- generated person's two eye views are projected onto the Horizontal Plane and then displayed on a stereoscopic 3D capable viewing device.
- Figure 16 represents one complete display time period. During this display time period, the horizontal perspective display needs to generate two different eye views, because in this example the stereoscopic 3D viewing device requires a separate left- and right-eye view.
- The illustration in the upper left of Figure 16 shows the Angled Camera point for the right eye, from which the first (right) eye view is generated.
- The horizontal perspective display then starts the process of rendering the computer-generated person's second eye (left-eye) view.
- The illustration in the lower left of Figure 16 shows the Angled Camera point for the left eye after the completion of this time period. But before the rendering process can begin, the horizontal perspective display makes an adjustment to the Angled Camera point to account for the difference in left and right eye positions.
- Once the horizontal perspective display has incremented the Angled Camera point's x coordinate, the rendering continues by displaying the second (left-eye) view.
- the horizontal perspective display continues to display the left- and right-eye images, as described above, until it needs to move to the next display time period.
- An example of when this may occur is if the bear cub moves his paw or any part of his body. Then a new and second Simulated Image would be required to show the bear cub in its new position.
- This new Simulated Image of the bear cub, in a slightly different location, gets rendered during a new display time period. This process of generating multiple views via the nonstop incrementing of display time continues as long as the horizontal perspective display is generating real-time simulations in stereoscopic 3D.
- The display time is the amount of time the display uses to completely generate and display one image, and the display rate is the number of such images per second. This is similar to a movie projector, where 24 times a second it displays an image; therefore, 1/24 of a second is required for one image to be displayed by the projector. But the display time could be variable, meaning that depending on the complexity of the view volumes it could take 1/12 or 1/2 a second for the computer to complete just one display image. Since the display generates a separate left and right eye view of the same image, the total display time is twice the display time for one eye image.
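- A short worked example of this timing relation, with assumed numbers matching the movie-projector comparison above:

```python
# Worked example of the timing arithmetic stated above (illustrative values).
one_eye_display_time = 1 / 24                     # seconds per single eye view
total_display_time = 2 * one_eye_display_time     # separate left + right views
print(total_display_time)          # -> 0.0833... s, i.e. 1/12 of a second
print(1 / total_display_time)      # -> 12 complete stereo images per second
```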
- the present invention hands-on simulator further includes technologies employed in computer "peripherals".
- Figure 17 shows examples of such Peripherals with six degrees of freedom, meaning that their coordinate system enables them to interact at any given point in an (x, y, z) space.
- The simulator creates a "Peripheral Open-Access Volume" for each Peripheral the end-user requires, such as a Space Glove 171, a Character Animation Device 172, or a Space Tracker 173.
- Figure 18 is a high-level illustration of the Hands-On Simulation Tool, focusing on how a Peripheral's coordinate system is implemented within the Hands-On Simulation Tool.
- The new Peripheral Open-Access Volume, which in the example of Figure 18 is that of a Space Glove 181, is mapped one-to-one with the Open-Access Volume 182.
- the key to achieving a precise one-to-one mapping is to calibrate the Peripheral's volume with the Common Reference, which is the physical View surface, located at the viewing surface of the display device.
- Some Peripherals provide a mechanism that enables the Hands-On Simulation Tool to perform this calibration without any end-user involvement. But if calibrating the Peripheral requires external intervention, then the end-user accomplishes this through an "Open-Access Peripheral Calibration" procedure. This procedure presents the end-user with a series of Simulations within the Hands-On Volume and a user-friendly interface that enables them to adjust the location of the Peripheral's volume until it is in perfect synchronization with the View surface. When the calibration procedure is complete, the Hands-On Simulation Tool saves the information in the end-user's personal profile.
- the Hands-On Simulation Tool will continuously track and map the Peripheral's volume to the Open-Access Volumes.
- the Hands-On Simulation Tool modifies each Hands-On Image it generates based on the data in the Peripheral's volume.
- the end result of this process is the end-user's ability to use any given Peripheral to interact with Simulations within the Hands-On Volume generated in real-time by the Hands-On Simulation Tool.
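- A minimal sketch, assuming a purely translational calibration, of how a Peripheral's volume might be synchronized with the Common Reference (the physical View surface) and then mapped one-to-one into the Open-Access Volume; the class name, the single-point calibration, and the coordinates are illustrative, not taken from the disclosure.

```python
import numpy as np

class PeripheralMapping:
    def __init__(self):
        # Offset from the peripheral's native origin to the View surface origin.
        self.offset = np.zeros(3)

    def calibrate(self, peripheral_point: np.ndarray, view_surface_point: np.ndarray) -> None:
        """Store the correction that moves a known peripheral point onto the
        matching point on the physical View surface (the Common Reference)."""
        self.offset = view_surface_point - peripheral_point

    def to_open_access(self, peripheral_point: np.ndarray) -> np.ndarray:
        """Map a raw peripheral coordinate into the Open-Access Volume."""
        return peripheral_point + self.offset

# Example: the end-user touches a known corner of the View surface with the Space Glove.
mapping = PeripheralMapping()
mapping.calibrate(peripheral_point=np.array([0.12, -0.03, 0.40]),
                  view_surface_point=np.array([0.0, 0.0, 0.0]))
print(mapping.to_open_access(np.array([0.20, 0.05, 0.55])))  # glove fingertip in open space
```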
- with the peripherals linked to the simulator, the user can interact with the display model.
- the Simulation Engine receives the user's inputs through the peripherals and carries out the desired action.
- the simulator can provide proper interaction and display.
- the invention's Hands-On Simulator can then generate a totally new and unique computing experience, in that it enables an end user to interact physically and directly (Hands-On) with real-time computer-generated 3D graphics (Simulations), which appear in open space above the viewing surface of a display device, i.e. in the end user's own physical space.
- the peripheral tracking can be done through camera triangulation or through infrared tracking devices.
- Figure 19 is intended to assist in further explaining the present invention regarding the Open-Access Volume and handheld tools.
- Figure 19 is a simulation of an end-user interacting with a Hands-On Image using a handheld tool.
- the scenario being illustrated is the end-user visualizing large amounts of financial data as a number of interrelated Open-Access 3D simulations.
- the end-user can probe and manipulate the Open-Access simulations by using a handheld tool, which in Figure 19 looks like a pointing device.
- a "computer-generated attachment" is mapped in the form of an Open-Access computer-generated simulation onto the tip of a handheld tool, which in Figure 19 appears to the end-user as a computer-generated "eraser".
- the end-user can of course request that the Hands-On Simulation Tool map any number of computer-generated attachments to a given handheld tool. For example, there can be different computer- generated attachments with unique visual and audio characteristics for cutting, pasting, welding, painting, smearing, pointing, grabbing, etc. And each of these computer-generated attachments would act and sound like the real device they are simulating when they are mapped to the tip of the end-user's handheld tool.
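- The mapping of computer-generated attachments to a handheld tool's tip can be pictured as a simple lookup, as in the hedged sketch below; the attachment names and their visual/audio fields are hypothetical stand-ins, not part of the disclosure.

```python
# Registry of computer-generated attachments, each with its own look and sound.
ATTACHMENTS = {
    "eraser": {"visual": "eraser_mesh", "audio": "rubbing.wav"},
    "welder": {"visual": "torch_mesh",  "audio": "welding.wav"},
    "brush":  {"visual": "brush_mesh",  "audio": "painting.wav"},
}

class HandheldTool:
    def __init__(self):
        self.attachment = None

    def map_attachment(self, name: str) -> None:
        # Attach the requested computer-generated simulation to the tool tip.
        self.attachment = ATTACHMENTS[name]

tool = HandheldTool()
tool.map_attachment("eraser")
print(tool.attachment)
```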
- the simulator can further include 3D audio devices for "SIMULATION RECOGNITION & 3D AUDIO".
- Triangulation is a process employing trigonometry, sensors, and frequencies to "receive" data from simulations in order to determine their precise location in space. It is for this reason that triangulation is a mainstay of the cartography and surveying industries where the sensors and frequencies they use include but are not limited to cameras, lasers, radar, and microwave.
- 3D Audio also uses triangulation, but in the opposite way: 3D Audio "sends" or projects data, in the form of sound, to a specific location. But whether sending or receiving data, locating the simulation in three-dimensional space is done by triangulation with frequency receiving/sending devices.
- Figure 20 shows an end-user 201 looking at a Hands-On Image 202 of a bear cub, projecting from a 3D horizontal perspective display 204. Since the cub appears in open space above the viewing surface, the end-user can reach in and manipulate the cub by hand or with a handheld tool. It is also possible for the end-user to view the cub from different angles, as they would in real life. This is accomplished through the use of triangulation, where the three real-world cameras 203 continuously send images from their unique angles of view to the Hands-On Simulation Tool. This camera data of the real world enables the Hands-On Simulation Tool to locate, track, and map the end-user's body and other real-world simulations positioned within and around the computer monitor's viewing surface.
- Figure 21 also shows the end-user 211 viewing and interacting with the bear cub 212 using a 3D display 214, but it includes 3D sounds 216 emanating from the cub's mouth.
- To accomplish this level of audio quality requires physically combining each of the three cameras 213 with a separate speaker 215, as shown in Figure 21.
- the cameras' data enables the Hands-On Simulation Tool to use triangulation in order to locate, track, and map the end-user's "left and right ear". And since the Hands-On Simulation Tool is generating the bear cub as a computer-generated Hands-On Image it knows the exact location of the cub's mouth.
- the Hands-On Simulation Tool uses triangulation to send data, modifying the spatial characteristics of the audio so that the 3D sound appears to emanate from the cub's computer-generated mouth.
- a new frequency receiving/sending device can be created by combining a video camera with an audio speaker, as previously shown in Figure 21. Note that other sensors and/or transducers may be used as well. Take these new camera/speaker devices and attach or place them near a viewing device, such as a computer monitor as previously shown in Figure 21. This results in each camera/speaker device having a unique and separate "real-world" (x, y, z) location, line-of-sight, and frequency receiving/sending volume. To understand these parameters, think of using a camcorder and looking through its viewfinder. When you do this, the camera has a specific location in space, is pointed in a specific direction, and all the visual frequency information you see or receive through the viewfinder is its "frequency receiving volume".
- Triangulation works by separating and positioning each camera/speaker device such that their individual frequency receiving/sending volumes overlap and cover the exact same area of space. If you have three widely spaced frequency receiving/sending volumes covering the exact same area of space, then any simulation within that space can be accurately located. The next step creates a new element in the Open-Access Camera Model for this real-world space, labeled the "real frequency receiving/sending volume".
- the simulator then performs simulation recognition by continuously locating and tracking the end-user's "left and right eye" and their "line-of-sight" 221.
- the real-world left- and right-eye coordinates are continuously mapped into the Open-Access Camera Model precisely where they are in real space, and the computer-generated cameras' coordinates are then continuously adjusted to match the real-world eye coordinates that are being located, tracked, and mapped.
- This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the end- user's left and right eye. This allows the end-user to freely move their head and look around the Hands-On Image without distortion.
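- One possible sketch of this continuous mapping: each frame, the triangulated real-world eye coordinates are written directly into the Open-Access Camera Model as the left and right render viewpoints. The class, its fields, and the sample frames below are illustrative assumptions.

```python
import numpy as np

class OpenAccessCameraModel:
    def __init__(self):
        self.left_camera = np.zeros(3)
        self.right_camera = np.zeros(3)

    def update_from_tracked_eyes(self, left_eye: np.ndarray, right_eye: np.ndarray) -> None:
        # Real-world eye coordinates already share the View-surface reference,
        # so the mapping here is one-to-one.
        self.left_camera = left_eye.copy()
        self.right_camera = right_eye.copy()

model = OpenAccessCameraModel()
tracked_frames = [  # two sample frames of tracked eye positions (illustrative)
    (np.array([-0.03, 0.40, 0.55]), np.array([0.03, 0.40, 0.55])),
    (np.array([-0.08, 0.42, 0.50]), np.array([-0.02, 0.42, 0.50])),  # head moved to the left
]
for left, right in tracked_frames:
    model.update_from_tracked_eyes(left, right)
    print(model.left_camera, model.right_camera)
```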
- the simulator then performs simulation recognition by continuously locating and tracking the end-user's "left and right ear" and their "line-of-hearing" 222.
- the real-world left- and right-ear coordinates are continuously mapped into the Open-Access Camera Model precisely where they are in real space, and the 3D Audio coordinates are continuously adjusted to match the real-world ear coordinates that are being located, tracked, and mapped.
- This enables the real-time generation of Open-Access sounds based on the exact location of the end-user's left and right ears, allowing the end-user to freely move their head and still hear Open-Access sounds emanating from their correct location.
- the simulator then performs simulation recognition by continuously locating and tracking the end-user's "left and right hand" and their "digits" 222, i.e. fingers and thumbs.
- the real-world left- and right-hand coordinates are continuously mapped into the Open-Access Camera Model precisely where they are in real space, and the Hands-On Image coordinates are continuously adjusted to match the real-world hand coordinates that are being located, tracked, and mapped. This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the end-user's left and right hands, allowing the end-user to freely interact with Simulations within the Hands-On Volume.
- the simulator can perform simulation recognition by continuously locating and tracking "handheld tools" instead of hands.
- These real-world handheld tool coordinates can be continuously mapped into the Open-Access Camera Model precisely where they are in real space, and the Hands-On Image coordinates are continuously adjusted to match the real-world handheld tool coordinates that are being located, tracked, and mapped. This enables the real-time generation of Simulations within the Hands-On Volume based on the exact location of the handheld tools, allowing the end-user to freely interact with Simulations within the Hands-On Volume.
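- A hedged sketch of this recognition step for hands or handheld tools: the tracked tip coordinate is mapped into the Open-Access Camera Model each frame, and a simple distance test decides whether it is touching a Simulation in the Hands-On Volume; the radius threshold and all positions are invented for the example.

```python
import numpy as np

class Simulation:
    def __init__(self, name: str, position: np.ndarray, radius: float):
        self.name, self.position, self.radius = name, position, radius

    def touched_by(self, tip: np.ndarray) -> bool:
        # Treat the Simulation as a sphere and test whether the tip is inside it.
        return float(np.linalg.norm(tip - self.position)) <= self.radius

bear_cub_paw = Simulation("bear cub paw", np.array([0.05, 0.08, 0.20]), radius=0.03)

for tracked_tip in (np.array([0.30, 0.10, 0.25]),   # tool far from the cub
                    np.array([0.06, 0.07, 0.21])):  # tool reaches the paw
    if bear_cub_paw.touched_by(tracked_tip):
        print("interaction: modify the Hands-On Image at", tracked_tip)
    else:
        print("no contact at", tracked_tip)
```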
Landscapes
- Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Optics & Photonics (AREA)
- Theoretical Computer Science (AREA)
- Signal Processing (AREA)
- General Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Processing Or Creating Images (AREA)
- Devices For Indicating Variable Information By Combining Individual Elements (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Manipulator (AREA)
- Image Generation (AREA)
- Position Input By Displaying (AREA)
- Controls And Circuits For Display Device (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05743549A EP1740998A2 (en) | 2004-04-05 | 2005-04-04 | Horizontal perspective hand-on simulator |
JP2007507394A JP2007536608A (en) | 2004-04-05 | 2005-04-04 | Horizontal perspective hands-on simulator |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US55978104P | 2004-04-05 | 2004-04-05 | |
US60/559,781 | 2004-04-05 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2005098516A2 true WO2005098516A2 (en) | 2005-10-20 |
WO2005098516A3 WO2005098516A3 (en) | 2006-07-27 |
Family
ID=35125719
Family Applications (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2005/011253 WO2005098516A2 (en) | 2004-04-05 | 2005-04-04 | Horizontal perspective hand-on simulator |
PCT/US2005/011252 WO2006104493A2 (en) | 2004-04-05 | 2005-04-04 | Horizontal perspective display |
PCT/US2005/011255 WO2005098517A2 (en) | 2004-04-05 | 2005-04-04 | Horizontal perspective hand-on simulator |
PCT/US2005/011254 WO2005101097A2 (en) | 2004-04-05 | 2005-04-04 | Horizontal perspective display |
Family Applications After (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2005/011252 WO2006104493A2 (en) | 2004-04-05 | 2005-04-04 | Horizontal perspective display |
PCT/US2005/011255 WO2005098517A2 (en) | 2004-04-05 | 2005-04-04 | Horizontal perspective hand-on simulator |
PCT/US2005/011254 WO2005101097A2 (en) | 2004-04-05 | 2005-04-04 | Horizontal perspective display |
Country Status (6)
Country | Link |
---|---|
US (2) | US20050219694A1 (en) |
EP (2) | EP1740998A2 (en) |
JP (2) | JP2007531951A (en) |
KR (2) | KR20070047736A (en) |
CN (2) | CN101006110A (en) |
WO (4) | WO2005098516A2 (en) |
Families Citing this family (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7009523B2 (en) * | 1999-05-04 | 2006-03-07 | Intellimats, Llc | Modular protective structure for floor display |
US7358861B2 (en) * | 1999-05-04 | 2008-04-15 | Intellimats | Electronic floor display with alerting |
JP2007531951A (en) * | 2004-04-05 | 2007-11-08 | マイケル エー. ベセリー | Horizontal perspective display |
JP2008506140A (en) | 2004-06-01 | 2008-02-28 | マイケル エー. ベセリー | Horizontal perspective display |
US8717423B2 (en) | 2005-05-09 | 2014-05-06 | Zspace, Inc. | Modifying perspective of stereoscopic images based on changes in user viewpoint |
US7907167B2 (en) * | 2005-05-09 | 2011-03-15 | Infinite Z, Inc. | Three dimensional horizontal perspective workstation |
JP4725595B2 (en) * | 2008-04-24 | 2011-07-13 | ソニー株式会社 | Video processing apparatus, video processing method, program, and recording medium |
EP2279469A1 (en) * | 2008-05-09 | 2011-02-02 | Mbda Uk Limited | Display of 3-dimensional objects |
JP2010122879A (en) * | 2008-11-19 | 2010-06-03 | Sony Ericsson Mobile Communications Ab | Terminal device, display control method, and display control program |
CN101931823A (en) * | 2009-06-24 | 2010-12-29 | 夏普株式会社 | Method and equipment for displaying 3D image |
US9189885B2 (en) | 2009-09-16 | 2015-11-17 | Knorr-Bremse Systeme Fur Schienenfahrzeuge Gmbh | Visual presentation system |
US8717360B2 (en) | 2010-01-29 | 2014-05-06 | Zspace, Inc. | Presenting a view within a three dimensional scene |
JP5573426B2 (en) * | 2010-06-30 | 2014-08-20 | ソニー株式会社 | Audio processing apparatus, audio processing method, and program |
CN106774880B (en) * | 2010-12-22 | 2020-02-21 | Z空间股份有限公司 | Three-dimensional tracking of user control devices in space |
JP2012208705A (en) * | 2011-03-29 | 2012-10-25 | Nec Casio Mobile Communications Ltd | Image operation apparatus, image operation method and program |
US8786529B1 (en) | 2011-05-18 | 2014-07-22 | Zspace, Inc. | Liquid crystal variable drive voltage |
EP2768396A2 (en) | 2011-10-17 | 2014-08-27 | Butterfly Network Inc. | Transmissive imaging and related apparatus and methods |
US9292184B2 (en) | 2011-11-18 | 2016-03-22 | Zspace, Inc. | Indirect 3D scene positioning control |
US20130336640A1 (en) * | 2012-06-15 | 2013-12-19 | Efexio, Inc. | System and method for distributing computer generated 3d visual effects over a communications network |
US9336622B2 (en) | 2012-07-17 | 2016-05-10 | Sony Corporation | System and method to achieve better eyelines in CG characters |
US9667889B2 (en) | 2013-04-03 | 2017-05-30 | Butterfly Network, Inc. | Portable electronic devices with integrated imaging capabilities |
EP3291896A4 (en) * | 2015-05-08 | 2019-05-01 | Myrl Rae Douglass II | Structures and kits for displaying two-dimensional images in three dimensions |
CN105376553B (en) * | 2015-11-24 | 2017-03-08 | 宁波大学 | A kind of 3 D video method for relocating |
US10523929B2 (en) * | 2016-04-27 | 2019-12-31 | Disney Enterprises, Inc. | Systems and methods for creating an immersive video content environment |
US11137884B2 (en) * | 2016-06-14 | 2021-10-05 | International Business Machines Corporation | Modifying an appearance of a GUI to improve GUI usability |
CN106162162B (en) * | 2016-08-01 | 2017-10-31 | 宁波大学 | A kind of reorientation method for objectively evaluating image quality based on rarefaction representation |
CN110035270A (en) * | 2019-02-28 | 2019-07-19 | 努比亚技术有限公司 | A kind of 3D rendering display methods, terminal and computer readable storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5287437A (en) * | 1992-06-02 | 1994-02-15 | Sun Microsystems, Inc. | Method and apparatus for head tracked display of precomputed stereo images |
US5795154A (en) * | 1995-07-07 | 1998-08-18 | Woods; Gail Marjorie | Anaglyphic drawing device |
Family Cites Families (101)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US1592034A (en) * | 1924-09-06 | 1926-07-13 | Macy Art Process Corp | Process and method of effective angular levitation of printed images and the resulting product |
US4182053A (en) * | 1977-09-14 | 1980-01-08 | Systems Technology, Inc. | Display generator for simulating vehicle operation |
US4291380A (en) * | 1979-05-14 | 1981-09-22 | The Singer Company | Resolvability test and projection size clipping for polygon face display |
US4677576A (en) * | 1983-06-27 | 1987-06-30 | Grumman Aerospace Corporation | Non-edge computer image generation system |
US4795248A (en) * | 1984-08-31 | 1989-01-03 | Olympus Optical Company Ltd. | Liquid crystal eyeglass |
US4763280A (en) * | 1985-04-29 | 1988-08-09 | Evans & Sutherland Computer Corp. | Curvilinear dynamic image generation system |
GB8701288D0 (en) * | 1987-01-21 | 1987-02-25 | Waldern J D | Perception of computer-generated imagery |
US5079699A (en) * | 1987-11-27 | 1992-01-07 | Picker International, Inc. | Quick three-dimensional display |
JP2622620B2 (en) * | 1989-11-07 | 1997-06-18 | プロクシマ コーポレイション | Computer input system for altering a computer generated display visible image |
US5502481A (en) * | 1992-11-16 | 1996-03-26 | Reveo, Inc. | Desktop-based projection display system for stereoscopic viewing of displayed imagery over a wide field of view |
US5327285A (en) * | 1990-06-11 | 1994-07-05 | Faris Sadeg M | Methods for manufacturing micropolarizers |
US5537144A (en) * | 1990-06-11 | 1996-07-16 | Revfo, Inc. | Electro-optical display system for visually displaying polarized spatially multiplexed images of 3-D objects for use in stereoscopically viewing the same with high image quality and resolution |
US5276785A (en) * | 1990-08-02 | 1994-01-04 | Xerox Corporation | Moving viewpoint with respect to a target in a three-dimensional workspace |
US6392689B1 (en) * | 1991-02-21 | 2002-05-21 | Eugene Dolgoff | System for displaying moving images pseudostereoscopically |
US5168531A (en) * | 1991-06-27 | 1992-12-01 | Digital Equipment Corporation | Real-time recognition of pointing information from video |
US5381158A (en) * | 1991-07-12 | 1995-01-10 | Kabushiki Kaisha Toshiba | Information retrieval apparatus |
US5264964A (en) * | 1991-12-18 | 1993-11-23 | Sades Faris | Multi-mode stereoscopic imaging system |
US5438623A (en) * | 1993-10-04 | 1995-08-01 | The United States Of America As Represented By The Administrator Of National Aeronautics And Space Administration | Multi-channel spatialization system for audio signals |
US6111598A (en) * | 1993-11-12 | 2000-08-29 | Peveo, Inc. | System and method for producing and displaying spectrally-multiplexed images of three-dimensional imagery for use in flicker-free stereoscopic viewing thereof |
US5400177A (en) * | 1993-11-23 | 1995-03-21 | Petitto; Tony | Technique for depth of field viewing of images with improved clarity and contrast |
US5381127A (en) * | 1993-12-22 | 1995-01-10 | Intel Corporation | Fast static cross-unit comparator |
JPH08163603A (en) * | 1994-08-05 | 1996-06-21 | Tomohiko Hattori | Stereoscopic video display device |
US5652617A (en) * | 1995-06-06 | 1997-07-29 | Barbour; Joel | Side scan down hole video tool having two camera |
US6005607A (en) * | 1995-06-29 | 1999-12-21 | Matsushita Electric Industrial Co., Ltd. | Stereoscopic computer graphics image generating apparatus and stereoscopic TV apparatus |
KR100378112B1 (en) * | 1995-07-25 | 2003-05-23 | 삼성전자주식회사 | Automatic locking/unlocking system using wireless communication and method for the same |
US6640004B2 (en) * | 1995-07-28 | 2003-10-28 | Canon Kabushiki Kaisha | Image sensing and image processing apparatuses |
US6331856B1 (en) * | 1995-11-22 | 2001-12-18 | Nintendo Co., Ltd. | Video game system with coprocessor providing high speed efficient 3D graphics and digital audio signal processing |
US6028593A (en) * | 1995-12-01 | 2000-02-22 | Immersion Corporation | Method and apparatus for providing simulated physical interactions within computer generated environments |
US6252707B1 (en) * | 1996-01-22 | 2001-06-26 | 3Ality, Inc. | Systems for three-dimensional viewing and projection |
US5574836A (en) * | 1996-01-22 | 1996-11-12 | Broemmelsiek; Raymond M. | Interactive display apparatus and method with viewer position compensation |
US5880733A (en) * | 1996-04-30 | 1999-03-09 | Microsoft Corporation | Display system and method for displaying windows of an operating system to provide a three-dimensional workspace for a computer system |
JPH1063470A (en) * | 1996-06-12 | 1998-03-06 | Nintendo Co Ltd | Souond generating device interlocking with image display |
US6100903A (en) * | 1996-08-16 | 2000-08-08 | Goettsche; Mark T | Method for generating an ellipse with texture and perspective |
JP4086336B2 (en) * | 1996-09-18 | 2008-05-14 | 富士通株式会社 | Attribute information providing apparatus and multimedia system |
US6139434A (en) * | 1996-09-24 | 2000-10-31 | Nintendo Co., Ltd. | Three-dimensional image processing apparatus with enhanced automatic and user point of view control |
US6317127B1 (en) * | 1996-10-16 | 2001-11-13 | Hughes Electronics Corporation | Multi-user real-time augmented reality system and method |
JP3034483B2 (en) * | 1997-04-21 | 2000-04-17 | 核燃料サイクル開発機構 | Object search method and apparatus using the method |
US6226008B1 (en) * | 1997-09-04 | 2001-05-01 | Kabushiki Kaisha Sega Enterprises | Image processing device |
US5956046A (en) * | 1997-12-17 | 1999-09-21 | Sun Microsystems, Inc. | Scene synchronization of multiple computer displays |
GB9800397D0 (en) * | 1998-01-09 | 1998-03-04 | Philips Electronics Nv | Virtual environment viewpoint control |
US6529210B1 (en) * | 1998-04-08 | 2003-03-04 | Altor Systems, Inc. | Indirect object manipulation in a simulation |
US6466185B2 (en) * | 1998-04-20 | 2002-10-15 | Alan Sullivan | Multi-planar volumetric display system and method of operation using psychological vision cues |
US6211848B1 (en) * | 1998-05-15 | 2001-04-03 | Massachusetts Institute Of Technology | Dynamic holographic video with haptic interaction |
US6064354A (en) * | 1998-07-01 | 2000-05-16 | Deluca; Michael Joseph | Stereoscopic user interface method and apparatus |
US6552722B1 (en) * | 1998-07-17 | 2003-04-22 | Sensable Technologies, Inc. | Systems and methods for sculpting virtual objects in a haptic virtual reality environment |
US6351280B1 (en) * | 1998-11-20 | 2002-02-26 | Massachusetts Institute Of Technology | Autostereoscopic display system |
US6373482B1 (en) * | 1998-12-23 | 2002-04-16 | Microsoft Corporation | Method, system, and computer program product for modified blending between clip-map tiles |
US6614427B1 (en) * | 1999-02-01 | 2003-09-02 | Steve Aubrey | Process for making stereoscopic images which are congruent with viewer space |
US6452593B1 (en) * | 1999-02-19 | 2002-09-17 | International Business Machines Corporation | Method and system for rendering a virtual three-dimensional graphical display |
US6198524B1 (en) * | 1999-04-19 | 2001-03-06 | Evergreen Innovations Llc | Polarizing system for motion visual depth effects |
US6346938B1 (en) * | 1999-04-27 | 2002-02-12 | Harris Corporation | Computer-resident mechanism for manipulating, navigating through and mensurating displayed image of three-dimensional geometric model |
US6690337B1 (en) * | 1999-06-09 | 2004-02-10 | Panoram Technologies, Inc. | Multi-panel video display |
US6898307B1 (en) * | 1999-09-22 | 2005-05-24 | Xerox Corporation | Object identification method and system for an augmented-reality display |
US6593924B1 (en) * | 1999-10-04 | 2003-07-15 | Intel Corporation | Rendering a non-photorealistic image |
US6431705B1 (en) * | 1999-11-10 | 2002-08-13 | Infoeye | Eyewear heart rate monitor |
US6476813B1 (en) * | 1999-11-30 | 2002-11-05 | Silicon Graphics, Inc. | Method and apparatus for preparing a perspective view of an approximately spherical surface portion |
US6956543B2 (en) * | 2000-02-07 | 2005-10-18 | Sony Corporation | Multiple-screen simultaneous displaying apparatus, multi-screen simultaneous displaying method, video signal generating device, and recorded medium |
EP1264281A4 (en) * | 2000-02-25 | 2007-07-11 | Univ New York State Res Found | Apparatus and method for volume processing and rendering |
JP2001326947A (en) * | 2000-05-12 | 2001-11-22 | Sony Corp | Stereoscopic image display device |
US6956576B1 (en) * | 2000-05-16 | 2005-10-18 | Sun Microsystems, Inc. | Graphics system using sample masks for motion blur, depth of field, and transparency |
EP1373967A2 (en) * | 2000-06-06 | 2004-01-02 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | The extended virtual table: an optical extension for table-like projection systems |
US6977630B1 (en) * | 2000-07-18 | 2005-12-20 | University Of Minnesota | Mobility assist device |
US7227526B2 (en) * | 2000-07-24 | 2007-06-05 | Gesturetek, Inc. | Video-based image control system |
US6680735B1 (en) * | 2000-10-04 | 2004-01-20 | Terarecon, Inc. | Method for correcting gradients of irregular spaced graphic data |
GB2370738B (en) * | 2000-10-27 | 2005-02-16 | Canon Kk | Image processing apparatus |
JP3705739B2 (en) * | 2000-12-11 | 2005-10-12 | 株式会社ナムコ | Information storage medium and game device |
US6774869B2 (en) * | 2000-12-22 | 2004-08-10 | Board Of Trustees Operating Michigan State University | Teleportal face-to-face system |
US6987512B2 (en) * | 2001-03-29 | 2006-01-17 | Microsoft Corporation | 3D navigation techniques |
JP2003085586A (en) * | 2001-06-27 | 2003-03-20 | Namco Ltd | Image display, image displaying method, information storage medium, and image displaying program |
US6478432B1 (en) * | 2001-07-13 | 2002-11-12 | Chad D. Dyner | Dynamically generated interactive real imaging device |
US20040135744A1 (en) * | 2001-08-10 | 2004-07-15 | Oliver Bimber | Virtual showcases |
US20030107645A1 (en) * | 2001-08-17 | 2003-06-12 | Byoungyi Yoon | Method and system for controlling the display location of stereoscopic images |
US6715620B2 (en) * | 2001-10-05 | 2004-04-06 | Martin Taschek | Display frame for album covers |
JP3576521B2 (en) * | 2001-11-02 | 2004-10-13 | 独立行政法人 科学技術振興機構 | Stereoscopic display method and apparatus |
US6700573B2 (en) * | 2001-11-07 | 2004-03-02 | Novalogic, Inc. | Method for rendering realistic terrain simulation |
US7466307B2 (en) * | 2002-04-11 | 2008-12-16 | Synaptics Incorporated | Closed-loop sensor on a solid-state object position detector |
US20040196359A1 (en) * | 2002-05-28 | 2004-10-07 | Blackham Geoffrey Howard | Video conferencing terminal apparatus with part-transmissive curved mirror |
US6943805B2 (en) * | 2002-06-28 | 2005-09-13 | Microsoft Corporation | Systems and methods for providing image rendering using variable rate source sampling |
JP4115188B2 (en) * | 2002-07-19 | 2008-07-09 | キヤノン株式会社 | Virtual space drawing display device |
US7639838B2 (en) * | 2002-08-30 | 2009-12-29 | Jerry C Nims | Multi-dimensional images system for digital image input and output |
JP4467267B2 (en) * | 2002-09-06 | 2010-05-26 | 株式会社ソニー・コンピュータエンタテインメント | Image processing method, image processing apparatus, and image processing system |
US6943754B2 (en) * | 2002-09-27 | 2005-09-13 | The Boeing Company | Gaze tracking system, eye-tracking assembly and an associated method of calibration |
US7321682B2 (en) * | 2002-11-12 | 2008-01-22 | Namco Bandai Games, Inc. | Image generation system, image generation method, program, and information storage medium |
US20040130525A1 (en) * | 2002-11-19 | 2004-07-08 | Suchocki Edward J. | Dynamic touch screen amusement game controller |
JP4100195B2 (en) * | 2003-02-26 | 2008-06-11 | ソニー株式会社 | Three-dimensional object display processing apparatus, display processing method, and computer program |
KR100526741B1 (en) * | 2003-03-26 | 2005-11-08 | 김시학 | Tension Based Interface System for Force Feedback and/or Position Tracking and Surgically Operating System for Minimally Incising the affected Part Using the Same |
US7324121B2 (en) * | 2003-07-21 | 2008-01-29 | Autodesk, Inc. | Adaptive manipulators |
US20050093859A1 (en) * | 2003-11-04 | 2005-05-05 | Siemens Medical Solutions Usa, Inc. | Viewing direction dependent acquisition or processing for 3D ultrasound imaging |
US7667703B2 (en) * | 2003-12-19 | 2010-02-23 | Palo Alto Research Center Incorporated | Systems and method for turning pages in a three-dimensional electronic document |
US7312806B2 (en) * | 2004-01-28 | 2007-12-25 | Idelix Software Inc. | Dynamic width adjustment for detail-in-context lenses |
JP4522129B2 (en) * | 2004-03-31 | 2010-08-11 | キヤノン株式会社 | Image processing method and image processing apparatus |
US20050219693A1 (en) * | 2004-04-02 | 2005-10-06 | David Hartkop | Scanning aperture three dimensional display device |
JP2007531951A (en) * | 2004-04-05 | 2007-11-08 | マイケル エー. ベセリー | Horizontal perspective display |
US20050219240A1 (en) * | 2004-04-05 | 2005-10-06 | Vesely Michael A | Horizontal perspective hands-on simulator |
US20060126927A1 (en) * | 2004-11-30 | 2006-06-15 | Vesely Michael A | Horizontal perspective representation |
US7812815B2 (en) * | 2005-01-25 | 2010-10-12 | The Broad of Trustees of the University of Illinois | Compact haptic and augmented virtual reality system |
US7843470B2 (en) * | 2005-01-31 | 2010-11-30 | Canon Kabushiki Kaisha | System, image processing apparatus, and information processing method |
US20060221071A1 (en) * | 2005-04-04 | 2006-10-05 | Vesely Michael A | Horizontal perspective display |
JP4738870B2 (en) * | 2005-04-08 | 2011-08-03 | キヤノン株式会社 | Information processing method, information processing apparatus, and remote mixed reality sharing apparatus |
US20070043466A1 (en) * | 2005-08-18 | 2007-02-22 | Vesely Michael A | Stereoscopic display using polarized eyewear |
US20070040905A1 (en) * | 2005-08-18 | 2007-02-22 | Vesely Michael A | Stereoscopic display using polarized eyewear |
- 2005
- 2005-04-04 JP JP2007507395A patent/JP2007531951A/en active Pending
- 2005-04-04 WO PCT/US2005/011253 patent/WO2005098516A2/en active Application Filing
- 2005-04-04 WO PCT/US2005/011252 patent/WO2006104493A2/en active Application Filing
- 2005-04-04 KR KR1020067023226A patent/KR20070047736A/en not_active Application Discontinuation
- 2005-04-04 CN CNA2005800183073A patent/CN101006110A/en active Pending
- 2005-04-04 KR KR1020067023229A patent/KR20070044394A/en not_active Application Discontinuation
- 2005-04-04 EP EP05743549A patent/EP1740998A2/en not_active Withdrawn
- 2005-04-04 US US11/098,681 patent/US20050219694A1/en not_active Abandoned
- 2005-04-04 WO PCT/US2005/011255 patent/WO2005098517A2/en active Application Filing
- 2005-04-04 WO PCT/US2005/011254 patent/WO2005101097A2/en active Application Filing
- 2005-04-04 EP EP05733693A patent/EP1749232A2/en not_active Withdrawn
- 2005-04-04 JP JP2007507394A patent/JP2007536608A/en active Pending
- 2005-04-04 CN CNA2005800182600A patent/CN101065783A/en active Pending
- 2005-04-04 US US11/098,685 patent/US20050219695A1/en not_active Abandoned
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5287437A (en) * | 1992-06-02 | 1994-02-15 | Sun Microsystems, Inc. | Method and apparatus for head tracked display of precomputed stereo images |
US5795154A (en) * | 1995-07-07 | 1998-08-18 | Woods; Gail Marjorie | Anaglyphic drawing device |
Also Published As
Publication number | Publication date |
---|---|
JP2007536608A (en) | 2007-12-13 |
JP2007531951A (en) | 2007-11-08 |
US20050219694A1 (en) | 2005-10-06 |
WO2006104493A3 (en) | 2006-12-21 |
WO2005098517A3 (en) | 2006-04-27 |
WO2005101097A3 (en) | 2007-07-05 |
CN101065783A (en) | 2007-10-31 |
KR20070047736A (en) | 2007-05-07 |
KR20070044394A (en) | 2007-04-27 |
WO2005098516A3 (en) | 2006-07-27 |
WO2005101097A2 (en) | 2005-10-27 |
EP1749232A2 (en) | 2007-02-07 |
WO2006104493A2 (en) | 2006-10-05 |
US20050219695A1 (en) | 2005-10-06 |
WO2005098517A2 (en) | 2005-10-20 |
CN101006110A (en) | 2007-07-25 |
EP1740998A2 (en) | 2007-01-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20050264559A1 (en) | Multi-plane horizontal perspective hands-on simulator | |
US9684994B2 (en) | Modifying perspective of stereoscopic images based on changes in user viewpoint | |
US20050219240A1 (en) | Horizontal perspective hands-on simulator | |
US7907167B2 (en) | Three dimensional horizontal perspective workstation | |
EP1740998A2 (en) | Horizontal perspective hand-on simulator | |
US20070291035A1 (en) | Horizontal Perspective Representation | |
JP4823334B2 (en) | Image generation system, image generation method, program, and information storage medium | |
US20060221071A1 (en) | Horizontal perspective display | |
US20060126925A1 (en) | Horizontal perspective representation | |
US20060250390A1 (en) | Horizontal perspective display | |
US20050248566A1 (en) | Horizontal perspective hands-on simulator | |
JP3579683B2 (en) | Method for producing stereoscopic printed matter, stereoscopic printed matter | |
JP2004178581A (en) | Image generation system, image generation method, program, and information storage medium | |
WO2006121955A2 (en) | Horizontal perspective display |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A2 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A2 Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2007507394 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWW | Wipo information: withdrawn in national office |
Ref document number: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020067023229 Country of ref document: KR Ref document number: 2005743549 Country of ref document: EP Ref document number: 6555/DELNP/2006 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200580018307.3 Country of ref document: CN |
|
WWP | Wipo information: published in national office |
Ref document number: 2005743549 Country of ref document: EP |