CN107209565B - Method and system for displaying fixed-size augmented reality objects
- Publication number
- CN107209565B (application CN201680006372.2A / CN201680006372A)
- Authority
- CN
- China
- Prior art keywords
- augmented reality
- eye
- reality object
- real world
- eye display
- Prior art date
- Legal status
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/012—Head tracking input arrangements
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G06F3/013—Eye tracking input arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/20—Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/275—Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0132—Head-up displays characterised by optical features comprising binocular systems
- G02B2027/0134—Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2219/00—Indexing scheme for manipulating 3D models or images for computer graphics
- G06T2219/20—Indexing scheme for editing of 3D models
- G06T2219/2016—Rotation, translation, scaling
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/332—Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
- H04N13/344—Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
Abstract
An example wearable display system includes: a controller; a left-eye display for displaying a left-eye augmented reality image at left-eye display coordinates at a left-eye display size; and a right-eye display for displaying a right-eye augmented reality image at right-eye display coordinates at a right-eye display size, the left-eye and right-eye augmented reality images collectively forming an augmented reality object perceptible at an apparent real world depth by a wearer of the display system. The controller sets the relationship of the left-eye display coordinates relative to the right-eye display coordinates as a function of the apparent real world depth of the augmented reality object. The function maintains an aspect of the left-eye and right-eye display sizes over a non-scaling range of apparent real world depths of the augmented reality object, and scales the left-eye and right-eye display sizes with changing apparent real world depth outside the non-scaling range.
Description
Background
Stereoscopic displays may present images to both the left and right eyes of a viewer. By presenting different views of the same object at different locations in the right and left eye fields of view, a three-dimensional perception of the object may be achieved.
SUMMARY
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
An example wearable, head-mounted display system includes: a left near-eye, see-through display configured to display a left-eye augmented reality image at left-eye display coordinates at a left-eye display size; a right near-eye, see-through display configured to display a right-eye augmented reality image at right-eye display coordinates at a right-eye display size, the left-eye augmented reality image and right-eye augmented reality image collectively forming an augmented reality object that is perceptible at an apparent real world depth by a wearer of the head-mounted display system; and a controller. The controller sets the relationship of the left-eye display coordinates relative to the right-eye display coordinates as a function of the apparent real world depth of the augmented reality object. The function maintains an aspect of the left-eye display size and the right-eye display size over a non-scaling range of apparent real world depths of the augmented reality object, and scales the left-eye display size and the right-eye display size with changing apparent real world depth outside the non-scaling range of apparent real world depths.
Brief Description of Drawings
FIG. 1 illustrates an example environment including a user wearing a near-eye, see-through display device.
Fig. 2 schematically illustrates an example stereoscopic, near-eye, see-through display device.
Fig. 3 is a diagram schematically illustrating an example apparent real world size and depth of an augmented reality object scaled according to a first scaling function.
Fig. 4 is a diagram schematically illustrating an example apparent real world size and depth of an augmented reality object scaled according to a second scaling function.
Fig. 5 is a flow chart illustrating a method for displaying an augmented reality object.
Figs. 6A-6E are diagrams illustrating example scaling functions.
Fig. 7 schematically shows a first example view of an augmented reality object.
Fig. 8 schematically shows a second example view of an augmented reality object.
Fig. 9 schematically shows a third example view of an augmented reality object.
Fig. 10 schematically shows a fourth example view of an augmented reality object.
FIG. 11 illustrates an example computing system.
FIG. 12 illustrates an example head mounted display device.
Detailed Description
A near-eye see-through display device may be configured to display an augmented reality image to provide the illusion that an augmented reality object (sometimes referred to as a hologram) is present in the real world environment surrounding the near-eye display device. To mimic how a wearer of the display device perceives a real object, the displayed augmented reality object may be scaled in size as its perceived depth changes. However, to preserve the visibility of the augmented reality object, it is desirable to maintain one or more aspects of the augmented reality object's size even as the depth of the augmented reality object changes. Such size preservation may reduce the realism of the object, as the object does not scale exactly as a real object would scale. However, such size preservation may make it easier to see the object, which might otherwise become too small or too large if scaled as a real object would scale, and/or may provide increased ability to read or otherwise interact with content displayed on the object.
According to embodiments disclosed herein, augmented reality content (such as user interface elements, holographic icons, etc.) may be displayed on a near-eye, see-through display device according to respective scaling functions that define how the augmented reality content is scaled in size relative to the perceived depth of the augmented reality content. In some examples, different types of augmented reality content may be sized according to different scaling functions. For example, a user interface control element (such as a cursor) may be maintained at the same perceived size over a range of depths, while a hologram displayed as part of an immersive game environment may be scaled linearly with changing depth. In this way, the user interface control element may be maintained at a size visible to a user of the display device, even if the user interface control element is displayed at a relatively distant apparent depth.
As explained above, such a scaling function may also increase the ability of a user to visualize content displayed on an augmented reality object. For example, a holographic newspaper floating on a table across the room from a user may itself be visible, but the headline news on the newspaper may only be legible if the scaling technique described above is employed.
As another example, a user may have difficulty noticing the 3D effect of a (simulated) stereoscopic 3D movie playing on a holographic television set located across a room. With the scaling described herein, the television may become large enough in the user's field of view that he or she can see and enjoy the stereoscopic 3D effect of the movie.
As yet another example, when a user walks relatively close to a fixed-size holographic television object that is displaying a (simulated) stereoscopic 3D movie, the scaling described herein may allow the television to disable the stereoscopic 3D effect and replace it with 2D video to prevent asthenopia and maximize viewer comfort. Alternatively, to prevent the television set from blocking most of the user's field of view when the user is in close proximity, the holographic object may simply fade out of view.

Fig. 1 shows an example environment 100 in which a user 102 wears a near-eye, see-through display device, implemented herein as a Head Mounted Display (HMD) 104. The HMD provides the user 102 with a view of the environment 100 through its see-through displays. The HMD also displays augmented reality images to the user. In one example, the HMD is a stereoscopic display device in which two separate augmented reality images are each displayed on respective left-eye and right-eye displays of the HMD. When viewed by a wearer of the HMD (e.g., user 102), the two augmented reality images collectively form an augmented reality object that is perceptible to the wearer as part of environment 100. Fig. 1 depicts example augmented reality objects 106a and 106b. However, it will be understood that the depicted augmented reality objects are not visible to others in the environment 100, and that the augmented reality objects are only visible to the user 102 through the HMD 104.
The HMD 104 may display the augmented reality image such that the perceived augmented reality object is body-locked and/or world-locked. A body-locked augmented reality object moves together with the HMD 104 as the HMD changes its six-degree-of-freedom pose (i.e., 6DOF: x, y, z, pitch, yaw, roll). In this way, even as the user moves, turns, etc., the body-locked augmented reality object appears to occupy the same portion of the field of view of user 102 and appears to be at the same distance from the user 102.
World-locked augmented reality objects, on the other hand, appear to remain in a fixed position relative to the surrounding environment. Even as the user moves and the user's perspective changes, the world-locked augmented reality object will appear to be in the same position/orientation relative to the surrounding environment. As an example, an augmented reality pawn may appear to remain in the same square on a real world chess board, regardless of the vantage point from which the user views the board. To support world-locked augmented reality objects, the HMD may track its 6DOF pose and perform geometric mapping/modeling of surface aspects of the surrounding environment.
In accordance with the present disclosure, the apparent real world size of an augmented reality object or portions of an augmented reality object may vary depending on the apparent real world depth of the augmented reality object. In other words, the size of the augmented reality object may be increased as the augmented reality object is displayed at a farther perceived distance, and the size of the augmented reality object may be decreased as the augmented reality object is displayed at a closer perceived distance. The scaling function may be tuned such that the augmented reality object or portions of the augmented reality object will occupy the same portion of the user's field of view (FOV) regardless of the perceived distance at which the augmented reality object is displayed. That is, the apparent real world size of the augmented reality object or a portion of the augmented reality object may be increased or decreased to maintain the same angular size relative to the user.
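To make this relationship concrete, the sketch below computes the angular size subtended by an object and the world-space size needed to hold that angular size constant as depth changes. It is an illustration of the geometry described above, not code from the patent; the function names and example numbers are assumptions.

```python
import math

def angular_size_deg(world_size_m: float, depth_m: float) -> float:
    """Angular size (degrees) subtended by an object of the given
    world-space size when viewed at the given apparent depth."""
    return math.degrees(2.0 * math.atan(world_size_m / (2.0 * depth_m)))

def world_size_for_constant_angle(angle_deg: float, depth_m: float) -> float:
    """World-space size required for the object to subtend a fixed angle
    at the given depth; the size grows linearly with depth."""
    return 2.0 * depth_m * math.tan(math.radians(angle_deg) / 2.0)

# Holding a 2-degree angular size: doubling the apparent depth from 2 m to 4 m
# doubles the apparent real world size the object must be given.
print(world_size_for_constant_angle(2.0, 2.0))  # ~0.070 m
print(world_size_for_constant_angle(2.0, 4.0))  # ~0.140 m
```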
In the example shown in fig. 1, the user 102 is creating an augmented reality drawing through gesture input. As shown, the user 102 is creating a first drawing, depicted as an augmented reality object 106a, along a first wall 108 that is relatively close to the user 102 and the HMD 104. One or more aspects of the augmented reality object 106a may be set such that the augmented reality object 106a is visible to the user 102. For example, although the overall size of the augmented reality object 106a may be determined from the user's gesture input, the line thickness of the augmented reality object 106a may be set based on the distance between the user and the first wall 108 on which the augmented reality object 106a is placed in order to ensure that the augmented reality object is visible and reduce eye strain of the user.
If the apparent depth of the augmented reality object changes, for example if the augmented reality object is placed such that its apparent depth increases, one or more aspects of the augmented reality object may be maintained in order to maintain the visibility of the object. As shown in Fig. 1, the user-created drawing is moved to a greater apparent depth. The moved drawing, depicted as augmented reality object 106b, is placed on a second wall 110 that is further away from the user 102 and the HMD 104 than the first wall 108. Thus, the apparent real world depth of the augmented reality object has increased, and the apparent real world size of the augmented reality object has accordingly decreased in order to provide the perception of three dimensions. However, the line thickness of the drawing is maintained in order to maintain visibility of the drawing. As used herein, maintaining the line thickness of the drawing means that the line thickness perceived by the user is maintained. In some examples, maintaining the user-perceived line thickness may include adjusting one or more aspects of the actual displayed line.
As demonstrated in fig. 1, some types of augmented reality objects may be scaled such that one or more aspects (e.g., line thickness) are constant over a range of different apparent depths. Thus, when an object is initially displayed at an apparent depth within that range, or when such an object is moved to an apparent depth within that range, that aspect of the object may be set to a predetermined level that is constant within that range.
Fig. 2 is a schematic diagram 200 illustrating aspects of a wearable stereoscopic display system 202 including a controller 203. The display system shown is similar to conventional eyeglasses and is one non-limiting example of the HMD 104 of fig. 1. The display system includes a right display 206 and a left display 204. In some embodiments, the right and left displays are wholly or partially transparent from the perspective of the wearer in order to give the wearer a clear view of his or her surroundings. This feature enables computerized display imagery to be blended with imagery from the surrounding environment to obtain the illusion of augmented reality.
In some embodiments, the display imagery is transmitted to display system 202 in real-time from a remote computing system (not shown) operatively coupled to display system 202. The display images may be communicated in any suitable form (i.e., type) of transmission signal and data structure. The signals encoding the display images may be communicated via any kind of wired or wireless communication link to the controller 203 of the display system. In other embodiments, at least some of the display image synthesis and processing may be performed in the controller.
Continuing in Fig. 2, each of the right and left displays includes a respective optical system, and the controller 203 is operatively coupled to the right and left optical systems. In the illustrated embodiment, the controller is hidden within the display system frame along with the right and left optical systems. The controller may include appropriate input/output (IO) components that enable it to receive display images from a remote computing system. The controller may also include position sensing components, such as a Global Positioning System (GPS) receiver, a gyroscopic sensor, or an accelerometer to sense head orientation and/or movement, among others. When display system 202 is in operation, controller 203 sends appropriate control signals to the right optical system, which cause the right optical system to form a right display image in right display 206. Similarly, the controller sends appropriate control signals to the left optical system, which cause the left optical system to form a left display image in the left display 204. The wearer of the display system views the right and left display images through the right and left eyes, respectively. When the right and left display images are combined and rendered in an appropriate manner (see below), the wearer experiences the illusion that the augmented reality object is at a specified location and has specified 3D content and other display attributes. It will be understood that an "augmented reality object" as used herein may be an object of any desired complexity and need not be limited to a single object. Instead, the augmented reality object may comprise a complete virtual scene with both foreground and background portions. The augmented reality object may also correspond to a portion or location of a larger augmented reality object.
As shown in fig. 2, left display 204 and right display 206 (also referred to herein as left-eye display and right-eye display) each display a respective augmented reality image (i.e., an image of a tree). Left display 204 is displaying left augmented reality image 208 and right display 206 is displaying right augmented reality image 210. Each of the left display 204 and the right display 206 may comprise a suitable display, such as an LCD display, configured to form a display image based on control signals from the controller 203. Each display includes a plurality of addressable individual pixels arranged on a rectangular grid or other geometric shape. Each of the left display 204 and the right display 206 may further include optics for delivering the displayed image to the eye. Such optics may include waveguides, splitters, partial mirrors, and the like.
Collectively, left augmented reality image 208 and right augmented reality image 210 create augmented reality object 212 when viewed by a wearer of display system 202. Although left augmented reality image 208 and right augmented reality image 210 are depicted in fig. 2 as being identical, it will be understood that each of the left and right augmented reality images may be identical or each may be different (e.g., each may include the same object but from a slightly different perspective). Augmented reality object 212 has an apparent real world size and apparent real world depth determined in accordance with the size of each of left and right augmented reality images 208 and 210 and their location on the respective displays.
The apparent position of the augmented reality object 212, including the apparent real world depth (i.e., z-coordinate), the apparent real world lateral position (i.e., x-coordinate), and the apparent real world vertical position (i.e., y-coordinate), may be dictated by the display coordinates of each of the left and right augmented reality images 208, 210. The apparent size of the object may be dictated by its apparent depth and display size. As used herein, the display coordinates of an augmented reality image include the x, y location of each pixel comprising the augmented reality image. The display size of the augmented reality image is a measure of its length in one or more dimensions as indicated by the number of pixels comprising the augmented reality image, e.g., the proportion of the display occupied by the augmented reality image. Further, as used herein, an augmented reality image refers to the actual image displayed on the display, while an augmented reality object refers to the augmented reality content perceived by the wearer of the display system when viewing both the right and left displays. It will be understood that the augmented reality object may include any suitable augmented reality content, including but not limited to graphical user interfaces, user interface control elements, virtual user markers, holograms, animations, video simulations, and the like.
To adjust the apparent real world depth of the augmented reality object, the right display coordinates and/or the left display coordinates may be set relative to each other. For example, to reduce the apparent real world depth of an augmented reality object, the left and right display coordinates may be set closer to each other. As an example, the tree images may move toward the nose on the left and right displays. To increase the apparent real world depth of the augmented reality object, the left and right display coordinates may be set farther from each other. As an example, the tree images may move away from the nose on the left and right displays.
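One way to realize "setting the left and right display coordinates relative to each other" is to derive a per-eye horizontal shift toward the nose from the desired apparent depth. The sketch below assumes a simple symmetric-vergence model, a nominal interpupillary distance, and a pixels-per-degree value for the display; these specifics are assumptions for illustration and are not taken from the patent.

```python
import math

def nasal_shift_px(apparent_depth_m: float,
                   ipd_m: float = 0.063,            # assumed interpupillary distance
                   pixels_per_degree: float = 20.0  # assumed display resolution
                   ) -> float:
    """Approximate horizontal shift, in pixels per eye toward the nose, that
    places a fused augmented reality object at the given apparent depth."""
    half_vergence_deg = math.degrees(math.atan((ipd_m / 2.0) / apparent_depth_m))
    return half_vergence_deg * pixels_per_degree

# A closer apparent depth requires a larger inward shift of the two images.
print(nasal_shift_px(0.5))  # ~72 px per eye
print(nasal_shift_px(5.0))  # ~7 px per eye
```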
To adjust the apparent real world size of the augmented reality object, the right display size and/or the left display size may be adjusted. For example, the right and/or left display sizes may be increased to increase the apparent real world size of the augmented reality object. However, as will be explained in more detail below, the apparent real world size of the augmented reality object may be the size of the augmented reality object relative to other real objects at the same apparent depth. Thus, in some examples, the apparent real world size of the augmented reality object may be scaled according to the apparent real world depth.
The scaling of the augmented reality object size (and thus the corresponding augmented reality image display size) according to the apparent real world depth may be performed according to a desired scaling function, as explained in more detail below. Briefly, each scaling function may set the left and right display coordinates relative to one another to place the augmented reality object at a desired apparent real world depth, and may scale one or more aspects of the augmented reality image display size based on the apparent real world depth. Each function may perform scaling differently, such as linearly, non-linearly, only within a particular depth range, or according to another suitable rule.
In one example scaling function, outside of a non-scaling range of apparent real world depths, the augmented reality image display size may be scaled linearly with changing apparent real world depth, while within the non-scaling range of apparent real world depths, the augmented reality image display size may be maintained. Within the non-scaling range, the apparent real world size of the augmented reality object thus changes with the apparent real world depth such that the augmented reality object occupies a constant proportion of the field of view of the wearer of the display system.
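A minimal sketch of this example scaling function follows, assuming illustrative threshold depths and a reference display size (none of these values come from the patent): the display size falls off as 1/depth outside the non-scaling range, like a real object, and is held constant inside it.

```python
def piecewise_display_size(depth_m: float,
                           ref_size_px: float = 120.0,  # display size at ref_depth_m (assumed)
                           ref_depth_m: float = 0.5,
                           t1_m: float = 1.0,           # start of non-scaling range (assumed)
                           t2_m: float = 3.0            # end of non-scaling range (assumed)
                           ) -> float:
    """Display size of an augmented reality image as a function of the
    object's apparent real world depth."""
    if depth_m <= t1_m:
        # Real-object behaviour: display size falls off as 1/depth.
        return ref_size_px * ref_depth_m / depth_m
    plateau_px = ref_size_px * ref_depth_m / t1_m  # size reached at T1
    if depth_m <= t2_m:
        # Non-scaling range: constant display size, so the object keeps a
        # constant share of the field of view while its apparent real world
        # size grows with depth.
        return plateau_px
    # Beyond T2: resume 1/depth scaling, starting from the plateau value.
    return plateau_px * t2_m / depth_m
```

Because the segments meet at the values reached at T1 and T2, the function is continuous, so the display size does not jump as the object crosses a threshold depth.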
Fig. 3 is a diagram 300 schematically illustrating an example apparent real world size and depth of an augmented reality object scaled according to a first scaling function. Augmented reality image 302 is displayed on a near-eye, see-through display 304, such as a display included in HMD 104 of fig. 1 and/or display system 202 of fig. 2. The augmented reality image 302 appears to be an augmented reality object 308 when viewed through the eyes of the user 306. Although only one augmented reality image is depicted in fig. 3, it will be understood that display 304 may include two displays, each displaying a respective augmented reality image. FIG. 3 also includes a timeline 310.
At a first point in time T1, the augmented reality image 302 is displayed with a first display size DS1 and with display coordinates that set the augmented reality object at a first apparent depth AD1. Due to the display size and apparent depth, the augmented reality object has a first apparent size AS1.
At a second point in time T2, the apparent depth of the augmented reality object is increased, as shown by apparent depth AD2. The first scaling function applied in the example of Fig. 3 specifies that the display size of the augmented reality image 302 is maintained when the apparent depth changes, whereby the display size DS2 is equal to the display size DS1 of time T1. However, since the apparent depth has increased while the display size remains the same, the apparent size of the augmented reality object 308 increases, as shown by apparent size AS2. As can be appreciated from Fig. 3, the proportions of the user's field of view occupied by the augmented reality image and the augmented reality object remain constant from time T1 to time T2.
Fig. 4 is a diagram 400 schematically illustrating an example apparent real world size and depth of an augmented reality object scaled according to a second scaling function. Similar to fig. 3, augmented reality image 402 is displayed on a near-eye, see-through display 404, such as a display included in HMD 104 of fig. 1 and/or display system 202 of fig. 2. The augmented reality image 402 appears to be an augmented reality object 408 when viewed through the eyes of the user 406. Although only one augmented reality image is depicted in fig. 4, it will be understood that display 404 may include two displays, each displaying a respective augmented reality image. FIG. 4 also includes a timeline 410.
At the first point in time T1, the augmented reality image 402 is displayed with a third display size DS3 and with display coordinates that set the augmented reality object at a third apparent depth AD3. Due to the display size and apparent depth, the augmented reality object has a third apparent size AS3. In the example shown in Fig. 4, the third display size DS3 is equal to the first display size DS1 of Fig. 3. Likewise, the third apparent depth AD3 and the third apparent size AS3 are equal to the first apparent depth AD1 and the first apparent size AS1, respectively, of Fig. 3.
At a second point in time T2, the apparent depth of the augmented reality object is increased, as shown by apparent depth AD4. The second scaling function applied in the example of Fig. 4 specifies that the display size of the augmented reality image 402 is scaled linearly with apparent depth. Thus, the display size DS4 is reduced relative to the display size DS3 at time T1. As a result, the apparent size of the augmented reality object 408 at time T2 remains the same, as shown by apparent size AS4. Thus, the apparent size AS3 of the augmented reality object at time T1 is equal to the apparent size AS4 at time T2. As can be appreciated from Fig. 4, the proportions of the user's field of view occupied by the augmented reality image and the augmented reality object decrease at time T2 relative to time T1.
Turning now to Fig. 5, a method 500 for displaying an augmented reality object is illustrated. The method 500 may be implemented in a wearable, head-mounted stereoscopic display system, such as the HMD 104 of Fig. 1 or the display system 202 of Fig. 2, described above, or the HMD 1200 of Fig. 12, described below.
At 502, method 500 includes obtaining an augmented reality object to be displayed on the display system. The augmented reality object may include any suitable augmented reality content and may be displayed as part of a graphical user interface, game, guidance or assistance system, or any suitable augmented or immersive environment. The augmented reality object may be obtained from a remote service, from memory of the display system, or from another suitable source, in response to user input, a predetermined sequence of a game or other executing content, or another suitable trigger. As explained above, the augmented reality object may include right-eye and left-eye augmented reality images, each configured to be displayed on respective right-eye and left-eye displays of the display system. Thus, obtaining the augmented reality object may include obtaining respective left-eye and right-eye augmented reality images.
At 504, the method includes determining an augmented reality object type and an associated scaling function. Augmented reality objects may be classified into one or more types of objects. Example types of augmented reality objects include graphical user interfaces, user interface control elements (e.g., cursors, arrows), virtual user markers (e.g., drawings), navigation and/or auxiliary icons, holograms, and other suitable types of augmented reality objects. Each type of augmented reality object may have an associated scaling function that specifies how the display size of the augmented reality image forming the augmented reality object scales according to the apparent real world depth of the augmented reality object.
At 506, an apparent real world depth of the augmented reality object is determined. The augmented reality object may be displayed at any suitable apparent real world depth. The apparent real world depth of the augmented reality object may be set according to one or more suitable parameters including, but not limited to, a user command (e.g., whether the user has issued a gesture, voice, or other command indicating that the augmented reality object is to be placed at a given location), an association with one or more real world objects, and preset parameters of the augmented reality object (e.g., the augmented reality object may have a preset depth selected to reduce eye strain of the user).
At 508, the method 500 includes displaying the augmented reality object at the apparent real world depth and at an apparent real world size according to a scaling function. To display the augmented reality object, method 500 includes displaying the left-eye augmented reality image at left-eye display coordinates and at a left-eye display size according to the scaling function on the left near-eye, see-through display, as indicated at 510. Further, the method 500 includes displaying the right-eye augmented reality image at right-eye display coordinates and at a right-eye display size according to the scaling function on the right near-eye, see-through display, as indicated at 512.
As explained previously, the apparent real world depth of the augmented reality object may be indicated by the respective right and left eye display coordinates. An appropriate apparent real world size of the augmented reality object may then be set as a function of the apparent real world depth according to the scaling function. For example, the augmented reality object may have a default apparent real world size for a given apparent real world depth. The default size may be based on the type of augmented reality object, the context and/or environment in which the augmented reality object is placed, user input, and/or other suitable factors. The scaling function may then alter the apparent real world size based on the determined real world depth. To adjust the apparent real world size, the right and left eye display sizes of the right and left eye augmented reality images may be adjusted, as explained above.
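The steps of method 500 can be summarized as a small pipeline: look up the scaling function associated with the object's type, determine the apparent depth, obtain the per-eye display size from the scaling function, and derive the per-eye display coordinates from the depth. The helper names, the vergence math, and the display parameters below are illustrative assumptions rather than the patent's implementation.

```python
import math
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class EyeImage:
    x_px: float       # horizontal display coordinate (positive = toward wearer's right)
    y_px: float       # vertical display coordinate
    size_px: float    # display size

def place_object(depth_m: float,
                 scaling_fn: Callable[[float], float],
                 y_px: float = 0.0,
                 ipd_m: float = 0.063,
                 pixels_per_degree: float = 20.0) -> Tuple[EyeImage, EyeImage]:
    """Compute left-eye and right-eye augmented reality images for an object
    at the given apparent real world depth (cf. steps 506-512 of method 500)."""
    size_px = scaling_fn(depth_m)  # step 508: display size from the scaling function
    # Shift each eye's image toward the nose so the fused object appears at depth_m
    # (sign convention assumed: left-eye image shifts right, right-eye image shifts left).
    shift_px = math.degrees(math.atan((ipd_m / 2.0) / depth_m)) * pixels_per_degree
    left = EyeImage(x_px=+shift_px, y_px=y_px, size_px=size_px)   # step 510
    right = EyeImage(x_px=-shift_px, y_px=y_px, size_px=size_px)  # step 512
    return left, right

# Example: a cursor whose display size is constant at all depths.
left, right = place_object(2.0, scaling_fn=lambda depth: 80.0)
print(left, right)
```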
Example scaling functions applicable during execution of the method 500 are illustrated in Figs. 6A-6E. Each of diagrams 601, 603, 605, 607, and 609 plots augmented reality image display size as a function of the apparent real world depth of the corresponding augmented reality object. These example functions may be applied to one or more dimensions (e.g., height, or width, or height and width) of the augmented reality image. These example functions may also be applied to another aspect of the augmented reality image, such as line thickness.
The first linear function, illustrated by line 602, scales the display size linearly (e.g., 1:1) with changing apparent depth across all apparent depths within the user's visible range. The first linear scaling function may be used to scale an augmented reality object intended to mimic an element within the user's environment (e.g., an object within a game environment). While a linear function such as the one illustrated by line 602 may accurately represent how an object changes in perceived size as its depth changes, it may cause the object to become too small to be accurately perceived, or to become so large as to obscure the user's view.
Another example of a linear scaling function is illustrated by line 604. In this second linear scaling function, the display size of the augmented reality image remains constant regardless of the apparent real world depth. Although this method of sizing an augmented reality object is simple to perform, it may suffer from the same problems as the first linear scaling function, e.g. the augmented reality object is too small or too large at some depths. Realism is also reduced because augmented reality objects scaled in this way do not mimic the scaling of real-world objects.
To take advantage of the linear scaling function while avoiding the size problems described above, various piecewise scaling functions may be applied. An example of a first piecewise scaling function is shown as line 606. Here, the display size is maintained constant over a first non-scaling range of apparent depths, while being adjusted linearly with changing depth at depths outside the first non-scaling range. Thus, according to the first piecewise scaling function, the left-eye and right-eye display sizes are scaled according to the apparent real world depth (e.g., decreasing in size with increasing depth) until the apparent real world depth reaches a first threshold depth T1. The display size then remains constant over the non-scaling depth range until a second threshold depth T2 is reached. At depths beyond the first non-scaling range, the left-eye and right-eye display sizes are again scaled according to the apparent real world depth.
The first piecewise scaling function may be applied to scale augmented reality objects that are not necessarily related to a real object or the real world environment. This may include user interface control elements such as a cursor, a graphical interface, and virtual user markers such as drawings. By maintaining the display size of the displayed augmented reality image, the apparent real world size of the augmented reality object may be smaller at smaller depths and larger at larger depths, thereby occupying the same constant proportion of the user's field of view within the first non-scaling depth range. By doing so, the user may easily visualize and/or interact with the augmented reality object, even at relatively far depths. Further, by scaling the display size according to depth outside the first non-scaling range, the first piecewise scaling function prevents the augmented reality object from becoming too large and obstructing the user's view.
A second piecewise scaling function is illustrated by line 608. The second piecewise scaling function is similar to the first piecewise scaling function and includes a second non-scaling depth range, between a first threshold depth T1 and a second threshold depth T2, over which the display size of the augmented reality image is maintained at a constant size. The second non-scaling depth range may be different from the first non-scaling range; for example, the second non-scaling range may span a larger range of depths than the first non-scaling range.
A third piecewise scaling function is illustrated by line 610. The third piecewise scaling function linearly scales the display size of the augmented reality image as a function of depth within a scaled depth range, but maintains the display size at one or more constant sizes outside of the scaled depth range. For example, the display size is maintained at a first, relatively larger display size at close-range depths, scaled linearly within the scaled depth range, and then maintained at a second, relatively smaller display size at far-range depths.
The example scaling functions described above may each be associated with a respective different type of augmented reality object and automatically applied whenever the associated type of augmented reality object is displayed. In other examples, a respective scaling function may be applied to an augmented reality object in response to a user request or other input.
When more than one augmented reality object is displayed, each displayed augmented reality object may be scaled according to its respective scaling function. As a result, some augmented reality objects may be scaled similarly when displayed together, while other augmented reality objects may be scaled differently. As a specific example, a displayed object that is part of a game (e.g., a holographic tree, such as the one illustrated in Fig. 2) may be scaled linearly with changing depth at all apparent depths to mimic how the object would be perceived in the real world. In contrast, a control object (such as a cursor for controlling aspects of the game) may be scaled according to the first piecewise scaling function to maintain visibility of the cursor.
Thus, in the above example, the left-eye display coordinates may be set relative to the right-eye display coordinates according to the apparent real world depths of the first and second augmented reality objects. An aspect of the left-eye display size and the right-eye display size (e.g., the total image size) may be maintained over a non-scaling range of apparent real world depths for only the first augmented reality object. Outside of the non-scaling range of apparent real world depths, the left-eye display size and the right-eye display size may be scaled with changing apparent real world depth for both the first and second augmented reality objects. Within the non-scaling range of apparent real world depths, the left-eye display size and the right-eye display size may be scaled with changing apparent real world depth for only the second augmented reality object.
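As a sketch of how differently typed objects might each be scaled by their own function when displayed together, the mapping below pairs illustrative object types with functions shaped like those of Figs. 6A-6E. The type names, thresholds, and reference sizes are assumptions for illustration.

```python
def real_object(depth_m: float) -> float:
    """Cf. line 602: display size scales as 1/depth, like a real object."""
    return 60.0 / depth_m

def plateau(depth_m: float, t1: float = 1.0, t2: float = 3.0, ref: float = 60.0) -> float:
    """Cf. lines 606/608: 1/depth scaling outside [t1, t2], constant inside."""
    if depth_m <= t1:
        return ref / depth_m
    return ref / t1 if depth_m <= t2 else (ref / t1) * t2 / depth_m

SCALING_FUNCTIONS = {
    "game_hologram":  real_object,                              # e.g., the holographic tree
    "cursor":         plateau,                                  # first piecewise scaling function
    "holographic_tv": lambda d: 60.0 / min(max(d, 1.0), 3.0),   # cf. line 610: clamped scaling
}

def display_size_for(object_type: str, depth_m: float) -> float:
    return SCALING_FUNCTIONS[object_type](depth_m)

print(display_size_for("game_hologram", 2.0))  # 30.0 -> shrinks with depth
print(display_size_for("cursor", 2.0))         # 60.0 -> constant within the non-scaling range
```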
The scaling functions described above in connection with fig. 6A-6E are exemplary in nature, and other scaling functions may be used. Scaling functions having any number of constant, linear or non-linear segments may be used. Different scaled segments of the same function may have different scaling properties. For example, the slope of the scaled segment before the constant segment may be greater than the slope of the scaled segment after the constant segment.
Other variations on the functions illustrated in Figs. 6A-6E are contemplated. For example, the slope of the first linear function may be smaller or larger than illustrated. In another example, the first piecewise scaling function may scale the size within the non-scaling depth range, but at a much smaller rate than outside the non-scaling depth range. In doing so, the function may apply only a fraction of the scaling that would be needed to maintain the same angular size, blending two considerations: the user is given cues that he or she is moving relative to the augmented reality object, while the angular size of the augmented reality object is largely maintained to allow the user to more easily view and interact with it. Further, in some examples, the scaling function may be user configurable.
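A sketch of this blended variant, under assumed parameters: a blend factor of 0 gives a constant display size (constant angular size), a factor of 1 gives full real-object scaling, and values in between scale at a reduced rate inside the non-scaling range.

```python
def blended_display_size(depth_m: float,
                         blend: float = 0.25,        # assumed blend factor inside the range
                         ref_size_px: float = 80.0,  # display size at ref_depth_m (assumed)
                         ref_depth_m: float = 1.0) -> float:
    """Scale at a reduced rate: mix real-object 1/depth scaling with a constant
    display size so the wearer gets a depth cue while the angular size of the
    object is largely preserved."""
    real_px = ref_size_px * ref_depth_m / depth_m   # scales like a real object
    constant_px = ref_size_px                       # constant share of the field of view
    return blend * real_px + (1.0 - blend) * constant_px

print(blended_display_size(1.0))  # 80.0
print(blended_display_size(4.0))  # 65.0 -> shrinks, but far less than 1/depth would (20.0)
```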
Some scaling functions may place limits on the maximum and minimum apparent real world sizes, which will cause the angular size of the augmented reality object to appear to change if the user moves relative to the object beyond the corresponding physical distances. The scaling operation may be triggered by virtually any change in object positioning and is not limited to positioning due to collisions with other real world or augmented reality objects.
These scaling operations may be applied continuously, periodically, or at a single point in time. For example, a floating user interface element may continuously update its apparent real world size to maintain its angular size (e.g., proportional to the user's field of view) based on placement against the real world surface at which the user is looking, while a user-drawn line may set its own size to maintain a target angular size based on distance from the target physical surface on which it is drawn, but then not change in world space size after that point.
Further, some scaling functions may adjust aspects of the displayed augmented reality image, in addition to or instead of the image display size. For example, the chromaticity, color, transparency, lighting effect, and/or feature density of the augmented reality image may be adjusted based on the apparent real world depth.
Example scaling functions are described above in connection with how the total apparent real world size of an augmented reality object changes based on apparent real world depth. However, instead of or in addition to adjusting the overall apparent real world size, one or more particular aspects of the augmented reality object may be adjusted. One example aspect that may be adjusted is the line thickness of the augmented reality object, described in more detail below. Another example aspect that may be adjusted is object orientation. For example, an augmented reality object such as a book can be easily seen when viewed head-on. However, when the user views the same object from a side (e.g., 90 degree) angle, it is virtually impossible to read the book. Thus, the augmented reality object may be automatically rotated to face the user. This effect may be referred to as billboarding. As with the scaling effect, the billboarding effect may be keyed to the apparent real world depth. For example, billboarding may be applied only within a range of apparent real world depths.
Fig. 7 illustrates an example view 700 from the perspective of a user looking through a near-eye, see-through display (e.g., HMD 104, display system 202). In view 700, the user can see real world walls 702a, 702b, 702c, 702d and floor 704. In addition to the real-world aspects of the surrounding environment, the user may also see a virtual user marker augmented reality object, depicted here as a first instance of a horizontal line 706' on wall 702b and a second instance of the same horizontal line 706'' on wall 702d.
In this example, horizontal line 706'' is 5 feet away from the user and occupies a vertical angular spread of 0.95 degrees. Horizontal line 706'' may appear to be one inch high in world space coordinates. On the other hand, when 10 feet from the user, the same horizontal line 706' may still occupy a vertical angular spread of 0.95 degrees, but appear to be 2 inches high in world space coordinates. In other words, the line occupies the same proportion of the HMD's field of view at different distances, and the line will have the same perceived thickness regardless of the apparent real world depth at which the line is drawn. Maintaining this thickness at different distances may make it easier for the user to perceive the augmented reality object at farther depths.
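These numbers can be checked with the angular-size relation: a line of height h viewed at distance d subtends 2·atan(h / 2d). The quick check below is only a unit-conversion exercise, not part of the patent.

```python
import math

def vertical_angular_spread_deg(height_in: float, distance_ft: float) -> float:
    """Vertical angular spread, in degrees, of a line of the given height
    (inches) viewed at the given distance (feet)."""
    distance_in = distance_ft * 12.0
    return math.degrees(2.0 * math.atan(height_in / (2.0 * distance_in)))

print(vertical_angular_spread_deg(1.0, 5.0))   # ~0.95 degrees: 1-inch line at 5 feet
print(vertical_angular_spread_deg(2.0, 10.0))  # ~0.95 degrees: 2-inch line at 10 feet
```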
In some examples, the horizontal line length may scale according to depth. As shown, the perceived length of horizontal line 706' is shorter than the perceived length of horizontal line 706''. However, in other examples, the line length, like the line thickness, may be kept constant.
As another example, a user interface control element (depicted here as a cursor) may be displayed according to a piecewise scaling function, such as the first piecewise scaling function described above. Fig. 8 shows a view 800 with a first instance of an augmented reality cursor 802' at a relatively farther distance and a second instance of the same augmented reality cursor 802'' at a relatively closer distance. In both instances, the augmented reality cursor occupies the same proportion of the user's field of view. As explained above, to do so, the piecewise scaling function maintains the same display size for the left-eye and right-eye augmented reality images (which include the augmented reality cursor), at least within the non-scaling depth range.
As a further example, the overall size of an augmented reality object comprising a number of constituent elements is scaled so as to have a larger corresponding apparent real world size when at a relatively greater distance and a relatively smaller corresponding apparent real world size when at a relatively closer distance. As an example, fig. 9 shows a view 900 of an augmented reality object with a first instance of a picture 902' at a relatively far distance and a second instance of the same picture 902 "at a relatively close distance. The augmented reality object is scaled so as to occupy the same proportion of the HMD's field of view at different distances. As a result, picture 902' has a larger real world size than picture 902 ".
In some examples, the augmented reality object may be a parent object that includes multiple child objects (e.g., sub-objects). For example, the object illustrated in Fig. 9 includes a square box with two circles contained within the box. In some examples, the scaling function may be applied differently to different children of the parent augmented reality object. In this way, aspects of a particular child object may be scaled and/or maintained based on depth, while aspects of other child objects may not be scaled or maintained based on depth. In one example, the circles and box may be scaled based on depth, while the thickness of the lines used to render the objects is maintained at the same display size, as illustrated in Fig. 10 and described below. In this example, the overall size of the augmented reality object may remain the same relative to the surrounding environment, but one or more of its constituent elements may be scaled. For example, the overall size of an icon may appear smaller when displayed at a greater perceived distance, but the thickness of the lines making up the icon may appear the same at both near and far perceived distances.
As an example, Fig. 10 shows a view 1000 of an augmented reality object with a first instance of a picture 1002' at a relatively far distance and a second instance of the same picture 1002'' at a relatively close distance. The augmented reality object is scaled such that the overall real world dimension of the object remains consistent at different distances. Thus, the farther instance of picture 1002' occupies less of the HMD's field of view than the closer instance of picture 1002''. However, the constituent lines that make up these pictures are scaled so as to occupy the same proportion of the HMD's field of view at different distances.
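A sketch of applying the scaling differently to a parent object and its children, as in Fig. 10: the overall box keeps a fixed world-space size (so it occupies less of the field of view when farther away), while the stroke thickness of its constituent lines is held at a constant angular size. The names and values are illustrative assumptions.

```python
import math

def world_size_for_angle(angle_deg: float, depth_m: float) -> float:
    """World-space size that subtends the given angle at the given depth."""
    return 2.0 * depth_m * math.tan(math.radians(angle_deg) / 2.0)

def render_icon(depth_m: float,
                box_world_m: float = 0.30,      # parent box: fixed world-space size (assumed)
                line_angle_deg: float = 0.2):   # child lines: fixed angular thickness (assumed)
    """Return (box world size, line world thickness) for a parent icon whose
    outline shrinks in the field of view with depth while its constituent
    lines keep the same on-screen thickness."""
    return box_world_m, world_size_for_angle(line_angle_deg, depth_m)

print(render_icon(1.0))  # (0.3, ~0.0035 m)
print(render_icon(4.0))  # (0.3, ~0.0140 m) -> thicker in world space, same share of the FOV
```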
In some embodiments, the methods and processes described herein may be bound to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as computer applications or services, Application Programming Interfaces (APIs), libraries, and/or other computer program products.
FIG. 11 schematically illustrates a non-limiting embodiment of a computing system 1100 that can perform one or more of the methods and processes described above. HMD 104 of Fig. 1, display system 202 of Fig. 2, and/or HMD 1200 of Fig. 12, described below, are non-limiting examples of computing system 1100. Computing system 1100 is shown in simplified form. The computing system 1100 may take the form of one or more personal computers, server computers, tablet computers, home entertainment computers, network computing devices, gaming devices, mobile computing devices, mobile communication devices (e.g., smart phones), and/or other computing devices. Computing system 1100 includes a logic machine 1102 and a storage machine 1104, and may also include a display subsystem 1106, an input subsystem 1108, and/or a communication subsystem 1110, described below.
The logic machine may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic machine may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. The processors of the logic machine may be single core or multicore, and the instructions executed thereon may be configured for serial, parallel, and/or distributed processing. The various components of the logic machine may optionally be distributed over two or more separate devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic machine may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
The storage machine 1104 comprises one or more physical devices configured to hold instructions executable by a logic machine to implement the methods and processes described herein. In implementing the methods and processes, the state of the storage machine 1104 may be transformed (e.g., to hold different data).
The storage machine 1104 may include removable and/or built-in devices. The storage machine 1104 may include optical memory (e.g., CD, DVD, HD-DVD, blu-ray disc, etc.), semiconductor memory (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. The storage machine 1104 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It is understood that storage machine 1104 comprises one or more physical devices. However, aspects of the instructions described herein may alternatively be propagated by a communication medium (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a limited period of time.
Aspects of the logic machine 1102 and the storage machine 1104 may be integrated together into one or more hardware logic components. These hardware logic components may include, for example, Field Programmable Gate Arrays (FPGAs), program and application specific integrated circuits (PASIC/ASIC), program and application specific standard products (PSSP/ASSP), system on a chip (SOC), and Complex Programmable Logic Devices (CPLDs).
The terms "module," "program," and "engine" may be used to describe an aspect of computing system 1100 that is implemented to perform a particular function. In some cases, a module, program, or engine may be instantiated via logic machine 1104 executing instructions held by storage machine 1102. It will be appreciated that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms "module," "program," and "engine" may encompass a single or a group of executable files, data files, libraries, drivers, scripts, database records, and the like.
It should be understood that a "service," as used herein, may be an application that is executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, the service may run on one or more server computing devices.
When included, display subsystem 1106 can be used to present visual representations of data held by storage machine 1104. This visual representation may take the form of a Graphical User Interface (GUI). As the herein described methods and processes change the data held by the storage machine, and thus transform the state of the storage machine, the state of display subsystem 1106 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 1106 may include one or more display devices using virtually any type of technology. Such display devices may be combined with logic machine 1102 and/or memory machine 1104 in a shared enclosure, or such display devices may be peripheral display devices.
When included, the input subsystem 1108 may include or interface with one or more user input devices, such as a keyboard, mouse, touch screen, or game controller. In some embodiments, the input subsystem may include or interface with selected Natural User Input (NUI) componentry. Such componentry may be integrated or peripheral, and the transduction and/or processing of input actions may be handled on-board or off-board. Example NUI componentry may include a microphone for speech and/or voice recognition; an infrared, color, stereoscopic, and/or depth camera for machine vision and/or gesture recognition; a head tracker, eye tracker, accelerometer, and/or gyroscope for motion detection and/or intent recognition; and electric-field sensing componentry for assessing brain activity.
When included, the communication subsystem 1110 may be configured to communicatively couple the computing system 1100 with one or more other computing devices. Communication subsystem 1110 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As a non-limiting example, the communication subsystem may be configured for communication via a wireless telephone network or a wired or wireless local or wide area network. In some embodiments, the communication subsystem may allow computing system 1100 to send and/or receive messages to and/or from other devices via a network such as the internet.
Fig. 12 shows a non-limiting example of a head-mounted, near-eye, see-through display system (also referred to as HMD 1200) in the form of wearable glasses with a see-through display 1202. HMD 1200 is a non-limiting example of the HMD 104 of Fig. 1, the display system 202 of Fig. 2, and/or the computing system 1100 of Fig. 11. The HMD may take any other suitable form in which a transparent, translucent, and/or opaque display is supported in front of one or both of the viewer's eyes. Further, the embodiments described herein may be used with any other suitable computing device, including but not limited to mobile computing devices, laptop computers, desktop computers, tablet computers, other wearable computers, and the like. For example, an augmented reality image may be displayed on a display of a mobile phone along with real-world imagery captured by a camera of the mobile phone.
The HMD 1200 includes a see-through display 1202 and a controller 1204. The see-through display 1202 may enable images, such as augmented reality images (also referred to as holographic objects), to be delivered to the eyes of a wearer of the HMD. The see-through display 1202 may be configured to visually augment an appearance of the real-world physical environment to a wearer viewing the physical environment through the see-through display. In one example, the display may be configured to display one or more UI objects of a graphical user interface. In some embodiments, the UI objects presented on the graphical user interface may be virtual objects overlaid in front of the real-world environment. Likewise, in some embodiments, UI objects presented on the graphical user interface may incorporate elements of real-world objects of the real-world environment that are viewed through the see-through display 1202. In other examples, the display may be configured to display one or more other graphical objects, such as virtual objects associated with games, video, or other visual content.
Any suitable mechanism may be used to display the image via the see-through display 1202. For example, see-through display 1202 may include an image-generating element (such as, for example, a see-through Organic Light Emitting Diode (OLED) display) located within lens 1206. As another example, see-through display 1202 may comprise a display device (such as, for example, a Liquid Crystal On Silicon (LCOS) device or an OLED microdisplay) located within the frame of HMD 1200. In this example, the lens 1206 may act as or otherwise comprise a light guide for delivering light from the display device to the eye of the wearer. Such a light guide may enable a wearer to perceive a 3D holographic image located within the physical environment the wearer is viewing, while also allowing the wearer to directly view physical objects in the physical environment, thereby creating a mixed reality environment. Additionally or alternatively, the see-through display 1202 may present left-eye and right-eye augmented reality images via respective left-eye and right-eye displays, as discussed above in connection with fig. 2.
The HMD 1200 may also include various sensors and related systems for providing information to the controller 1204. Such sensors may include, but are not limited to, one or more inward facing image sensors 1208a and 1208b, one or more outward facing image sensors 1210, an Inertial Measurement Unit (IMU) 1212, and one or more microphones 1220. The one or more inward facing image sensors 1208a, 1208b may be configured to acquire image data from the wearer's eyes in the form of gaze tracking data (e.g., sensor 1208a may acquire image data of one eye of the wearer, and sensor 1208b may acquire image data of the other eye). The HMD may be configured to determine a gaze direction of each of the wearer's eyes in any suitable manner based on information received from the image sensors 1208a, 1208b. For example, one or more light sources 1214a, 1214b (such as infrared light sources) may be configured such that glints of light reflect from the cornea of each eye of the wearer, and the one or more image sensors 1208a, 1208b may then capture images of the wearer's eyes. The locations of the glints and of the pupils, as determined from the captured image data, may be used by the controller 1204 to determine an optical axis of each eye. Using this information, the controller 1204 may determine a gaze direction of the wearer. The controller 1204 may additionally determine the identity of the physical and/or virtual object at which the wearer is gazing by projecting the wearer's gaze vector onto a 3D model of the surrounding environment.
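The glint-and-pupil image processing itself is not detailed here, but as a rough, hypothetical sketch of the final two steps, the Python below (the function names and the sphere-based scene model are illustrative assumptions, not the patent's implementation) combines the per-eye optical axes into a single gaze ray and casts that ray against a simplified 3D model of the surroundings to identify the gazed-at object.

```python
# Illustrative sketch only: combine per-eye optical axes into one gaze ray and
# identify the gazed-at object by ray-casting against a sphere-based scene model.
import numpy as np

def gaze_ray(left_origin, left_axis, right_origin, right_axis):
    """Average the two eyes' optical axes into a single (origin, unit direction) ray."""
    origin = (np.asarray(left_origin, float) + np.asarray(right_origin, float)) / 2.0
    direction = np.asarray(left_axis, float) + np.asarray(right_axis, float)
    return origin, direction / np.linalg.norm(direction)

def gazed_object(origin, direction, spheres):
    """Return the label of the nearest sphere (center, radius, label) hit by the ray, or None."""
    origin = np.asarray(origin, float)
    best_label, best_t = None, np.inf
    for center, radius, label in spheres:
        oc = origin - np.asarray(center, float)
        b = np.dot(oc, direction)
        c = np.dot(oc, oc) - radius * radius
        disc = b * b - c
        if disc < 0.0:
            continue                      # ray misses this sphere entirely
        t = -b - np.sqrt(disc)
        if 0.0 < t < best_t:              # keep the closest hit in front of the eyes
            best_label, best_t = label, t
    return best_label
```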
The one or more outward facing image sensors 1210 may be configured to measure physical environment properties (e.g., light intensity) of the physical environment in which the HMD 1200 is located. Data from outward facing image sensor 1210 may be used to detect movement within the field of view of display 1202, such as gesture-based input or other movement performed by a wearer or a person or physical object within the field of view. In one example, data from the outward-facing image sensor 1210 may be used to detect a selection input, such as a gesture (e.g., pinching a finger, clenching a fist, etc.), performed by a wearer of the HMD indicating selection of a UI object displayed on the display device. Data from the outward facing sensors may also be used to determine direction/position and orientation data (e.g., from imaging environmental features), which enables tracking of the position/motion of the HMD 1200 in the real-world environment. Data from the outward facing camera may also be used to construct still and/or video images of the surrounding environment from the perspective of HMD 1200.
The IMU 1212 may be configured to provide position and/or orientation data of the HMD 1200 to the controller 1204. In one embodiment, the IMU 1212 may be configured as a three-axis or three-degree-of-freedom (3DOF) position sensor system. This example position sensor system may include three gyroscopes to indicate or measure changes in orientation of the HMD 1200 within 3D space about three orthogonal axes (e.g., roll, pitch, and yaw). The orientation derived from the sensor signals of the IMU may be used to display, via the see-through display, one or more AR images with a realistic and stable position and orientation.
In another example, the IMU 1212 may be configured as a six-axis or six-degree-of-freedom (6DOF) position sensor system. Such a configuration may include three accelerometers and three gyroscopes to indicate or measure changes in position of the HMD 1200 along three orthogonal spatial axes (e.g., x, y, and z) and changes in device orientation about the three orthogonal rotational axes (e.g., yaw, pitch, and roll). In some embodiments, position and orientation data from the outward facing image sensor 1210 and the IMU 1212 may be used in conjunction to determine the position and orientation of the HMD 1200.
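For illustration only, the sketch below dead-reckons a pose from raw gyroscope and accelerometer samples of the kind such a 6DOF sensor system provides; a real HMD would fuse this with the camera-based tracking described above, and the gravity convention, small-angle update, and all names here are assumptions.

```python
# Naive 6DOF dead reckoning from one IMU sample (sketch, not a production filter).
import numpy as np

def integrate_imu(rotation, position, velocity, gyro, accel, dt, gravity=(0.0, -9.81, 0.0)):
    """Advance a pose estimate by one sample.

    rotation: 3x3 device-to-world rotation matrix; gyro in rad/s; accel (specific
    force) in m/s^2; dt in seconds. Returns updated (rotation, position, velocity).
    """
    rotation = np.asarray(rotation, float)
    position = np.asarray(position, float)
    velocity = np.asarray(velocity, float)

    # Small-angle rotation update about the three gyroscope axes (roll/pitch/yaw).
    wx, wy, wz = np.asarray(gyro, float) * dt
    delta = np.array([[1.0, -wz,  wy],
                      [ wz, 1.0, -wx],
                      [-wy,  wx, 1.0]])
    rotation = rotation @ delta

    # Rotate specific force into world coordinates, add gravity back, integrate twice.
    world_accel = rotation @ np.asarray(accel, float) + np.asarray(gravity, float)
    velocity = velocity + world_accel * dt
    position = position + velocity * dt
    return rotation, position, velocity
```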
The HMD 1200 may also support other suitable positioning technologies, such as GPS or other global navigation systems. Further, although specific examples of position sensor systems are described, it will be understood that any other suitable position sensor system may be used. For example, head pose and/or movement data may be determined based on sensor information from any combination of sensors worn on and/or external to the wearer, including, but not limited to, any number of gyroscopes, accelerometers, inertial measurement units, GPS devices, barometers, magnetometers, cameras (e.g., visible light cameras, infrared light cameras, time-of-flight depth cameras, structured light depth cameras, etc.), communication devices (e.g., WIFI antennas/interfaces), and so forth.
Continuing with fig. 12, the controller 1204 may be configured to record a plurality of eye gaze samples over time based on information detected by the one or more inward facing image sensors 1208a, 1208b. For each eye gaze sample, eye tracking information, and in some embodiments head tracking information (from image sensor 1210 and/or IMU 1212) may be used to estimate an origin and direction vector for the eye gaze sample to produce an estimated location where the eye gaze intersects the see-through display. Examples of eye tracking information and head tracking information used to determine the eye gaze samples may include eye gaze direction, head orientation, eye gaze velocity, eye gaze acceleration, eye gaze direction angle change, and/or any other suitable tracking information. In some embodiments, eye gaze tracking may be recorded independently for both eyes of a wearer of HMD 1200.
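One way to compute that estimated intersection, sketched here under the simplifying assumption that the see-through display can be treated as a plane at a fixed offset from the eye (the names are illustrative, not the patent's API), is a standard ray-plane test:

```python
# Sketch: where does an eye gaze ray (origin + direction) cross the display plane?
import numpy as np

def gaze_display_intersection(gaze_origin, gaze_direction, plane_point, plane_normal):
    """Return the 3D point where the gaze ray meets the display plane, or None."""
    gaze_origin = np.asarray(gaze_origin, float)
    gaze_direction = np.asarray(gaze_direction, float)
    plane_normal = np.asarray(plane_normal, float)
    denom = np.dot(gaze_direction, plane_normal)
    if abs(denom) < 1e-6:
        return None                       # gaze is parallel to the display plane
    t = np.dot(np.asarray(plane_point, float) - gaze_origin, plane_normal) / denom
    if t < 0.0:
        return None                       # display plane lies behind the eye
    return gaze_origin + t * gaze_direction
```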
The controller 1204 may be configured to generate or update a three-dimensional model of the surrounding environment using information from the outward facing image sensor 1210. Additionally or alternatively, information from the outward facing image sensor 1210 may be communicated to a remote computer responsible for generating and/or updating the model of the surrounding environment. In either case, the relative position and/or orientation of the HMD with respect to the surrounding environment may be evaluated such that the augmented reality image may be accurately displayed at a desired real-world location having a desired orientation.
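As a purely illustrative sketch of the re-projection this enables (the frame conventions and names are assumptions, not the patent's implementation), a world-anchored hologram position can be re-expressed in the HMD's own frame each frame so that it stays put at its desired real-world location:

```python
# Sketch: express a world-anchored hologram position in the HMD's own frame.
import numpy as np

def world_to_hmd(hologram_world_pos, hmd_rotation, hmd_position):
    """hmd_rotation: 3x3 HMD-to-world rotation; hmd_position: HMD origin in world.

    Returns the hologram position in HMD coordinates, ready for per-eye rendering.
    """
    offset = np.asarray(hologram_world_pos, float) - np.asarray(hmd_position, float)
    return np.asarray(hmd_rotation, float).T @ offset   # inverse rotation: world -> HMD
```

Feeding this HMD-frame position, updated every frame from the latest pose estimate, into the per-eye projection is what keeps an augmented reality image visually locked to its desired real-world location and orientation.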
As described above, HMD 1200 may also include one or more microphones (such as microphone 1220) that capture audio data. In some examples, the one or more microphones 1220 may include a microphone array including two or more microphones. For example, the microphone array may include four microphones, two microphones positioned above a right lens of the HMD and two other microphones positioned above a left lens of the HMD. Further, audio output may be presented to the wearer via one or more speakers (such as speaker 1222).
The controller 1204 may include a logic machine and a storage machine that may communicate with the display and various sensors of the HMD, as discussed in more detail above in connection with fig. 11.
An example wearable, head-mounted display system includes: a left near-eye, see-through display configured to display a left-eye augmented reality image at left-eye display coordinates at a left-eye display size; a right near-eye, see-through display configured to display a right-eye augmented reality image at right-eye display coordinates at a right-eye display size, the left-eye augmented reality image and right-eye augmented reality image collectively forming an augmented reality object that is perceptible at an apparent real world depth by a wearer of the head-mounted display system; and a controller. The controller sets a relationship of the left-eye display coordinates relative to the right-eye display coordinates as a function of the apparent real world depth of the augmented reality object, the function maintaining an aspect of the left-eye display size and the right-eye display size over a non-scaling range of apparent real world depths of the augmented reality object, and the function scaling the left-eye display size and the right-eye display size as the apparent real world depth of the augmented reality object is varied outside the non-scaling range of apparent real world depths. Additionally or alternatively, such examples include wherein the augmented reality object includes a virtual user marker. Additionally or alternatively, such examples include wherein maintaining an aspect of the left-eye display size and the right-eye display size includes maintaining a line thickness of the virtual user marker over the non-scaling range. Additionally or alternatively, such examples include scaling a line length of the virtual user marker according to the apparent real world depth within the non-scaling range. Additionally or alternatively, such examples include wherein the function decreases the distance between the left-eye display coordinates and the right-eye display coordinates as the apparent real world depth decreases. Additionally or alternatively, such examples include wherein maintaining the aspect of the left-eye display size and the right-eye display size over the non-scaling range of apparent real world depths includes changing an apparent real world size of a respective aspect of the augmented reality object over the non-scaling range of apparent real world depths such that the augmented reality object occupies a constant proportion of the wearer's field of view. Additionally or alternatively, such examples include wherein the augmented reality object includes a user interface control element. Additionally or alternatively, such examples include wherein the function decreases the left-eye display size and the right-eye display size at apparent real world depths greater than the non-scaling range and increases the left-eye display size and the right-eye display size at apparent real world depths less than the non-scaling range. Additionally or alternatively, such examples include wherein the augmented reality object is a first augmented reality object, and wherein the controller sets a relationship of left eye coordinates of a second augmented reality object relative to right eye coordinates of the second augmented reality object as a second function of the apparent real world depth of the second augmented reality object.
Additionally or alternatively, such examples include wherein the second function maintains an aspect of a left eye display size and a right eye display size of the second augmented reality object over a second, different non-scaling range of apparent real world depths of the second augmented reality object. Additionally or alternatively, such examples include wherein the augmented reality object is a child object of a parent augmented reality object, and wherein the function scales the left-eye display size and the right-eye display size of the parent augmented reality object with changing apparent real world depth of the parent augmented reality object over a non-scaling range of apparent real world depth of the parent augmented reality object. Any or all of the above described examples may be combined in any suitable manner in various implementations.
Another example provides a method for a wearable, head-mounted display system, the method comprising: displaying a left-eye augmented reality image at left-eye display coordinates at a left-eye display size according to a scaling function on a left near-eye, see-through display; and displaying a right-eye augmented reality image at right-eye display coordinates at a right-eye display size according to the scaling function on a right near-eye, see-through display, the left-eye augmented reality image and the right-eye augmented reality image together forming an augmented reality object that is perceptible by a wearer of the head-mounted display system at an apparent real world depth, the scaling function setting the left-eye display coordinates relative to the right-eye display coordinates as a function of the apparent real world depth of the augmented reality object, the scaling function maintaining an aspect of the left-eye display size and the right-eye display size over a non-scaling range of apparent real world depths of the augmented reality object, and the scaling function scaling the left-eye display size and the right-eye display size outside the non-scaling range as the apparent real world depth of the augmented reality object is changed. Additionally or alternatively, such examples include wherein scaling the left-eye display size and the right-eye display size with changing apparent real world depth of the augmented reality object outside the non-scaling range includes increasing the left-eye display size and the right-eye display size with decreasing apparent real world depth and decreasing the left-eye display size and the right-eye display size with increasing apparent real world depth. Additionally or alternatively, such examples include wherein maintaining the aspect of the left-eye display size and the right-eye display size over the non-scaling range includes maintaining the augmented reality object at a constant proportion of the wearer's field of view over the non-scaling range. Additionally or alternatively, such examples include wherein maintaining the augmented reality object at a constant proportion of the wearer's field of view includes changing a real-world size of the augmented reality object relative to a real-world object at the same depth as the augmented reality object as the apparent real world depth of the augmented reality object changes. Additionally or alternatively, such examples include wherein the augmented reality object includes a virtual user marker, and wherein maintaining an aspect of the left-eye display size and the right-eye display size over the non-scaling range of apparent real world depths includes maintaining a line thickness of the virtual user marker. Any or all of the above described examples may be combined in any suitable manner in various implementations.
Another example provides a wearable, head-mounted display system, comprising: a left near-eye, see-through display configured to display a first left-eye augmented reality image and a second left-eye augmented reality image, the first and second left-eye augmented reality images being displayed at different left-eye display coordinates at different left-eye display sizes; a right near-eye, see-through display configured to display a first right-eye augmented reality image and a second right-eye augmented reality image, the first and second right-eye augmented reality images being displayed at different right-eye display coordinates at different right-eye display sizes, the first left-eye and first right-eye augmented reality images collectively forming a first augmented reality object, the second left-eye and second right-eye augmented reality images collectively forming a second augmented reality object, the first and second augmented reality objects being perceptible by a wearer of the head-mounted display system at respective apparent real world depths; and a controller to set the left-eye display coordinates relative to the right-eye display coordinates as a function of the apparent real world depth of both the first and second augmented reality objects, the function maintaining an aspect of the left-eye display size and the right-eye display size over a non-scaling range of apparent real world depths only for the first augmented reality object, the function scaling the left-eye display size and the right-eye display size with changing apparent real world depth of both the first and second augmented reality objects outside the non-scaling range of apparent real world depths, and the function scaling the left-eye display size and the right-eye display size with changing apparent real world depth over the non-scaling range of apparent real world depths only for the second augmented reality object. Additionally or alternatively, such examples include wherein the first augmented reality object includes a user interface control element, and wherein the second augmented reality object includes a holographic game element. Additionally or alternatively, such examples include wherein the first augmented reality object is a child of the second augmented reality object. Additionally or alternatively, such examples include wherein the function includes a first piecewise function applied to the first augmented reality object and a second linear function applied to the second augmented reality object. Any or all of the above described examples may be combined in any suitable manner in various implementations.
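As a concrete but purely illustrative reading of the examples above, the sketch below shows one possible piecewise scaling function: the left-eye and right-eye display coordinates always track apparent depth, while the display size is held at a constant angular size (a constant share of the wearer's field of view) inside a non-scaling depth range and scaled normally outside it. The interpupillary distance, display-plane distance, and range bounds are assumed values, not taken from this disclosure.

```python
# Sketch of a piecewise scaling function for a fixed-size augmented reality object.

def eye_display_coordinates(depth_m, ipd_m=0.064, plane_m=2.0):
    """Horizontal left/right display coordinates for an object straight ahead.

    Their separation, ipd * (1 - plane/depth), decreases toward zero as the apparent
    depth decreases toward the display-plane distance, matching the coordinate
    behavior described above.
    """
    shift = (ipd_m / 2.0) * (1.0 - plane_m / depth_m)
    return -shift, +shift

def display_size(depth_m, world_size_m=0.10, near_m=1.0, far_m=3.0):
    """Angular display size (small-angle approximation) used for both eyes."""
    if depth_m < near_m:
        return world_size_m / depth_m                      # below the range: grows as it nears
    if depth_m <= far_m:
        return world_size_m / near_m                       # non-scaling range: constant share of view
    return (world_size_m / near_m) * (far_m / depth_m)     # beyond the range: shrinks with distance
```

A second augmented reality object that should always scale with apparent depth (a holographic game element, say) would simply use the unclamped perspective formula world_size_m / depth_m at every depth, which is one way to realize the contrasting always-scaling behavior described for the second object.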
It will be appreciated that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples herein are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated and/or described may be performed in the sequence illustrated and/or described, in other sequences, in parallel, or omitted. Also, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
Claims (20)
1. A head-mounted display system, comprising:
a left near-eye see-through display configured to display a left-eye augmented reality image at left-eye display coordinates at a left-eye display size;
a right near-eye see-through display configured to display a right-eye augmented reality image at right-eye display coordinates at a right-eye display size, the left-eye augmented reality image and right-eye augmented reality image collectively forming an augmented reality object that is perceptible at an apparent real world depth by a wearer of the head-mounted display system; and
a controller to set a relationship of the left-eye display coordinates relative to the right-eye display coordinates as a function of an apparent real world depth of the augmented reality object, the function maintaining an aspect of the left-eye display size and the right-eye display size over a non-scaling range of apparent real world depths of the augmented reality object, and the function scaling the left-eye display size and the right-eye display size as the apparent real world depth of the augmented reality object is changed outside the non-scaling range of apparent real world depths.
2. The display system of claim 1, wherein the augmented reality object comprises a virtual user marker.
3. The display system of claim 2, wherein maintaining an aspect of the left-eye display size and the right-eye display size comprises maintaining a line thickness of the virtual user marker within the non-scaling range.
4. The display system of claim 3, further comprising scaling a line length of the virtual user marker according to an apparent real world depth within the non-scaling range.
5. The display system of claim 1, wherein the function decreases the distance between the left-eye display coordinate and the right-eye display coordinate as apparent real world depth decreases.
6. The display system of claim 1, wherein maintaining the aspect of the left-eye display size and the right-eye display size over the non-scaling range of apparent real world depths comprises changing an apparent real world size of a respective aspect of the augmented reality object over the non-scaling range of apparent real world depths such that the augmented reality object occupies a constant proportion of the wearer's field of view.
7. The display system of claim 1, wherein the augmented reality object comprises a user interface control element.
8. The display system of claim 1, wherein the function decreases the left-eye display size and the right-eye display size at apparent real world depths greater than the non-scaling range and increases the left-eye display size and the right-eye display size at apparent real world depths less than the non-scaling range.
9. The display system of claim 1, wherein the augmented reality object is a first augmented reality object, and wherein the controller sets a relationship of left eye coordinates of a second augmented reality object relative to right eye coordinates of the second augmented reality object as a second function of the apparent real world depth of the second augmented reality object.
10. The display system of claim 9, wherein the second function maintains an aspect of a left eye display size and a right eye display size of the second augmented reality object over a second different non-scaling range of apparent real world depths of the second augmented reality object.
11. The display system of claim 1, wherein the augmented reality object is a child of a parent augmented reality object, and wherein the function scales a left eye display size and a right eye display size of the parent augmented reality object as the apparent real world depth of the parent augmented reality object is changed over a non-scaling range of the apparent real world depth of the parent augmented reality object.
12. A method for a head mounted display system, comprising:
displaying a left-eye augmented reality image at left-eye display coordinates at a left-eye display size according to a scaling function on a left near-eye see-through display;
displaying a right-eye augmented reality image at right-eye display coordinates at a right-eye display size according to the scaling function on a right near-eye see-through display, the left-eye augmented reality image and right-eye augmented reality image collectively forming an augmented reality object that is perceptible by a wearer of the head-mounted display system at an apparent real world depth;
the scaling function sets the left-eye display coordinate relative to the right-eye display coordinate as a function of the apparent real world depth of the augmented reality object;
the scaling function maintains an aspect of the left-eye display size and the right-eye display size over a non-scaling range of apparent real world depths of the augmented reality object; and
the scaling function scales the left-eye display size and the right-eye display size outside the non-scaling range as the apparent real world depth of the augmented reality object is changed.
13. The method of claim 12, wherein scaling the left-eye display size and the right-eye display size with changing apparent real world depth of the augmented reality object outside the non-scaling range comprises increasing the left-eye display size and the right-eye display size with decreasing apparent real world depth outside the non-scaling range of real world depth and decreasing the left-eye display size and right-eye display size with increasing apparent real world depth.
14. The method of claim 12, wherein maintaining the aspect of the left-eye display size and the right-eye display size over the non-scaling range includes maintaining the augmented reality object at a constant proportion of the wearer's field of view over the non-scaling range.
15. The method of claim 14, wherein maintaining the augmented reality object at a constant proportion of the wearer's field of view comprises changing a real-world size of the augmented reality object relative to real-world objects at the same depth as the augmented reality object as the apparent real world depth of the augmented reality object changes.
16. The method of claim 12, wherein the augmented reality object includes a virtual user marker, and wherein maintaining an aspect of the left-eye display size and the right-eye display size over the non-scaling range of apparent real world depths includes maintaining a line thickness of the virtual user marker.
17. A head-mounted display system, comprising:
a left near-eye see-through display configured to display a first left-eye augmented reality image and a second left-eye augmented reality image, the first and second left-eye augmented reality images being displayed at different left-eye display coordinates at different left-eye display sizes;
a right near-eye see-through display configured to display a first right-eye augmented reality image and a second right-eye augmented reality image, the first and second right-eye augmented reality images being displayed at different right-eye display coordinates at different right-eye display sizes, the first left-eye and first right-eye augmented reality images collectively forming a first augmented reality object, the second left-eye and second right-eye augmented reality images collectively forming a second augmented reality object, the first and second augmented reality objects being perceptible by a wearer of the head-mounted display system at respective apparent real world depths; and
a controller for setting the left-eye display coordinates relative to the right-eye display coordinates as a function of apparent real world depth for the first and second augmented reality objects;
the function maintains an aspect of the left-eye display size and the right-eye display size over a non-scaling range of apparent real world depths only for the first augmented reality object;
the function scales the left-eye display size and the right-eye display size outside a non-scaling range of apparent real world depths as apparent real world depths for both the first and second augmented reality objects are changed;
the function scales the left-eye display size and the right-eye display size as apparent real world depths are changed over a non-scaling range of apparent real world depths only for the second augmented reality object.
18. The display system of claim 17, wherein the first augmented reality object comprises a user interface control element, and wherein the second augmented reality object comprises a holographic game element.
19. The display system of claim 17, wherein the first augmented reality object is a child of the second augmented reality object.
20. The display system of claim 17, wherein the function comprises a first piecewise function applied to the first augmented reality object and a second linear function applied to the second augmented reality object.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201562105672P | 2015-01-20 | 2015-01-20 | |
US62/105,672 | 2015-01-20 | ||
US14/717,771 | 2015-05-20 | ||
US14/717,771 US9934614B2 (en) | 2012-05-31 | 2015-05-20 | Fixed size augmented reality objects |
PCT/US2016/012778 WO2016118344A1 (en) | 2015-01-20 | 2016-01-11 | Fixed size augmented reality objects |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107209565A CN107209565A (en) | 2017-09-26 |
CN107209565B true CN107209565B (en) | 2020-05-05 |
Family
ID=55349938
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201680006372.2A Active CN107209565B (en) | 2015-01-20 | 2016-01-11 | Method and system for displaying fixed-size augmented reality objects |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107209565B (en) |
WO (1) | WO2016118344A1 (en) |
Families Citing this family (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8977255B2 (en) | 2007-04-03 | 2015-03-10 | Apple Inc. | Method and system for operating a multi-function portable electronic device using voice-activation |
KR102380145B1 (en) | 2013-02-07 | 2022-03-29 | 애플 인크. | Voice trigger for a digital assistant |
US10170123B2 (en) | 2014-05-30 | 2019-01-01 | Apple Inc. | Intelligent assistant for home automation |
US9715875B2 (en) | 2014-05-30 | 2017-07-25 | Apple Inc. | Reducing the need for manual start/end-pointing and trigger phrases |
DK179496B1 (en) | 2017-05-12 | 2019-01-15 | Apple Inc. | USER-SPECIFIC Acoustic Models |
US20180336275A1 (en) | 2017-05-16 | 2018-11-22 | Apple Inc. | Intelligent automated assistant for media exploration |
US10514801B2 (en) | 2017-06-15 | 2019-12-24 | Microsoft Technology Licensing, Llc | Hover-based user-interactions with virtual objects within immersive environments |
CN107592520B (en) | 2017-09-29 | 2020-07-10 | 京东方科技集团股份有限公司 | Imaging device and imaging method of AR equipment |
US10991138B2 (en) | 2017-12-22 | 2021-04-27 | The Boeing Company | Systems and methods for in-flight virtual reality displays for passenger and crew assistance |
US10523912B2 (en) | 2018-02-01 | 2019-12-31 | Microsoft Technology Licensing, Llc | Displaying modified stereo visual content |
DK180639B1 (en) | 2018-06-01 | 2021-11-04 | Apple Inc | DISABILITY OF ATTENTION-ATTENTIVE VIRTUAL ASSISTANT |
US10817047B2 (en) * | 2018-09-19 | 2020-10-27 | XRSpace CO., LTD. | Tracking system and tacking method using the same |
CN111083391A (en) * | 2018-10-19 | 2020-04-28 | 舜宇光学(浙江)研究院有限公司 | Virtual-real fusion system and method thereof |
CN111506188A (en) * | 2019-01-30 | 2020-08-07 | 托比股份公司 | Method and HMD for dynamically adjusting HUD |
US11348573B2 (en) | 2019-03-18 | 2022-05-31 | Apple Inc. | Multimodality in digital assistant systems |
WO2023044050A1 (en) * | 2021-09-17 | 2023-03-23 | Apple Inc. | Digital assistant for providing visualization of snippet information |
US12136172B2 (en) | 2021-09-17 | 2024-11-05 | Apple Inc. | Digital assistant for providing visualization of snippet information |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102609942A (en) * | 2011-01-31 | 2012-07-25 | 微软公司 | Mobile camera localization using depth maps |
CN102981616A (en) * | 2012-11-06 | 2013-03-20 | 中兴通讯股份有限公司 | Identification method and identification system and computer capable of enhancing reality objects |
CN103136726A (en) * | 2011-11-30 | 2013-06-05 | 三星电子株式会社 | Method and apparatus for recovering depth information of image |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6198484B1 (en) * | 1996-06-27 | 2001-03-06 | Kabushiki Kaisha Toshiba | Stereoscopic display system |
US9342610B2 (en) * | 2011-08-25 | 2016-05-17 | Microsoft Technology Licensing, Llc | Portals: registered objects as virtualized, personalized displays |
US9323325B2 (en) * | 2011-08-30 | 2016-04-26 | Microsoft Technology Licensing, Llc | Enhancing an object of interest in a see-through, mixed reality display device |
US9454849B2 (en) * | 2011-11-03 | 2016-09-27 | Microsoft Technology Licensing, Llc | Augmented reality playspaces with adaptive game rules |
US10502876B2 (en) * | 2012-05-22 | 2019-12-10 | Microsoft Technology Licensing, Llc | Waveguide optics focus elements |
US20130326364A1 (en) * | 2012-05-31 | 2013-12-05 | Stephen G. Latta | Position relative hologram interactions |
2016
- 2016-01-11 CN CN201680006372.2A patent/CN107209565B/en active Active
- 2016-01-11 WO PCT/US2016/012778 patent/WO2016118344A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2016118344A1 (en) | 2016-07-28 |
CN107209565A (en) | 2017-09-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107209565B (en) | Method and system for displaying fixed-size augmented reality objects | |
US9934614B2 (en) | Fixed size augmented reality objects | |
US10672103B2 (en) | Virtual object movement | |
CN107209386B (en) | Augmented reality view object follower | |
US10373392B2 (en) | Transitioning views of a virtual model | |
EP3137982B1 (en) | Transitions between body-locked and world-locked augmented reality | |
US9824499B2 (en) | Mixed-reality image capture | |
US10304247B2 (en) | Third party holographic portal | |
CN106489171B (en) | Stereoscopic image display | |
US10789779B2 (en) | Location-based holographic experience | |
US10134174B2 (en) | Texture mapping with render-baked animation | |
CN110603515A (en) | Virtual content displayed with shared anchor points | |
US10523912B2 (en) | Displaying modified stereo visual content | |
US20160371885A1 (en) | Sharing of markup to image data | |
US20180330546A1 (en) | Wind rendering for virtual reality computing device | |
US10025099B2 (en) | Adjusted location hologram display | |
CN109710054B (en) | Virtual object presenting method and device for head-mounted display equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||